As Artificial Intelligence, or AI, has been thrust into the public consciousness, many parents have become increasingly worried about its potentially negative effects on their children. In response, many school districts have been creating policies to ensure that teachers and students use AI responsibly. With input from key stakeholders such as administrators, teachers, students, IT departments, and the community, an AI use policy should seek to enable teachers to supplement their students’ learning rather than supplant it. Any sensible AI policy should also emphasize keeping humans in the loop.
A cohesive AI policy should focus on ensuring that any AI is used responsibly, equitably, and ethically. Policies must leverage automation to advance learning outcomes while protecting human decision making and judgment. They should also question the quality of the underlying data in AI models to ensure fair, unbiased pattern recognition and decision making in educational applications, based on accurate information appropriate to the current lesson plan. To ensure that all students are accurately represented in any AI data sets, a responsible policy should enable examination of how particular AI technologies, as part of larger edtech or educational systems, may advance or undermine equity for students. Lastly, the policy must take steps to safeguard and advance equity, including providing human checks and balances and limiting any AI systems and tools that undermine it.
When developing an AI policy, it may also be helpful to think about exactly how educators wish to utilize this new technology. Key stakeholders should ask themselves, “What is our collective vision of a desirable and achievable educational system that leverages automation to advance learning while protecting and centering human agency?” and “How and on what timeline will we be ready with necessary guidelines and guardrails, as well as convincing evidence of positive impacts, so that constituents can ethically and equitably implement this vision widely?” Policymakers should also consider that AI brings privacy and other risks that are difficult to address through individual decision making alone. As a result, privacy protections should be carefully considered in any comprehensive policy. There should be clear limits on the ability to collect, use, transfer, and maintain personal data, including limits on targeted advertising, and these limits should put the burden on platforms to minimize how much information they collect.
As previously mentioned, data privacy and safety should be central to any detailed AI policy. The development and deployment of AI requires access to detailed data. As Paul Trapani of LISTnet noted in a recent Cerini Connection, “One of the downsides of the chat engine is what you’re putting up there is not private data … you have to realize you are effectively uploading this to this other company…” This data goes beyond conventional student records (roster and gradebook information) to detailed information about what students do as they learn with technology and what teachers do as they use technology to teach. AI’s dependence on data requires renewed and strengthened attention to data privacy, security, and governance. Because AI models are generally not developed with educational use or student privacy in mind, their educational application may not align with an institution’s efforts to comply with federal student privacy laws, such as FERPA, or state privacy laws.
Above all else, a proper AI policy should emphasize that teachers, learners, and others need to retain their agency to decide what patterns mean and to choose courses of action for their lesson plans. We must be careful not to hand over control to the “black boxes” of AI, wherein we don’t understand how our inputs are used to generate AI responses. As Paul Trapani warned, “We also don’t know the full capabilities and the full dangers at the moment…[because] what happens is it has an input and has an output…[but] we don’t really know exactly how it’s doing what it’s doing.” Teachers should teach their students to think critically about AI-generated responses and not take them at face value. Questioning AI will help ensure that students continue to take an active role in their educational journey rather than taking a back seat and letting AI do all the heavy lifting. Educators should ask themselves, “How precise are the AI models? Do they accurately capture what is most important? How well do the recommendations made by an AI model fit educational goals? What are the broader implications of using AI models at scale in educational processes?”
As with any new technology, AI raises a number of concerns for society as a whole. Educators and other key stakeholders, such as administrators, students, and IT departments, however, have the particularly difficult task of addressing society’s various AI concerns while also ensuring that AI is properly integrated into future curricula to keep students properly educated about this emerging technology. When developing an AI policy, school districts and others should focus on a few key areas. A well-developed AI policy should, first and foremost, be people-centered while ensuring that AI is used responsibly, equitably, and ethically. Data privacy is also a significant concern that must be addressed: a comprehensive policy must carefully consider what information is collected and set limits on the ability to collect, use, transfer, and maintain such data. Lastly, such policies should promote transparency and allow, and even encourage, questioning of the information generated by AI. An effective AI policy will safeguard all users while allowing AI to serve as a valuable educational resource.
Adam Brigandi, CPA, MBA
Supervisor
Adam is a Supervisor who works with both nonprofit and special education clients. His auditing experience allows him to assist in vital audit functions such as systems testing and analysis.