The first time I witnessed AI use in the classroom was in my sophomore English class, when a classmate tried to answer a question using ChatGPT. Our teacher asked about the themes of a novel we were supposed to have read the summer before that school year. When called on, the classmate talked about the ‘theme’ of freedom, citing several examples of it. However, nothing she described ever happened in the book. The other students and I looked around at each other in confusion, some sifting through their copies to try to identify what she could possibly be talking about. Even though her answer was false, and her wording seemed to be an indicator of AI use, our teacher agreed with her and went on to praise her analysis of the text.
Since that incident, the number of experiences I’ve had involving AI in the classroom has only increased. ChatGPT is an AI chatbot that has surged in popularity and use over the last two years. It’s highlighted for its ability to give thorough answers to questions, to generate images, and to help organize and plan one’s schedule, along with the capacity for far more. On the official ChatGPT website, the developers address the potential use of AI in schools, suggesting that students can use the program “for personalized tutoring and study guides, as well as for career prep, like resume reviews or mock interviews.” Although the concept is certainly convenient, such an audacious goal could have many downfalls. That leaves the question: does AI help or hinder education?
On paper, the inclusion of AI in schools doesn’t sound harmful. There are benefits to its use in classrooms. For example, according to the College of Education at the University of Illinois Urbana-Champaign, the use of AI could lead to more individualized learning. AI could track how students learn and adjust their coursework to address the areas they struggle with. AI can also be helpful for disabled students. CNN recently reported on how AI can help blind people complete tasks like hailing a taxi without relying on others. Systems can be made more inclusive as well, trained to understand people with disabilities that affect speech. “That means if a person needs to consume information in a certain medium, AI can act as a go-between; for instance, turning a piece of audio into written text for a hearing-impaired user,” the article suggests. This switching between mediums of information can be helpful even for non-disabled students, who can adjust the way they are learning in school to their own personalized way of understanding the content.
AI can also benefit non-native English-speaking students. I’ve witnessed an English speaker at the front desk of a doctor’s office use an AI-powered translation app to assist a patient still learning the language. The same innovation can be applied to students. A study published through the National Library of Medicine examined AI use by non-native English-speaking scientists writing scientific literature. It concluded, “The AI-based programs can assist with various aspects of the scientific writing process, including searching for relevant papers, generating summaries, correcting grammatical errors, improving writing style, and creating bibliographies.” I myself have had to write scientifically in some of my classes, so a tool that makes those assignments more accessible to non-native English speakers could be revolutionary.
However, despite all of AI’s potential value and convenience, there are many apparent downfalls as well. The Office of Educational Technology has raised concerns about privacy in AI usage in schools. Using students’ data to train AI is morally questionable at best. What happens when an AI bases the information it repeats on what a student says? AI isn’t always trustworthy, and its technology is so advanced that convincing deepfakes and voice mimicry are commonly generated online. In fact, in 2023 at Westfield High School, AI was used to spread sexually explicit images of students across the school. If AI is normalized in schools while privacy remains an issue, what AI can create with the data a student provides could endanger their safety both inside and outside of school.
Furthermore, bias in AI is also called into question. In 2018, Amazon shut down a new AI recruiting system after it showed a bias against women. Although this happened years ago, and technology has evolved exponentially in that time, the concern remains legitimate. Chapman University explores how stereotyping bias can occur when AI analyzes information based on harmful ideals, making harmful assumptions from the skewed data it is provided with. Inaccurate information produced by AI bias can not only harm a student as an individual and make them feel unsafe in school, but can harm their education as a whole.
With all of this information in mind, it may be difficult to reach a clear conclusion about what to do with AI in schools. When questions of ethics and education make an unbiased opinion nearly impossible, it’s best to turn to someone with firsthand experience teaching students in a generation that uses artificial intelligence. Kelly Bales, an AP Government teacher at Tates Creek High School, spoke to me about her views on the matter.
“It’s a quick fix for something that needs more serious thought,” she said. “The learning is missed out when kids want to just get the quickest answer.” She finds the lines blurred when it comes to AI use in schools, and believes that instead of allowing it without supervision or banning it altogether, a middle ground could potentially be reached. “Maybe have a class where all kids are required to take it, and they do certain checkpoints, certain exercises on how to check resources,” she suggested. “As a history teacher, I have to worry about when kids are throwing out big words at me and they have no idea what they mean.”
As a history teacher, Bales also struggles to help students grasp context as it relates to history, since their ideas of how things happened have become skewed. Because students are now so heavily trained to scrutinize whether things are AI-generated, they’ve become hypervigilant to the point of discrediting history. Bales has even had some of her students question whether credible textbook images are just random fakes. This shows how AI use has grown to impact all students, not just those who use it for school.
But there is a potential solution, one that could balance the benefits of AI while taking its negatives into account. A class on AI would allow students to learn how to find credible sources instead of relying on technology to provide answers automatically. This could encourage not only independence from AI, but learning as a whole. Programs like ChatGPT may be more convenient, but when it comes to academics, the issues of privacy and bias make fully trusting the information they produce nearly impossible. Suppressing AI usage would put certain students at a disadvantage, but providing all students with the necessary education on how to use AI ethically and beneficially would put them on a level playing field for navigating this ever-evolving technology.