Team of Teachers Draft New A.I. Policy
Guidelines for ethical uses of A.I. technology / Credits: Sara Weinrod
Artificial intelligence (A.I.) can be a valuable tool for students. Keeping in mind that A.I. can both help and hinder students, a team of teachers at Walls has worked to create an official school A.I. policy.
The team building the policy includes the heads of each department, who have worked to tailor the policy to their individual subjects. The humanities department might focus on using A.I. as a “tool for research,” whereas, for the arts, “it’s much more complicated and complex,” given the varying definitions of art.
The new policy, implemented this year, requires students to cite generative A.I. tools — such as ChatGPT — “just like any other source,” according to Humanities teacher Carolyn Schulz.
The policy will be enforced using the A.I. detector on Turnitin.com, which indicates whether A.I. has been used without citation to write student work. According to Ms. Schulz, this method has been successful in detecting A.I. usage in student work this year.
If use of A.I. is detected, students will be subject to the standard plagiarism policy. As Ms. Schulz explained, “the first instance is a zero, the second is a zero plus a verbal reprimand and you meet with the administration, and then the third one is elevated to you possibly might be removed from school.”
ChatGPT, an A.I. chatbot with the potential to help students with assignments, offers important educational benefits that Walls does not want to discount. “[Students] have the ability to go onto ChatGPT, for instance, and just say ‘can you give me a list of topics that are controversial topics around healthcare?’ and it could be really good for the brainstorming phase,” Ms. Schulz said.
She cited Grammarly, a program that uses A.I. to correct spelling, grammar, and punctuation errors and to help edit and revise writing, as an example of a positive use of A.I. technology. “Grammar is good to know, but if you have a grammar editor, how amazing is that?” she said.
ChatGPT is “kinda a necessity for a school that has such an intense workload,” added Kate O’Brien (‘24). “A.I. in a way has become a life saver. I use it weekly.”
Conversely, for Theodore Mores (‘26), ChatGPT is often more of a hindrance than a resource. “I’ve used it before and found it to be a hassle,” he said, “and none of the answers have come to aid in my work.”
Regardless, excessive use of ChatGPT may hinder students’ academic development. Ms. Schulz explained that the technology could prevent students from “developing strong research skills because [they’re] limiting those skills to one search engine.” She added that it has created a “gray area of understanding what is authentically your own work and what is somebody else’s work . . . It dumbs it down for students, so it’s not creating those critical thinking skills.”
For students like Mores who don’t often use ChatGPT, the policy will have a relatively small impact. “I don’t know, it might be good, it might be bad,” he said, “or it might just be a useless answer to an already unsolvable problem.”
“For a lot of people, the easy way out would be abusing platforms like [ChatGPT],” O’Brien said. “Although it’s kinda a double-edged sword, I would definitely be excited to see how this policy can make a positive change on the future of Walls.”
For Ms. Schulz, the crux of the issue is that “we’re going to lose authenticity, genuineness, [the] voice of humans, right? The cost [of reliance on ChatGPT] is your ability to develop as a human being.” In crafting this policy, she hopes to prevent A.I. from restricting “the skills that [a student is] developing as a researcher, as an investigator, as someone that’s going to be an informed human being.”