By Chloe Hsieh
In the Granada Hills Charter (GHC) Parent-Student Handbook, one major section, labeled “Academic Integrity Policy,” is dedicated to an issue every school must confront: cheating. Last fall, the Handbook was updated to address artificial intelligence (AI) use, especially as it pertains to cheating.
“YOU ARE CHEATING IF you use Artificial Intelligence (AI) such as ChatGPT or other AI tools to complete assignments of any kind, including consulting at any step in the process, without the explicit permission and supervision of the teacher,” reads a section in GHC’s schoolwide anti-cheating policy.
And true to this mandate, GHC teachers have tried to stamp out AI-based cheating in their classrooms. Teachers promise instant zeroes on assignments containing AI-generated responses and use sites like Turnitin.com to catch student AI use; some even have GoGuardian block ChatGPT on student Chromebooks during their class periods.
However, there is a glaring double standard in this “no-AI” rule: while students are prohibited from using AI, the rule doesn’t apply to teachers.
“I’ve had multiple teachers who’ve used AI for school,” junior Amelia Verceles said. “One of my teachers has a seemingly firm stance against students using AI, but it’s very clear that many, if not all, of the questions on some of the assignments I’ve received were created with AI.”
Verceles went on to describe the AI-generated worksheets she’d been given in the past. Some even included invented “quotes” from the assigned article, a mistake AI tools are known to make. Several questions were also repetitive, mere restatements of one another, another common feature of AI-generated text.
Many teachers include an anti-AI policy in their syllabi. So why are they able to use AI to cut down the effort of creating class materials, while students are obligated to treat shallow, potentially nonsensical problems as legitimate analytical questions?
“If you’re going to use AI to create assignments, at least have the decency to actually look at and revise the questions to make sure the assignment is actually intellectually stimulating and not just busy work,” Verceles said. “I’ve had entire worksheets that have the exact copy-and-pasted ChatGPT style. It’s honestly really infuriating.”
For students, it isn’t just an issue of assignment quality. To many, it’s a matter of teacher-to-student respect.
“At the start of the year, we’ve always done a ‘social contract’ in every teacher’s class at Granada,” junior Nola Lew said. “And it always says something along the lines of mutual respect and trust, treating others the way you want to be treated. But as students, we aren’t allowed to use AI, since it shows a lack of effort and learning. Yet our teachers are allowed to make any assignment they want using AI. They are even encouraged to do so by the administration. It’s an unfair double standard.”
Teachers should be modeling the effort and authenticity they want to see from their students.
“We are here to learn from a teacher, expecting a well-thought-out, unique approach to a subject that they are an expert on,” Lew said. “If I wanted to learn from an algorithm designed to give me questions, I’d just go on an AI site myself and never go to class. Seeing AI influence in my assignments discourages me from trying: why should I try if my teacher doesn’t?”
AI in schools isn’t going away anytime soon, but one thing is clear: double standards undermine trust. If GHC expects students to uphold academic integrity by avoiding AI shortcuts, teachers must hold themselves to the same standard, or at least be transparent about when and how they use AI.
It’s time for GHC to have a real conversation about what responsible AI use looks like on both sides of the classroom.