By Nancy Azzam
As the world continues to grow and expand, artificial intelligence (AI) has expanded with it, raising concerns about how it is used.
Easily accessible AI technology and websites let everyday computers simulate human learning, comprehension, problem-solving, creativity, and much more. Common AI tools include ChatGPT, QuillBot, and Grammarly.
AI usage provides numerous benefits such as reducing human error, saving time, assisting with tasks, generating ideas, and making unbiased decisions.
However, AI has also had an outsized impact on education. Many students now use AI platforms to complete assignments, sometimes violating academic integrity policies by submitting work that is not their own when solving homework problems, taking tests or quizzes, or writing essays for their courses.
As a result, many teachers have turned to AI detection tools like Originality.ai, GPTZero, and Turnitin's detection service, aiming to identify work completed with the help of AI and to ensure academic honesty.
Despite teachers' efforts, these tools are not always reliable. AI detectors analyze linguistic and structural features of a text to estimate whether it was likely written by a human or by AI. Yet AI detecting AI can make mistakes, too, often mislabeling human writing as AI-generated.
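To make the idea of "linguistic and structural features" concrete, here is a deliberately simplified sketch in Python. It measures one feature sometimes discussed in connection with detectors, often called "burstiness": how much sentence lengths vary. The assumption that human writing tends to vary more than AI writing is a rough heuristic, not how any commercial detector actually works, and a single statistic like this would be far too crude to grade students with.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' score: the variance of sentence lengths (in words).

    Real detectors combine many features with trained models; this single
    statistic is only an illustration of the general approach.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.variance(lengths)

# Short and long sentences mixed together: high variation.
human_like = ("I ran. The storm came out of nowhere and soaked "
              "everything we owned. We laughed anyway.")
# Every sentence the same length: no variation at all.
uniform = ("The cat sat on the mat. The dog lay on the rug. "
           "The bird perched on the branch.")

print(burstiness(human_like) > burstiness(uniform))  # prints True
```

The example also shows why such features misfire: a careful human writer with an even, polished style can score just like the "uniform" text and be flagged unfairly.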
“I think that AI detectors can have some inaccuracies when attempting to distinguish human writing from AI-generated writing,” senior Elianna Sogomonov said. “I do think teachers should approach using detectors with caution so students don’t have to face punishment for something that they didn’t do.”
False positives, where a detector incorrectly flags human-written work as AI-generated, are an ongoing problem. For instance, Turnitin, a widely used platform for academic integrity checks, acknowledged in June 2023 that its AI detection tool had a higher false positive rate than the company originally reported. Though Turnitin claims this rate is under 1 percent, the potential for wrongful accusations has led some universities, including Yale, Vanderbilt, and the University of Maryland, to restrict or ban the use of such tools.
These issues highlight a larger problem: current AI detectors are inconsistent and sometimes unreliable. This can lead to students being falsely accused of misconduct without substantial evidence.
The International Journal for Educational Integrity suggests that although AI detection tools can offer useful insights, their limited accuracy means they should be combined with a teacher's own review and evaluation. Teachers should continue to read student work themselves rather than relying on detection platforms alone.