"You cheated!" - When the AI catches the wrong person
Picture: Iryna Imago / shutterstock.com
The university hunts fraudsters - and ruins careers in the process
The training is almost complete, the job applications are under way and the dream job seems within reach - then suddenly an email with the subject line "Academic Integrity Concern" lands in your inbox. As the Australian news portal ABC News reports, this is exactly what happened to 22-year-old Madeleine, a trainee nurse at the Australian Catholic University (ACU). The accusation: cheating with AI.
Not because there was solid evidence, but because an AI detector - Turnitin's tool - rated her essay as supposedly 84 percent AI-generated. That was enough for the university to launch proceedings that dragged on for months. During this time, Madeleine's degree certificate was blocked with the note "Results withheld". Job application: failed. Career: on hold. Life plans: on pause.
The outcome? After six months it emerged that it had all been a mistake.
6,000 allegations of fraud - 90% due to alleged AI use
As ABC writes, according to internal documents, the ACU recorded almost 6,000 suspected cases of academic misconduct in 2024 alone - around 90 percent of which were related to AI. In many cases, the only "evidence" was a report from an automated detector.
And that is not just a software problem - it is a structural one. The burden of proof was reversed: instead of the university having to prove misconduct, students had to prove their innocence.
Handwritten notes, browser histories, digital drafts: anyone who couldn't provide a folder full of documents was in a bad position. Data protection? Not a chance. Stress? Guaranteed.
The kicker: the provider itself states on its website that Turnitin "should not be used as the sole basis for decision-making" - precisely because the detector is often wrong. Nevertheless, the ACU apparently trusted the program's verdicts for months.
AI competence? More like chaos!
The students are frustrated - but the teaching staff are also sounding the alarm. Leah Kaufmann, Vice President of the teachers' union and an ACU lecturer herself, speaks of "overburdened staff" and constantly changing rules. What is allowed today may be forbidden tomorrow - and no one really knows how to deal with AI in everyday university life.
Professor Danny Liu from the University of Sydney believes a ban is the wrong approach. Instead, his university has introduced a "two-track system": Students are allowed to use AI for certain tasks - but in a transparent, reflective and controlled way. His motto: "We check whether someone is learning - not whether they are cheating."
In the age of AI, this approach is more important than ever. Of course, anyone who simply has entire assignments generated has no business studying. But students who are prejudged on the basis of an unreliable detector never get a fair hearing - what they have actually learned is simply ignored.
A critical assessment
You cannot preach digitalization while blindly trusting software that is demonstrably error-prone. Institutions that wield automated tools like a cudgel against students damage not only those students' futures but also the credibility of the entire education system. Anyone who sees red at 84 percent of an essay highlighted in blue should start over - with real AI expertise and common sense.