
The role of AI checkers in schools

14 March 2025

AI Checkers in Schools

With the unstoppable rise of generative artificial intelligence (“AI”) tools, schools are wise to be cautious about students using AI for their essays and coursework. Some schools have turned to AI checkers to help detect AI-generated content. This approach has not been without problems and is not a foolproof solution.

Benefits of AI checkers

AI checkers can reinforce academic integrity by identifying AI-generated content, ensuring that students submit original work. This can help maintain the credibility of schools, particularly where that work is assessed externally by an exam board. However, checking student submissions can be an issue of scale.

In 2024 there were nearly 5.7 million GCSE entries and nearly 817,000 A-Level entries. Identifying students who have used AI to submit work that is not their own is therefore a significant logistical challenge. AI checkers can process large volumes of submissions quickly, reducing the workload on staff. By detecting AI-generated content, schools can also identify students who may be struggling and are leaning on AI tools to keep on top of their studies. Rather than punishing such a student, the school could issue a warning and then provide support and resources to guide them through their studies.

Risks of AI checkers

AI checkers are far from infallible and are not necessarily the solution. These programs may produce false positives, flagging genuine work as AI-generated, or false negatives, letting AI-generated work pass undetected. AI detection companies advertise high accuracy levels, most at 98% or higher: a quick search of three separate companies shows one claiming 99% accuracy, another 99.12% and a third an accuracy level of 99.98%.

Vanderbilt, a private research university in Tennessee, has said this is not high enough. In 2023 it announced that it had stopped using Turnitin, which claims 98% accuracy. Turnitin reported that 3.5% of submissions were detected as containing between 80% and 100% AI writing. Nevertheless, Vanderbilt said that even a false positive rate as low as 1% was too high. If all UK GCSE and A-Level essays or coursework papers were checked with a 1% false positive rate, about 65,000 papers would be incorrectly flagged as having some of their content written by AI. That number rises further if schools submit more than one paper per student or include students in years other than 11 and 13.
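As a rough back-of-the-envelope check, the 65,000 figure follows from the entry numbers above. A minimal sketch, assuming the rounded 2024 entry counts cited in this article and one paper per entry:

```python
# Back-of-the-envelope check of the false-positive estimate above.
# Entry counts are rounded assumptions based on the 2024 figures cited in this article.
gcse_entries = 5_700_000      # "nearly 5.7 million" GCSE entries in 2024
a_level_entries = 817_000     # "nearly 817,000" A-Level entries in 2024
false_positive_rate = 0.01    # the 1% rate Vanderbilt still considered too high

total_papers = gcse_entries + a_level_entries          # assumes one paper per entry
flagged_in_error = total_papers * false_positive_rate  # papers wrongly flagged as AI-assisted

print(f"{flagged_in_error:,.0f} papers flagged in error")  # ~65,170, i.e. roughly 65,000
```

The same calculation scales linearly: two papers per student doubles the number of wrongly flagged submissions.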

It is also worth noting that AI detectors are more likely to label text written by non-native English speakers as AI-written. This risk may be reduced where most A-Level students speak English fluently, but it remains a fairness concern for those who do not.

There are also privacy concerns. Schools are responsible for a large amount of student data, and submitting it to an AI checker must be done in a manner that complies with the GDPR's requirements on processing, security and retention of data.

While AI checkers can help with the scale of student submissions, they could lead to over-reliance. There still needs to be human oversight and judgment to maintain academic standards. This concept of a “human in the loop” is present in both the EU AI Act and the new UK Data (Use and Access) Bill.

Next steps for schools

Schools should address artificial intelligence, and the companies they use to check work, head-on:

  1. Adopt an AI policy: according to a recent survey, more than 72% of organisations do not have a company policy on the use of generative AI. Schools should adopt a policy and set out whether, and how, staff and students may use AI
  2. Implement robust data privacy: schools handle vast quantities of student data and are already aware of their requirements under GDPR. They should factor in the use of AI checkers to ensure compliance with data protection obligations. This includes securing student data, obtaining informed consent, and providing transparency about how data is used
  3. Regularly evaluate and update AI checkers: to cut down on the number of false positives and negatives, schools should regularly evaluate the performance of AI checkers and update or change them as appropriate
  4. Training: staff and students should be given relevant training on the use of AI and AI checkers. In particular, this should include informing them about the capabilities and limitations of AI
  5. Human supervision: it is vital for schools to ensure human oversight of all aspects of AI. They should also establish an appeals process for students who contest work flagged as AI-generated.
