How should England's curriculum & assessment review respond to AI?
Why we have to go backwards to go forward
Just under a year ago, Britain’s new Labour government announced a review of England’s curriculum and assessment system. In that short time, developments in generative AI have obliterated some of our basic assumptions about assessment.
I’ve argued in the past that it’s fine for schools to take time to respond to new technology, and that they don’t have to change everything in response to passing fads.
But there comes a point where new trends are impossible to ignore. We are beyond that point now.
Over the last few weeks, I’ve caught up with a few friends and former colleagues who are teachers in secondary schools. They are seeing these impacts up close and were pretty pessimistic about what is happening. One of them pointed out that the current Year 10s, who will be taking GCSEs in just over a year, were at the start of Year 8 when ChatGPT launched. Generative AI has essentially been a permanent feature of their secondary school experience, and many students have used it extensively to complete written homework tasks. It’s entirely possible that, as a result, this cohort has had less practice writing than any previous equivalent cohort ever.

There are similar stories from other countries. Last week, an article in New York magazine called “Everyone is Cheating Their Way Through College” exposed just how widespread AI cheating is in the US.
The research bears out the pessimism. In the past two years, there have been dramatic increases in the number of students using generative AI to do their work. In the UK, at university level, the proportion of students using AI for assessments went up from 53 per cent in 2024 to 88 per cent in 2025. Among 13- to 18-year-olds, generative AI use went from 37 per cent in 2023 to 77 per cent in 2024.
Not only that, but you can’t spot its use: AI detectors don’t work. They miss genuine AI-generated work and falsely flag human writing as machine-made.
In short, the growing and undetectable use of generative AI poses a huge threat to the integrity of assessments, and by extension to the integrity of education.
In the worst-case scenario, which may already be here, we end up with a kabuki dance where students pretend to write essays and teachers pretend to mark them.
The columnist Duncan Robinson has a theory that lots of big political scandals are not exposed but merely noticed. They hide in plain sight before they “become” public scandals. I think the extent of AI plagiarism is one of these scandals-in-waiting. Everyone in this world knows it’s a big problem – it just hasn’t filtered through to the general public yet.
However, there are possible happier endings. In the best-case scenario, teachers and exam systems use AI in combination with human judgment to provide faster grades and feedback on work that the students have done themselves.
Here are five things England’s curriculum and assessment review needs to do to make the best-case scenario more likely.
Review the performance of AI marking systems
There is a plethora of new AI marking systems out there. On this Substack, we’ve written a lot about how our system works: you can read more here and here. Ofqual should carry out a research review into how the different types of systems work, with a particular focus on the agreement between human and AI decisions, and the impact AI marking has on student motivation.
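To make the “agreement between human and AI decisions” concrete: one standard way such a research review might quantify it is Cohen’s kappa, which corrects raw percent agreement for the agreement two markers would reach by chance. This is an illustrative sketch with invented toy grades, not Ofqual’s methodology or any particular vendor’s metric:

```python
# Illustrative only: measuring human-AI marking agreement with Cohen's kappa.
# Toy data; a real study would use thousands of double-marked scripts.
from collections import Counter

def cohens_kappa(human_marks, ai_marks):
    """Cohen's kappa for two markers assigning categorical grades."""
    assert len(human_marks) == len(ai_marks) and human_marks
    n = len(human_marks)
    # Observed agreement: fraction of scripts where both give the same grade.
    observed = sum(h == a for h, a in zip(human_marks, ai_marks)) / n
    # Chance agreement: probability both markers pick the same grade
    # if each assigned grades at random in their own observed proportions.
    h_counts, a_counts = Counter(human_marks), Counter(ai_marks)
    expected = sum(h_counts[g] * a_counts.get(g, 0) for g in h_counts) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grades from a human examiner and an AI system on 8 scripts
human = ["A", "B", "B", "C", "A", "B", "C", "A"]
ai    = ["A", "B", "C", "C", "A", "B", "B", "A"]
print(round(cohens_kappa(human, ai), 3))  # 0.619
```

The point of kappa over raw agreement is that a marker who simply awarded the most common grade every time would score well on percent agreement but poorly on kappa, which is exactly the kind of distinction a regulator comparing marking systems would need.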
Revise initial teacher training content
The widespread use of AI has exposed a number of misconceptions about assessments.
There is a lot of wishful thinking about how it’s fine to use AI for exams or classwork because that is what everyone will be using in the workplace. This is a fundamental category error about the purpose of education and assessment.
What matters in an assessment is not the end product; it’s what the end product tells you about the process the student went through to get there.
If a student turns in a perfect piece of work that’s been generated by AI, it’s like using a truck to move weights at the gym or hailing a taxi to take you round the marathon course. Initial teacher training needs new modules on assessment and AI which explain this point clearly.
Eliminate non-examined written assessments
Around the world, everyone is waking up to the fact that unsupervised writing assessments are no longer viable.
England’s regulated assessment system is mostly based around exams, which makes the review’s job easier. We need to stick with exams, reduce or remove written coursework where it still exists, and not be tempted to reintroduce it.
However, there is a wider systemic problem: independent schools can offer unregulated qualifications with significant proportions of non-examined assessment, exactly the type that is ripe for AI plagiarism.
Keep handwritten exams
For years now, we’ve heard that exams need to go digital. But do they?
There are important cognitive benefits to handwriting, and if students know the final assessment is handwritten it will make them more likely to practise using that format too, and less likely to use AI. Plus, AI actually makes it easier to process and transcribe handwritten exam scripts. For example, our software allows teachers to easily switch between an image of the original handwritten script and an AI transcription.
Investigate post-qualification university admissions
Currently, students apply to university with predicted grades. It would be much fairer if they applied with their actual results, but in the current system that is a fiendish logistical challenge.
If AI marking does work well, we could keep the exam calendar as is, get quicker results to students, and run a university admissions process using actual grades at the end of the summer term.
Moving backwards to go forwards
What makes change harder is that we are in a paradoxical situation, for two reasons.
First, we are going to have to completely revamp our education system to deal with AI. But most of the ways we need to revamp it are very old-fashioned and analogue: back to in-person handwritten exams.
Second, some of the most interesting new developments in technology are making these old-fashioned, analogue approaches more efficient and effective. The major argument in favour of digital exams is efficiency and modernisation. But if newer technologies make it just as efficient to assess handwritten scripts, that argument falls away.
Many AI futurists have, ironically, been blindsided by these changes. They are stuck with a pre-generative AI narrative where everything has to happen on screen and where it’s relatively simple to set assessment questions that can’t be Googled.
That world is gone and it’s not coming back.
We have to adapt, quickly.
You can read a shorter version of this article on the Schools Week website here.
Couldn't agree more. But it's infuriating work trying to make these simple points in schools at the moment. There's a wilful Pollyannaishness about unsupervised assessments especially: the idea that a few tweaks here and there will somehow deal with it.
Just on point 5, from outside the UK it seems crazy to use predicted scores. Why not real scores? This has happened fairly seamlessly in Australia for 30+ years, and I can't imagine we're on the bleeding edge. We just have a big clearing house in each state that updates the applications with scores, does some filtering, and passes them along to the unis.
"Keep handwritten exams"
In the US, we've had an increasing number of exams go digital using secure browsers. While this isn't appropriate in all situations or subjects, it's been going better than most folks had predicted. At this point, most states' tests in literacy, math, and science are digital, along with College Board's SAT and AP exams.
As a left-handed person, I appreciate how rarely I'm now in a position of needing to handwrite on paper - my hand is cleaner and my output less smudged!