Earlier this week, I spoke about all the topics in this post with Tom Rogers on Teachers Talk Radio. You can download and listen to the episode here and subscribe to the Teachers Talk Radio newsletter here.
At No More Marking, we’ve assessed nearly 3 million pieces of student writing using Comparative Judgement (CJ).
The vast majority of these have been English language assessments of some kind, where we are judging the quality of students' writing.
What about subjects like literature, history and geography, where the focus is not just on the quality of the writing but on the understanding the students display?
Removing the tyranny of the mark scheme
We have some experience with these kinds of essays, and in many ways Comparative Judgement lends itself brilliantly to them. That's because CJ lets you respond to each piece holistically, without getting bogged down in pedantic, tick-box mark schemes. I think these kinds of mark schemes have had a particularly debilitating effect on history and literature assessment.
If CJ is so good at addressing this issue, why haven't we run more history and literature assessments? The major reason is time. CJ is quicker than traditional marking - but it still takes time, and we still need lots of qualified history teachers to do the judging. In most schools there just aren't as many history teachers as English teachers, which makes running our large-scale national assessments much harder.
Improving efficiency
However, all that is now changing with AI. We have successfully added AI judges to our English language assessments, and if we follow the same model for other subjects, we could reduce the amount of time it takes to judge by 90%. This model would still involve every piece of writing being seen twice by humans, and would provide us with live validation of the quality of the AI judging.
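To make the 90% figure concrete, here is a back-of-the-envelope sketch. The class size, views per script and seconds per judgement below are illustrative assumptions for the sake of the example, not the actual project figures - the real efficiency calculations are linked further down.

```python
# Rough illustration only: all numbers are assumptions, not No More Marking's
# published figures. See the efficiency calculations page for the real workings.

scripts = 30                 # assumed class size
views_full_human = 20        # assumed human views per script in a fully human CJ model
views_hybrid = 2             # every script still seen twice by humans; AI does the rest
seconds_per_judgement = 30   # assumed time for one paired comparison

def human_minutes(views_per_script: int) -> float:
    # Each judgement is a paired comparison, so one judgement covers two scripts.
    judgements = scripts * views_per_script / 2
    return judgements * seconds_per_judgement / 60

full_human = human_minutes(views_full_human)   # 150 minutes of human judging
hybrid = human_minutes(views_hybrid)           # 15 minutes of human judging
print(f"Fully human: {full_human:.0f} min, hybrid: {hybrid:.0f} min, "
      f"saving: {1 - hybrid / full_human:.0%}")  # saving: 90%
```

Under these assumed numbers, the hybrid model works out at roughly 15 minutes of human judging per class - which is where the figure in the next section comes from.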
1066 and all Chat
We’ve run some positive internal trials, and this week we launched our first large-scale national history project - CJ History. It will assess Year 7 responses to the essay question ‘Why did William win the Battle of Hastings?’ We think a teacher will be able to assess their entire class’s essays, and get personalised feedback on all of them, with just 15 minutes of judging. It is currently free to take part.
You can read more about the project here.
You can read FAQs here.
You can see how the efficiency calculations work here.
The project calendar is here.
Register for our info webinar on Wednesday 23 June at 4pm here.
Subscribe here.
The project will also provide schools with nationally standardised data on their students’ performance - which is very rare for KS3 history!
Greater validity
One of the issues we’re most interested in exploring is the balance between analysis and narrative. For example, let’s compare two made-up essay extracts responding to the Hastings essay question.
Which is better? You will have your own view, but if you mark with a typical mark scheme, you might say the first, because it is “analysing”. But if you didn’t have to use a mark scheme, you might argue the second is better.
I think there’s an argument that modern history and literature mark schemes have overemphasised generic analysis skills and forgotten the extent to which analysis is underpinned by an understanding of narrative. This project will let us explore this question in more depth.
You can hear me discuss this point at greater length with Tom Rogers (a current history teacher!) in the Teachers Talk Radio episode here, at the 54-minute mark.