Combining different assessment types for greater insights
How do we assess writing?
Over the last six years, we have used Comparative Judgement to assess nearly 2 million pieces of students’ writing. Comparative Judgement is an innovative assessment technique that provides much more reliable assessments of open tasks like writing than traditional marking. It does not use artificial intelligence. Instead, it combines hundreds of thousands of human comparative judgements to give every student a reliable score.
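The post doesn’t say which statistical model sits behind this aggregation, but one common way to turn pairwise judgements into a per-script score is the Bradley–Terry model. Here is a minimal Python sketch under that assumption — the function name and data are illustrative, not No More Marking’s actual implementation:

```python
def bradley_terry(judgements, items, iters=100):
    """Estimate a strength score per item from pairwise judgements.

    judgements: list of (winner, loser) pairs from human judges.
    Returns a dict mapping each item to a score; higher = judged better.
    Uses the classic MM (minorise-maximise) iteration.
    """
    strength = {i: 1.0 for i in items}
    wins = {i: 0 for i in items}
    for winner, _ in judgements:
        wins[winner] += 1
    for _ in range(iters):
        new = {}
        for i in items:
            # Sum 1/(s_i + s_j) over every comparison item i took part in.
            denom = sum(
                1.0 / (strength[a] + strength[b])
                for a, b in judgements if i in (a, b)
            )
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())  # normalise between iterations for stability
        strength = {i: s * len(items) / total for i, s in new.items()}
    return strength
```

With judgements `[("A", "B"), ("A", "C"), ("B", "C")]`, script A ends up with the highest estimated strength and C the lowest — many such judgements, pooled across judges, is what lets every script be placed on one reliable scale.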
What have we learned from all of these assessments?
One striking finding that holds true across many of our schools and countries is that students often struggle to write accurate sentences. In particular, students often write sentences that are incomplete (fragments) or that fuse multiple clauses together without appropriate punctuation (run-on sentences). Both errors make their meaning harder to understand.
Here are two examples.
How can we learn more about students’ understanding of sentences?
We wanted to find out more about students’ understanding of sentence structure. So we designed a set of 20 simple multiple-choice questions to try to shed some light on why students make these kinds of errors. Here are two questions from the set.
These two questions show how two items targeting the same concept can still pose very different challenges - something we have written about before. The questions are structurally similar: both test students’ understanding of sentence fragments. But despite this structural similarity, the surface features make a big difference: students find one question very easy and the other much harder. In our first trial of these two questions, with a couple of thousand Year 5 students in England, 91% got the first question right but only 13% got the second one right.
Why is there such a big discrepancy? We think that students don’t understand what makes a sentence, and instead focus on surface features - in this case, sentence length. The correct answer to question 5 looks about the right length, while all the other options are very short. But in question 6, sentence length leads students astray: the correct answer is very short, so students don’t think it can be a sentence.
What’s more important - CJ writing assessments or MCQs?
Comparative Judgement is an innovative and pioneering assessment technique that lets you assess writing directly and holistically.
MCQs are an older assessment technique that do not directly assess writing, but they can provide more granular and precise information about specific aspects of it. But I think they have two main weaknesses: they are hard to design well, and if they are used on their own they can become the end goal, so that schools forget about teaching actual writing.
We think that combining CJ and MCQs adds a lot of value, and that CJ can offset the traditional weaknesses of MCQs. Using actual samples of student writing to design the MCQs gives us greater insight, and the existence of the CJ writing assessment means we don’t lose focus on the writing itself.
Precise and personalised feedback
We’ve automated the feedback from our MCQs, so students can instantly get precise and personalised feedback on the mistakes they’ve made. Here’s the feedback a student would get if they got Question 6 wrong.
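Under the hood, automated feedback like this can be as simple as a lookup from each wrong option to a message targeting the misconception behind it. A minimal Python sketch — the question ID, option letters, and messages below are invented for illustration, not our actual feedback content:

```python
# Map (question, chosen wrong option) -> a message targeting the likely
# misconception. All content here is illustrative, not real feedback.
FEEDBACK = {
    ("Q6", "B"): "Look again: a sentence can be very short and still be "
                 "complete if it has a subject and a verb.",
    ("Q6", "C"): "This option has no main verb, so it is a fragment, "
                 "not a sentence.",
}

def feedback_for(question, chosen, correct):
    """Return instant, distractor-specific feedback for one MCQ answer."""
    if chosen == correct:
        return "Correct!"
    return FEEDBACK.get((question, chosen),
                        "Not quite - re-read the options and try again.")
```

Because each distractor encodes a specific misunderstanding, the message can address that misunderstanding directly rather than just marking the answer wrong.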
We’ve written on this Substack in the past about the challenges of automating feedback. Written comments, whether produced by artificial or human intelligence, are often not precise enough to be really useful. In contrast, research by Butler and Roediger shows that precise feedback on multiple-choice questions does help students to improve. We’ve designed our MCQ feedback around this research.