9 Comments
Graeme

That's very interesting. And I can see how, in vocational education, the MCQs bridge the gap between indirect feedback and what apprentices want, which is often just to be told what to do.

John Brown

This exact experiment was initiated by us at WhatWorked Education with some schools in the north east, so great minds! Our study gave A-level students either AI or human feedback on long-answer A-level paper questions, then compared the gain in marks on the redraft. One concern was how to control for the amount of AI assistance students used, e.g. some might get AI to rewrite whole sections. Our solution was to ask students to redraft and resubmit within teacher-supervised sessions with no live access to AI, so they couldn't go away and secretly get AI to expand on its points and propose rewordings. When I moved on from WhatWorked they seem to have dropped the study, which seems a shame, because the impact of AI writing advice seems central to how AI is going to be of most use. Will watch eagerly for your results.

Ben Rodgers

Hi Daisy,

Is there any support for handwriting in these assessments?

Daisy Christodoulou

The tasks will be handwritten, and teachers will be able to see instant transcriptions when they judge them: https://help.nomoremarking.com/en/article/human-judging-with-ai-transcriptions-b9gojw/

Harriett Janetos

Fascinating! I'm very interested in this: "These students will get a set of questions entirely designed by AI. The questions will focus on more creative aspects of writing." When I taught high school seniors a million years ago, I developed a revision guide that zeroed in on the features of a given genre (e.g. thesis, concessions, arguments, evidence, counterarguments for the persuasive essay), which I found improved final drafts. It was the subject of my master's thesis: Teaching the 'F' Word: Getting Form without a Formula Using Procedural Facilitation.

Daisy Christodoulou

We will share some examples of the questions soon. One tricky aspect is that the AI won't know exactly what the students have been taught beforehand, or what metalanguage they do and don't know.

Jan

PS: Definitely not having a dig at your research. Asking questions is everything.

Jan

I'm quite tempted to answer the question by saying that I'm not sure you can ever know how effective your feedback is. Certainly not in the short term.

I think the most influential factor in improving my writing as a school student, at both primary and secondary phases, was the authors I was reading. Whatever I was reading, I'd end up writing in that style. I'm sure that wasn't uncommon then, or now.

When I was teaching primary-phase kids I was well aware that I had colleagues whose own writing skills weren't that good. They had obviously written their way to a degree and a teaching qualification, but many didn't read much for pleasure and certainly didn't write for pleasure. I saw the same when I was monitoring schools for my L.A.: I'd read comments in kids' books that I didn't agree with.

I guess it comes down to being specific about what you are giving feedback on. If it's on grammatical construction and spelling, then maybe they need more input rather than feedback. If it's about the quality of their writing per se, does that put the teacher in the role of critic rather than assessor? There are professionally published authors who never put pen to paper but audio-record their writing and have it transcribed. Does that mean they haven't actually written it, I wonder?

Andrew Berwick

Hi Daisy, I agree it’s a really exciting opportunity to test different approaches! What are you seeing as the variable here? Is it whether schools have opted in to the AI features (so individual AI feedback vs no AI feedback)? Or is it a year-on-year comparison of the impact of the MCQs?