5 Comments
Colin Brown

My school has been using NMM for several years now. We spotted the transcription issues last year, which I suspect are a particular problem with infant writers. However, conscious that we are usually working with a low-stakes assessment, we have continued to use AI-enhanced marking, as it is far less burdensome to administer. We are now thinking that, at least for young writers, we will use AI for no more than 50% of the marking, for two reasons. First, it helps to flag the AI/human disagreements that result from weaker transcription models; usually this also shows up in infit scores. Second, as a team of teachers we are interested in reviewing what writing looks like across our cohort; relying on AI meant we saw very few pieces of student writing, so it was challenging to get a real perspective on our year's progress.

Daisy Christodoulou

Yes, we definitely feel it is important to keep humans in the loop which is why we've built the system the way it is.

Ed Jones

Daisy, do you write or talk anywhere about how the AI keeps the teacher informed about where each child is in their writing?

Daisy Christodoulou

Definitely. We are constantly working on the information we give to teachers and students. This is a summary of our feedback philosophy, although it has moved on a bit since then: https://substack.nomoremarking.com/p/bringing-our-feedback-philosophy

Ed Jones

How do you think this will affect your pricing model going forward? Say, in one, two, or three years?