In previous posts, we’ve outlined our new Y6 assessment, which uses AI to assess redrafting skills, and we’ve also shared results from our pioneer school, the amazing Shakespeare Primary School in Leeds.
We’ve worked with their deputy head Rebekah Wilson on a qualitative review of how students responded to feedback when redrafting their writing. In this post, we’ll share some of those insights.
Here’s an example of one student who clearly responded to feedback and whose writing improved as a result.
Student A
Student A scored 557 on the original assessment. She was allocated a set of quiz questions on run-on sentences as a result. Here’s an example of one of the questions.
When she came to redraft the assessment, she corrected the run-on sentence in the first paragraph.
She was also given some AI-generated feedback about characterisation.
She responded to that feedback, and altered her story so that the mysterious figure became the mother of the main character! PLOT TWIST!!
Student A’s redrafted piece of writing improved by 69 points, well above the average improvement of 12 points.
Student B
Here’s another example of a student who made some attempt to respond to feedback, but didn’t improve.
Student B scored 503 on the original assessment. He was allocated a set of quiz questions on capital letters as a result. Here’s an example of one.
When he came to redraft his assessment, he amended one of the capitalisation errors, changing eiffletower to Eiffle tower, as you can see below. However, as the extracts also show, he didn’t correct all of the capitalisation errors.
He was also given some AI-generated feedback about characterisation.
He responded to that feedback by naming the kids, as the extracts above show (although he also failed to capitalise their names!)
His redrafted piece of writing scored 21 points lower than the original piece.
Conclusion
Responding to feedback is not always easy. Extended writing has a lot of moving parts! Most students were able to use the feedback to make targeted improvements to their writing, but others struggled. We need to make sure that all students get feedback that makes sense to them and helps them improve.
The other important factor to consider is whether the students who do improve their writing sustain that improvement into the next assessment - after all, the point of feedback is not to improve the writing, but to improve the student!
There is still time for schools to take part in this redrafting assessment. The deadline for registering is Tuesday 6th May, and schools can sign up here for free.
I think it's incredibly difficult to judge whether anyone's writing is improved by taking on feedback. Has the piece improved, as many do, because the student has learned over time, or because specific feedback stuck in their mind? And will it last? With errors such as spelling or capitalisation, I think you can make a good case for saying that a student either knows how to spell particular words or they don't, and ditto for capitalisation. It's actually hard to misspell words you know, unless you're writing at great speed and make the handwritten equivalent of typos. I'd also suggest that young writers, certainly at primary phase, are often encouraged to over-write everything. I used to refer to it as Literacy Strategy Syndrome: no piece of writing could be judged even adequate unless adverbs and adjectives had been liberally sprinkled. Most acclaimed professional authors know that often, less is more. Perhaps it comes back to the question of what the purpose is in getting youngsters to write or draw or paint or sing. We certainly want to support them in learning how to do whatever it is, but is there a danger that teachers impose their own ideas of what's good through feedback?
Will feedback make students' writing better or worse? Well, that will mostly depend on the complexity of the text that needs to be revised and the complexity of the idea or image that the student wants to express. It will also depend on the skill of the person providing feedback. We cannot assume that all teachers are skilled enough at writing for their feedback to be useful, particularly as students get older and begin to attempt to express more sophisticated ideas and stories. It seems your study is using the AI to go for the low-hanging fruit: run-ons, capitalization, etc. Which is fine. The course that my colleague, Dr. Anna Incognito, and I created for our middle school students, The Craft of Language, focuses on the sentence. Over the years, I was intrigued to see how much trying to make sense of one's sentences leads to meaningful revisions and edits. So I like looking at sentences as low-hanging fruit that can be quickly assessed with tools like Grammarly, and that can also bear fruit for the student in reassessing what they are trying to express. Targeting periods, commas, and capitalization makes sense for primary students, since those devices are the basics of making ideas clear. However, as students get older, I concur with your finding that targeted feedback does not always transfer to consistent self-editing. The problem there is repetition and, of course, the student's abilities, interest, and level of perseverance. So a spiral curriculum that keeps revisiting similar skills at a higher level is important for student progress over time. I am really enjoying your posts. They really get me thinking about my own practice as a teacher.