Interesting take. I don’t think Progress 8 attainment data is at all comparable; it measures attainment, not the effect of an intervention. There are many other studies supporting high-dosage, high-quality tutoring besides Bloom, and others still supporting the kind of spaced repetition you can get with high-dosage tutoring.
The issue with the UK’s post-COVID tutoring investment is that it lacked the high-quality component; in fact, the quality was so low it was actively detrimental. I interviewed a couple of the key players: delivery was outsourced to low-quality, call-centre-style operations in India. It was scammy. So I wouldn’t use that as evidence against Bloom. Read Koedinger’s “An Astonishing Regularity in Student Learning Rate”; it supports both high-dosage tutoring and high-quality, data-backed classroom instruction. The same goes for a recent large-scale Harvard study on the science of reading in classrooms, which found the approach isn’t being upheld despite teacher training. There are so many wrongs stacking up here, inside and outside the classroom.
The point is that focussed, high-quality provision moves the needle irrespective of subject. And that’s expensive. So how do we manufacture it at reduced cost?
Of interest... Critique of Paper: “An astonishing regularity in student learning rate”
by Justin Skycak / @justinskycak
https://www.justinmath.com/critique-of-paper-an-astonishing-regularity-in-student-learning-rate/
"1. The reported learning rates are not actually as quantitatively similar as is suggested by the language used to describe them -- the 75th percentile learners learn 2x as fast per opportunity as the 25th percentile, and even the 75th percentile is far lower than the kind of person we have in mind when we think of somebody who is shockingly good at math.
2. The learning rates are measured in a way that rests on a critical assumption that students learn nothing from the initial instruction preceding the practice problems -- i.e., you can have one student who learns a lot more from the initial instruction and requires far fewer practice problems, and when you calculate their learning rate, it can come out the same as for a student who learns a lot less from the initial instruction and requires far more practice problems."
Completely off topic, but can I just say how glad I am that you have linked that paper. I read it when it came out, then lost the reference and couldn’t find it again, despite asking Twitter back when that was still useful! I still see people referencing the paper it critiques, and I think they need to read it!
I've always been astonished by how uncritical people are about Bloom's claim. It never seems to cross anyone's mind that it might not be entirely true.
The claim is clearly false, at least in the simplistic manner in which it's usually interpreted. Can *every* student experience 2-sigma growth, in *every* subject? Surely not. There are lots of students who will benefit enormously from 1-on-1 tutoring, but there are also students who won't show such progress.
Grim observation: the students who need tutoring and extra assistance the most are generally not able to take advantage of it, because they are so far behind. If you are working with a "B" student, that person is already understanding most of the material, and just needs a little extra help to get up to an "A" level. But if a student is getting "D" and "F" grades, then they are really struggling with the material and a few tutoring sessions probably aren't going to make much difference.
Very interesting. I was always mildly worried about the amount spent on COVID catch-up tutoring, but the confidence of experts in its effectiveness was overwhelming. This suggests perhaps they were overconfident in what it could achieve.
The whole problem is one of scale, and so much of the research on this ignores the dilution you get at scale. One more recent meta-analysis of tutoring concedes that the Bloom paper is flawed but says you can still get effect sizes of about 0.1–0.2 in larger-scale trials. What does it define as larger-scale trials? Those with more than 1,000 students. There are 8 million kids in school in England alone!
I hope I won't get completely shot down in flames, but I've been a tad sceptical about the claims made about the learning students supposedly lost to COVID.
Hi, thanks for this. I like the sporting analogy. My day job is as a sports coach, and I regularly see the misinterpretation of stats in poorly reviewed studies.
An athlete I coached (who sadly died last month) had a PhD in stats. He wrote this for me ten years ago! https://excelsiorgroup.co.uk/improper-application-and-interpretation-of-statistics-with-a-focus-on-sport/
Great article again, and I enjoyed the contrast with Progress 8. It seems to me that the problem with Progress 8 is that while the measure is more robust, it’s hard to know what the crucial elements of high-performing schools actually are.
Brilliant breakdown of the formative vs summative mix-up. I've seen this same error play out in corporate training, where people celebrate 90% pass rates on module quizzes and then wonder why, six months later, no one remembers the content. The keepy-uppy analogy is perfect too: back when I was trying to learn Spanish through an app, I'd crush the daily lessons but couldn't hold a basic conversation in real life.
Well done. You may also enjoy Paul von Hippel in Education Next on the same topic.
I think you're right about the scalability. A great teaching assistant can really help. Others make little difference. Maybe we should have a growth mindset about training teaching assistants.
There are plenty of training courses available for TAs, whether as part of in-service training or for those hoping to secure a post. In many areas of the UK this is a sought-after post, and many TAs are themselves graduates. The days when TAs were recruited mainly from parent helpers have, I think, all but passed.
I agree absolutely that learning is a change in long-term memory and shouldn't be equated with the quick fix that gets someone a high grade in an exam. Yet another pebble on my pile of reasons why I feel the UK education system, particularly the exam component, needs a complete overhaul.

I'm also less than convinced that one-to-one teaching is by definition superior to any other kind. Context is everything. I recall Jamie Oliver referring to being called out of class for one-to-one extra help. This was a common feature when I started teaching in the 1980s, when state-funded schools had most of their budget controlled by their LA. A peripatetic teacher would be assigned to the school to support kids identified as needing additional help with reading and writing. I'll be honest and say that many of these teachers were unqualified for the task: they were qualified teachers, but had limited knowledge of SEND even as it was defined then. They could have worked one-to-one all day with those kids and I don't believe it would've made a significant difference to their progress or attainment. It hugely depends on the quality of the tuition and the relationship between the learner and the teacher.

You also mentioned the relationship to class sizes. I would say there's an optimum class size, and going too small is often disadvantageous to learning. We each learn from others as much as from the teacher.
Agree about optimal class sizes, though I suspect it's bimodal: 1:1 is probably best even if Bloom is massively overblown, but for my subject at least, 20:1 is vastly preferable to 10:1. There are just certain highly effective pedagogical "moves" that feel awkward and stunted without a big group buying in.
A quick additional comment about Bradman. That statistical analysis of ~6 SDs is based on the group of batters who scored over 2,000 Test runs, which makes for a more reliable sample but surely raises the average of the comparison group. If all Test batters were included in the analysis, surely DGB would score even higher than 6.66 - probably in the order of 10. (I also don't think he's the greatest cricketer of them all; even he himself agreed that Sobers, possessing all-round brilliance, was more versatile in skill set.)
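[Editor's note] The point about the comparison group is worth making precise. A qualifying cut-off raises the group mean, but it also narrows the group's spread, and a z-score depends on both; whether the final figure goes up or down depends on which effect dominates. A minimal sketch with made-up numbers (Bradman's 99.94 average is real; the simulated population is purely illustrative, not real cricket data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative career batting averages for a population of Test
# batters, including tail-enders (not real cricket data).
averages = rng.normal(loc=25.0, scale=12.0, size=10_000)
bradman = 99.94  # Bradman's actual Test average

def sds_above(value, sample):
    """How many standard deviations `value` sits above the sample mean."""
    return (value - sample.mean()) / sample.std()

# Comparison group 1: every batter in the population.
z_all = sds_above(bradman, averages)

# Comparison group 2: only batters above a qualifying threshold
# (a stand-in for the 2,000-Test-run cut-off in the cited analysis).
qualified = averages[averages > 30.0]
z_qualified = sds_above(bradman, qualified)

# The cut-off raises the group mean but also shrinks its spread;
# the two effects pull the z-score in opposite directions.
assert qualified.mean() > averages.mean()
assert qualified.std() < averages.std()
print(f"vs all batters:       {z_all:.1f} SDs above the mean")
print(f"vs qualified batters: {z_qualified:.1f} SDs above the mean")
```

With real data, the direction depends on how heavy the lower tail of the full population is, so "include everyone and the z-score must rise" doesn't follow automatically.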