Exploring the Toolkit: Impact, Evidence Strength and Cost
13 May 2018
Author: Mark Miller
Recent research from the NFER showed that “59% of senior leaders and 23% of teachers use the Sutton Trust/EEF Toolkit to inform evidence-based teaching methods.” There are 46 topics in total in the Teaching and Learning Toolkit and the Early Years Toolkit, ranging from Phonics to School Uniform. In a series of posts, we are exploring how to make the most of the toolkits. This week, we look at the ratings system: Impact; Evidence Strength; Cost.
The Toolkit presents the success of an intervention as ‘Additional Months’ Impact’, which compares the average progress over a year of pupils who receive an intervention with that of an equivalent group who start the year at the same level but do not receive it. This months-of-progress figure is the Toolkit’s translation of the underlying effect size found in the research.
If we look at the Toolkit, One to One Tuition has an average impact of +5 months. This means that pupils provided with One to One Tuition made, on average, 5 months more progress over the course of a year than equivalent pupils who did not receive this intervention. This is of course an average, and we would not expect it to hold exactly for individuals!
The Toolkits calculate these effect sizes from systematic reviews and meta-analyses of many studies. For a study to be included in the impact measure, there must be a suitably robust evidence base, and effect sizes are liable to change as new studies are conducted.
These effect sizes should always be taken merely as a starting point for further exploration of the evidence. Within each meta-analysis is a range of studies, some of which measure different things and some of which report wildly different results.
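The idea behind the ‘Additional Months’ Impact’ figure can be sketched in a few lines of code. This is a simplified illustration of the comparison described above, not the EEF’s actual calculation, and the progress figures used are hypothetical:

```python
def additional_months(intervention_progress: float, comparison_progress: float) -> float:
    """Extra months of progress made by the intervention group compared
    with an equivalent group over the same period."""
    return intervention_progress - comparison_progress

# Hypothetical example: over one year, the tutored group makes 17 months
# of progress while the equivalent comparison group makes 12.
impact = additional_months(17, 12)
print(f"+{impact:.0f} months")  # reported in the Toolkit style as "+5 months"
```

In practice the Toolkit derives this figure from pooled effect sizes across many studies rather than from a single pair of groups, which is why it should be read as an average, not a guarantee.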
The Toolkits rate the strength of the evidence behind each impact figure using a padlock system, based on three factors:
- The quantity of available evidence
- The quality of the available evidence
- How consistent the evidence is
The padlock system starts at ‘Very limited’, which means that single studies show quantitative impact but there are no systematic reviews. The ratings then move through ‘Limited’, ‘Moderate’ and ‘Extensive’ to ‘Very extensive’, which indicates consistent, high-quality evidence. The strength rating also gives weight to ‘ecological validity’, meaning studies conducted in real schools with teachers delivering the approach, rather than researchers.
Our example of One to One Tuition has a strength rating of four padlocks. You can read more about this in the Toolkit: “Overall, the evidence is consistent and strong, particularly for younger learners who are behind their peers in primary schools, and for subjects like reading and mathematics (there are fewer studies at secondary level or for other subjects). Effects on pupils from disadvantaged backgrounds also tend to be particularly positive.” We would always recommend reading the detail of the report to explore exactly where the evidence lies.
The Toolkit bases the costs on what it would take to implement an approach with a class of 25 pupils. It also takes into account the cost of necessary training and professional development. The costs assumed are as follows (per pupil per year):
As with the other measures, this may be a little crude, but it is always a helpful starting point. Some approaches may promise progress but be prohibitively expensive. You can see that One to One Tuition is relatively expensive. It is interesting to compare it with Small Group Tuition, which is cheaper and nearly as effective, though with less evidence supporting it.
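One crude way to think about the trade-off between cost and impact is to ask what each additional month of progress costs per pupil. The Toolkit does not publish such a figure, and the per-pupil costs below are invented purely for illustration:

```python
def cost_per_additional_month(cost_per_pupil: float, months_impact: float) -> float:
    """Rough cost-effectiveness: spend per pupil divided by the
    additional months of progress the approach delivers."""
    return cost_per_pupil / months_impact

# Hypothetical figures only: assume £700/pupil for One to One Tuition
# (+5 months) and £300/pupil for Small Group Tuition (+4 months).
one_to_one = cost_per_additional_month(700, 5)
small_group = cost_per_additional_month(300, 4)
print(f"One to One: £{one_to_one:.0f}/month; Small Group: £{small_group:.0f}/month")
```

On made-up numbers like these, the cheaper approach can look more cost-effective per month of progress even though its overall impact is slightly smaller, which is exactly the kind of comparison the Toolkit’s cost ratings are meant to prompt.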
This blog is only a simple summary; the technical appendices on the site explain all of this in much fuller detail.
If you would like to explore the evidence further, why not sign up for our Leading Learning course, starting on May 25th at Dixons Trinity Academy?
You can also sign up to our newsletter here.