Exploring the Toolkit: 5 Key Questions to Ask
24 June 2018
Recent research from the NFER showed that “59% of senior leaders and 23% of teachers use the Sutton Trust/EEF Toolkit to inform evidence-based teaching methods.” There are 46 topics in total in the Teaching and Learning Toolkit and the Early Years Toolkit, ranging from Phonics to School Uniform. In a series of posts, we are exploring how to make the most of the toolkits.
Last time, we looked at the ratings system. Here we explore a little further, going beyond these headlines and looking at the evidence summaries. Here are 5 questions we always ask.
1) How effective is this intervention?
We already have an answer to this in the ‘Additional Months’ Impact rating. But behind this headline figure, there is always a degree of nuance. For example, the rating of +2 months for Sports Participation hides the fact that there is “considerable variation in impact, including some studies which show negative effects” – we cannot simply start a basketball club and expect GCSE results to improve! The summaries always start with a section answering this question, and it illuminates that overall figure.
2) Which specific approaches are most effective?
Within any area of the toolkits, there are a range of different approaches, some of which will be effective and some of which will not. The +8 months progress for Feedback, for instance, includes some studies which show negative results. We need to know which of the myriad approaches work best. It is not unusual for the huge effect size of feedback to be used as an excuse for the most horrendous marking policies, so we should always look for the specific approaches which work best.
Another good example of why asking this question is important is ‘Collaborative Learning’. There are many ways that groups can be organised and many uses for collaborative work. The toolkit points out that “Effective collaborative learning requires much more than just sitting pupils together and asking them to work in a group” and that “structured approaches with well-designed tasks lead to the greatest learning gains.” Later, the advice is that “approaches which promote talk and interaction between learners tend to result in the best gains.” Similarly, ‘Digital Technology’ is associated with moderate gains, but a school that buys 500 tablets should understand that individualising learning with technology (one-to-one laptop provision, or individual use of drill and practice) may not be as helpful as small group learning or collaborative use of technology.
Another example is homework. Studies imply that there is an optimum amount of homework of between one and two hours per school day (slightly longer for older pupils), with effects diminishing as the time that students spend on homework increases. Each of the examples above shows the need for careful examination of the detail behind the headlines.
3) Who benefits the most from this kind of approach?
There are clear indications in the toolkits that effectiveness is often dependent on a range of factors. One is age or phase. We see, for example, that homework at Secondary has an estimated effect of +5 months whereas at Primary it is significantly less effective at +2 months. Parental engagement in the Early Years is +4 months whereas for older children it is +3 months. And beyond these figures, we see that particular students are most likely to benefit from particular approaches. The majority of behaviour studies report higher impact with older pupils. That said, evidence of benefits for one group doesn’t mean that there are no benefits for another; it could simply be that there are more studies in a particular context. In the case of behaviour, there is more evidence around targeting students with challenging behaviour than there is around general classroom behaviour and low-level disruption. There are also particular benefits to disadvantaged students from some interventions, such as One to One Tuition.
4) What are the problems/gaps/areas of concern?
The Toolkit doesn’t shy away from explaining where there are gaps in the evidence or where findings are less secure. We see that in the padlock rating, but again this is merely a starting point, and a deeper dive into the reports gives the reasons. Outdoor adventure learning typically involves outdoor experiences such as assault courses and orienteering, and there are many studies which show positive benefits on academic learning. However, “understanding why adventure learning interventions appear to improve academic outcomes is not straightforward.” So these approaches might be worth doing, but without a real understanding of why they seemed to improve students’ academic outcomes, there is a danger that the active ingredients that made them successful may not be replicated.
Sometimes, in organising evidence into these summaries, we run the risk of comparing disparate studies. We also have issues with different people’s understandings of what certain terms mean; ‘Collaborative Learning’, ‘Play-based Learning’ and ‘Metacognition and Self-regulated Learning’ would come into this category. Fortunately, the EEF are careful to set out their definitions in the evidence summaries. And we should always be aware of extremes. So, for example, ‘Built environment’ has very little impact overall, but an extremely warm school, or a noisy one under the flight path of an airport, will suffer.
5) Any other factors worth considering?
Every evidence summary has aspects that don’t fit neatly into the previous questions. Be aware of them before implementing a particular approach. You should also consider how to put things into practice using the School’s Guide to Implementation guidance report.
Finally, should you want to find out more about embedding metacognition, you can come along to our Research Roadshow on Monday 9th July, and there is still time to sign up to our Improving Maths at Key Stage 2 and 3 Roadshow too!
Posted in: Blog
Tags: EEF, toolkit