Subgroup Analysis of Free School Meal Pupils in Educational Trials
(26 September 2017)
This seminar will take place on Wednesday, 11 October, at 1pm, in ED 130 at the School of Education. The seminar will be led by Dr ZhiMin Xiao, and Professor Steve Higgins, from the School of Education. Everyone is welcome to attend and booking is not required. For more information please contact firstname.lastname@example.org.
Analyses of social interventions need to produce evidence that is relevant to different groups of people in a society. When the evidence concerns a group that is not the full target population of an intervention, the analysis that generates it is called subgroup analysis. Although more relevant to policy and practice, subgroup analysis is often regarded as statistical malpractice by academic statisticians, because its findings can be underpowered, unreliable, and prone to over-interpretation at best, or misleading at worst. Meanwhile, researchers of social interventions may be criticised for generating irrelevant evidence, and even accused of wasting research money, if they do not conduct relevant subgroup analyses. As a result, “they are damned if they do, and damned if they don't” (Petticrew et al. 2012).
In this study, we apply a widely accepted approach to subgroup analysis across 55 educational interventions funded by the Education Endowment Foundation (EEF) in England. Across the 90 independently evaluated estimates of intervention effect considered, the subgroup of interest, namely Free School Meal (FSM) pupils in English schools, is pre-specified and remains the same in all cases. Specifically, we first ran an intervention-by-FSM-status interaction test on each outcome to see whether the difference in effect between FSM and non-FSM pupils is statistically significant. We then adopted analytical models that are common in the evaluation of EEF trials to estimate separate effect sizes within the two subgroups defined by each of the four FSM variables available in the data archive we have access to. Finally, we compared the results from the interaction tests with the overall effect sizes of individual interventions and with the two separate subgroup estimates of effect. We found that, although there are multiple ways to operationalise even a single conceptually pre-specified subgroup variable, the choice of FSM variable made little difference.
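The two-step procedure described above can be sketched as follows. This is an illustrative simulation, not the EEF evaluation code: the variable names, effect sizes, and sample size are all assumptions, and the interaction test is implemented as a plain OLS regression with a treatment-by-FSM term.

```python
# Illustrative sketch (assumed data and parameters, not the EEF analysis):
# Step 1 fits an OLS model with a treatment-by-FSM interaction term;
# Step 2 computes separate standardised effect sizes (Cohen's d) per subgroup.
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

# Simulated trial: random treatment assignment and FSM status.
treat = rng.integers(0, 2, n)
fsm = rng.integers(0, 2, n)
# Assumed true model: modest main effect plus an extra benefit for FSM pupils.
y = 0.2 * treat + 0.1 * fsm + 0.15 * treat * fsm + rng.normal(0, 1, n)

# Step 1: interaction test via OLS on [1, treat, fsm, treat*fsm].
X = np.column_stack([np.ones(n), treat, fsm, treat * fsm])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_interaction = beta[3] / se[3]  # large |t| suggests a differential effect

# Step 2: separate standardised effects within each FSM subgroup.
def cohens_d(outcome, treated):
    """Mean difference divided by the pooled standard deviation."""
    a, b = outcome[treated == 1], outcome[treated == 0]
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

d_fsm = cohens_d(y[fsm == 1], treat[fsm == 1])
d_nonfsm = cohens_d(y[fsm == 0], treat[fsm == 0])
print(f"interaction t = {t_interaction:.2f}, "
      f"d(FSM) = {d_fsm:.2f}, d(non-FSM) = {d_nonfsm:.2f}")
```

In this simulated setup the interaction t-statistic and the gap between the two subgroup effect sizes tell the same story by construction; the point of the study is that, in real trial data, these two summaries need not agree.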
We also show that conventional interaction tests, as commonly practised in education and elsewhere, can produce self-contradictory results. Likewise, subgroup estimates made without reference to interaction tests are equally troubling. We therefore argue that, until a better alternative is available and tested, caution must be exercised to avoid sweeping conclusions based on any subgroup analysis.