Publication details for Prof Carole Torgerson

Torgerson, C., Wiggins, A., Torgerson, D., Ainsworth, H. & Hewitt, C. (2013). Every Child Counts: testing policy effectiveness using a randomised controlled trial, designed, conducted and reported to CONSORT standards. Research in Mathematics Education 15(2): 141-153.

Abstract

We report a randomised controlled trial evaluation of an intensive one-to-one numeracy programme – Numbers Count – which formed part of the previous government's numeracy policy intervention – Every Child Counts. We rigorously designed and conducted the trial to CONSORT guidelines. We used a pragmatic waiting-list design to evaluate the intervention in real-life settings in diverse geographical areas across England, to increase the ecological validity of the results. Children were randomly allocated within schools to either the intervention (Numbers Count in addition to normal classroom practice) or the control group (normal classroom practice alone). The primary outcome assessment was the Progress in Maths (PIM) 6 test from GL Assessment. Independent administration ensured that outcome ascertainment was undertaken blind to group allocation. The secondary outcome measure was the Sandwell test, which was not administered and marked blind to group allocation. At post-test the effect size (standardised mean difference between intervention and control groups) on the PIM 6 was d = 0.33, 95% confidence interval [0.12, 0.53], indicating strong evidence of a difference between the two groups. The effect size for the secondary outcome (Sandwell test) was d = 1.11, 95% CI [0.91, 1.31]. Our results demonstrate a statistically significant effect of Numbers Count on our primary, independently marked, mathematics test. Like many trials, our study had both strengths and limitations. We feel, however, that our a priori decision to report these explicitly, as advocated by the CONSORT guidelines, allowed us to maximise rigour (e.g., by using blinded independent testing) and to report potential problems (e.g., attrition rates). We have demonstrated that it is feasible to conduct an educational trial using the rigorous methodological techniques required by the CONSORT statement.
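For readers unfamiliar with the effect-size metric quoted above, the short sketch below illustrates how a standardised mean difference (Cohen's d) and an approximate 95% confidence interval are commonly computed from group summary statistics. It is a generic illustration only: the group means, standard deviations and sample sizes are hypothetical, and the trial's actual analysis may have used a different estimator or model-based adjustment.

import math

def cohens_d_with_ci(mean_t, mean_c, sd_t, sd_c, n_t, n_c, z=1.96):
    # Pooled standard deviation across intervention and control groups
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    # Standardised mean difference (Cohen's d)
    d = (mean_t - mean_c) / pooled_sd
    # Common large-sample approximation to the standard error of d
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return d, (d - z * se, d + z * se)

# Hypothetical numbers, purely to show the shape of the calculation
d, ci = cohens_d_with_ci(mean_t=28.0, mean_c=25.0, sd_t=9.0, sd_c=9.5, n_t=300, n_c=300)
print(f"d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")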

Notes

Special Issue: Experimental methods in mathematics education research