Which factors influence the effect of math interventions? Answers from a meta-analysis of 191 studies

Marta Pellegrini, University of Cagliari, Italy

A recent meta-analysis published in the Journal of Research on Educational Effectiveness examined the effects of PreK-12 mathematics interventions in the U.S. from the 1990s to 2017, with the aim of identifying study characteristics that contribute to heterogeneity in effects. The selection criteria were broad, aiming at a comprehensive review of randomized studies of mathematics interventions written in English and conducted in the U.S. The authors therefore included studies of varying methodological quality (e.g., type of measure, attrition rate, baseline equivalence) and attempted to control for these factors in the analysis.

A total of 191 studies met the inclusion criteria, with a mean effect size of +0.31. There was substantial heterogeneity, with a 95% prediction interval (the range expected to contain 95% of true effect sizes) of -0.60 to +1.23. To explain this heterogeneity, the authors tested blocks of moderators. After testing each block independently, they built a combined model including all moderators found to be at least marginally significant in the previous analyses.
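The arithmetic behind such a prediction interval can be sketched as follows. This is an illustrative example, not the authors' computation: the between-study standard deviation (tau) used here is back-calculated to roughly match the reported interval and is an assumption, not a value reported in the paper.

```python
# Sketch of a 95% prediction interval in a random-effects meta-analysis.
# Assumption: tau (between-study SD) is chosen to approximately reproduce
# the reported interval; the paper does not state this value directly.
mean_es = 0.31   # reported mean effect size
tau = 0.47       # assumed between-study standard deviation

# Ignoring the (small) uncertainty in the mean itself, the interval is
# mean ± 1.96 * tau: the range expected to contain 95% of true effects.
lower = mean_es - 1.96 * tau
upper = mean_es + 1.96 * tau
print(f"95% prediction interval: ({lower:+.2f}, {upper:+.2f})")
```

With these assumed inputs the interval comes out close to the reported -0.60 to +1.23, which is why a mean effect of +0.31 can coexist with many studies showing null or even negative effects.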

Results from the combined model showed that intervention type, intervention delivery, publication decade, and measure type were significant moderators of the effect. Regarding intervention type, supplemental-time interventions (ES = +0.53) were more effective than curricula (ES = +0.34) or pedagogical/instructional interventions (ES = +0.27). Regarding intervention delivery, programs delivered by teachers (ES = +0.37) or external interventionists (ES = +0.39) were more effective than those delivered by technology (ES = +0.12). Regarding time of publication, studies published in earlier decades had higher effect sizes than recent studies. Finally, researcher-made measures (ES = +0.45) showed three times the effect of standardized measures (ES = +0.15).

The authors concluded that much of the variability between the studies' effect sizes was not explained by the factors included in the analysis. Studies should report as many details as possible about the programs and methods used, so that researchers can examine which factors influence an intervention's effectiveness.
