By Ashley Grant, Johns Hopkins University
Small effect sizes from large, expensive studies seem to plague education research. Matthew Kraft’s recent piece in Educational Researcher takes up this concern, arguing that education decision-makers should pay more attention to the qualities of a study’s research design rather than only examining the absolute effect size.
“Effect sizes that are equal in magnitude are rarely equal in meaning.” (p. 251) Research design choices (such as long- vs. short-term outcomes, or small vs. large and diverse samples) shape the effect sizes a study produces. Kraft lays out a series of questions and answers to guide researchers and policymakers in evaluating the effect size of a particular study or intervention. He also emphasizes cost-effectiveness (does an effective but expensive intervention matter if it serves only a small number of students?) and feasibility (can the intervention be scaled to serve more students, and is there the political will to support this?).
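To make the effect-size and cost-effectiveness framing concrete, here is a minimal sketch in Python: a standardized mean difference (Cohen's d, one common effect-size measure) divided by per-student cost. All numbers, names, and the specific cost-effectiveness ratio below are hypothetical illustrations, not figures from Kraft's paper.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between treatment and control,
    scaled by the pooled standard deviation (Cohen's d)."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical summary statistics: a tutoring program raises mean
# test scores from 50 to 52 points (SD = 10) in groups of 500 each.
d = cohens_d(mean_t=52.0, mean_c=50.0, sd_t=10.0, sd_c=10.0,
             n_t=500, n_c=500)

# A simple cost-effectiveness ratio: effect size per $1,000 spent
# per student (the $800 cost is invented for illustration).
cost_per_student = 800.0
ce_ratio = d / (cost_per_student / 1000.0)
print(round(d, 2), round(ce_ratio, 2))  # → 0.2 0.25
```

The point of the ratio is the one Kraft makes: a small effect size can still be attractive if the intervention is cheap, while a large effect from a costly, hard-to-scale program may be less useful to policymakers.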
Large-scale randomized field trials provide the most robust evidence about interventions of interest to policymakers, but they are also the most likely to use designs that yield small effect sizes. Rather than dismissing studies with these small findings, we should recalibrate our expectations: good research designs produce smaller, but more realistic, effect sizes.