Open Science Comes to Meta-Analysis

By Claire Chuter, Johns Hopkins University

A growing interest in transparency and reproducibility has led researchers and journals to embrace the shift toward open science. This shift has been spurred by the replication crises in both the social sciences and medicine, although meta-analysis has been relatively slow on the uptake.

In a recent meta-review, Polanin et al. randomly selected 150 meta-analyses from Psychological Bulletin and coded them against criteria that would facilitate reproduction of their results. The authors contend that high visibility of data and methodology is important for three primary reasons: it allows peer reviewers to check the authors' analyses or run additional analyses of their own; it enables future meta-analysts to replicate the review or update it with new studies or new statistical methods; and it allows for meta-reviews that simply summarize the existing meta-analyses or examine the results from a substantively different angle (e.g., breaking up the results by grade level). From this work, Polanin et al. derived the following guiding principles for transparent meta-analysis:

1. State hypotheses, objectives, and planned methods publicly in a review protocol before beginning the review.

2. Report the results for each stage and decision in a clear and reproducible manner.

3. Acknowledge any discrepancies between the protocol and the published review.

The application of these principles is, for the most part, well defined.

They found that the meta-analyses in their sample reported, on average, only 55% of the criteria deemed essential for replication. Encouragingly, the authors also found that transparency of data and methodology has increased steadily over time (b = 1.09, SE = 0.24, t = 4.519, p < .001): in 1995 the average meta-analysis reported just 49% of the items, while in 2015 the average study reported 63%. Given that meta-analysis itself rests on the transparency of primary research, meta-analysts should be leaders in transparent reporting. The authors urge, at minimum, that meta-analysts report the calculated effect size and its variance, sample sizes by condition, means, standard deviations, test statistics, the type of design, and any correlation between dependent observations. They hope that meta-analysts will continue to become more open in their reporting, and that sharing data and statistical code will soon become standard practice.
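To illustrate why those minimum items matter, here is a small Python sketch (not from Polanin et al.; the study values are hypothetical) showing that, given means, standard deviations, and sample sizes reported by condition, anyone can recompute a standardized mean difference (Hedges' g) and its sampling variance using standard meta-analytic formulas.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Hedges' g) and its approximate sampling variance."""
    # Pooled standard deviation across treatment and control groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    # Small-sample correction factor
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    g = j * d
    # Approximate sampling variance of g
    var_g = j**2 * ((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return g, var_g

# Hypothetical study: means, SDs, and sample sizes reported by condition
g, var_g = hedges_g(mean_t=105.0, sd_t=15.0, n_t=60, mean_c=100.0, sd_c=14.0, n_c=58)
print(f"g = {g:.3f}, variance = {var_g:.4f}")
```

If any of these inputs are missing from the primary report, the effect size and its weight in the meta-analysis cannot be independently verified, which is exactly the reproducibility gap the authors describe.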
