Poor fidelity may mean effective education strategies never see the light of day
New research raises concerns that promising new educational interventions are being ‘unnecessarily scrapped’
The APA journal Psychological Methods recently published an article in which researchers found that "promising new education interventions are potentially being 'unnecessarily scrapped' because trials to test their effectiveness are insufficiently faithful to the original research". They reached this conclusion by running a large-scale computer simulation examining how much a loss of 'fidelity' (i.e., how closely the intervention as delivered matches the intervention the original researchers designed) compromises the results of school-based trials of new learning innovations and strategies.
They found that even small departures from the implementation used in the original research can significantly affect whether the intervention appears to work: for every 5% of fidelity lost, there was a corresponding 5% loss in effect size. In real life, this could mean that some high-potential innovations are deemed unfit for use because low fidelity is distorting the results. The study notes: "There is growing concern that a substantial number of null findings in educational interventions… could be due to a lack of fidelity, resulting in potentially sound programmes being unnecessarily scrapped."
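To see why fidelity loss translates so directly into a smaller measured effect, consider a toy simulation. This is not the authors' code; it is a minimal sketch that assumes "fidelity" simply means the fraction of treated participants who receive the intervention as designed (the rest receive no effective treatment), and it shows the observed effect size shrinking roughly in proportion to that fraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(true_d=0.5, fidelity=1.0, n=200, reps=2000):
    """Toy trial: the intervention shifts scores by true_d standard deviations,
    but only a `fidelity` fraction of treated participants actually receive it
    as designed. Returns the mean observed Cohen's d across simulated trials."""
    observed = []
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)
        delivered_as_designed = rng.random(n) < fidelity  # who got the real thing
        treatment = rng.normal(0.0, 1.0, n) + true_d * delivered_as_designed
        pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
        observed.append((treatment.mean() - control.mean()) / pooled_sd)
    return float(np.mean(observed))

for f in [1.00, 0.95, 0.90, 0.80, 0.50]:
    print(f"fidelity={f:.2f}  mean observed d={simulate_trial(fidelity=f):.3f}")
```

Under these assumptions the observed effect size is approximately fidelity × true effect size, which mirrors the roughly one-to-one relationship the article reports: shave 5% off fidelity and about 5% of the measured effect disappears with it, even though the intervention itself is unchanged.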
Abstract
Failure of replication attempts in experimental psychology might extend beyond p-hacking, publication bias, or hidden moderators; reductions in experimental power can be caused by violations of fidelity to a set of experimental protocols. In this article, we run a series of simulations to systematically explore how manipulating fidelity influences effect size. We find statistical patterns that mimic those found in ManyLabs-style replications and meta-analyses, suggesting that fidelity violations are present in many replication attempts in psychology. Scholars in intervention science, medicine, and education have developed methods of improving and measuring fidelity, and as replication becomes more mainstream in psychology, the field would benefit from adopting such approaches as well.
Ellefson, M. R., & Oppenheimer, D. M. (2022). Is replication possible without fidelity? Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000473