Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12323/4407
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kim, James S. | -
dc.date.accessioned | 2020-04-24T09:41:51Z | -
dc.date.available | 2020-04-24T09:41:51Z | -
dc.date.issued | 2019-12-16 | -
dc.identifier.issn | 0013-189X; eISSN: 0013-189X | -
dc.identifier.uri | http://hdl.handle.net/20.500.12323/4407 | -
dc.description.abstract | Why, when so many educational interventions demonstrate positive impact in tightly controlled efficacy trials, are null results common in follow-up effectiveness trials? Using case studies from literacy, this article suggests that replication failure can surface hidden moderators (contextual differences between an efficacy trial and an effectiveness trial) and generate new hypotheses and questions to guide future research. First, replication failure can reveal systemic barriers to program implementation. Second, it can highlight for whom and in what contexts a program theory of change works best. Third, it suggests that a fidelity-first and adaptation-second model of program implementation can enhance the effectiveness of evidence-based interventions and improve student outcomes. Ultimately, researchers can make every study count by learning from both replication success and failure to improve the rigor, relevance, and reproducibility of intervention research. | en_US
dc.language.iso | en | en_US
dc.publisher | Sage Publications | en_US
dc.relation.ispartofseries | Educational Researcher; Volume 48, Issue 9, pp. 599–607 | -
dc.subject | educational policy | en_US
dc.subject | evaluation | en_US
dc.subject | experimental design | en_US
dc.subject | experimental research | en_US
dc.title | Making Every Study Count: Learning From Replication Failure to Improve Intervention Research | en_US
dc.type | Article | en_US
Appears in Collections: ePapers

Files in This Item:
File | Description | Size | Format
making every study count.pdf | | 141.38 kB | Adobe PDF

