Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12323/4409
Full metadata record
DC Field | Value | Language
dc.contributor.author | Baird, Matthew D. | -
dc.contributor.author | Pane, John F. | -
dc.date.accessioned | 2020-04-24T10:22:00Z | -
dc.date.available | 2020-04-24T10:22:00Z | -
dc.date.issued | 2019-05-13 | -
dc.identifier.issn | 0013-189X, eISSN: 0013189X | -
dc.identifier.uri | http://hdl.handle.net/20.500.12323/4409 | -
dc.description.abstract | Evaluators report effects of education initiatives as standardized effect sizes, a scale that has merits but obscures interpretation of the effects’ practical importance. Consequently, educators and policymakers seek more readily interpretable translations of evaluation results. One popular metric is the number of years of learning necessary to induce the effect. We compare years of learning to three other translation options: benchmarking against other effect sizes, converting to percentile growth, and estimating the probability of scoring above a proficiency threshold. After enumerating the desirable properties of translations, we examine each option’s strengths and weaknesses. We conclude that years of learning performs worst, and percentile gains performs best, making it our recommended choice for more interpretable translations of standardized effects. | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | Educational Researcher; Volume 48, Issue 4, pp. 217-228 | -
dc.subject | education policy | en_US
dc.title | Translating Standardized Effects of Education Programs Into More Interpretable Metrics | en_US
dc.type | Article | en_US
Appears in Collections: ePapers

Files in This Item:
File | Description | Size | Format
Translating Standardized Effects of Education_RE.pdf | - | 745.54 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.