<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>DSpace Collection</title>
    <link>http://hdl.handle.net/20.500.12323/4395</link>
    <description />
    <pubDate>Mon, 13 Apr 2026 22:05:25 GMT</pubDate>
    <dc:date>2026-04-13T22:05:25Z</dc:date>
    <item>
      <title>Translating Standardized Effects of Education Programs Into More Interpretable Metrics</title>
      <link>http://hdl.handle.net/20.500.12323/4409</link>
      <description>Title: Translating Standardized Effects of Education Programs Into More Interpretable Metrics
Authors: Baird, Matthew D.; Pane, John F.
Abstract: Evaluators report effects of education initiatives as standardized effect sizes, a scale that has merits but obscures interpretation of the effects’ practical importance. Consequently, educators and policymakers seek more readily interpretable translations of evaluation results. One popular metric is the number of years of learning necessary to induce the effect. We compare years of learning to three other translation options: benchmarking against other effect sizes, converting to percentile growth, and estimating the probability of scoring above a proficiency threshold. After enumerating the desirable properties of translations, we examine each option’s strengths and weaknesses. We conclude that years of learning performs worst, and percentile gains performs best, making it our recommended choice for more interpretable translations of standardized effects.</description>
      <pubDate>Mon, 13 May 2019 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">http://hdl.handle.net/20.500.12323/4409</guid>
      <dc:date>2019-05-13T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Understanding a Vicious Cycle: The Relationship Between Student Discipline and Student Academic Outcomes</title>
      <link>http://hdl.handle.net/20.500.12323/4408</link>
      <description>Title: Understanding a Vicious Cycle: The Relationship Between Student Discipline and Student Academic Outcomes
Authors: Anderson, Kaitlin P.; Ritter, Garry W.; Zamarro, Gema
Abstract: While numerous studies have demonstrated a correlation between exclusionary discipline and negative student outcomes, this relationship is likely confounded by other factors related to the underlying misbehavior or risk of disciplinary referral. Using 10 years of student-level demographic, achievement, and disciplinary data from all K–12 public schools in Arkansas, we find that exclusionary consequences are related to worse academic outcomes (e.g., test scores and grade retention) than less exclusionary consequences, controlling for type of behavioral infraction. However, despite controlling for a robust set of covariates, sensitivity checks demonstrate that the estimated relationships between consequences and academic outcomes may still be driven by selection bias into consequence type. Implications for policy and practice are discussed.</description>
      <pubDate>Wed, 01 May 2019 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">http://hdl.handle.net/20.500.12323/4408</guid>
      <dc:date>2019-05-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Making Every Study Count: Learning From Replication Failure to Improve Intervention Research</title>
      <link>http://hdl.handle.net/20.500.12323/4407</link>
      <description>Title: Making Every Study Count: Learning From Replication Failure to Improve Intervention Research
Authors: Kim, James S.
Abstract: Why, when so many educational interventions demonstrate positive impact in tightly controlled efficacy trials, are null results common in follow-up effectiveness trials? Using case studies from literacy, this article suggests that replication failure can surface hidden moderators—contextual differences between an efficacy and an effectiveness trial—and generate new hypotheses and questions to guide future research. First, replication failure can reveal systemic barriers to program implementation. Second, it can highlight for whom and in what contexts a program theory of change works best. Third, it suggests that a fidelity first and adaptation second model of program implementation can enhance the effectiveness of evidence-based interventions and improve student outcomes. Ultimately, researchers can make every study count by learning from both replication success and failure to improve the rigor, relevance, and reproducibility of intervention research.</description>
      <pubDate>Mon, 16 Dec 2019 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">http://hdl.handle.net/20.500.12323/4407</guid>
      <dc:date>2019-12-16T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Reassessing Disparities in Online Learner Student Engagement in Higher Education</title>
      <link>http://hdl.handle.net/20.500.12323/4406</link>
      <description>Title: Reassessing Disparities in Online Learner Student Engagement in Higher Education
Authors: Paulsen, Justin; McCormick, Alexander C.
Abstract: Online learning is the fastest growing segment in U.S. higher education and is increasingly adopted in public and private not-for-profit institutions. While the impact of online learning on educational outcomes is becoming clearer, the literature on its connection with student engagement is sparse. Student engagement measures identify key aspects of the learning process that can improve learning and outcomes like retention and achievement. The few studies investigating the link between online learning and student engagement found positive benefits for online learners compared to face-to-face learners in terms of perceived academic challenge, learning gains, satisfaction, and better study habits. On the other hand, face-to-face learners reported higher levels of environment support, collaborative learning, and faculty interaction. However, these studies did not effectively account for differences in background characteristics like age, time spent working or caring for dependents, and enrollment status. Further, they did not consider the increasingly large population of students who enroll in both online and face-to-face courses. In our study, we used propensity score matching on the 2015 National Survey of Student Engagement data to account for disparities in these groups’ demographic variables. After matching, we found that some of the differences reported in the previous literature diminish or disappear entirely. This suggests differences in supportive environments and learning strategies have more to do with online student characteristics than learning mode. However, online learning still falls well below other modes in terms of collaborative learning and interaction with faculty.</description>
      <pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">http://hdl.handle.net/20.500.12323/4406</guid>
      <dc:date>2020-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>