Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12323/8232
Full metadata record
DC Field | Value | Language
dc.contributor.author | Caner, Halime Nuran | -
dc.date.accessioned | 2026-01-23T08:05:05Z | -
dc.date.available | 2026-01-23T08:05:05Z | -
dc.date.issued | 2025 | -
dc.identifier.issn | 2223-2621 | -
dc.identifier.issn | 2223-2613 | -
dc.identifier.uri | http://hdl.handle.net/20.500.12323/8232 | -
dc.description.abstract | The rapid advancement of generative artificial intelligence has significantly transformed academic writing practices, prompting institutions to implement tools designed to verify authorship and uphold academic integrity. Artificial intelligence detection systems have emerged as a prominent, albeit increasingly debated, response to these challenges. This systematic review synthesizes empirical evidence to assess the reliability, fairness, and pedagogical implications of artificial intelligence text-detection tools in educational settings. Adhering to PRISMA 2020 standards, the review identified twenty-five peer-reviewed empirical studies, eighteen of which were conducted directly within educational settings. These studies, published between 2022 and 2025, encompass quantitative, qualitative, and mixed-methods designs across diverse disciplinary and linguistic contexts. The findings indicate that AI text-detection tools are unsuitable for high-stakes academic integrity decisions in their current form. Detection accuracy proved substantially variable and unstable across tools, genres, and linguistic backgrounds; the tools showed marked vulnerability to paraphrasing, translation, and other adversarial techniques; and systemic biases disproportionately affected non-native English writers. Human judgment was also found to be inconsistent, reinforcing the difficulty of reliably distinguishing AI-generated text from human-authored text. Collectively, these results raise significant ethical, pedagogical, and institutional concerns. This review underscores the need for integrity strategies that prioritize transparency, AI literacy, fairness-aware design, and process-based assessment rather than relying on detection-centered approaches. The findings suggest the necessity of hybrid approaches that combine watermarking and fairness-aware detection algorithms with process-oriented assessment, AI literacy initiatives, and cross-linguistic benchmarking, alongside interpretability-focused and longitudinal research on students' perceptions of AI detection. | en_US
dc.language.iso | en | en_US
dc.publisher | Khazar University Press | en_US
dc.relation.ispartofseries | Khazar Journal of Humanities and Social Sciences, Vol. 28, № 4 | -
dc.subject | AI detection tools | en_US
dc.subject | Generative AI | en_US
dc.subject | Academic integrity | en_US
dc.subject | Systematic review | en_US
dc.subject | PRISMA | en_US
dc.subject | Authorship verification | en_US
dc.title | AI Detection Tools: A Systematic Review of Empirical Evidence and Their Implications for Education | en_US
dc.type | Article | en_US
Appears in Collections:2025, Vol. 28, № 4
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.