Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12323/7791
Full metadata record
DC Field | Value | Language
dc.contributor.author | Afzal, Tania | -
dc.contributor.author | Rauf, Sadaf Abdul | -
dc.contributor.author | Abbas Malik, Muhammad Ghulam | -
dc.contributor.author | Imran, Muhammad | -
dc.date.accessioned | 2025-02-11T08:39:05Z | -
dc.date.available | 2025-02-11T08:39:05Z | -
dc.date.issued | 2025-01-23 | -
dc.identifier.issn | 2078-2489 | -
dc.identifier.uri | http://hdl.handle.net/20.500.12323/7791 | -
dc.description.abstract | Transformers have enabled a significant breakthrough in natural language processing. These models are trained on large datasets and can handle multiple tasks. We compared monolingual and multilingual transformer models for semantic relatedness and verse retrieval. We leveraged data from the original QurSim dataset (Arabic) and used authentic multi-author translations in 22 languages to create a multilingual QurSim dataset, which we have released for the research community. We evaluated the performance of monolingual and multilingual LLMs for Arabic, and our results show that monolingual LLMs give better results for verse classification and matching-verse retrieval. We incrementally built monolingual models with Arabic, English, and Urdu, and multilingual models with all 22 languages supported by the multilingual paraphrase-MiniLM-L12-v2 model. Our results show an improvement in classification accuracy with the incorporation of the multilingual QurSim dataset. | en_US
dc.language.iso | en | en_US
dc.publisher | MDPI | en_US
dc.relation.ispartofseries | Information, Vol. 16, № 2 | -
dc.subject | semantic similarity | en_US
dc.subject | Quranic verse classification | en_US
dc.subject | verse retrieval | en_US
dc.subject | semantic search | en_US
dc.title | Fine-Tuning QurSim on Monolingual and Multilingual Models for Semantic Search | en_US
dc.type | Article | en_US
Appears in Collections: Publications
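The abstract above describes ranking Quranic verses by semantic relatedness using the paraphrase-multilingual-MiniLM-L12-v2 sentence encoder. As a minimal sketch of that kind of semantic search with the sentence-transformers library (not the authors' released code; the query, the candidate verses, and the cosine-similarity ranking loop are illustrative assumptions), one might write:

# Minimal semantic-search sketch: encode a query verse and candidate
# verses with the multilingual MiniLM encoder named in the abstract,
# then rank candidates by cosine similarity.
# The verses below are illustrative placeholders, not QurSim data.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

query = "Indeed, with hardship comes ease."  # hypothetical query verse
candidates = [                               # hypothetical candidate verses
    "So verily, with the hardship, there is relief.",
    "And We have certainly created man in toil.",
    "Allah does not burden a soul beyond that it can bear.",
]

# Encode the texts into dense sentence embeddings (torch tensors).
query_emb = model.encode(query, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity between the query and every candidate verse.
scores = util.cos_sim(query_emb, cand_embs)[0]

# Print candidates from most to least semantically related.
for idx in scores.argsort(descending=True):
    print(f"{scores[idx]:.3f}  {candidates[idx]}")

The same encode-and-compare pattern extends to the paper's multilingual setting, since this model maps all of its supported languages into a shared embedding space.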

Files in This Item:
File | Description | Size | Format
Fine-Tuning QurSim on Monolingual and Multilingual Models or Semantic Search.pdf | | 292.89 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.