Description
This paper explores the ethical boundaries of plagiarism detection in the age of artificial intelligence (AI), focusing on the rise of AI-generated text and its implications for academic integrity. While plagiarism detection has traditionally relied on string matching and authorship attribution, the emergence of generative models like GPT-4 challenges these methods. Institutions now face a dual imperative: upholding fairness and accountability while respecting privacy, transparency, and due process. The paper reviews the evolution of detection systems, contextualizes them within cybersecurity frameworks, and analyzes ethical tensions through global policy comparisons and case scenarios. A conceptual model of AI-enabled detection is presented, alongside a comparative table of international data protection laws. The paper argues for a balanced governance approach that integrates human judgment, safeguards student rights, and acknowledges cultural diversity in plagiarism norms. Recommendations include hybrid detection-prevention strategies, transparent algorithms, and ethics-informed policy design. Ultimately, the goal is to ensure that detection systems serve education rather than undermine it.