Addressing the Misuse of GenAI for Malicious Purposes
Abstract
The rapid evolution of Generative Artificial Intelligence (GenAI) technologies has enabled transformative applications across many domains. However, this progress has also given rise to malicious uses, such as deepfakes, AI-powered phishing, and AI-generated malware, which pose significant risks to individuals, organizations, and national security. This paper surveys cutting-edge research and technological interventions for detecting and mitigating GenAI misuse. We present advanced methodologies for detecting deepfakes across video, audio, and text, with a focus on attribution, real-time analysis, and source tracing. We then examine the rise of AI-driven phishing and social engineering, and the use of linguistic and behavioural analytics to counter them. Finally, we analyse GenAI-enhanced malware development and propose robust detection mechanisms. The paper concludes with ethical considerations, regulatory implications, and open challenges in securing GenAI against adversarial exploitation.
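To give a concrete flavour of the linguistic analytics the abstract refers to, the sketch below scores a message for common phishing cues (urgency vocabulary, exclamation marks, embedded URLs). It is a minimal toy illustration under assumed cue lists and weights, not the methodology developed in the paper.

```python
import re

# Assumed cue list for illustration only; a real system would learn
# features from labelled corpora rather than use a fixed vocabulary.
URGENCY_TERMS = {"urgent", "immediately", "verify", "suspended"}

def linguistic_features(text: str) -> dict:
    """Extract simple linguistic cues often associated with phishing."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        # Density of pressure language relative to message length.
        "urgency_ratio": sum(w in URGENCY_TERMS for w in words) / n,
        # Emotional emphasis.
        "exclamations": text.count("!"),
        # Embedded links, a common phishing delivery vector.
        "links": len(re.findall(r"https?://\S+", text)),
    }

def phishing_score(text: str) -> float:
    """Combine cues into a naive 0-1 risk score (weights are illustrative)."""
    f = linguistic_features(text)
    raw = 5.0 * f["urgency_ratio"] + 0.1 * f["exclamations"] + 0.2 * f["links"]
    return min(raw, 1.0)
```

In practice such hand-crafted cues would only be a baseline; the behavioural analytics mentioned above (sender history, timing, interaction patterns) supply complementary signals that pure text features miss.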