A Phenomenological Inquiry into IT Professionals' Perspectives on AI-Powered Cybersecurity Systems for Intrusion Detection
Abstract
Cybersecurity has become a critical field in protecting digital infrastructure, with increasing reliance on AI-powered systems for intrusion detection. However, integrating AI into cybersecurity raises concerns about trust, effectiveness, and the human role in decision-making. Despite significant technical advancements, little is known about how IT professionals experience and interact with AI-driven intrusion detection systems in real-world settings. This study explores those experiences and identifies the factors that shape trust in and acceptance of AI in security contexts. Using a phenomenological approach, the research examines the subjective perspectives of IT professionals through in-depth interviews and thematic analysis. The findings highlight AI's role as an assistant rather than a replacement for human decision-making, while revealing challenges related to false positives and the continuing need for human oversight. These results contribute to a more nuanced understanding of human-AI collaboration in cybersecurity and underscore the ongoing importance of human expertise. The study's implications suggest that future research should focus on optimizing AI systems to better align with user trust and expectations.