Mitigating Bias in AI-Driven Recruitment: Ethical Challenges and Governance Solutions

Rohan Chhatre, Seema Singh

Abstract

Introduction: Artificial intelligence (AI) is transforming human resources (HR) and recruitment by automating tasks such as resume screening, candidate assessment, and hiring recommendations. The deployment of AI in these areas, however, has raised ethical concerns, particularly around bias.


Objectives: To investigate bias within AI-driven recruitment tools, drawing on real-world case studies in which biased algorithms have influenced hiring outcomes, diversity, and inclusion.


Methods: This study adopts a mixed-methods approach to address its objectives. The methodology combines a comprehensive literature review with case studies of specific instances in which AI recruitment tools produced biased outcomes.


Results: Key findings reveal that biases in training data—such as historical hiring trends favouring certain demographics—lead to skewed candidate assessments. Furthermore, opaque algorithmic designs hinder the detection and correction of such biases, making it difficult for HR teams to ensure equitable hiring. The study also finds that even well-intentioned algorithms can perpetuate stereotypes if not rigorously monitored.


Conclusions: To address these issues, the paper advocates for improved governance frameworks emphasizing transparency, regular bias audits, and collaboration between AI developers and HR professionals. The research highlights that ethical, accountable AI practices in recruitment are essential for fostering diverse, inclusive workplaces.
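The regular bias audits recommended above can be made concrete with a simple disparate-impact check. The sketch below (with hypothetical numbers, not data from this study) applies the widely used "four-fifths rule", which flags a screening tool when one group's selection rate falls below 80% of the most-favoured group's rate:

```python
# Illustrative bias-audit sketch: the "four-fifths rule" compares
# selection rates across demographic groups. All figures below are
# hypothetical and serve only to show the calculation.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening tool advanced."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favoured group's rate.
    Values below 0.8 are a common red flag for adverse impact."""
    return group_rate / reference_rate

# Hypothetical outcomes from an AI resume screener
rate_a = selection_rate(60, 100)   # group A advanced at 0.60
rate_b = selection_rate(30, 100)   # group B advanced at 0.30

ratio = adverse_impact_ratio(rate_b, rate_a)
flagged = ratio < 0.8  # ratio of 0.5 would trigger a manual review
```

A check like this is deliberately coarse: it detects skewed outcomes but not their cause, which is why the paper pairs audits with transparency requirements and developer–HR collaboration.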
