Dynamic Detection and Debias of Bayesian Network Classifier (3D-BN)

Fahad S. Alenazi, Khalil El Hindi

Abstract

Fairness in machine learning is a complex and multifaceted concept that is increasingly critical in automated decision-making systems. Numerous metrics and techniques have been developed to measure and mitigate bias. However, tensions often arise between different fairness notions, such as individual versus group fairness, and even among different group-fairness approaches. These conflicts are typically rooted in inadequate implementation of fairness measures rather than in fundamental contradictions. Additionally, when group fairness is enforced based solely on sensitive attributes, failing to account for interdependencies among attributes can lead to unintended outcomes, as exemplified by Simpson's paradox. This paper seeks to reconcile individual and group fairness by addressing the sources and causal dynamics of unfairness. We propose a dynamic in-process fairness enforcement method that leverages Bayesian networks and harmonizes conditional probability terms through an agnostic, symmetric objective function. Our approach aims to achieve individual and group fairness simultaneously by applying causal, path-specific bias mitigation. Moreover, it implicitly handles multiple sensitive attributes, preventing hidden redlining effects from correlated attributes, and it supports multi-valued attributes. A comparative evaluation against related approaches on 14 real-world datasets demonstrates that our technique significantly outperforms existing fairness solutions.
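
To make the Simpson's paradox concern concrete, the following minimal Python sketch illustrates the effect with invented numbers (they are not drawn from the paper's experiments): a group can be selected at a lower rate overall yet at an equal or higher rate within every level of a confounding attribute, so an audit that conditions only on the sensitive attribute would misread the confounder's effect as discrimination.

    # Hypothetical illustration of Simpson's paradox in a fairness audit.
    # Each row: (group, department, applicants, selected). All numbers invented.
    data = [
        ("A", "X", 80, 48),  # group A, dept X: 60% selected
        ("B", "X", 20, 13),  # group B, dept X: 65% selected
        ("A", "Y", 20, 4),   # group A, dept Y: 20% selected
        ("B", "Y", 80, 20),  # group B, dept Y: 25% selected
    ]

    def selection_rate(rows):
        """Fraction of applicants selected over a subset of rows."""
        applied = sum(n for _, _, n, _ in rows)
        chosen = sum(k for _, _, _, k in rows)
        return chosen / applied

    # Marginal view (sensitive attribute only): group A appears strongly favored.
    for g in ("A", "B"):
        rows = [r for r in data if r[0] == g]
        print(f"group {g} overall: {selection_rate(rows):.0%}")  # A: 52%, B: 33%

    # Conditional view: group B is favored within every department.
    for d in ("X", "Y"):
        for g in ("A", "B"):
            rows = [r for r in data if r[0] == g and r[1] == d]
            print(f"dept {d}, group {g}: {selection_rate(rows):.0%}")

Group B's overall rate is lower only because most of its applicants go to the more selective department Y; this is the kind of attribute interdependency that motivates the causal, path-specific mitigation described above.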
