Explainable AI in Ad Enforcement: Striking the Balance Between Transparency and Safety

Binita Mukesh Shah

Abstract

The integration of Artificial Intelligence (AI) into advertising platforms has transformed how policy enforcement works. AI offers scalability and efficiency in preventing bad actors from abusing these systems while protecting users from harmful content; however, it introduces a new challenge of opacity for legitimate advertisers. Explainable AI (XAI) offers a potential solution by providing a degree of transparency into AI-driven enforcement decisions. This paper discusses the benefits of XAI in ad enforcement: helping advertisers understand and correct their policy violations, and improving user trust through clearer ad-disapproval explanations and recommendations for fixing them. Alongside the advantage of increased transparency come risks that must be considered, such as the exploitation of this information by malicious actors. This paper proposes a tiered transparency framework for balancing transparency and protection in the implementation of XAI in the digital advertising landscape.
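The tiered approach described above can be illustrated with a minimal sketch. All names here (the tiers, the `Disapproval` fields, the `explain` function) are hypothetical and chosen for illustration; the idea is simply that the granularity of an enforcement explanation scales with advertiser standing, so that full decision detail is withheld from unverified accounts that could use it to evade enforcement.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    # Hypothetical trust tiers for advertisers.
    NEW = 1        # unverified or newly onboarded account
    VERIFIED = 2   # identity-verified, limited history
    TRUSTED = 3    # long-standing account in good standing

@dataclass
class Disapproval:
    policy: str    # policy family that was violated
    detail: str    # specific signal behind the decision
    fix_hint: str  # actionable recommendation to remediate

def explain(decision: Disapproval, tier: Tier) -> dict:
    """Return an explanation whose granularity scales with trust tier."""
    out = {"policy": decision.policy}
    if tier.value >= Tier.VERIFIED.value:
        out["fix_hint"] = decision.fix_hint   # remediation guidance
    if tier is Tier.TRUSTED:
        out["detail"] = decision.detail       # full decision rationale
    return out

d = Disapproval("Misleading claims",
                "Unsubstantiated health claim in headline",
                "Remove or substantiate health claims")
print(explain(d, Tier.NEW))      # policy family only
print(explain(d, Tier.TRUSTED))  # policy, fix hint, and full detail
```

A new advertiser learns only which policy family was triggered, while a trusted advertiser receives the specific signal and a remediation hint, which is one concrete way to trade off transparency against exploitability.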
