Reverse-Engineering Black-Box AI Decisions for Regulatory Compliance: A Cloud-Native Explainability Platform for Financial Systems
Abstract
Financial institutions increasingly rely on complex AI systems to make high-stakes decisions in areas including credit scoring, fraud detection, and underwriting. Because these models operate as black boxes, however, regulatory compliance, transparency, and trust remain major challenges. This paper introduces a new cloud-native Explainability-as-a-Service (EaaS) platform that reverse-engineers model behavior in a model-agnostic manner and leverages regulatory knowledge graphs. The platform enables financial organizations to generate audit-ready, real-time explanations for any model without requiring access to model internals. Quantitative evaluation across three use cases (loan approval, transaction fraud detection, and credit limit assignment) shows that the platform reduces compliance approval time by more than half and produces highly clear explanations, with SHAP-based outputs receiving an average expert rating of 4.6 out of 5. Mechanistic interpretability applied to transformer-based financial models yielded the most robust, lowest-variance feature attributions. Human-in-the-loop (HITL) surveys reported over 48 percent savings in review effort and nearly 86 percent acceptance of auto-generated explanations by compliance teams. These results confirm the viability and impact of deploying a cloud-native explainability platform in regulated financial settings, providing a legally compliant and scalable answer to the pervasive problem of black-box AI.