Explainable AI in Credit Scoring: Improving Transparency in Loan Decisions
Abstract
The increasing reliance on Artificial Intelligence (AI) in credit scoring has brought notable improvements to loan approval processes, particularly in accuracy, efficiency, and risk evaluation. Yet the opacity of sophisticated AI models raises concerns about transparency, fairness, and regulatory compliance. Because traditional black-box models such as deep learning and ensemble methods are not interpretable, financial institutions find it difficult to justify the credit decisions they produce. This lack of transparency complicates compliance with regulatory standards such as Basel III, fair lending regulations (e.g., the Equal Credit Opportunity Act), and the GDPR, and heightens the risk of biased or unjust lending practices. This study examines the role of Explainable AI (XAI) in credit scoring to address these issues, concentrating on methods that improve model interpretability while maintaining predictive performance.
This study proposes an XAI-driven credit scoring framework that combines SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to make AI-based loan decision-making more transparent. Machine learning models, including random forests, gradient boosting, and neural networks, are evaluated for accuracy and explainability on real-world credit risk datasets. The results show that while AI improves risk prediction, post-hoc interpretability techniques effectively identify the key factors driving loan approvals, thereby promoting trust and regulatory compliance. The findings highlight how XAI can reduce bias, enhance fairness, and foster transparency in credit decision-making, paving the way for more ethical and accountable AI-based financial systems.
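To make the framework concrete, the sketch below (not the authors' implementation) trains a gradient-boosting credit model on synthetic data, then applies SHAP for global feature attributions and LIME for a single-applicant explanation. The feature names and dataset are hypothetical placeholders standing in for a real credit-risk dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical applicant features standing in for a real credit-risk dataset.
feature_names = ["income", "debt_to_income", "credit_history_len",
                 "num_open_accounts", "recent_delinquencies"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)  # 1 = default
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global view: SHAP attributes each prediction to individual features;
# averaging absolute values ranks features by overall influence.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {score:.3f}")

# Local view: LIME explains one loan decision at a time by fitting a
# simple surrogate model in the neighborhood of a single applicant.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["repaid", "default"],
                                      mode="classification")
explanation = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                              num_features=5)
print(explanation.as_list())  # per-feature contributions for this applicant
```

In a setup like this, the SHAP ranking answers the global question of which factors drive the model overall, while the LIME output supports per-applicant justifications of the kind regulators expect for individual loan decisions.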