Credit Card Fraud Detection using Explainable AI Methods
Abstract
Explainable AI (XAI) systems help users understand the processes underlying an AI model's decisions. XAI algorithms differ from conventional AI algorithms in that they expose their decision-making processes and can therefore be regarded as more trustworthy. Fraud detection in credit card transactions must be precise because the volume of global transactions is enormous: most of these transactions are legitimate, but an alarming number are fraudulent. Detecting fraudulent transactions enables banks and consumers to save substantial resources that would otherwise be spent on compensation. Tools such as IBM's Watson OpenScale are designed to ensure that AI models are unbiased and transparent. The proposed project relies on XAI methods such as LIME and SHAP to identify fraud in credit card transactions. LIME explains why an AI model made a particular decision and presents that rationale in a simplified manner. SHAP illustrates how transaction features, such as transaction amount or location, affect the model's choice. These XAI-enabled methods improve comprehension of automated fraud detection systems and of why certain transactions fail authentication. Furthermore, the dataset must be balanced using SMOTE, because legitimate and fraudulent transactions are heavily imbalanced. XGBoost performs well on large datasets, which is why we build our predictive model with that algorithm. The project merges XAI with effective fraud detection approaches such as SMOTE and hyperparameter tuning to build a system whose effectiveness and decisions can be easily inspected and evaluated.
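To make the pipeline described above concrete, the sketch below shows one possible implementation; it is not the paper's exact code. It assumes a tabular dataset loaded from a hypothetical file creditcard.csv with a binary Class label (1 = fraud), and combines scikit-learn's train/test split, imbalanced-learn's SMOTE for oversampling, an XGBoost classifier, and SHAP and LIME for post-hoc explanations. The hyperparameter values shown are illustrative placeholders for tuning.

# Hedged sketch: fraud-detection pipeline with SMOTE, XGBoost, SHAP, and LIME.
# Assumes a CSV with numeric features and a binary "Class" label (1 = fraud);
# the file name and hyperparameter values are illustrative, not from the paper.
import pandas as pd
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Load data (assumed schema: feature columns plus a "Class" label).
df = pd.read_csv("creditcard.csv")
X, y = df.drop(columns=["Class"]), df["Class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Balance the training set with SMOTE (fraud cases are the minority class).
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Train an XGBoost classifier; hyperparameters here are placeholders for tuning.
model = XGBClassifier(
    n_estimators=300, max_depth=6, learning_rate=0.1, eval_metric="logloss"
)
model.fit(X_res, y_res)

# Global explanation with SHAP: how each feature pushes predictions toward fraud.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# Local explanation with LIME: why one specific transaction was flagged.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["legitimate", "fraud"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5
)
print(lime_exp.as_list())

In this sketch SHAP provides a global view of feature influence across the test set, while LIME explains individual transactions, mirroring the abstract's distinction between feature-level and decision-level explanations.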