Enhancing Explainability in AI Models: A Quantitative Comparison of XAI Techniques for Large Language Models and Healthcare Applications

Vijayasekhar Duvvur

Abstract

The growing adoption of artificial intelligence (AI) in healthcare requires models that are both accurate and explainable, since transparency underpins trust and sound clinical decision-making in medical applications of large language models (LLMs). In this study, we conduct a comparative quantitative evaluation of three widely used explainable AI (XAI) methods (SHAP, LIME, and attention-based mechanisms) for LLMs in healthcare. We assess these techniques in terms of interpretability, computational efficiency, fidelity, and clinical relevance. The findings highlight important trade-offs: SHAP offers fine-grained attributions of model decisions at high computational cost; LIME provides local insights by approximating the black-box model with an interpretable surrogate at moderate cost; and attention-based methods expose clear alignment between inputs and predictions but do not capture the reasoning behind those predictions. This research contributes to the ethical and reliable deployment of AI in healthcare by identifying effective XAI strategies for improving clinical decisions and fostering trust among medical professionals and patients.
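To make the three compared techniques concrete, the sketch below applies each of them to a Hugging Face text classifier. This is a minimal illustration, not the paper's experimental code: the checkpoint is a public sentiment model standing in for a fine-tuned clinical classifier, the example sentence is invented, and the helper `predict_proba` is introduced here solely to give LIME the probability matrix it expects.

```python
# Minimal sketch of the three XAI techniques compared in the paper, applied to
# a Hugging Face text classifier. Assumptions: the checkpoint below is a public
# sentiment model used as a stand-in for a clinical model; swap in your own.
# Requires: torch, transformers, shap, lime.
import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

MODEL = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)

text = "Patient reports chest pain and shortness of breath on exertion."

# --- 1. SHAP: game-theoretic token attributions (high fidelity, high cost) ---
import shap
shap_explainer = shap.Explainer(pipe)     # wraps the pipeline directly
shap_values = shap_explainer([text])      # per-token Shapley value estimates
print("SHAP values for class 0:", shap_values[0].values[:, 0])

# --- 2. LIME: local interpretable surrogate (moderate cost, perturbation-based)
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    """LIME needs an (n_samples, n_classes) probability matrix."""
    with torch.no_grad():
        enc = tokenizer(list(texts), return_tensors="pt",
                        truncation=True, padding=True)
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

lime_explainer = LimeTextExplainer(class_names=["class_0", "class_1"])
lime_exp = lime_explainer.explain_instance(text, predict_proba, num_features=6)
print("LIME weights:", lime_exp.as_list())

# --- 3. Attention: read the model's own attention maps (cheap, but shows
#        alignment with the prediction, not the reasoning behind it) ----------
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, output_attentions=True)
# Average the last layer's heads; the [CLS] row shows which tokens it attends to.
last_layer = out.attentions[-1].mean(dim=1)[0]   # (seq_len, seq_len)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
cls_attention = last_layer[0]                    # attention from [CLS]
for tok, w in sorted(zip(tokens, cls_attention.tolist()), key=lambda p: -p[1])[:6]:
    print(f"{tok:>15s}  {w:.3f}")
```

The relative costs in this sketch mirror the trade-offs reported above: SHAP perturbs many token subsets and is the slowest, LIME fits a local surrogate over a sample of perturbations, and reading attention weights adds essentially no overhead beyond a forward pass.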
