Explainable Deep Learning for Time Series Analysis: Integrating SHAP and LIME in LSTM-Based Models
Abstract
Long Short-Term Memory (LSTM) networks, a class of deep learning models, capture temporal dependencies and are widely used for time series forecasting. However, the decision-making process embedded in these models is often regarded as a black box. To increase the interpretability of LSTM-based time series models, both SHAP and LIME are integrated into them. LIME explains individual predictions locally by fitting simpler, interpretable surrogate models, while SHAP explains predictions across the model by quantifying each feature's contribution to them. Together, the two methods reveal how time lags and external parameters influence the predictions the model yields. A case study on energy consumption forecasting with temperature and humidity shows how LIME and SHAP operate: they bring out the relevance of exogenous variables such as temperature and humidity and, more significantly, the influence of past time steps. This dual-explanation strategy adds credibility to the use of the LSTM model and helps domain experts understand the mechanisms underlying the time series data. Because SHAP and LIME improve the interpretability of time series models without compromising their performance, they are directly relevant to the field of Explainable Deep Learning.
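The sketch below is a minimal illustration of the workflow the abstract describes, not the authors' published code. It assumes a synthetic energy/temperature/humidity series, a 24-step lookback window, a small Keras LSTM, shap.GradientExplainer for per-timestep, per-feature attributions, and a flattened-window wrapper so that lime.lime_tabular.LimeTabularExplainer can build a local surrogate around one forecast; all names and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: SHAP + LIME explanations for a Keras LSTM forecaster.
# Data, window length, and feature names are assumptions, not the paper's setup.
import numpy as np
import shap
import lime.lime_tabular
from tensorflow import keras

np.random.seed(0)

# Synthetic multivariate series: energy consumption driven by temperature and humidity
n, lookback, n_features = 500, 24, 3
t = np.arange(n)
temperature = 20 + 5 * np.sin(2 * np.pi * t / 24) + np.random.normal(0, 0.5, n)
humidity = 60 + 10 * np.cos(2 * np.pi * t / 24) + np.random.normal(0, 1.0, n)
energy = 100 + 2 * temperature - 0.5 * humidity + np.random.normal(0, 1.0, n)
series = np.stack([energy, temperature, humidity], axis=1)

# Sliding windows: predict next-step energy from the previous `lookback` steps
X = np.array([series[i:i + lookback] for i in range(n - lookback)])
y = energy[lookback:]

# Simple LSTM forecaster
model = keras.Sequential([
    keras.layers.Input(shape=(lookback, n_features)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# SHAP: gradient-based attributions over every (time step, feature) cell
background = X[np.random.choice(len(X), 50, replace=False)]
shap_explainer = shap.GradientExplainer(model, background)
shap_values = shap_explainer.shap_values(X[:10])  # per-timestep, per-feature contributions

# LIME: local surrogate around one forecast, using flattened windows
feature_names = [f"{name}_t-{lookback - i}"
                 for i in range(lookback)
                 for name in ("energy", "temperature", "humidity")]

def predict_flat(flat_windows):
    """LIME perturbs 2-D rows; reshape them back to (samples, lookback, features)."""
    return model.predict(flat_windows.reshape(-1, lookback, n_features),
                         verbose=0).ravel()

lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    training_data=X.reshape(len(X), -1),
    feature_names=feature_names,
    mode="regression",
)
explanation = lime_explainer.explain_instance(
    X[0].reshape(-1), predict_flat, num_features=10)
print(explanation.as_list())  # top local drivers, e.g. recent temperature lags
```

In this kind of setup, the SHAP attributions summarize which lags and exogenous features matter across many windows, while the LIME output explains one specific forecast, which mirrors the global/local split described in the abstract.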