Transforming Heart Disease Detection with Feature Fusion and Deep Learning
Abstract
Heart disease is the leading cause of mortality worldwide, so early and accurate detection is crucial. This work investigates how acoustic features combined with machine learning and deep learning can classify heart sounds and improve diagnostic accuracy. Three models are compared: Random Forest, a CNN trained on Mel-frequency cepstral coefficients (MFCCs), and a CNN trained on fused features. The Random Forest model achieved moderate accuracy (73%) but struggled with intricate patterns, as reflected in its F1-score (72%). The CNN (MFCC) model performed considerably better, reaching 83% accuracy and an 81% F1-score by capturing both temporal and spectral aspects of the signal. The CNN (Feature Fusion) model, which aggregates statistical, spectral, and MFCC features, outperformed the others with the highest accuracy (89%) and F1-score (88%). Feature fusion enabled the model to perform well across a broad range of classes, including noisy or under-represented ones. Comparison of accuracy, loss curves, and confusion matrices showed that the CNN (Feature Fusion) model also had the lowest training and validation losses, supporting its stability and efficiency. Despite these results, the study revealed issues such as class imbalance and misclassifications in certain groups. Future work will address these issues by incorporating additional datasets, refining feature selection, and exploring more complex architectures. Overall, this work indicates that feature-fusion-based deep learning models can reliably detect cardiac problems and thereby improve clinical outcomes.
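The feature-fusion idea described above can be illustrated with a minimal sketch: time-domain statistics and FFT-based spectral descriptors are computed from the raw heart-sound signal and concatenated with an MFCC vector into a single input for the classifier. The specific feature choices, sampling rate, and the 13-coefficient MFCC placeholder below are illustrative assumptions, not the paper's exact pipeline (MFCCs would normally come from a library such as librosa).

```python
import numpy as np

def statistical_features(signal):
    # Illustrative time-domain statistics: mean, std, peak amplitude, energy.
    return np.array([signal.mean(), signal.std(),
                     np.abs(signal).max(), np.mean(signal ** 2)])

def spectral_features(signal, sr=2000):
    # Magnitude spectrum via FFT, then spectral centroid and bandwidth.
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    bandwidth = np.sqrt(np.sum(((freqs - centroid) ** 2) * mag) / np.sum(mag))
    return np.array([centroid, bandwidth])

def fuse_features(signal, mfcc_vector):
    # Feature fusion: concatenate statistical, spectral, and MFCC features
    # into one vector that a CNN (or any classifier) can consume.
    return np.concatenate([statistical_features(signal),
                           spectral_features(signal),
                           mfcc_vector])

rng = np.random.default_rng(0)
signal = rng.standard_normal(2000)  # stand-in for a heart-sound recording
mfcc = rng.standard_normal(13)      # stand-in for 13 MFCCs (e.g. from librosa)
fused = fuse_features(signal, mfcc)
print(fused.shape)  # (19,) = 4 statistical + 2 spectral + 13 MFCC
```

In practice the fused vector (or a fused 2-D feature map) is fed to the CNN, which is what lets the model draw on complementary statistical, spectral, and cepstral views of the same recording.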