Comparative Study on Performance of Patient Classification Using Heart Sound and Deep Learning
Abstract
Speech processing is emerging as an important application area of digital signal processing. In this paper, we present a comparative performance evaluation of patient classification based on Mel-Frequency Cepstral Coefficients (MFCC) using deep learning techniques from the field of speech recognition. The study uses heart sound data recorded from patients and from healthy subjects. MFCC features and heart sound features are each extracted by converting them into images. We first extract only MFCC features and compare the resulting performance. In addition, we apply a wavelet transform to address the noise in the data, convert the extracted heart sound information into images using the Gramian Angular Field (GAF) and Phase Space Reconstruction (PSR) techniques, train deep learning models on these representations, and conduct a comparative evaluation. The classification accuracies obtained by training on the MFCC, GAF, and PSR representations were 89.31%, 97.53%, and 100%, respectively. We confirmed that patients can be distinguished with good performance using MFCC features.
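As a rough illustration of the pipeline described in the abstract, the sketch below shows how the three image representations could be produced from a heart sound segment: MFCC features, a GAF image, and a PSR (time-delay embedding) image, with simple wavelet denoising beforehand. The libraries (librosa, pywt, pyts), the sampling rate, and all thresholds and sizes are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: library choices and all parameter values are assumptions.
import numpy as np
import librosa
import pywt
from pyts.image import GramianAngularField

def denoise_wavelet(signal, wavelet="db4", level=4):
    """Simple wavelet denoising by soft-thresholding the detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def mfcc_image(signal, sr=2000, n_mfcc=20):
    """MFCC feature matrix treated as a 2-D image (coefficients x frames)."""
    return librosa.feature.mfcc(y=signal.astype(float), sr=sr, n_mfcc=n_mfcc)

def gaf_image(signal, size=128):
    """Gramian Angular Field image of a 1-D heart sound segment."""
    gaf = GramianAngularField(image_size=size, method="summation")
    return gaf.fit_transform(signal.reshape(1, -1))[0]

def psr_image(signal, delay=10, dim=2, bins=128):
    """Phase space reconstruction: delay-embed the signal in 2-D, then
    rasterise the trajectory into a histogram image for a CNN."""
    n = len(signal) - (dim - 1) * delay
    x, y = signal[:n], signal[delay:delay + n]
    img, _, _ = np.histogram2d(x, y, bins=bins)
    return img
```

Each function returns a 2-D array that could then be fed to an image-based deep learning classifier; the specific network architecture and training setup used in the paper are not reproduced here.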