Attention and Deep Learning Framework for Wearable Sensor-based Human Activity Recognition
Abstract
Human Activity Recognition (HAR) using wearable sensors has emerged as a significant research area, playing a key role in fitness tracking, ambient assisted living, and smart environments. Traditional machine learning (ML) methods with handcrafted features often show limited capacity to learn complex patterns and to generalize across datasets. Deep learning (DL) techniques overcome this issue by automating feature extraction and capturing sequential patterns, but DL-based HAR methods often suffer from high computational complexity and a risk of overfitting as networks grow deeper. To address these limitations, this paper proposes a novel, lightweight attention-based deep learning framework tailored for wearable sensor-based HAR (WHAR). The proposed method processes raw accelerometer readings through a convolutional autoencoder (ConvAE) architecture with an average pooling layer as the bottleneck for initial feature extraction. A self-attention layer is added to highlight the most informative features, followed by two stacked long short-term memory (LSTM) layers that extract deeper feature representations and long-term dependencies. These features are then passed through fully connected layers to classify activities. A scaling-based data augmentation technique is employed to address the imbalanced nature of the datasets. The proposed method attained accuracies of 97.21%, 95.54%, and 99.84% on three publicly available datasets, namely HAR70+, HARTH, and MHealth, respectively. The experimental results demonstrate that the proposed framework achieves superior performance on wearable sensor-based HAR by introducing an attention mechanism and data augmentation.
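To make the described pipeline concrete, the sketch below outlines one possible Keras realisation of the ConvAE encoder with an average-pooling bottleneck, a self-attention layer, two stacked LSTM layers, and a fully connected classification head, together with the scaling-based augmentation. It is a minimal illustration, not the authors' released code: the window length, channel count, number of classes, filter sizes, and unit counts are assumed hyperparameters, the decoder branch of the autoencoder is omitted, and the built-in dot-product `Attention` layer stands in for the paper's self-attention mechanism.

```python
# Minimal sketch of the described architecture (assumed hyperparameters, not the authors' code).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN, N_CHANNELS, N_CLASSES = 128, 3, 7  # assumed window size, axes, and activity classes

def scaling_augment(x, sigma=0.1):
    """Scaling-based augmentation: multiply each window's channels by random factors near 1."""
    factors = np.random.normal(1.0, sigma, size=(x.shape[0], 1, x.shape[2]))
    return x * factors

inputs = layers.Input(shape=(WINDOW_LEN, N_CHANNELS))

# Convolutional encoder with an average-pooling bottleneck (encoder half of the ConvAE)
x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inputs)
x = layers.AveragePooling1D(pool_size=2)(x)  # bottleneck layer
x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(x)

# Self-attention over time steps to emphasise informative features
# (dot-product attention with query = value = x as a stand-in)
x = layers.Attention()([x, x])

# Two stacked LSTM layers for deeper representations and long-term dependencies
x = layers.LSTM(128, return_sequences=True)(x)
x = layers.LSTM(64)(x)

# Fully connected classification head
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In this sketch, `scaling_augment` would be applied to windows of the minority activity classes before training to counter class imbalance, while the attention and LSTM blocks operate on the pooled convolutional features exactly in the order the abstract describes.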