A Comparative Study of Deep Learning Models for Human Activity Recognition

Authors

  • Mohammed Elnazer Abazar Elmamoon Department of Computer Science & Engineering, Vignan’s Foundation for Science, Technology & Research (Deemed to be University), Vadlamudi, Guntur, A.P., India https://orcid.org/0009-0009-7678-217X
  • Ahmad Abubakar Mustapha School of Computer Science and Engineering, VIT-AP University, Andhra Pradesh, India https://orcid.org/0000-0001-6872-4097

DOI:

https://doi.org/10.37256/ccds.6120256264

Keywords:

Human Activity Recognition (HAR), CNN, pretrained models, surveillance systems, performance evaluation

Abstract

Human Activity Recognition (HAR) is essential for real-time surveillance and security systems, enabling the detection and classification of human actions. This study evaluates five pre-trained Convolutional Neural Network (CNN) models (EfficientNetB7, DenseNet121, InceptionV3, MobileNetV2, and VGG19) on a dataset comprising 15 human activity classes. The models were compared on accuracy, precision, recall, F1-score, loss, and Receiver Operating Characteristic Area Under the Curve (ROC AUC). InceptionV3 achieved the highest performance, with a validation accuracy of 80.16%, precision of 80.20%, and ROC AUC of 0.81, demonstrating its effectiveness for HAR tasks. EfficientNetB7 and DenseNet121 also performed well, with ROC AUC scores of 0.74 and 0.80, respectively. VGG19, however, showed lower metrics across the board, highlighting its limitations for complex HAR applications. This work examines the trade-offs between model performance and efficiency, offering guidance for selecting suitable architectures for real-time surveillance. The findings contribute to the optimization of HAR systems for applications in smart cities, healthcare, and security.
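The abstract compares models on accuracy, precision, recall, and F1-score over a multi-class activity dataset. As an illustrative sketch (not the authors' actual evaluation pipeline, which is not shown here), the macro-averaged versions of these metrics can be computed from true and predicted labels as follows; the function name and the example activity labels are hypothetical:

```python
def macro_metrics(y_true, y_pred, classes):
    """Accuracy plus macro-averaged precision, recall, and F1
    over a multi-class label set (one-vs-rest per class)."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in classes:
        # One-vs-rest counts for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n


# Hypothetical labels for 3 of the 15 activity classes:
acc, prec, rec, f1 = macro_metrics(
    y_true=["walk", "run", "walk", "sit"],
    y_pred=["walk", "walk", "walk", "sit"],
    classes=["walk", "run", "sit"],
)
```

ROC AUC, also reported in the study, additionally requires the models' per-class probability scores rather than hard labels, so it is omitted from this label-only sketch.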

Published

2025-01-23

How to Cite

Mohammed Elnazer Abazar Elmamoon, Ahmad Abubakar Mustapha. A Comparative Study of Deep Learning Models for Human Activity Recognition. Cloud Computing and Data Science [Internet]. 2025 Jan. 23 [cited 2025 Jan. 30];6(1):79-93. Available from: https://ojs.wiserpub.com/index.php/CCDS/article/view/6264