Development of an Effective Bio-Inspired Multimodal Deep Learning Approach for Identifying Gait Components From Real-Time Video Data.

Ashish Kumar Misal, Abha Chaubey, Siddharth Chaubey

Abstract

This work presents a bio-inspired multimodal deep learning technique for identifying gait components from live video recordings. To capture the temporal relationships between gait components, the proposed method integrates two widely used recurrent neural network (RNN) architectures: the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM). The network's parameters are tuned with the Elephant Herding Optimizer, a metaheuristic optimization method, to improve classification accuracy. Although LSTM and GRU models have been studied extensively for time-series modeling, their combination has rarely been examined or deployed in real-time applications. The hybrid approach introduced here capitalizes on the strengths of each model while compensating for their respective weaknesses: the GRU is effective at modeling short-term interactions, whereas the LSTM is better at capturing long-term dependencies. The method has many practical applications, particularly in posture and gait analysis, where precise identification of gait components is essential for estimating gait-related parameters. It has been evaluated in real-time settings on several multi-camera and multimodal datasets to quantify its accuracy, precision, and recall. On the analyzed datasets it outperforms current methods, attaining an accuracy of 98.5%, a precision of 97.4%, and a recall of 98.3%.
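The LSTM-GRU fusion described above can be illustrated with a minimal, self-contained sketch. This is an assumption-laden simplification, not the paper's implementation: it uses scalar single-unit cells in plain Python, and the final concatenation of the two hidden states into a joint feature is one plausible fusion strategy.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Scalar single-unit cells; weight names (wf, uf, ...) are illustrative.

def lstm_step(x, h, c, w):
    """One LSTM step: gates decide what to forget, add, and expose."""
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate cell
    c = f * c + i * g                                   # new cell state
    h = o * math.tanh(c)                                # new hidden state
    return h, c

def gru_step(x, h, w):
    """One GRU step: update/reset gates blend old and candidate state."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])    # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])    # reset gate
    n = math.tanh(w["wn"] * x + w["un"] * (r * h) + w["bn"])  # candidate
    return (1.0 - z) * n + z * h                        # new hidden state

def hybrid_features(sequence, w_lstm, w_gru):
    """Run both cells over the sequence and concatenate final states.

    The GRU branch tracks short-term interactions, the LSTM branch
    long-term dependencies; the joint feature would feed a gait-phase
    classifier (hypothetical fusion, for illustration only).
    """
    h_l, c_l, h_g = 0.0, 0.0, 0.0
    for x in sequence:
        h_l, c_l = lstm_step(x, h_l, c_l, w_lstm)
        h_g = gru_step(x, h_g, w_gru)
    return [h_l, h_g]
```

In practice both branches would be full vector-valued recurrent layers trained jointly; the sketch only shows how the two state-update rules differ and how their outputs can be fused.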
In summary, the proposed methodology is more efficient and accurate than current methods for identifying gait components from real-time video data. Its practical applicability makes it a valuable tool for assessing posture and gait, helping medical and other professionals detect and track gait components.
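The Elephant Herding Optimizer used for parameter tuning can be sketched from its standard clan-based operators. The hyperparameters `alpha`, `beta`, the bounds, and the single-clan simplification below are assumptions; the paper's exact variant is not specified in the abstract.

```python
import random

def eho_step(clan, fitness, alpha=0.5, beta=0.1, lo=-1.0, hi=1.0):
    """One EHO generation on a single clan of parameter vectors.

    Standard operators (minimisation): elephants move toward the
    matriarch (clan best), the matriarch moves toward the clan centre,
    and the worst elephant is re-initialised randomly (separation).
    """
    dim = len(clan[0])
    scores = [fitness(e) for e in clan]
    best = scores.index(min(scores))     # matriarch
    worst = scores.index(max(scores))
    centre = [sum(e[d] for e in clan) / len(clan) for d in range(dim)]

    new_clan = []
    for j, e in enumerate(clan):
        if j == best:
            # Matriarch update: move toward the clan centre.
            new_clan.append([beta * centre[d] for d in range(dim)])
        else:
            # Clan update: step toward the matriarch with random scale.
            r = random.random()
            new_clan.append([e[d] + alpha * (clan[best][d] - e[d]) * r
                             for d in range(dim)])
    # Separating operator: worst elephant is replaced by a random one.
    new_clan[worst] = [random.uniform(lo, hi) for _ in range(dim)]
    return new_clan
```

In the paper's setting the parameter vectors would encode trainable network or training hyperparameters, and `fitness` would be the classification error of the hybrid model on a validation set.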
