Human Motion Pattern Recognition Based on Nano-sensor and Deep Learning

A human motion pattern recognition algorithm based on Nano-sensor and deep learning is studied to recognize human motion patterns in real time and with high accuracy. First, human motion data are collected by a micro electro mechanical system (MEMS), and the noise in the data is filtered out by a smoothing filtering method to obtain high-quality motion data. Second, key time-domain features are extracted from the high-quality motion data. Finally, after the key time-domain features are fused and processed, they are input into a deep long short-term memory (LSTM) neural network to build a deep LSTM human motion pattern recognition model and complete human motion pattern recognition. The results show that the proposed algorithm can recognize various motion patterns with high data-acquisition accuracy: the average recognition accuracy is 94.8%, the average recall reaches 89.7%, the F1 score of the algorithm is high, and the recognition time consumption is short. The algorithm can therefore realize accurate and efficient human motion pattern recognition and provide a guarantee for effective monitoring of the target human's motion health.


Introduction
Human motion pattern recognition is one of the key research issues in the field of computer vision [12, 14]. It is the process of recognizing motion patterns from the real-time motion state data of the target human, combined with effective analysis of the motion data [18, 19]. At present, such recognition technology has been applied in many fields, such as personnel navigation, medical rehabilitation, intelligent health monitoring [17] and human-computer interaction [2]. However, how to accurately and effectively recognize the motion pattern of the target human has become the key to research on this kind of problem [4], and many scholars have performed relevant work in this regard [26]. Aiming at this topic, Liu Wei et al. [15] studied a motion recognition algorithm combining global constraint block matching and a convolutional neural network. It extracts human motion features through the convolutional neural network and completes the matching of the same motion features with global constraint block matching to realize motion recognition [11]. Although this algorithm can realize human motion pattern recognition, its recognition accuracy and timeliness are not ideal. Ali et al. [3] evaluated the accuracy and robustness of combining a convolutional neural network and naive Bayes to correctly recognize a real alarm trigger in the form of a buzzer sound. The results show that pattern recognition can be achieved using either of the two methods, even when part of the motion pattern is derived as a subset of the full motion path. That work verifies the effectiveness of convolutional neural networks and naive Bayes in human activity and motion pattern recognition; however, the algorithm is not time-sensitive. Xue et al. [25] proposed a human hand motion recognition system based on multimodal perceptual information fusion: finger trajectory, contact force and electromyographic signal data were collected synchronously through a multimodal data acquisition platform; a threshold segmentation method was then used to achieve motion segmentation, and the maximum Lyapunov index was used for multimodal signal feature extraction; finally, a detailed nonlinear data analysis was conducted to complete the recognition of complex human hand motions. However, the expression of the relevant data information was still not accurate enough, which affected the recognition effect. Jian et al. [8] established a human activity model based on Cartesian coordinates and normalized the data in the model, then introduced the sliding window technique to establish a mapping map and designed a convolutional neural network for human activity recognition. The algorithm has good operational efficiency; however, the poor data acquisition effect leads to poor recognition accuracy. Wang and Feng [21] proposed a human motion pattern recognition algorithm based on a knowledge graph. The spatial features of human motion are sampled, and a three-dimensional contour feature reconstruction model is established. An adaptive edge feature detection method is used to reconstruct the spatial contour structure of human motion and extract the knowledge map of the moving image, and a multiscale information enhancement method is used to enhance and recognize human motion. The recognition time of this algorithm is low, which ensures timeliness; however, its recognition accuracy is not high.
At present, micro intelligent wearable monitoring devices have attracted more and more public attention and are gradually being popularized and applied in health and sports monitoring [1]. As one of the technologies further upgraded on this micro-device basis, the scientific and rational application of Nano-sensors has attracted even more attention [5]. The micro electro mechanical system (MEMS) Nano-sensor based on nanotechnology is one of the representative cutting-edge technologies: it can manufacture, process and design nano materials, and its emergence has moved micro sensing technology toward Nano-sensor technology [10, 24]. The smoothing filtering method belongs to the spatial-domain filtering and noise removal techniques and is mainly used to enhance low-frequency signals; its main functions include filtering out fuzzy noise and enhancing signal quality [16]. The long short-term memory (LSTM) neural network is a representative deep learning model. Its advantage is its memory performance: it is well suited to processing long sequence data and can realize deep mining of the long-term dependencies in such data. It is widely used in word classification, power prediction, risk prediction, action recognition and other fields [6, 13].
Based on the above analysis, this paper studies a human motion pattern recognition algorithm combining Nano-sensor and deep learning. The main contributions of this paper are as follows: (1) The angular velocity sensor and acceleration sensor in the MEMS Nano-sensor collect human motion data to ensure the quality and speed of data collection. This addresses the problem that the slow data acquisition speed of traditional algorithms leads to a decline in recognition efficiency, and lays a solid foundation for subsequent human motion pattern recognition. (2) The key time-domain features are fused, the feature fusion results are input into the deep LSTM human motion pattern recognition model, and the excellent performance of the deep LSTM ensures the accuracy of the recognition results. (3) The results on different data sets show that the proposed algorithm can effectively recognize human motion patterns and achieves good application results.

General Architecture of Human Motion Pattern Recognition Algorithm
In this paper, we propose a human motion pattern recognition algorithm based on Nano-sensor and deep learning to realize the effective recognition of different human motion patterns. The overall architecture of the algorithm is shown in Figure 1.
Overall architecture of human motion pattern recognition algorithm

The overall architecture of the human motion pattern recognition algorithm mainly includes three parts: Nano-sensor human motion data collection, human motion data feature extraction, and deep-learning human motion pattern recognition. The angular velocity sensor and acceleration sensor in the MEMS Nano-sensor are used to collect human motion data, which lays a solid foundation for subsequent feature extraction, fusion and human motion pattern recognition. On this basis, the mean and skewness time-domain features of the angular velocity sensor data are extracted, and the variance, interquartile range and peak value of the acceleration sensor data are obtained, to realize the feature extraction of the human motion data. The fused features are input into the deep LSTM neural network in deep learning, and the human motion pattern recognition results are obtained. This completes the design of the human motion pattern recognition algorithm based on Nano-sensor and deep learning. The micro Nano-sensor framework is shown in Figure 2.

Analyzing the micro Nano-sensor framework in Figure 2 shows that the framework is composed of 9 micro Nano-sensors, which capture the angular velocity and acceleration data of human motion. The framework is divided into a common anode and a common cathode; this wiring pattern reduces the number of wires, increases the monitoring directions of the monitored points, improves the survival rate of the sensors, and lays a solid foundation for the operation of the 9 micro Nano-sensors, thus ensuring the quality and efficiency of data acquisition. In this paper, a MEMS Nano-sensor based on nanotechnology is selected to collect the motion data of the target human. The MEMS Nano-sensor combines angular velocity and acceleration sensors. In the acquisition process, the MEMS Nano-sensor is fixed at the back waist of the target human, and the X axis of the MEMS Nano-sensor is aligned with the motion direction of the target human. The Y axis is parallel to the ground, and the Z axis is perpendicular to the ground, consistent with the right-hand Cartesian coordinate system.

After the motion data of the target human are collected by the MEMS Nano-sensor, the noise in the initial motion data needs to be eliminated by filtering, so as to improve the quality of the motion data and facilitate the subsequent extraction of the features of the human motion data. Because the speed of human motion is relatively slow, most of the human motion data collected by the MEMS Nano-sensor are low-frequency signals. It is inevitable that many redundant noise signals appear in the collected human motion data due to jitter, transmission noise and circuit interference in the collection process [9]. Let the measured value of the human motion data of the MEMS Nano-sensor be represented by C_measure and the actual motion value by C. Then

C_measure = C + C′, (1)

where C′ represents the measurement error of the MEMS Nano-sensor. Here, the smoothing filtering method is selected to filter out the measurement error, so that the measured value of the MEMS Nano-sensor is close to the actual value. By introducing a threshold, the smoothing filter parameters are adjusted in real time, which suits the timeliness requirements of dynamic data signal processing in human motion. Let the current human motion value x(m) collected by the MEMS Nano-sensor be weighted with the previously collected filtered output value y(m − 1); the output value obtained after weighting is the new output value y(m) of the filtered MEMS Nano-sensor:

y(m) = φ x(m) + (1 − φ) y(m − 1), (2)

where φ is the filter coefficient and 0 < φ < 1. Equation (2) shows that, in the new output value y(m) of the filtered MEMS Nano-sensor, the proportion of x(m) is not high; its key function is to correct the value of y(m) so that the motion data have high inertia, which simulates a low-pass filtering function. On this basis, the conditional threshold ΔC for judging the motion state is integrated to adjust the filter coefficient φ. The adjustment Equation is the following:

φ = ε, if |Δy| ≤ ΔC;  φ = min(1, ε + |Δy| / ΔC), if |Δy| > ΔC, (3)

where ε indicates the default filtering parameter and Δy represents the difference between y(m) and y(m − 1).
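As a concrete sketch, the weighted smoothing filter of Equation (2) with a threshold-based adjustment of the filter coefficient can be implemented as follows. The function name, the default values of `eps` and `delta_c`, and the exact form of the coefficient adjustment are our own illustrative choices, not taken from the paper:

```python
def smooth_filter(samples, eps=0.1, delta_c=0.5):
    """First-order low-pass filter (Equation (2)) whose coefficient is raised
    when the signal jumps by more than the motion-state threshold delta_c."""
    if not samples:
        return []
    out = [samples[0]]                # seed the filter with the first reading
    for x in samples[1:]:
        # Approximate |dy| with the input/previous-output difference, since
        # y(m) is not known before the coefficient is chosen.
        dy = abs(x - out[-1])
        # Coefficient adjustment: default eps, raised toward 1 for large jumps
        # so the filter tracks genuine motion-state changes quickly.
        phi = eps if dy <= delta_c else min(1.0, eps + dy / delta_c)
        # Equation (2): weighted blend of current sample and previous output
        out.append(phi * x + (1.0 - phi) * out[-1])
    return out
```

With a small `eps`, slow jitter is heavily smoothed, while a jump beyond `delta_c` drives `phi` toward 1 so the output follows the input almost immediately.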

Feature Extraction of Human Motion Data from the MEMS Nano-sensor

Before recognizing human motion patterns, it is necessary to extract various feature quantities from the human motion data collected above, so as to lay the foundation for later motion pattern recognition. The effect of feature extraction affects the accuracy of subsequent motion pattern recognition. Here, several time-domain features are selected as extraction targets; they are characterized by strong real-time performance, short extraction time and a simple extraction process, which effectively ensures the feature extraction effect. In this paper, we focus on two aspects: feature extraction of the angular velocity sensor data and of the acceleration sensor data of the MEMS Nano-sensor.

Time Domain Feature Extraction of Angular Velocity Sensor Data

The time-domain features extracted from the angular velocity sensor data are the mean value and the skewness. The detailed process is as follows:
(1) Mean value: the mean value can describe the fluctuation intensity of the motion data; it is convenient to compute and uses all the characteristics of the motion data. Its Equation is the following:

ȳ = (1/m) Σ_{i=1}^{m} y_i, (4)

where y_1, y_2, …, y_m represents a sample of the angular velocity sensor data y and m represents the total data size of the sample.
(2) Skewness: skewness is the trade-off between the skewness degree and the direction of the motion data [20]. The calculation Equation of the skewness b_s is the following:

b_s = (1/m) Σ_{i=1}^{m} ((y_i − ȳ) / σ)^3, (5)

where y_i is the i-th measured value in the sample, ȳ refers to the measured average value of the m data samples, and σ represents the standard deviation of the sample.
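A minimal sketch of the two angular-velocity features in Equations (4) and (5), in plain Python (the function names are ours; the paper does not specify an implementation):

```python
import math

def mean(sample):
    """Equation (4): arithmetic mean of the sensor sample y_1..y_m."""
    return sum(sample) / len(sample)

def skewness(sample):
    """Equation (5): sample skewness b_s, the average cubed z-score."""
    m = len(sample)
    y_bar = mean(sample)
    # population standard deviation of the sample
    sigma = math.sqrt(sum((y - y_bar) ** 2 for y in sample) / m)
    if sigma == 0.0:
        return 0.0                      # a constant signal has no skew
    return sum(((y - y_bar) / sigma) ** 3 for y in sample) / m
```

A symmetric window gives a skewness near zero, while a window dominated by a few large positive excursions gives a positive value, which is what makes the feature useful for separating directionally asymmetric motions.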
Time Domain Feature Extraction of Acceleration Sensor Data

The time-domain features extracted from the acceleration sensor data are the variance, the interquartile range and the peak value, as follows:
(1) Variance: the variance is the degree of deviation of the data from the average value [22] and represents the range of action with which the target human implements the motion behavior mode; the higher the value, the larger the action range. The resultant acceleration is obtained by the scalar sum calculation of the triaxial acceleration, so the actual wearing mode of the sensor cannot affect this value. At the same time, compared with the uniaxial acceleration, this value is more stable. Therefore, the variance of the resultant acceleration is selected as one of the time-domain features of the acceleration sensor data, and it is used to identify the running and standing modes in the motion of the target human. The calculation method of the resultant acceleration ã is

ã = sqrt(a_X² + a_Y² + a_Z²), (6)

where a_X, a_Y and a_Z represent the acceleration along the X, Y and Z axes of the acceleration sensor, respectively. Let a sample of the acceleration sensor data y′ be y′_1, y′_2, …, y′_m. The Equation of the variance δ is

δ = (1/m) Σ_{i=1}^{m} (y′_i − ȳ′)², (7)

where y′_i represents the i-th measured value in the acceleration sensor data sample and ȳ′ represents the average of the measured values of the m data samples.
(2) Interquartile range: this feature is selected in this paper to identify the motion patterns of the target human. Its overlapping part between patterns is small, so it can effectively distinguish the characteristics of going upstairs, going downstairs and walking among the human motion modes. Sort the X axis data a_X of the acceleration sensor in ascending order, and subtract the first quartile Q_1 from the third quartile Q_3; the result is the interquartile range Q, as shown in Equation (8):

Q = Q_3 − Q_1. (8)

(3) Peak value: the peak value of the motion data refers to the change intensity of the motion data signal in a specific period of time. The higher the value, the greater the motion amplitude of the target human, and vice versa [7]. Here, the peak feature is used for fall pattern recognition in the motion of the target human, and the motion of the target human in all directions can be presented by the data peaks of the three axes of the acceleration sensor. When the peak value of the X axis data is higher than that of the conventional motion mode data, the target human may have fallen forward. At the same time, combined with the peak value fluctuation of the three-axis data of the acceleration sensor, the actual fall mode of the target human is identified.

When recognizing the motion patterns of going downstairs, going upstairs and walking, which are difficult to distinguish, it is necessary not only to use the interquartile-range characteristic parameter of the acceleration sensor data, but also to combine the variance parameter with the skewness and mean value characteristic parameters of the angular velocity sensor data, and to synthesize the recognition results of all the extracted characteristic parameters to complete the final recognition of the motion pattern of the target human.
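The three acceleration features, together with the resultant acceleration of Equation (6), can be sketched as follows. The nearest-rank quartile estimator in `interquartile_range` is an illustrative choice, since the paper does not fix one:

```python
import math

def resultant_acceleration(ax, ay, az):
    """Equation (6): magnitude of the triaxial acceleration, independent
    of how the sensor happens to be worn."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def variance(sample):
    """Equation (7): mean squared deviation of resultant-acceleration samples."""
    y_bar = sum(sample) / len(sample)
    return sum((y - y_bar) ** 2 for y in sample) / len(sample)

def interquartile_range(sample):
    """Equation (8): Q = Q3 - Q1 on the sorted X-axis data, using simple
    nearest-rank quartiles."""
    s = sorted(sample)
    m = len(s)
    q1 = s[m // 4]
    q3 = s[(3 * m) // 4]
    return q3 - q1

def peak(sample):
    """Peak value: the largest absolute excursion in the window."""
    return max(abs(y) for y in sample)
```

In use, each function would be applied to a sliding window of sensor readings; a near-zero variance flags standing, a large variance flags running, and an unusually large X-axis peak flags a possible forward fall.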

Human Motion Pattern Recognition Based on Deep Learning
After the key time-domain features are fused and processed, they are input into the deep LSTM neural network to build a deep LSTM human motion pattern recognition model and complete human motion pattern recognition. The LSTM neural network is used to build the human motion pattern recognition model based on deep learning. After the extracted time-domain features of all the MEMS Nano-sensor human motion data are fused, the fused features are input into the neural network to obtain the output of the human motion pattern recognition results and complete the recognition of the target human's motion pattern. Each neuron of the LSTM is composed of three gates and a memory storage part [23]. The three gates are the input gate, the forgetting gate and the output gate. The function of the forgetting gate is to control the forgetting degree of the previous unit state, while the functions of receiving, adjusting and outputting parameters are realized by the input gate and the output gate. The function of the memory is to store and record the condition of the neurons. The constructed deep LSTM human motion pattern recognition model is shown in Figure 3.
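To make the gating concrete, here is a minimal single-cell LSTM forward pass in plain Python with scalar weights. This is an illustrative toy, not the paper's trained deep LSTM (which stacks fully connected, BN, LSTM and Dropout layers); all names and weight values are our own:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM time step; `w` maps gate name -> (w_x, w_h, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forgetting gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate memory
    c = f * c_prev + i * g             # memory cell carries the long-term state
    h = o * math.tanh(c)               # hidden output passed onward
    return h, c

def run_sequence(xs, w):
    """Feed a fused-feature sequence through the cell; return the final hidden value."""
    h, c = 0.0, 0.0
    for x in xs:
        h, c = lstm_step(x, h, c, w)
    return h
```

The forgetting gate `f` scales how much of the previous memory `c_prev` survives, while `i` and `o` control what enters the memory and what is emitted, which is exactly the three-gate structure described above.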
According to Figure 3, after the time-domain features extracted from the angular velocity sensor and the acceleration sensor are fused, they are input into the constructed LSTM, and the human motion pattern recognition result is output after training. The deep LSTM model is composed of a fully connected layer, a BN layer, an LSTM layer and a Dropout layer. In the LSTM, the fused features are transmitted in two directions through the hidden-layer neurons, namely to the output layer and to the hidden layer of subsequent time steps, where the operation continues. With this recursive transmission structure, the depth reached by the LSTM is greater. However, because such a recurrent neural network is only suitable for processing short sequence data, it is prone to problems such as overfitting and gradient vanishing. Therefore, the Dropout layer and BN layer are introduced into the LSTM to obtain the deep LSTM network structure, which effectively alleviates the gradient-vanishing and overfitting problems; in addition, memory units are integrated into the hidden-layer neurons of the LSTM to effectively control the memory data in the time series and further prevent gradient explosion and vanishing.

Figure 3 Human motion pattern recognition model based on deep LSTM

The function from the input layer to the hidden layer in the deep LSTM can be expressed as

$g(t) = \eta(W X(t) + D + e_{t-1})$,  (9)
where $X(t)$ represents the input layer of the deep LSTM, $\eta$ indicates the activation function (the tanh function is selected here), $D$ and $W$ represent the offset matrix and the weight matrix between the input layer and the hidden layer, respectively, $g(t)$ is the output matrix of the hidden layer, and $e_{t-1}$ represents the memory up to the previous moment. The function from the hidden layer to the output layer in the deep LSTM is

$Y(t) = \eta(W' p(t) + D')$,  (10)
where $Y(t)$ represents the output matrix of the output layer, $D'$ represents the offset matrix connecting the hidden layer to the output layer, and $p(t)$ and $W'$ represent the input matrix and the weight matrix between them, respectively.

The forgetting gate vector $l_t$ of the memory unit integrated into the hidden-layer neurons of the deep LSTM is computed as

$l_t = \eta(W_{Xl} X_t + W_{Yl} Y_{t-1} + d_l)$,  (11)

where $d_l$ represents the forgetting gate offset vector, $X_t$ and $Y_t$ represent the input signal and the output signal, respectively, and $Y_{t-1}$ indicates the previous output signal. The expression of the $G_t$ vector of the memory unit is

$G_t = \eta(W_{XG} X_t + W_{YG} Y_{t-1} + d_G)$,  (12)

where $W_{XG}$ and $W_{YG}$ represent the weight matrices connecting $G_t$ with the input and output signals, respectively, and $d_G$ represents the offset vector of $G_t$. The expression of the input gate vector $j_t$ of the memory unit is

$j_t = \eta(W_{Xj} X_t + W_{Yj} Y_{t-1} + W_{Gj} G_t + d_j)$,  (13)

where $d_j$ represents the offset vector of $j_t$, $W_{Gj}$ represents the weight matrix connecting $j_t$ and $G_t$, and $W_{Xj}$ and $W_{Yj}$ represent the weight matrices of $j_t$ connected with the input and output signals, respectively. The expression of the output gate vector $h_t$ of the memory unit is

$h_t = \eta(W_{Xh} X_t + W_{Yh} Y_{t-1} + W_{Gh} G_t + d_h)$,  (14)

where $d_h$ represents the offset vector of $h_t$, $W_{Gh}$ represents the weight matrix connecting $h_t$ and $G_t$, and $W_{Xh}$ and $W_{Yh}$ represent the weight matrices of $h_t$ connected with the input and output signals, respectively. The expression of the output signal $Y_t$ is

$Y_t = h_t \odot \tanh(l_t \odot e_{t-1} + j_t \odot G_t)$,  (15)

where $\odot$ denotes element-wise multiplication and $l_t \odot e_{t-1} + j_t \odot G_t$ is the memory kept up to time $t$.
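The gate and memory-unit updates described above can be sketched as a single update step. The NumPy code below is an illustrative reconstruction, not the authors' implementation: the gate activation is taken as the logistic sigmoid, the parameter names mirror the symbols in the text, and all weights are set to zero purely for a deterministic demonstration.

```python
import numpy as np

def lstm_step(x_t, y_prev, e_prev, p):
    """One deep-LSTM memory-unit update following the gate equations above."""
    eta = lambda a: 1.0 / (1.0 + np.exp(-a))  # gate activation (assumed sigmoid)
    # forgetting gate, memory candidate, input gate, output gate
    l_t = eta(p["W_Xl"] @ x_t + p["W_Yl"] @ y_prev + p["d_l"])
    g_t = np.tanh(p["W_XG"] @ x_t + p["W_YG"] @ y_prev + p["d_G"])
    j_t = eta(p["W_Xj"] @ x_t + p["W_Yj"] @ y_prev + p["W_Gj"] @ g_t + p["d_j"])
    h_t = eta(p["W_Xh"] @ x_t + p["W_Yh"] @ y_prev + p["W_Gh"] @ g_t + p["d_h"])
    # memory kept up to time t: forget part of the old memory, admit the new
    e_t = l_t * e_prev + j_t * g_t
    # output signal
    y_t = h_t * np.tanh(e_t)
    return y_t, e_t

n_in, n_h = 2, 3
p = {k: np.zeros((n_h, n_in)) if k.startswith("W_X")
     else np.zeros((n_h, n_h)) if k.startswith("W_")
     else np.zeros(n_h)
     for k in ["W_Xl", "W_Yl", "d_l", "W_XG", "W_YG", "d_G",
               "W_Xj", "W_Yj", "W_Gj", "d_j", "W_Xh", "W_Yh", "W_Gh", "d_h"]}
y1, e1 = lstm_step(np.array([0.1, -0.2]), np.zeros(n_h), np.ones(n_h), p)
```

With all-zero parameters every gate opens halfway, so the memory simply decays to half of its previous value; with trained weights the same step is applied at every time index of the fused feature sequence.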

Data Sets
HiEve data set: it contains a large number of human poses (>1M), the largest number of complex-event action labels (>56k), and the largest number of long-duration tracks (average track length >480). It is used for data collection in challenging scenes with crowded and complex events (such as dining, earthquake escape, getting off the subway, and collisions), and it serves well in fields such as multi-target tracking, pose estimation and tracking, and motion recognition. Weizmann data set: this data set includes a total of 90 videos, in which 9 people perform 10 different actions (bend, jack, jump, run, side, skip, walk, wave1, wave2). The background, perspective and camera of the videos are static, and the data set provides annotated foreground contour videos.
In the experiment, the two public data sets HiEve and Weizmann are selected as the experimental data sets to test the recognition effect of the proposed algorithm. The two data sets cover walking forward, running, going upstairs, going downstairs, falling forward, falling backward and backing off. Thirty subjects of different ages and genders were selected as the targets to be identified, and each subject was asked to perform these seven motion patterns in different indoor and outdoor scenes according to the motion patterns in the two experimental data sets. Recognition is then performed with the proposed algorithm and its recognition effect is tested. The experimental sample data size is 26.35 GB. After the experimental data set is prepared, the data are first filtered and cleaned to standardize and unify the data format and to eliminate abnormal and duplicate data. Then, the data are converted into a form suitable for deep LSTM network computation through generalization and normalization. Finally, the pre-processed data are used as the experimental input and divided into two parts, a training set and a test set, with the training set input to the simulation software for trial operation.
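As a minimal sketch of the normalization and split steps, the snippet below uses min-max scaling and an 80/20 division; both choices are assumptions for illustration, since the paper does not state the exact normalization scheme or split ratio:

```python
def min_max_normalize(window):
    """Scale a window of sensor readings to [0, 1] (assumed scheme)."""
    lo, hi = min(window), max(window)
    if hi == lo:                      # constant window: no spread to scale
        return [0.0 for _ in window]
    return [(v - lo) / (hi - lo) for v in window]

def train_test_split(samples, train_ratio=0.8):
    """Divide the pre-processed samples into training and test sets."""
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

readings = [3.0, 1.0, 2.0, 5.0, 4.0]     # hypothetical accelerometer values
scaled = min_max_normalize(readings)      # [0.5, 0.0, 0.25, 1.0, 0.75]
train, test = train_test_split(list(range(10)))
```

Deduplication and outlier removal would precede these steps, as described above, before the windows are fed to the deep LSTM.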
The MEMS angular velocity sensor and MEMS acceleration sensor used in the proposed algorithm are the ML728 and the SCA3300-D01, respectively. The deep LSTM recognition model is built on the PyTorch deep learning framework. The network hyperparameters are optimized with a genetic-algorithm-based hyperparameter evolution method, and the network model is trained with the Adam optimization algorithm. The initial learning rate is set to 0.001, the batch size to 64, and the weight attenuation factor to 0.002. These parameters are kept fixed throughout the experiments to ensure the authenticity and reliability of the experimental results.
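For reference, a single Adam parameter update with the stated settings (learning rate 0.001, weight decay 0.002) can be written out as below. This is a sketch, not the authors' code: beta1, beta2 and eps are the usual Adam defaults, which the paper does not state, and the L2-style coupling of the weight decay is an assumption.

```python
def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=0.002):
    """One Adam update for a scalar parameter (paper's lr and weight decay)."""
    grad = grad + weight_decay * theta          # L2-style weight decay (assumed)
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

theta, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)  # first step moves ~lr
```

In practice this update is applied per weight over mini-batches of 64 samples, as in the setup described above.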

Experimental Index
The calculation equation of human motion data acquisition accuracy is as follows:

$A = F / G \times 100\%$,

where $F$ represents the size of the data accurately collected and $G$ represents the total size of the data.

The equation for calculating the accuracy (precision) index of human motion pattern recognition is as follows:

$P = TP / (TP + FP)$,

where $TP$ means the predicted answer is correct and $FP$ indicates that other categories are incorrectly predicted as this category.

The calculation equation of the human motion pattern recognition recall index is as follows:

$R = TP / (TP + FN)$,

where $FN$ indicates that labels of this type are predicted as other types of labels.

The equation for calculating the F1 score of human motion pattern recognition is as follows:

$F_1 = 2PR / (P + R)$.

The time-consumption calculation equation for human motion pattern recognition is as follows:

$T = t_2 - t_1$,

where $t_1$ represents the recognition start time and $t_2$ indicates the end time of recognition.
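The index definitions above amount to the standard classification metrics; the small sketch below (with hypothetical TP/FP/FN counts) shows how they combine:

```python
def precision(tp, fp):
    """Accuracy index P: fraction of predictions of a class that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Recall index R: fraction of actual instances of a class recovered."""
    return tp / (tp + fn)

def f1_score(p, r):
    """F1 = 2PR / (P + R), the harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def acquisition_accuracy(f, g):
    """A = F / G: accurately collected data size over total data size."""
    return f / g

def recognition_time(t1, t2):
    """T = t2 - t1: elapsed recognition time."""
    return t2 - t1

# hypothetical counts for one motion pattern
p, r = precision(90, 5), recall(90, 10)
score = f1_score(p, r)
```

Averaging these per-pattern values over the seven motion patterns and the two data sets yields the summary figures reported in the Results section.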

Results and Discussion
First, the two experimental data sets are used to test the recognition performance of the proposed algorithm, which is applied to recognize the motion patterns of the testers in reality. Through the actual recognition results, the recognition effect of the algorithm is analyzed. Four randomly selected movements from the various patterns performed by the 30 experimental testers are presented as images, as shown in Figure 4.

Figure 4
Recognition results: (a) walked forward; (c) running; (d) goes upstairs

According to the analysis of Figure 4, the proposed algorithm can recognize the walking-forward, running and going-upstairs motions of the testers, which ensures the recognition effect. The recognition results of five testers were randomly selected for inspection, as shown in Table 1.

It can be seen from Table 1 that the proposed algorithm can recognize the different motion patterns of different testers. Among the randomly selected partial recognition results, only the going-upstairs motion pattern of tester 4 is incorrectly recognized as the walking-forward pattern; all other recognition results are consistent with the actual motion patterns.

The HMTR [15], HAMPR [3], HHMR [25], HART [8] and REAHM [21] algorithms are used as comparison objects to test multiple experimental indicators. The two experimental data sets, HiEve and Weizmann, are input into the proposed algorithm and the five comparison algorithms, the test index values of the seven motion modes in the two data sets are identified, and the average value of each mode's recognition indices over the two data sets is taken as the final test index value.
The comparison results of the human motion data acquisition accuracy of the different algorithms are shown in Figure 5. By analyzing the data in Figure 5, we can see that the data acquisition accuracy curve of the proposed algorithm always lies above those of the comparison algorithms, indicating that its data acquisition accuracy is higher. For example, for the walking-forward motion mode, the data acquisition accuracy of the proposed algorithm is 97%, which is 17%, 30%, 17%, 19% and 17% higher than the HMTR [15], HAMPR [3], HHMR [25], HART [8] and REAHM [21] algorithms, respectively. This shows that, compared with the comparison methods, the data acquisition accuracy of the proposed algorithm is higher, laying a solid data foundation for subsequent human motion pattern recognition.
The comparison results of the human motion pattern recognition accuracy of the different algorithms are shown in Table 2. According to the data in Table 2, the average recognition accuracy of the proposed algorithm is 94.8%, which is 7.3%, 2.7%, 6.8%, 2.6% and 7.5% higher than the HMTR [15], HAMPR [3], HHMR [25], HART [8] and REAHM [21] algorithms, respectively. This is because the traditional LSTM network is improved and a deep LSTM human motion pattern recognition model is constructed; through the recognition analysis of this model, inaccurate expression of the human motion data is avoided and the accuracy of the recognition results is improved.
According to the data in Table 3, the average recall of human motion pattern recognition of the proposed algorithm is 89.7%, which is 13.3%, 5.1%, 16.0%, 4.9% and 14.2% higher than the HMTR [15], HAMPR [3], HHMR [25], HART [8] and REAHM [21] algorithms, respectively. Compared with the comparison algorithms, the F1 score of the proposed algorithm is also higher and its human motion pattern recognition effect is better.
The time-consumption comparison results of human motion pattern recognition with the different algorithms are shown in Figure 6. By analyzing the data in Figure 6, we can see that the time-consumption curve of the proposed algorithm is always lower than those of the comparison algorithms, which shows that its recognition time is the lowest and its efficiency is higher. For the walking-forward motion pattern, the recognition time of the proposed algorithm is 63 ms, which is 17 ms, 32 ms, 23 ms, 44 ms and 8 ms lower than the HMTR [15], HAMPR [3], HHMR [25], HART [8] and REAHM [21] algorithms, respectively, indicating that the recognition time of the proposed algorithm is shorter and its overall efficiency is higher.

Conclusions
In this paper, a human motion pattern recognition algorithm based on Nano-sensors and deep learning is proposed. By wearing MEMS Nano-sensors, including an angular velocity sensor and an acceleration sensor, on the waist of the target human, the motion data of the target human are collected in real time. The noise contained in the collected motion data is removed with the smoothing filter method to enhance the motion data signal. From these motion data, the mean and skewness features of the angular velocity sensor data and the peak, variance and interquartile-range features of the acceleration sensor data are extracted. After all the features are fused, they are input into the constructed deep LSTM recognition model to obtain the final recognition result. The results show that the human motion data acquisition accuracy of the algorithm is 97%, the average recognition accuracy is 94.8%, the average recall is 89.7%, the average F1 score is 0.88, and the recognition time is 63 ms, realizing accurate recognition of human motion patterns. Overall, the gaps and cutting edges in human motion pattern recognition based on Nano-sensors and deep learning reflect the ongoing efforts to improve the accuracy, efficiency and reliability of motion recognition systems. Although the latest advances in deep learning techniques for human motion pattern recognition include cutting-edge methods such as graph neural networks and meta-learning approaches, gaps remain in our understanding of how to optimize deep learning models for efficient and effective motion recognition, particularly with respect to small data sets and reducing the computational complexity of these models. In future research, the recognition effect on more types of motion patterns needs to be tested to further verify the practical application performance of the proposed algorithm.

Figure 2
Figure 2 Micro Nano-sensor framework


Figure 3
Human motion pattern recognition model based on deep LSTM



Figure 4
Figure 4 Recognition results


Figure 5
Figure 5 Comparison of data acquisition accuracy

Figure 6
Figure 6 Comparison of recognition time consuming


Table 1
Recognition results of the proposed algorithm


Table 2
Comparison of recognition accuracy

Table 3
Comparison of recognition recall