Improved Glowworm Swarm Optimization for Parkinson’s Disease Prediction Based on Radial Basis Functions Networks

Parkinson’s disease is caused by a disruption of the chemical messengers that enable communication between brain cells. The brain’s dopamine cells are responsible for movement control, adaptability, and fluidity, and Parkinson’s motor symptoms manifest when 60–80% of these cells are damaged and dopamine becomes insufficient. Because the disease is believed to begin many years before the motor symptoms appear, researchers are working to identify the non-motor symptoms that manifest early in the disease in order to slow its progression. This research presents a Parkinson’s disease diagnosis approach based on deep learning. The suggested diagnosis technique encompasses feature selection and classification stages. The proposed model searches for the best subset of characteristics using the Improved Glowworm Swarm Optimization (IGSO) algorithm, and Radial Basis Function Network (RBFN) classifiers evaluate the chosen features. The suggested model is tested using Parkinson’s handwriting datasets and Parkinson’s speech and voice datasets with various sound recordings. The suggested algorithm forecasts Parkinson’s disease most precisely on the VoicePD dataset, with an accuracy of about 95.78%.


Introduction
The second most prevalent neurological dysfunction is Parkinson's disease (PD), a neuroinflammatory psychosomatic disorder. People with Parkinson's disease are becoming more common everywhere, particularly in Asia's developing nations [4]. Even though the root cause of PD is uncertain, if it is discovered in the early stages, the symptoms can be greatly reduced. Muscle spasms, stiffness, sluggish movement, sensory abnormalities, and poor posture are symptoms of PD [7]. According to research, phonation and communication issues are also typical among PD patients; in fact, they can appear in PD patients up to five years before a medical assessment. Dysphonia, impaired vocal tract resonance, and apraxia of speech are the voice problems connected to PD [6].
The middle layer and the brain shell are the brain sections where PD spreads. The disease is believed to manifest itself years before the appearance of motor symptoms, through neurological disorders and weak muscles, loss of smell, sleep difficulties, and bowel problems [1]. Additionally, voice problems affect 90% of PD patients. Therefore, to slow the advancement of the disease, researchers are working to find strategies to identify these non-motor symptoms that manifest early in the illness [16].
Utilizing deep learning and machine learning tactics, techniques, and tools to examine real-world datasets in a clinical setting aids in developing a valuable and instructive framework that can assist doctors in making decisions [23]. Deep learning models are well-suited for clinical object recognition, ophthalmology, and optical diseases. Very few deep-learning models have been used to date to diagnose brain conditions such as Alzheimer's disease, psychiatric illnesses, and Parkinson's disease [27].
Although these models have demonstrated great accuracy in discriminating between those with brain problems and those who are healthy, their medical use has not yet been established for several reasons [21]. One fundamental limitation of current deep learning-based models is that a huge number of parameters is generated during the initialization of the model, and these parameters must be tuned to attain higher accuracy [12]. A multi-stage optimization process utilizing glowworm swarm optimization is suggested in this study to create a deep-learning model that can forecast the early onset of Parkinson's disease. The proposed approach optimizes the deep learning model for both accuracy and complexity over several phases, whereas earlier related research considered only prediction accuracy as a goal.

Contributions to the Work
The significant contributions of this work are summarized as follows.

Paper Organization
The remainder of the paper is organized as follows. Section 2 briefs the existing literature on diagnosing Parkinson's disease using machine learning and deep learning techniques. Section 3 discusses the proposed

Related Works
Some previously completed research, discussed in this section, has guided the direction of the suggested methodologies and improved our understanding of the deep learning model.
Many researchers performed experiments using machine learning algorithms to diagnose PD patients on the same dataset, motivated by the findings in [9]. The authors compared the classification scores for the diagnosis of PD using artificial neural networks (ANN) and logistic functions [13]. With 93.5% accuracy, the ANN classifier produced the best results. A concurrent feed-forward artificial neural system created by the authors of [5] produced a 9.6% increase in PD classification accuracy. In [11,14], researchers reported a technique for detecting Parkinson's disease that integrated feature extraction with a Support Vector Machine, leveraging mutual information, and achieved a prediction performance of 93.45% [15].
Machine learning techniques were used to estimate the psychological impact of PD [17]. An ML application estimates the degree of trembling in PD patients [19]. Additionally, ML was used to predict the phase of PD [20]. However, the majority of the research focuses on ML-based early PD detection, for instance predicting PD using motion data collected from people's hands and wrists [22]. The experimental subjects were asked to perform several tests while wearing a gadget on their upper extremities, as instructed by the researchers. After a positional, longitudinal, and harmonic data analysis to generate parameters, several supervised learning techniques were employed for categorization. The detection of PD in [25] utilizes various feature extraction and machine learning techniques. They demonstrated that the most straightforward task for PD detection is phonation. The study assessed classifiers such as the K-nearest neighbor algorithm, the Multilayer Perceptron algorithm, and the Ideal Route Prediction algorithm. Artificial neural networks reduced the voice characteristics for the ML-based PD diagnosis [26], and a Support Vector Machine was employed for segmentation. In contrast, unsupervised techniques were developed for PD [28]: self-organizing Kohonen maps were used for grouping, and support vector regression was used for forecasting after a sparse representation.
It was challenging for researchers to predict this, according to [29], because Parkinson's disease symptoms did not start to manifest until late midlife. There are numerous suggestions for PD [31]. Three different approaches to data mining were employed in the study to set a standard for voice articulation [33]. The three techniques are derived from three different data mining environments: the statistical learner, the graph, and the KNN classifier [32,34,35]. The output performances of the three classifiers are evaluated using three performance indicators: precision, range, and responsiveness. The main goal of the study in [30] is to develop the best network for people with Parkinson's disease. However, additional symptoms, including ecological and demographic variables, issues with speech and development, and shaking arms, legs, and hands, were not considered; just the vocal sample was treated [24]. However, some incidents are still recorded with the wrong conclusion. The accuracy rate for the contributors of this work is 81.42%. To overcome the aforementioned restrictions, another researcher [8] used a telemonitor to calculate six feature-significance algorithms and a total of thirteen classification algorithms [2,3,18,33]. Table 1 compares the existing works based on deep learning and machine learning for Parkinson's disease diagnosis.

Proposed Methodology
This section discusses the proposed methodology for the prediction of Parkinson's disease. The suggested method employs Improved Glowworm Swarm Optimization to select the optimal set of features from the dataset and then performs classification using Radial Basis Function Networks. The proposed architecture is depicted in Figure 1. The four datasets are collected and preprocessed to remove noise, and the processed datasets are then used for feature selection and classification.

Improved Glowworm Swarm Optimization
In glowworm swarm optimization, glowworms carrying the luminous substance luciferin are randomly positioned in the target feature space. The luciferin strength correlates with the objective value at each location: higher luciferin levels indicate better candidate solutions for the glowworms.

Phase 1: Initialization
The population of the glowworms is initialized as a_1, a_2, ..., a_n along with the offset value p = 1. Every glowworm is associated with a luminescence (luciferin) level as indicated in (1),

L_k(0) = L_0,  k = 1, 2, ..., n   (1)

The resolution range of each glowworm can be represented as shown in (2),

RR_k(0) = RR_0,  k = 1, 2, ..., n   (2)

Phase 2: Modification of luciferin
The modification rule for the luciferin is as denoted in (3),

L_k(p) = (1 - δ) L_k(p - 1) + β F(a_k(p))   (3)

where δ is a constant that indicates the decay of the luciferin and ranges between 0 and 1, and β is a constant that indicates the growth of the luciferin and ranges between 0 and 1. L_k(p) indicates the updated value of the luciferin for any glowworm k at the p-th iteration, and L_k(p - 1) indicates its value at the (p - 1)-th iteration. F(a_k(p)) denotes the location strength of glowworm k at the p-th iteration and can be represented as in (4),

F(a_k(p)) = 1 / (f(a_k(p)) + m_1)   (4)

where f is the objective value at location a_k(p) and m_1 is a constant that makes the denominator greater than zero.
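As a minimal Python sketch of this phase (the inverse-objective form of F in (4) and all constant values here are illustrative assumptions):

```python
import numpy as np

def update_luciferin(L_prev, positions, objective, delta=0.4, beta=0.6, m1=1e-6):
    """Luciferin update (3): L_k(p) = (1 - delta) * L_k(p-1) + beta * F(a_k(p)),
    with the location strength (4) taken as F = 1 / (f + m_1), so that
    glowworms at better (smaller) objective values glow brighter."""
    f = np.array([objective(a) for a in positions])
    return (1.0 - delta) * L_prev + beta / (f + m1)

# toy check: points closer to the origin of a squared-distance objective
# accumulate more luciferin
positions = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
L = update_luciferin(np.ones(3), positions, lambda a: float(np.sum(a ** 2)))
```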

Phase 3: Mobility phase
For any glowworm k, the probability of it moving towards a glowing neighbor j is denoted using (5),

P_kj(p) = (L_j(p) - L_k(p)) / Σ_{i ∈ N_k(p)} (L_i(p) - L_k(p))   (5)

where N_k(p) is the set of brighter neighbors of glowworm k. The location of the glowworm is altered after every turn, and the new location is computed using (6) as,

a_k(p + 1) = a_k(p) + mb · (a_j(p) - a_k(p)) / ||a_j(p) - a_k(p)||   (6)

where mb denotes the length of the mobility step taken by the glowworm. If the altered new location is not available for the glowworm, it is updated using (7).
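A sketch of one mobility move in Python (the step length mb = 0.03 is an assumed value, not taken from the paper):

```python
import numpy as np

def mobility_step(positions, L, k, neighbors, mb=0.03, rng=None):
    """One mobility move for glowworm k: a brighter neighbour j is drawn
    with probability (5), proportional to L[j] - L[k], and then k takes a
    step of length mb towards j as in (6)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    gaps = L[neighbors] - L[k]
    j = neighbors[rng.choice(len(neighbors), p=gaps / gaps.sum())]   # (5)
    diff = positions[j] - positions[k]
    return positions[k] + mb * diff / np.linalg.norm(diff)           # (6)

positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
L = np.array([0.1, 0.9, 0.5])
new_pos = mobility_step(positions, L, 0, np.array([1, 2]))
```

The returned position always lies exactly mb away from the old one, which is the defining property of the update in (6).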

Phase 4: Alteration of the local decision unit
The decision value is modified for every iteration using (8),

RR_k(p + 1) = min{RR_t, max{0, RR_k(p) + α (u_w - num(p))}}   (8)

where α denotes the rate at which the location is modified, RR_t denotes the range covered by the glowworms, u_w is a factor used to manage the count of glowworms in the set of neighbors, and the total count of glowworms in the neighbor unit is given by num(p). The full procedure is given in Algorithm 1.
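A one-line sketch of this update (all parameter values are illustrative assumptions): the range grows when the neighbourhood is sparse and shrinks when it is crowded, clamped to [0, RR_t].

```python
def update_decision_range(RR_k, num_p, alpha=0.08, u_w=5, RR_t=3.0):
    """Decision-range update (8): widen the range when num(p) < u_w,
    shrink it when num(p) > u_w, and keep the result inside [0, RR_t]."""
    return min(RR_t, max(0.0, RR_k + alpha * (u_w - num_p)))

sparse = update_decision_range(1.0, num_p=2)    # few neighbours -> widen
crowded = update_decision_range(1.0, num_p=10)  # many neighbours -> shrink
```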



Algorithm 1
Improved Glowworm Swarm Optimization algorithm

Input: number of glowworms, luciferin value, local decision unit value
Output: coordinates of location points
Step 1: Initialize the number of glowworms to n, the initial luciferin value to L_0, and the initial local decision unit value to RR_0
Step 2: while (p <= p_max) or (err < err_0, the initially known error) do
Step 3:   for each glowworm k do
Step 4:     update the luciferin value L_k(p) using (3) and (4)
Step 5:   for each glowworm k do
Step 6:     determine the neighbor set within the local decision range
Step 7:     for each glowworm in the neighbor set do
Step 8:       calculate the mobility probability using (5)
Step 9:       select a neighbor according to this probability
Step 10:      choose the mobility direction
Step 11:      modify the location of the glowworm and compute the new location using (6) and (7)
Step 12:      update the decision value using (8)
Step 13:    end for
Step 14:  end for
Step 15: end while
Step 16: return the coordinates of the location points

The algorithm uses the local decision unit and the mobility probability to guide the movement of the glowworms towards better solutions. This process repeats until a stopping criterion is met, and the best solution, which corresponds to the coordinates of the location points, is returned as the output.
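The loop above can be sketched end to end in Python. This is a minimal sketch under the reconstructed update rules (3)-(8); the parameter defaults, the box-shaped search space, and the use of clipping as a stand-in for the boundary rule (7) are all assumptions, not the paper's exact implementation.

```python
import numpy as np

def igso(objective, bounds, n=25, p_max=100, L0=5.0, delta=0.4, beta=0.6,
         m1=1e-6, mb=0.03, alpha=0.08, u_w=5, seed=0):
    """Sketch of Algorithm 1: minimise `objective` over the box `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    RR_t = float(np.max(hi - lo))                  # widest sensing range
    a = rng.uniform(lo, hi, size=(n, lo.size))     # Step 1: random positions
    L = np.full(n, L0)                             # initial luciferin L_0
    RR = np.full(n, RR_t)                          # initial decision ranges
    for _ in range(p_max):                         # Step 2: main loop
        f = np.array([objective(x) for x in a])
        L = (1.0 - delta) * L + beta / (f + m1)    # (3)-(4): luciferin update
        for k in range(n):
            d = np.linalg.norm(a - a[k], axis=1)
            nbrs = np.where((d < RR[k]) & (L > L[k]))[0]  # brighter neighbours
            if nbrs.size:
                gaps = L[nbrs] - L[k]
                j = nbrs[rng.choice(nbrs.size, p=gaps / gaps.sum())]  # (5)
                step = a[j] - a[k]
                dist = np.linalg.norm(step)
                if dist > 0:                       # (6), clipping in place of (7)
                    a[k] = np.clip(a[k] + mb * step / dist, lo, hi)
            RR[k] = min(RR_t, max(0.0, RR[k] + alpha * (u_w - nbrs.size)))  # (8)
    return a[np.argmax(L)]                         # brightest location found

best = igso(lambda x: float(np.sum(x ** 2)), ([-2.0, -2.0], [2.0, 2.0]))
```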

Radial Basis Function Networks
Consider a set of D data points available in an M-dimensional input region. Each input in this region can be represented as in (9),

x_i = (x_i1, x_i2, ..., x_iM),  i = 1, 2, ..., D   (9)

The corresponding output for the input vector in (9) is denoted as shown in (10),

y_i = y(x_i)   (10)

The output representations are produced by approximating the functions as denoted in (11),

y(x) = Σ_{i=1}^{K} w_i σ_i(x)   (11)

where σ_i(x) denotes the Gaussian representation of the function as shown in (12),

σ_i(x) = exp(-||x - μ_i||^2 / (2 s^2))   (12)

where μ_i is the center of the i-th basis function, s is its width, and w_i are the output weights.

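The Gaussian basis of (12) can be evaluated for a whole batch of inputs at once; the following sketch builds the resulting design matrix (one row per input, one column per center):

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian basis of (12): sigma_i(x) = exp(-||x - mu_i||^2 / (2 s^2)),
    evaluated for every input row of X against every center mu_i."""
    sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / (2.0 * width ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
centers = np.array([[0.0, 0.0], [0.0, 1.0]])
Phi = rbf_design_matrix(X, centers, width=1.0)   # shape: (2 inputs, 2 bases)
```

An input that coincides with a center produces the maximal response 1, and the response decays with squared distance, as (12) prescribes.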

Network Training
There are three approaches that are used for optimizing the functions in order to train the networks efficiently.

a) Arbitrary selection of centers
The easiest method of assigning values to the network parameters is the arbitrary selection of center points. The widths of all the selected points are assigned to be the same and are fixed to a suitable size according to the disposition of the points, as represented in (13),
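The text does not preserve (13) explicitly; a common shared-width heuristic consistent with its description sets the width from the spread of the chosen centers, which can be sketched as follows (the exact formula here is an assumption):

```python
import numpy as np

def shared_width(centers):
    """Shared-width heuristic (an assumed form of (13)): s = d_max / sqrt(2K),
    where d_max is the largest distance between any two chosen centers and
    K is the number of centers."""
    K = len(centers)
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return dists.max() / np.sqrt(2.0 * K)

s = shared_width(np.array([[0.0, 0.0], [3.0, 4.0]]))   # d_max = 5, K = 2
```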

b) Selection using clustering methods
In this work, a K-means clustering approach is employed which initially chooses K centers and assigns each input to its nearest center, as represented in (14),

C_i = { x : ||x - μ_i|| <= ||x - μ_j|| for all j }   (14)

In the above equation, μ_i is computed as the mean of the inputs assigned to cluster C_i, as denoted in (15),

μ_i = (1 / |C_i|) Σ_{x ∈ C_i} x   (15)
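A plain K-means sketch of the two alternating steps (14) and (15); initialisation from random data points and a fixed iteration count are simplifying assumptions:

```python
import numpy as np

def kmeans_centers(X, K, iters=20, seed=0):
    """K-means: assign each input to its nearest center (14), then recompute
    each center as the mean of its assigned inputs (15)."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(iters):
        # (14): nearest-center assignment
        labels = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for i in range(K):
            if np.any(labels == i):
                mu[i] = X[labels == i].mean(axis=0)   # (15): cluster mean
    return mu

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
mu = kmeans_centers(X, K=2)   # two well-separated clusters
```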

c) Selection using Least squares Methods
The least squares method is another effective technique for choosing a subset of data points as the basis function centers. It successively adds new kernel functions, each centered on a different data point.
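A greedy sketch of this idea (a plain residual-based selection rather than the orthogonal least squares of the literature; the fixed Gaussian width is an assumed value):

```python
import numpy as np

def forward_select_centers(X, y, K, width=1.0):
    """At each step, tentatively center a Gaussian kernel on every unused
    data point, fit output weights by least squares, and keep the point
    that most reduces the squared residual."""
    chosen = []
    for _ in range(K):
        best_i, best_err = None, np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            centers = X[chosen + [i]]
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            Phi = np.exp(-d2 / (2.0 * width ** 2))
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            err = float(np.sum((Phi @ w - y) ** 2))
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
    return X[chosen]

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
centers = forward_select_centers(X, y, K=1)   # the bump in y sits at x = 1
```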


Figure 2
Flow Chart of IGSO algorithm


Output Prediction
The output for the network is determined based on the hidden layer and the optimized weights, as shown in (16),

y_k(x) = Σ_{j=1}^{K} w_kj σ_j(x)   (16)

The outputs are represented as a combination of the continuous sequence of units in the hidden layer, as denoted in (17). At the optimum, the gradient of the error with respect to the weights becomes zero, as shown in (18),

∂E / ∂w_kj = 0   (18)
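Since the hidden-layer responses are fixed once the centers and widths are chosen, the condition (18) reduces to a linear least-squares problem for the output weights, as this sketch shows (the numerical values are a hypothetical example):

```python
import numpy as np

def fit_output_weights(Phi, Y):
    """Solve (18) in least-squares form: setting the gradient of the squared
    error to zero yields the normal equations Phi^T Phi w = Phi^T Y."""
    w, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return w

Phi = np.array([[1.0, 0.5], [0.5, 1.0], [0.2, 0.1]])   # hidden-layer outputs
Y = Phi @ np.array([2.0, -1.0])        # targets generated from known weights
w = fit_output_weights(Phi, Y)         # recovers the generating weights
```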

Results and Discussion
The performance evaluation of the proposed IGSO-RBFN technique for Parkinson's disease prediction is discussed in this section.

Experimental Setup
The experiments are carried out on a machine with an Intel Core i5 processor at 2.40 GHz and 4 GB of memory, running Windows 11. The experiments are implemented in Python 3.9. The Python packages Pandas, Scikit-learn, NumPy, and Matplotlib are used for the implementation and testing of the program code. The deep learning model for Radial Basis Function Networks is created using the Keras library, with the Theano framework utilized as the backend.

Datasets
Four different datasets, namely the HandPD Spiral dataset, the HandPD Meander dataset, the SpeechPD dataset, and the VoicePD dataset, are used for experimental purposes.

HandPD Meander Dataset
As part of the study and test, individuals were asked to complete a questionnaire and perform four cycles to acquire the data from various patients. This dataset comprises information on 158 people, of which 74 were patients and 18 were in the healthy group. The dataset has a total of 13 characteristics and 632 occurrences. Samples of the meanders collected from individuals of varying age groups are shown in Figure 4.

Figure 4
Samples for HandPD meanders

HandPD Spiral Dataset
This dataset was also gathered at Sao Paulo

Network Training
There are three approaches that are used for optimizing the functions in order to train the networks efficiently.

a) Arbitrary selection of centers
The easiest method of assigning the values for the parameters in the network is through the arbitrary selection of center points.The width of all the selected points are assigned to be the same and are also fixed to a right size according to the disposition of the points as represented in (13),

b) Selection using clustering methods
In this work, a K-means clustering approach is employed which initially chooses K centers and assigns inputs to these arbitrary centers as represented in ( 14), In the above equation,  � is computed as a mean of the inputs and it is denoted as in (15),

c) Selection using Least squares Methods
The least squares method is another ethical technique for choosing a subset of data points as the basis function centers.This includes successively adding new kernel functions, each focused on a different data point.

Output Prediction
The output of the network is determined from the hidden layers and the optimized weights as shown in (16),

y(x) = Σ_{j=1}^{K} w_j φ_j(x). (16)

The outputs are thus represented as an amalgamation of the sequence of units in the hidden layer, as denoted in (17),

y(x) = Σ_{j=1}^{K} w_j exp( −‖x − μ_j‖² / (2σ_j²) ). (17)

At the minimum of the squared training error E = ½ Σ_n ( y(x_n) − t_n )², the gradients with respect to the weights amount to the null value, as shown in (18),

∂E/∂w_j = Σ_n ( y(x_n) − t_n ) φ_j(x_n) = 0. (18)
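Since the zero-gradient condition (18) is linear in the weights, they can be obtained in closed form as a least-squares solution once the centers and widths are fixed. A sketch, assuming Gaussian hidden units; all function names and the toy problem are our own:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Hidden-layer responses phi_j(x) of (12) for every input row."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-d**2 / (2 * sigma**2))

def fit_output_weights(X, t, centers, sigma):
    """Weights satisfying the zero-gradient condition (18):
    the least-squares solution of Phi w = t."""
    Phi = rbf_design(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return w

def predict(Xnew, centers, sigma, w):
    """Network output (16): weighted sum of hidden-unit responses."""
    return rbf_design(Xnew, centers, sigma) @ w

# toy usage: approximate sin(x) with 10 Gaussian units
X = np.linspace(0.0, 6.0, 50).reshape(-1, 1)
t = np.sin(X).ravel()
centers = X[::5]                     # every 5th sample as a center
w = fit_output_weights(X, t, centers, sigma=1.0)
y = predict(X, centers, 1.0, w)
mse = float(np.mean((y - t) ** 2))
```

For classification, the targets t are class labels (or one-hot columns) and the predicted output is thresholded; the weight solve is unchanged.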

Results and Discussion
The performance evaluation of the proposed IGSO-RBFN technique for Parkinson's disease prediction is discussed in this section.

Experimental Setup
The experiments are carried out on a machine with a 2.40 GHz Intel Core i5 processor and 4 GB of memory, running Windows 11. The experiments are implemented in Python 3.9, using the Pandas, Scikit-learn, NumPy, and Matplotlib packages for the implementation and testing of the program code. The deep learning model for the Radial Basis Function Networks is created using the Keras library, with the Theano framework as the backend.

Datasets
Four datasets, namely the HandPD Spiral dataset, the HandPD Meander dataset, the SpeechPD dataset, and the VoicePD dataset, are used for the experiments.

HandPD Meander Dataset
To acquire the data, individuals were asked to complete a questionnaire and perform four drawing cycles. This dataset comprises information on 158 people, of whom 74 were patients and 18 were in the healthy group. The dataset has a total of 13 characteristics and 632 instances. Samples of the meanders collected from individuals of varying age groups are shown in Figure 4.

Figure 4
Samples for HandPD meanders

HandPD Spiral Dataset
This dataset was also gathered at Sao Paulo University in Brazil, where participants were instructed to draw spirals instead of meanders on the form. The dataset likewise includes 158 participants, a total of 632 instances, and 13 attributes. Samples of the spirals collected from individuals of varying age groups are shown in Figure 5. The major properties of this dataset are similar to those given in Table 3.

SpeechPD Dataset
This dataset contains a variety of biological vocal measurements taken from 31 individuals, 23 of whom have Parkinson's disease (PD). Each row corresponds to one of the 195 voice recordings from these individuals, and each column represents a specific vocal measure.

VoicePD Dataset
… females and ten males, who appealed to the Department of Neurology at Istanbul University, are considered as the subjects for the training data. Numerous sound recordings, 26 voice samples in all, comprising continuous syllables, digits, phrases, and short statements, were collected from all subjects.

Evaluation metrics
The performance of the proposed technique is evaluated using the following metrics.

a) Accurate Prediction (AP)
The property by which the model classifies the instances correctly is termed Accurate Prediction, computed as in (19),

AP = (Count of exact predictions / Total predictions) × 100. (19)

b) Rate of Prediction (RoP)
The rate at which the model classifies individuals with Parkinson's disease with more precision is termed the Rate of Prediction, computed as in (20),

RoP = (Count of correct Parkinson's disease predictions / Total Parkinson's disease predictions) × 100. (20)

c) False Prediction (FP)
The rate at which the model wrongly classifies normal individuals as Parkinson's disease patients is termed False Prediction.

Experimental Results
The experimental results obtained by applying the proposed IGSO-RBFN technique to the four datasets, the HandPD Meander dataset, the HandPD Spiral dataset, the SpeechPD dataset, and the VoicePD dataset, are explained in this section. For comparison purposes, the Glowworm Swarm Optimization (GSO) technique was applied in combination with Random Forest, K-Nearest Neighbour, Support Vector Machine, and Convolutional Neural Network classifiers. The achieved results are compared with those produced by the combination of Improved Glowworm Swarm Optimization (IGSO) and Radial Basis Function Networks.

The HandPD Meander dataset is considered first; the results are shown in Table 2 and Figure 6. The proposed IGSO-RBFN produced an accurate prediction of 92.57% and a rate of prediction of 91.24%. The false prediction, at 78.75%, was lower in the proposed technique than in the other algorithms.

The results obtained for the HandPD Spiral dataset are tabulated in Table 3. The accurate prediction produced by the proposed technique for this dataset, 91.36%, is a little lower than on the HandPD Meander dataset but higher than that of the other algorithms considered for comparison. For this dataset, GSO-RF produced the least accurate prediction of 83.45%, a rate of prediction of 85.45%, and the highest false prediction of 82.45%.

The accurate prediction, rate of prediction, and false prediction produced by the IGSO-RBFN algorithm are 91.78%, 90.68%, and 80.21%, respectively, for the SpeechPD dataset, as shown in Table 4. The GSO-CNN algorithm produced the second-best performance, next to the proposed technique, with an accurate prediction of 88.98%, a rate of prediction of 86.24%, and a false prediction of 82.45%.

Out of all the datasets used in the experimentation, the accurate prediction produced by the proposed technique is highest for the VoicePD dataset, at 95.78%, as depicted in Table 5 and Figure 7. The rate of prediction is also higher, at 94.75%, with a lower false prediction of 80.75%. The performance of the other algorithms likewise improves on the VoicePD dataset; however, it remains lower than that of the proposed technique.

Further, the performance of the proposed technique is compared with existing works on Parkinson's disease diagnosis in Table 6 and Figure 8. Multilayer perceptrons were applied to the HandPD Meander dataset in [25] and obtained an accuracy of 91.46%. A combination of Random Forests and Logistic Regression was tested on the HandPD Spiral dataset and produced an accuracy of 85.56% [28]. Long Short-Term Memory networks were combined with Particle Swarm Optimization in [29] for the SpeechPD dataset, with an accuracy of 86.85%. Deep Neural Networks were used in [34] for predicting Parkinson's disease from the VoicePD dataset, with an accuracy of 92.68%. The proposed model outperformed the existing works on all the datasets and produced its highest accuracy of 95.78% on the VoicePD dataset.

Conclusion
This study presents a new approach to feature selection using an improved glowworm swarm optimization algorithm. The proposed algorithm is designed to select a smaller subset of relevant features, and the classification is performed using Radial Basis Function Networks. This method can improve the accuracy and efficiency of various machine-learning tasks requiring feature selection. The IGSO-RBFN algorithm is applied to four different datasets: the HandPD Meander dataset, the HandPD Spiral dataset, the SpeechPD dataset, and the VoicePD dataset. In comparison with machine learning and deep learning techniques such as KNN, Random Forest, Support Vector Machine, and Convolutional Neural Networks combined with the traditional Glowworm Swarm Optimization algorithm, the proposed IGSO-RBFN performs best on all the datasets; in particular, the highest accurate prediction of 95.78% is obtained for the VoicePD dataset. One limitation of this work is that all the datasets are tested independently of each other. Further research on Parkinson's disease diagnosis could merge the HandPD and VoicePD models to improve detection accuracy.

Figure 1
Proposed Architecture

IGSO Algorithm
Input: number of glowworms, luciferin value, local decision unit value
Output: coordinates of location points
Step 1: Initialize the number of glowworms to n, with the initial luciferin value and the initial local decision unit value.

Figure 2
Flow Chart of the IGSO algorithm

Figure 4
Samples for HandPD meanders

Figure 5
Samples for HandPD spirals

Figure 7
Performance Comparison on VoicePD dataset

Table 1
Comparison of the existing ML works on the diagnosis of Parkinson's disease

Table 2
Results on HandPD Meander Dataset

Table 3
Results on HandPD Spiral Dataset

Table 4
Results on SpeechPD Dataset

Table 5
Results on VoicePD Dataset

Table 6
Performance Comparison of Existing vs Proposed based on Accuracy in %
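As a sketch only, the three evaluation metrics reduce to simple counts over the predictions. The interpretation below, AP as overall accuracy, RoP as precision on the PD class, and FP as the false-positive rate on the healthy class, follows how the definitions read; the function name and toy labels are our own:

```python
def evaluate(y_true, y_pred, pd_label=1):
    """AP (19): percentage of exact predictions.
    RoP (20): percentage of PD predictions that are actually PD.
    FP: percentage of healthy subjects wrongly predicted as PD."""
    n = len(y_true)
    ap = 100.0 * sum(t == p for t, p in zip(y_true, y_pred)) / n
    # true labels of every instance the model called PD
    pd_calls = [t for t, p in zip(y_true, y_pred) if p == pd_label]
    rop = 100.0 * sum(t == pd_label for t in pd_calls) / max(len(pd_calls), 1)
    # predictions made for the truly healthy instances
    healthy = [p for t, p in zip(y_true, y_pred) if t != pd_label]
    fp = 100.0 * sum(p == pd_label for p in healthy) / max(len(healthy), 1)
    return ap, rop, fp

# toy usage: 3 PD subjects and 2 healthy subjects
ap, rop, fp = evaluate([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

The `max(..., 1)` guards merely avoid division by zero when a class is never predicted or absent.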