Convolutional-Neural-Network Assisted Segmentation and SVM Classification of Brain Tumor in Clinical MRI Slices

Due to the increased disease occurrence rates in humans


Introduction
Due to various uncontrollable reasons, the disease occurrence rates in humans are gradually growing worldwide. Diseases of the internal body organs are considered more acute compared to diseases of the external organs. In human physiology, the brain is one of the vital internal organs and also the prime part of the Central Nervous System (CNS), and an abnormality/disease in the brain is one of the chief medical emergencies [1][2][3]. Brain abnormality arises due to various reasons, and the brain tumour is one of the leading causes of abnormality in the CNS.
The various classes of Brain Tumour (BT), based on dimension and orientation, are clearly discussed in [4], and this report also suggests various clinical treatment procedures for the BT, such as surgery, radiation therapy, and chemotherapy. In humans, tumours such as Low-Grade-Glioma (LGG) and Glioblastoma-Multiforme (GBM) cause severe problems in the CNS, and the efficient recognition of these tumour cases is essential to plan and implement an appropriate treatment process. The LGG begins in the glial cells of the CNS and rigorously affects the normal activity of the CNS depending on its orientation and progression rate. The GBM is one of the harsher conditions, normally occurring in the chief parts of the CNS (brain/spinal cord), and untreated GBM may lead to various problems, such as headaches, nausea, vomiting and seizures [5][6][7].
The screening of the BT can be performed using signal-assisted methodologies (recording and examining the electroencephalogram) and image-assisted procedures (recording the brain section using radiological imaging). BT detection with imaging practices helps to reach a better diagnosis compared to signal-supported techniques [8][9][10].
In the literature, a number of image processing procedures have been proposed and implemented to evaluate the BT using MRI slices of varied modalities. The MRI slice with the axial view is widely considered by researchers and doctors to evaluate the abnormality, since the brain section seen in the axial view is clearer compared to the sagittal and coronal views [11,12].
The clinical-level detection of the BT using the chosen MRI slice is a challenging task during a mass screening operation, and classifying the BT into GBM/LGG is necessary during the decision-making and treatment-planning process. When the number of MRI slices to be diagnosed increases, a Computer-Aided-Detection (CAD) scheme can be implemented to reduce the burden, and the report obtained from the CAD, along with the categorized images, is submitted for the doctor's perusal. The CAD report combined with the findings of the doctor helps in taking a firm decision regarding the BT class and the possible treatment procedure to be implemented for the patient's recovery.
The proposed research aims at developing an appropriate Computer Aided Disease Diagnosis (CADD) scheme to segment and categorize the BT existing in T2 modality brain MRI slices. The proposed CADD framework consists of the following stages: (i) two-dimensional slice extraction and resizing (224x224x3 pixels), (ii) Deep-Feature (DF) extraction with VGG16, (iii) VGG-UNet assisted tumour mining, (iv) Handcrafted-Feature (HF) extraction, (v) dominant feature selection using the Firefly-Algorithm (FA), (vi) serial feature concatenation, and (vii) binary classification with 10-fold cross-validation.
The performance of the proposed CADD is tested and validated using benchmark as well as clinically collected MRI. The main purpose of the benchmark images (T2 modality MRI slices with LGG/GBM) is to train the VGG16 and VGG-UNet for the proposed task. The results attained with the proposed study confirm that the developed CADD helps in achieving a better accuracy with the benchmark (96.04%) as well as the clinical images (98.89%).
This paper is organized as follows: Section 2 presents the context, Section 3 discusses the implemented methodology, and Sections 4 and 5 present the experimental results and discussions. The conclusion of this research is specified in Section 6.

Related Works
Owing to its significance, a considerable number of BT diagnosis systems have been proposed and applied by researchers using traditional, Machine-Learning (ML) and Deep-Learning (DL) approaches [1][2][3]. In most of the CAD schemes, ML/DL methods are applied to classify the brain MRI slices based on the disease conditions.
The ML based BT detection directly implements a scheme with the following phases: MRI processing and resizing, feature extraction, feature selection, classifier implementation and validation [12]. In a few existing ML techniques, the extraction and evaluation of the BT section is also presented [14]. The ML approaches implemented in earlier research offered a satisfactory classification result on the benchmark as well as the clinical-grade MRI slices (94.51%) when a binary classification is employed to categorize the MRI slices into healthy/disease classes [10]. The DL supported approaches helped to achieve a better result during the binary as well as the multi-class categorization of the brain MRI slices [7,8]. The earlier works on BT detection confirmed that DL schemes implemented with the DF, and with the combined DF and HF (DF+HF), offer a better detection accuracy. The work presented in [11] evidences the need for DF+HF in order to enhance the disease detection.
The recent work discussed in [11] implemented a serial concatenation of DF and HF to enhance the performance of the VGG19 architecture, and the implemented technique helped in achieving a better classification accuracy with the benchmark (98.00%) and the clinical database (98.17%) MRI slices of T2 modality. The existing works in the literature [15][16][17] confirmed the need for combining the segmentation, HF extraction and DF extraction methodologies to enhance the overall accuracy of the disease detection system. Hence, in this work a CADD framework is developed by combining automated segmentation and classification schemes to improve the BT categorization accuracy, and the proposed work is tested and validated using the benchmark and clinical-grade brain MRI slices of T2 modality. In this work, a pre-trained VGG16 architecture is considered, and its segmentation/classification performance is initially trained, tested and validated using the TCIA dataset with GBM/LGG image cases. Later, the BT detection performance of the VGG16 is confirmed using the clinically collected brain MRI slices. The attained result with the proposed CADD confirms that the implemented VGG16 architecture offers a better BT detection accuracy.
In the earlier works, brain MRI segmentation and classification [9] are separately discussed by the researchers. Further, the existing brain MRI slices are classified using machine-learning [10] and/or deep-learning [11] methods. The chief motivation of the proposed research is to implement a CNN based joint segmentation and classification to enhance the disease detection accuracy for the benchmark as well as the clinical-grade images. Further, to improve the classification accuracy, the optimally selected handcrafted features are combined with the deep-features, and a binary classifier is implemented to categorize the images. This work aimed to implement a novel CADD system to classify the brain abnormality into the LGG/GBM class using the benchmark as well as the clinically obtained MRI slices of real patients.

Methodology
The disease detection performance of every CAD unit depends on the methodology considered to execute the particular system. In this work, the BT detection is achieved using a novel CADD unit developed and implemented using a pre-trained Convolutional-Neural-Network (CNN) scheme.

Disease Detection Framework
The CADD unit considered to detect and categorize the BT in brain MRI slices is depicted in Figure 1. Initially, a 3D image of the brain MRI is collected from the patients, and then a 3D to 2D conversion is implemented using the ITK-Snap software [18]. The extracted 2D slices are then resized to 224x224x3 pixels (the recommended image dimension for VGG16). The resized images are then considered for the feature extraction task. The BT segment and the DF are extracted by employing a trained VGG-UNet, and the HF are extracted using methods such as GLCM, Hu moments, and LBP with different weights. After extracting the essential features, the dominant feature vectors for the DF and HF are selected using the Firefly-Algorithm, and the selected features are sorted and combined using the serial feature concatenation technique discussed in [11]. The concatenated features are then considered to train, test and validate the binary classifiers, which help to categorize the brain MRI slices into the GBM and LGG classes.
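The data flow through the stages above can be sketched with shape-only stubs. This is a minimal numpy sketch, not the actual implementation: `deep_features`, `handcrafted` and `firefly_select` are hypothetical stand-ins that only reproduce the vector dimensions reported in this work (1x1024 DF, 1x268 HF, reduced to 1x427 and 1x193, fused into 1x620).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in stages: only the tensor shapes that flow
# through the proposed CADD pipeline are reproduced here.
def deep_features(img):      return rng.standard_normal((1, 1024))  # VGG16 + FCL
def handcrafted(img):        return rng.standard_normal((1, 268))   # GLCM+Hu+LBP
def firefly_select(v, keep): return v[:, :keep]                     # FA reduction stub

img = np.zeros((224, 224, 3))                  # one resized 2D slice
df = firefly_select(deep_features(img), 427)   # 1x427 dominant DF
hf = firefly_select(handcrafted(img), 193)     # 1x193 dominant HF
ffv = np.concatenate([df, hf], axis=1)         # serial fusion -> 1x620
print(ffv.shape)  # (1, 620)
```

The fused 1x620 vector is what the binary classifier consumes in the final stage.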

Figure 1
Proposed CADD framework to examine brain MRI slices

Figure 2
Sample test images collected from TCIA database


Image Database
The proposed work considers T2 modality brain MRI slices of GBM/LGG for the assessment; both the benchmark and the clinically collected datasets are considered.

Benchmark Database
The Cancer Imaging Archive (TCIA) [19] is one of the vital data-sources widely adopted by researchers to evaluate their disease detection systems. In this work, the essential GBM [20] and LGG [21] classes of brain images with T2 modality are collected for the assessment. These images are available in 3D form; the 2D slices are extracted with ITK-Snap and resized to 224x224x3 pixels. The sample test images adopted in this work are depicted in Figure 2. The total number of images considered for GBM/LGG is presented in Table 1.
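The 3D-to-2D extraction and resizing step can be illustrated with plain numpy. This is a sketch of the idea only: the paper uses ITK-Snap for the export, whereas the nearest-neighbour sampling and the `axial_slices_resized` helper below are hypothetical, and a real pipeline would use a proper image resampler.

```python
import numpy as np

def axial_slices_resized(volume, out_hw=(224, 224)):
    """Extract 2D axial slices from a 3D MRI volume and resize each to
    224x224x3 by nearest-neighbour sampling (illustrative stand-in for
    the ITK-Snap export and resize step)."""
    d, h, w = volume.shape
    ri = (np.arange(out_hw[0]) * h / out_hw[0]).astype(int)   # row indices
    ci = (np.arange(out_hw[1]) * w / out_hw[1]).astype(int)   # column indices
    for k in range(d):
        s = volume[k][np.ix_(ri, ci)]                  # resized 2D slice
        yield np.repeat(s[..., None], 3, axis=2)       # replicate to 3 channels

vol = np.zeros((5, 240, 240))                          # toy 3D volume
slices = list(axial_slices_resized(vol))
print(len(slices), slices[0].shape)  # 5 (224, 224, 3)
```

The channel replication matches the 224x224x3 input dimension VGG16 expects for grayscale MRI slices.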


Table 1
TCIA and clinical-grade brain MRI slices considered in this study

Clinical Database
The clinical significance of the proposed CADD unit is confirmed by considering the clinically collected real patients' MRI slices of T2 modality. The earlier works implemented with this dataset can be accessed from [9][10][11]. All the patient images were collected under an approved medical protocol, and informed consent was obtained from every patient who participated in this study; this information can be found in [10]. The sample clinical images of GBM/LGG are shown in Figure 3, and the number of MRI slices considered in this work is available in Table 1.

CNN Segmentation and Feature Extraction
Image segmentation is one of the proven approaches widely adopted to extract the abnormal section from a test image for further assessment [22,23]. Automated segmentation is widely preferred over semi-automated and traditional procedures; hence, in the proposed work, a CNN-supported segmentation is implemented to extract the BT segment from the considered brain MRI slices.

Figure 3
Sample T2 modality brain MRI slices of clinical database

VGG-UNet Implementation
The automated segmentation using UNet was initially proposed in [24]. This scheme includes an encoder-decoder section to categorize the image components based on their pixel values; for medical image assessment, a binary classification is employed to extract the abnormal section. In this work, the VGG-UNet scheme depicted in Figure 4 is employed to extract the BT with better accuracy. The essential information on the VGG-UNet can be accessed from [25][26][27]. The initial part (encoder) of the VGG-UNet consists of the convolutional layers of the pre-trained VGG16. The outcome of the encoder section presents the DF with a dimension of 1x4096, which is stored separately for further assessment. The extracted DF are then normalized and processed with the UNet decoder unit. The number of layers in the encoder and the decoder is identical (5 layers), and the final layer of the decoder is given to a classification layer, which implements a binary classification to separate the BT from the background using a Sigmoid activation function. This BT segment is then considered to extract the GLCM features.
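A VGG16-encoder UNet of this kind can be sketched in TensorFlow/Keras. This is a hedged sketch, not the authors' exact network: the `build_vgg_unet` name, the decoder filter widths, and taking skip connections at the end of each VGG16 block are assumptions; only the 224x224x3 input, the five-level encoder/decoder, and the sigmoid binary mask output follow the paper. Layer names follow `keras.applications.VGG16`.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_vgg_unet(input_shape=(224, 224, 3)):
    # VGG16 convolutional backbone as the UNet encoder (random init here)
    vgg = tf.keras.applications.VGG16(include_top=False, weights=None,
                                      input_shape=input_shape)
    # Skip connections taken at the last conv of each of the first 4 blocks
    skips = [vgg.get_layer(n).output for n in
             ("block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3")]
    x = vgg.get_layer("block5_conv3").output        # 14x14 bottleneck features
    for skip, filters in zip(reversed(skips), (512, 256, 128, 64)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])          # UNet skip connection
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)  # BT vs background mask
    return Model(vgg.input, out)

model = build_vgg_unet()
print(model.output_shape)  # (None, 224, 224, 1)
```

Training such a model on the TCIA GBM/LGG masks would then yield both the segmentation output and the encoder features reused as DF.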


Figure 4
VGG-UNet implemented to extract the tumour from MRI slices

Deep-Features
The total number of DF extracted is very large (1x4096); hence, three fully-connected-layers (FCL) with 50% dropout are considered to get a reduced feature vector with dimension 1x1024. All these features are then sorted based on their rank, and a Firefly-Algorithm (FA) assisted feature selection is later implemented to overcome the over-fitting problem commonly originating in binary classification. Equation (1) presents the feature-vector attained from the VGG-UNet, and Equation (2) depicts the feature-vector after the FCL dropout:

DF_VGG-UNet(1x4096) = {DF(1,1), DF(1,2), ..., DF(1,4096)},   (1)

DF_FCL(1x1024) = {DF(1,1), DF(1,2), ..., DF(1,1024)}.   (2)
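The 4096-to-1024 reduction can be illustrated with a plain numpy forward pass. This is a sketch under assumptions: the intermediate layer widths (2048, 1536) and the `fc_reduce` helper are hypothetical; only the 1x4096 input, three FC layers, 50% dropout and the 1x1024 output come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_reduce(df, dims=(2048, 1536, 1024), drop=0.5):
    """Reduce a 1x4096 deep-feature vector to 1x1024 with three
    fully-connected layers and 50% (inverted) dropout.
    Intermediate widths are hypothetical."""
    x = df
    for d in dims:
        w = rng.standard_normal((x.shape[1], d)) * np.sqrt(2.0 / x.shape[1])
        x = np.maximum(x @ w, 0.0)              # dense layer + ReLU
        mask = rng.random(x.shape) >= drop      # 50% dropout mask
        x = x * mask / (1.0 - drop)             # inverted-dropout scaling
    return x

df = rng.standard_normal((1, 4096))
reduced = fc_reduce(df)
print(reduced.shape)  # (1, 1024)
```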


Handcrafted-Features
The earlier works [11,16,17,23] confirmed that the combination of DF and HF (DF+HF) improves the performance of a deep-learning system. In this work, the essential HF from the brain MRI slices are mined using well-known methods such as GLCM [3,10,29], Hu moments [3,10,30] and LBP [31,32]. The GLCM is widely adopted due to its superior performance, and the essential GLCM parameters of the MRI slices are extracted from the BT segmented by the VGG-UNet. A similar procedure is implemented to extract the Hu moments. Equation (3) and Equation (4) present the extracted GLCM and Hu features.
The LBP provides important information regarding the gray-scale picture under assessment, and the LBP with different weights discussed in [32] is adopted to extract the pixel information of the MRI slices with GBM/LGG. In this work, the weights W=1, 2, 3, and 4 are considered to enhance the image, and from each enhanced image 59 features (1x59) are extracted. The LBP features considered in this work are depicted in Equation (5), and the total number of HF collected with GLCM, Hu and LBP (1x268) is shown in Equation (6).
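Two of the handcrafted descriptors can be computed from first principles with numpy. This is a minimal sketch, not the full HF stage: it shows one GLCM offset with two Haralick descriptors and the first Hu moment only; the quantization level count and the helper names (`glcm_features`, `hu_first`) are assumptions, and the LBP features are omitted for brevity.

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM for the horizontal offset (0,1), plus contrast and energy;
    assumes a non-empty image with img.max() > 0."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                         # count co-occurring level pairs
    p = glcm / glcm.sum()                       # normalize to probabilities
    idx = np.arange(levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy

def hu_first(img):
    """First Hu invariant moment, eta20 + eta02, from central moments."""
    img = img.astype(float)
    m00 = img.sum()
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - xc) ** 2 * img).sum()
    mu02 = ((y - yc) ** 2 * img).sum()
    return (mu20 + mu02) / m00 ** 2             # eta_pq = mu_pq / m00**2 for p+q=2

img = np.arange(64).reshape(8, 8)               # toy tumour-segment patch
contrast, energy = glcm_features(img)
h1 = hu_first(img)
```

In the proposed work these descriptors are computed on the VGG-UNet tumour segment rather than the whole slice.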

Firefly-Algorithm Based Feature Selection and Serial Fusion
The feature reduction process plays a vital role during in ML and DL based classification and to avoid the over-fitting problem, it is necessary to identify the dominant features using an appropriate approach. The feature reduction can be implemented using traditional statistical approaches (Student's ttest) [3,12] and heuristic algorithm assisted techniques [17,22]. In this work, the feature reduction for DF and HF are implemented using the FA algorithm and the reduced features are then serially combined as (3) s the tored acted h the rs in ) and to a inary ound BT LCM mour large ayers get a these and then itting inary turen (2) ut.
(1) (2) performance and the essential GLCM parameters of the MRI slices are extracted from the segmented BT by VGG-UNet. Similar procedure is implemented to extract the Hu moments. Equation (3) and Equation (4) present the extracted GLCM and Hu features.
The LBP provides the important information regarding the gray-scale picture under assessment and the LBP with different weight discussed in [32] is adopted to extract the pixel information of MRI slices with GBM/LGG. In this work, the weights, such as W=1, 2, 3, and 4 are considered to enhance the image and from each image, 1x59 number of features are extracted. The LBP features considered in this work are depicted in Equation (5): The total number of HF collected with GLCM, Hu and LBP is shown in Equation (6);

Firefly-Algorithm Based Feature Selection and Serial Fusion
The feature reduction process plays a vital role during in ML and DL based classification and to avoid the over-fitting problem, it is necessary to identify the dominant features using an appropriate approach. The feature reduction can be implemented using traditional statistical approaches (Student's ttest) [3,12] and heuristic algorithm assisted techniques [17,22]. In this work, the feature reduction for DF and HF are implemented using the FA algorithm and the reduced features are then serially combined as (4) The LBP provides the important information regarding the gray-scale picture under assessment and the LBP with different weight discussed in [32] is adopted to extract the pixel information of MRI slices with GBM/LGG. In this work, the weights, such as W=1, 2, 3, and 4 are considered to enhance the image and from each image, 1x59 number of features are extracted. The LBP features considered in this work are depicted in Equation (5): ts the tored acted h the rs in ) and to a inary ound s BT LCM mour large layers get a these k and then itting inary atureon (2) ut.
(1) (2) performance and the essential GLCM parameters of the MRI slices are extracted from the segmented BT by VGG-UNet. Similar procedure is implemented to extract the Hu moments. Equation (3) and Equation (4) present the extracted GLCM and Hu features.
The LBP provides the important information regarding the gray-scale picture under assessment and the LBP with different weight discussed in [32] is adopted to extract the pixel information of MRI slices with GBM/LGG. In this work, the weights, such as W=1, 2, 3, and 4 are considered to enhance the image and from each image, 1x59 number of features are extracted. The LBP features considered in this work are depicted in Equation (5): The total number of HF collected with GLCM, Hu and LBP is shown in Equation (6);

Firefly-Algorithm Based Feature Selection and Serial Fusion
The feature reduction process plays a vital role during in ML and DL based classification and to avoid the over-fitting problem, it is necessary to identify the dominant features using an appropriate approach. The feature reduction can be implemented using traditional statistical approaches (Student's ttest) [3,12] and heuristic algorithm assisted techniques [17,22]. In this work, the feature reduction for DF and HF are implemented using the FA algorithm and the reduced features are then serially combined as (5) The total number of HF collected with GLCM, Hu and LBP is shown in Equation (6) (2) GLCM is widely adopted due to its superior performance and the essential GLCM parameters of the MRI slices are extracted from the segmented BT by VGG-UNet. Similar procedure is implemented to extract the Hu moments. Equation (3) and Equation (4) present the extracted GLCM and Hu features.
The LBP provides the important information regarding the gray-scale picture under assessment and the LBP with different weight discussed in [32] is adopted to extract the pixel information of MRI slices with GBM/LGG. In this work, the weights, such as W=1, 2, 3, and 4 are considered to enhance the image and from each image, 1x59 number of features are extracted. The LBP features considered in this work are depicted in Equation (5) The total number of HF collected with GLCM, Hu and LBP is shown in Equation (6);

Firefly-Algorithm Based Feature Selection and Serial Fusion
The feature reduction process plays a vital role during in ML and DL based classification and to avoid the over-fitting problem, it is necessary to identify the dominant features using an appropriate approach. The feature reduction can be implemented using traditional statistical approaches (Student's t-test) [3,12] and heuristic algorithm assisted techniques [17,22]. In this work, the feature reduction for DF and HF are implemented using the FA algorithm and the reduced features are then serially combined as discussed in [33].
The FA feature selection is implemented as follows; Let us consider there exist feature vectors is implemented as ature vectors GBM FV is with a value he FA then performs t and computes the in Equation (7):.
268 in HF between features is (8) other FA parameter information can be accessed from [34].
The FA based feature selection helped to get DF vector with a dimension of 1x427 and HF with a dimension of 1x193. These features are serially combined to get a new fused-featurevector (FFV) depicted in Equation (10). This FFV is then considered to train, test and validate the binary classifiers considered in the developed CADD unit.

Classifier Implementation and Performance Validation
The performance of the medical data assessment using the developed CADD and; let this vector is with a value of discussed in [33].
The FA feature selection is implemented as follows; Let us consider there exist feature vectors GBM FV and; let this vector is with a value The FA then performs a feature wise assessment and computes the Hamming-Distance (HD) as in Equation (7):.
LGGa FV  GBMa  FV  ) LGG where N=1024 in DF and N=268 in HF The difference in values between features is expressed as in Equation (8); The fitness function then assigned as in Equation other FA parameter information can be accessed from [34].
The FA based feature selection helped to get DF vector with a dimension of 1x427 and HF with a dimension of 1x193. These features are serially combined to get a new fused-featurevector (FFV) depicted in Equation (10). This FFV is then considered to train, test and validate the binary classifiers considered in the developed CADD unit.

Classifier Implementation and Performance Validation
The performance of the medical data assessment using the developed CADD depends on the employed classifiers. Binary classification is implemented in the proposed . The FA then performs a feature wise assessment and computes the Hamming-Distance (HD) as in Equation (7): discussed in [33].
The FA feature selection is implemented as follows; Let us consider there exist feature vectors GBM FV and; let this vector is with a value The FA then performs a feature wise assessment and computes the Hamming-Distance (HD) as in Equation (7):.
LGGa FV  GBMa  FV  ) LGG where N=1024 in DF and N=268 in HF The difference in values between features is expressed as in Equation (8); other FA parameter information can be accessed from [34].
The FA based feature selection helped to get DF vector with a dimension of 1x427 and HF with a dimension of 1x193. These features are serially combined to get a new fused-featurevector (FFV) depicted in Equation (10). This FFV is then considered to train, test and validate the binary classifiers considered in the developed CADD unit.

Classifier Implementation and Performance Validation
The performance of the medical data assessment using the developed CADD depends on the employed classifiers. Binary classification is implemented in the proposed , (7) where N=1024 in DF and N=268 in HF.
The difference in val1ues between features is expressed as in Equation (8); The FA then performs a feature wise assessment and computes the Hamming-Distance (HD) as in Equation (7):.
LGGa FV  GBMa  FV  ) LGG FV , GBM FV ( HD , (7) where N=1024 in DF and N=268 in HF The difference in values between features is expressed as in Equation (8); The fitness function then assigned as in Equation (9): then the corresponding feature is selected and a new feature vector is formed. The proposed procedure is graphically presented in Figure 5 and the FA with optimally assigned parameters will help to identify the new feature vector.

Figure 5
Firefly algorithm based dominant feature selection In this work, the FA with Brownian-Distribution is considered and other essential parameters are assigned as follows; number of fireflies=30, search dimension = total features, iterations=2500 and LGG features Selected feature (8) The fitness function then assigned as in Equation (9): and; let this vector is with a value The FA then performs a feature wise assessment and computes the Hamming-Distance (HD) as in Equation (7):.
LGGa FV  GBMa  FV  ) LGG FV , GBM FV ( HD , (7) where N=1024 in DF and N=268 in HF The difference in values between features is expressed as in Equation (8); The fitness function then assigned as in Equation (9): then the corresponding feature is selected and a new feature vector is formed. The proposed procedure is graphically presented in Figure 5 and the FA with optimally assigned parameters will help to identify the new feature vector.

Figure 5
Firefly algorithm based dominant feature selection LGG features Selected feature (9) The The FA then performs a feature wise assessment and computes the Hamming-Distance (HD) as in Equation (7):.
LGGa FV  GBMa  FV  ) LGG FV , GBM FV ( HD , (7) where N=1024 in DF and N=268 in HF The difference in values between features is expressed as in Equation (8); The fitness function then assigned as in Equation (9): then the corresponding feature is selected and a new feature vector is formed. The proposed procedure is graphically presented in Figure 5 and the FA with optimally assigned parameters will help to identify the new feature vector.

Figure 5
Firefly algorithm based dominant feature selection LGG features Selected feature is then evaluated to decide the features and if a feature wise assessment and computes Hamming-Distance (HD) as in Equation (7):.
LGGa FV  GBMa  FV  ) LGG FV , GBM FV ( HD where N=1024 in DF and N=268 in HF The difference in values between feature expressed as in Equation (8) Figure 5 and th with optimally assigned parameters will he identify the new feature vector.

Figure 5
Firefly algorithm based dominant feature selection
In this work, the FA with Brownian-Distribution is considered and the essential parameters are assigned as follows: number of fireflies = 30, search dimension = total features, iterations = 2500; other FA parameter information can be accessed from [34]. The FA performs a feature-wise assessment of the LGG and GBM feature vectors and computes the Hamming-Distance (HD) as in Equation (7):

HD(FV_LGG, FV_GBM) = Σ |FV_LGG(a) - FV_GBM(a)|, a = 1, ..., N, (7)

where N = 1024 for DF and N = 268 for HF. The difference in values between the features is expressed as in Equation (8), and the fitness function is then assigned as in Equation (9). When the fitness criterion is not satisfied, the particular feature is discarded; otherwise, the corresponding feature is selected and a new feature vector is formed. The proposed procedure is graphically presented in Figure 5, and the FA with optimally assigned parameters helps to identify the new feature vector.

Figure 5
Firefly algorithm based dominant feature selection

The FA based feature selection helped to get a DF vector with a dimension of 1x427 and an HF vector with a dimension of 1x193. These features are serially combined to get a new fused-feature-vector (FFV) as depicted in Equation (10):

FFV(1x620) = DF(1x427) + HF(1x193), (10)

This FFV is then considered to train, test and validate the binary classifiers in the developed CADD unit.
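The screening and fusion steps above can be sketched as follows; the threshold rule below is a simplified stand-in for the FA-driven fitness search of Equations (8)-(9) (details in [34]), and the random vectors, names and threshold value are illustrative assumptions, not values from this work:

```python
import numpy as np

def hamming_distance(fv_lgg, fv_gbm):
    # Per-feature absolute difference between the class feature vectors,
    # in the spirit of Equation (7)
    return np.abs(fv_lgg - fv_gbm)

def select_features(fv_lgg, fv_gbm, threshold):
    # Stand-in for the FA fitness rule: keep a feature when its
    # LGG/GBM separation exceeds the threshold, discard it otherwise
    hd = hamming_distance(fv_lgg, fv_gbm)
    return np.where(hd > threshold)[0]

rng = np.random.default_rng(0)
# Hypothetical class-representative vectors: 1024 deep and 268 handcrafted features
df_lgg, df_gbm = rng.random(1024), rng.random(1024)
hf_lgg, hf_gbm = rng.random(268), rng.random(268)

df_idx = select_features(df_lgg, df_gbm, threshold=0.5)
hf_idx = select_features(hf_lgg, hf_gbm, threshold=0.5)

# Serial fusion (Equation (10) style): concatenate the surviving DF and HF values
ffv = np.concatenate([df_lgg[df_idx], hf_lgg[hf_idx]])
print(len(df_idx), len(hf_idx), len(ffv))
```

The serial combination simply places the selected HF values after the selected DF values, so the fused dimension is the sum of the two selected dimensions.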

Classifier Implementation and Performance Validation
The performance of the medical data assessment using the developed CADD depends on the employed classifiers. Binary classification is implemented in the proposed work to classify the MRI slices into the GBM/LGG class for the benchmark as well as the clinical data. To achieve this task, classifiers existing in the literature, such as SoftMax, SVM with various kernels (Linear, RBF, and Cubic) [3,12,33,[35][36][37], DA (Linear and Quadratic) [12,33] and KNN (Fine and Cubic) [12,33], are employed. Earlier research also presents similar medical image assessment tasks implemented with such classifiers [38][39][40][41].
The performance of the classifier is then assessed by recording the confusion-matrix values, such as true-positive (TP), true-negative (TN), false-positive (FP), false-negative (FN), accuracy (ACC), precision (PRE), sensitivity (SEN), specificity (SPE) and negative predictive value (NPV) [3,11]. Based on these values, the performance of the proposed CADD with a chosen binary classifier is confirmed.
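The scores listed above follow the standard confusion-matrix definitions; a minimal helper (the counts passed in are illustrative, not results from this work):

```python
def cadd_metrics(tp, tn, fp, fn):
    # Standard binary-classification scores from confusion-matrix counts
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),   # overall accuracy
        "PRE": tp / (tp + fp),                    # precision
        "SEN": tp / (tp + fn),                    # sensitivity (recall)
        "SPE": tn / (tn + fp),                    # specificity
        "NPV": tn / (tn + fn),                    # negative predictive value
    }

# Hypothetical counts for a 200-slice evaluation
scores = cadd_metrics(tp=95, tn=93, fp=7, fn=5)
print({k: round(v, 4) for k, v in scores.items()})
```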

Experimental Results
This section presents the experimental outcome attained with the proposed work. The experimental investigation is implemented using a workstation with Intel I5 2.5GHz processor with 16GB RAM and 2GB VRAM equipped with MATLAB ® .
Initially, the essential number of MRI slices is extracted from the benchmark as well as the clinical dataset as discussed in Table 1, and every image is then resized into 3 x 224 x 224 pixels to implement the selected CNN scheme.
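The resizing step can be sketched as follows; nearest-neighbour interpolation and channel replication are assumptions made for this sketch, since the exact interpolation is not stated in the text:

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    # Nearest-neighbour resize of a 2-D grayscale slice
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def to_vgg_tensor(slice2d):
    # Replicate a single-channel MRI slice into the 3 x 224 x 224
    # layout expected by a VGG-style backbone
    r = resize_nearest(slice2d)
    return np.stack([r, r, r], axis=0)

mri = np.random.default_rng(1).random((256, 256))  # hypothetical 256x256 slice
x = to_vgg_tensor(mri)
print(x.shape)
```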

Initially, the VGG-UNet is implemented to extract the BT segment from the considered test images. The VGG-UNet depicted in Figure 4 is initially trained using the original and augmented benchmark images; after the training, the segmentation performance of the VGG-UNet is validated using the benchmark and clinical grade MRI slices. Figure 6 depicts the result attained for a GBM class clinical MRI. The encoder section of the VGG-UNet also helps to extract the essential DF with a dimension of 1x4096; this feature vector is initially reduced to 1x1024 using the FCL and further reduced to 1x427 using the FA based feature reduction technique. After collecting the essential DF vector, the necessary HF are collected with the GLCM and Hu moments. To get the LBP from the test images, various weight values (W = 1, 2, 3 and 4) are implemented, and the corresponding outcomes attained for the GBM and LGG class images are presented in Figure 7. From every LBP image, a feature vector with dimension 1x59 is extracted, and all these features are combined to get the essential LBP feature vector (1x236).
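The 1x59 vector per LBP image corresponds to the classic 59-bin uniform-pattern histogram for 8 neighbours; a dependency-free sketch (the weight values W modify the operator in the cited method and are not reproduced here; the test image is synthetic):

```python
import numpy as np

def lbp_codes(img):
    # 8-neighbour LBP code for every interior pixel of a 2-D image
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.int32) << bit
    return codes

def uniform_hist59(codes):
    # 59-bin histogram: the 58 'uniform' 8-bit patterns (at most 2 bit
    # transitions around the circle) get individual bins; all remaining
    # patterns share one bin - matching the 1x59 LBP vector in the text
    def transitions(p):
        bits = [(p >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    uniform = [p for p in range(256) if transitions(p) <= 2]  # 58 patterns
    index = {p: i for i, p in enumerate(uniform)}
    hist = np.zeros(59)
    for p, count in zip(*np.unique(codes, return_counts=True)):
        hist[index.get(int(p), 58)] += count
    return hist / hist.sum()

img = np.random.default_rng(2).integers(0, 256, (64, 64))
h = uniform_hist59(lbp_codes(img))
print(h.shape)
```

Concatenating four such histograms, one per weight setting, yields the 4 x 59 = 236 values of the combined LBP feature.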
Finally, the DF+HF combination is implemented and the FFV is considered to authenticate the classifiers; based on the attained confusion-matrix parameters, the eminence of the proposed CADD is confirmed. During this classification task, a 10-fold cross validation is employed, and the best result attained during the 10 trials is adopted as the result of the considered classifier.
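The ten-fold protocol can be sketched as follows; a nearest-centroid rule stands in for the SVM so the sketch stays self-contained, and the 620-D fused vectors are synthetic stand-ins for the real FFV:

```python
import numpy as np

def ten_fold_indices(n, folds=10, seed=0):
    # Shuffle sample indices and split them into (near-)equal folds
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, folds)

def best_fold_accuracy(X, y, folds=10):
    # Per-fold accuracy; the best fold is reported, mirroring the protocol
    # in the text (nearest-centroid stands in for the SVM here)
    accs = []
    for test_idx in ten_fold_indices(len(y), folds):
        train = np.setdiff1d(np.arange(len(y)), test_idx)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[test_idx] - c1, axis=1)
                < np.linalg.norm(X[test_idx] - c0, axis=1)).astype(int)
        accs.append((pred == y[test_idx]).mean())
    return max(accs)

rng = np.random.default_rng(3)
# Hypothetical 620-D fused feature vectors (427 DF + 193 HF) for 100 slices
X = np.vstack([rng.normal(0, 1, (50, 620)), rng.normal(1, 1, (50, 620))])
y = np.array([0] * 50 + [1] * 50)
best = best_fold_accuracy(X, y)
print(best)
```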

Discussion
This section presents the merit of the proposed scheme on the considered image databases. The performance of the CADD is separately validated on the considered image datasets; initially, the CADD's performance is tested using the TCIA database, and the attained result is presented in Table 2 for DF alone and for DF+HF.
The result presented in Table 2 depicts the outcome achieved with each binary classifier and confirms that the SVM classifier outperforms the other classifier units, such as SoftMax, DA and KNN, considered in this study. Figure 8 presents the Glyph-plot demonstrating the classifier performance with DF alone and with DF+HF. From this figure, it can be noted that the overall performance offered by the SVM-RBF is better in the case of DF (Figure 8(a)), while the performance offered by the SVM-Cubic is better for DF+HF. This outcome confirms that the classification accuracy of the CADD is improved with DF+HF compared to the classification accuracy with DF alone.

Figure 7
LBP treated brain MRI slices with chosen weights

Figure 8
Glyph-plot of the classifier performance with DF and DF+HF
Related practice is then repeated with the clinical MRI dataset and the attained results are presented in Table 3. From Table 3, it can be noted that the SVM-Cubic classifier helped to achieve better classification accuracy with DF and DF+HF feature vectors. The confusion matrix presented in Figure 9 (a) and (b) also confirms the eminence of the SVM-Cubic classifier compared to other methods. Further, the Receiver Operating Characteristic (ROC) curve presented in Figure 9(c) confirms that the SVM-Cubic offers better result with DF+HF compared to other SVM, DA, KNN and SoftMax classifiers considered in this work.
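The ROC comparison in Figure 9(c) plots sensitivity against 100-specificity; a minimal computation of the curve and its area from hypothetical classifier scores (not results from this work):

```python
import numpy as np

def roc_points(scores, labels):
    # Sensitivity and 1 - specificity at each score threshold
    # (labels: 1 = GBM, 0 = LGG, by convention in this sketch)
    order = np.argsort(-scores)
    labels = labels[order]
    sen = np.concatenate([[0.0], np.cumsum(labels) / labels.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - labels) / (1 - labels).sum()])
    return fpr, sen

def auc(fpr, sen):
    # Trapezoidal area under the ROC curve
    return float(np.sum(np.diff(fpr) * (sen[1:] + sen[:-1]) / 2))

rng = np.random.default_rng(4)
labels = np.array([1] * 50 + [0] * 50)
# Illustrative decision scores: the GBM class scores higher on average
scores = np.where(labels == 1, rng.normal(1.5, 1.0, 100), rng.normal(0.0, 1.0, 100))
fpr, sen = roc_points(scores, labels)
area = auc(fpr, sen)
print(round(area, 3))
```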
The overall performance of the CADD is then presented in Figure 10 using the Glyph-plot, and this figure also confirms that the performance of the SVM-Cubic is better with the clinical images for both DF and DF+HF. These results confirm that the proposed CNN based segmentation and SVM classification is clinically noteworthy, and the developed CADD can be considered to examine the clinical level MRI slices collected from real patients.

Figure 9
(a) and (b) Confusion matrices; (c) ROC curve for DF+HF

Figure 10
Glyph-plot generated using the classifier performance for clinical MRI database
The main contribution of the proposed work is the development of a CNN assisted CADD system to detect and classify the BT into GBM/LGG with better accuracy. The proposed approach helped to attain a classification accuracy of >98% on the clinically collected images. The future extent of this work includes: (i) improving the performance of the VGG-UNet using the VGG19 scheme, (ii) comparing the performance of the VGG-UNet with other CNN segmentation methods existing in the literature, and (iii) testing and validating the performance of the proposed CADD using brain MRI of modalities such as T1, T1C and Flair.

Conclusion
The chief intention of this research is to develop a CNN supported CADD unit to categorize the brain MRI slices into the GBM/LGG class. This work developed the CADD system using: (i) VGG-UNet assisted segmentation and DF extraction, (ii) HF extraction using GLCM, Hu moments and LBP, (iii) dominant feature selection using the FA, and (iv) feature fusion and validation using various binary classifiers. The performance of the proposed CADD is separately tested using the TCIA and the clinical MRI datasets, and the results attained with the proposed scheme substantiate that it accomplishes better classification accuracy on the TCIA as well as the clinical dataset. The classification result attained with the clinical grade MRI using DF and DF+HF with a ten-fold cross validation confirms that the proposed CADD system offers a better outcome with the SVM-Cubic classifier compared to the other binary classifiers. In future, this CADD can be adopted to inspect the clinically collected brain MRI slices of T2 modality.