Liver Lesion Detection Using Semantic Segmentation and Chaotic Cuckoo Search Algorithm

The classic feature extraction techniques used in recent research on computer-aided diagnosis (CAD) of liver cancer have several disadvantages, including duplicated features and substantial computational expense. Modern deep learning methods solve these issues by implicitly detecting complex structures in massive quantities of healthcare image data. This study proposes a novel bio-inspired deep learning method for improving liver cancer prediction outcomes. First, a semantic segmentation technique known as UNet++ is applied to extract liver lesions from computed tomography (CT) images. Second, a hybrid approach that combines the Chaotic Cuckoo Search algorithm and AlexNet serves as the feature extractor and classifier for liver lesions. LiTS, a freely accessible database of abdominal CT images, was employed for liver tumor diagnosis and investigation. The segmentation results were evaluated using the Dice similarity coefficient and the correlation coefficient. On performance metrics such as accuracy, precision, and recall, the recommended method outperforms existing algorithms, producing the highest values of 99.2%, 98.6%, and 98.8%, respectively.


Introduction
The liver, the second most massive organ in the human body after the skin and the heaviest internal organ, lies on the right side of the stomach. Neighboring organs such as the gallbladder, intestines, and pancreas are all connected with the liver's right and left lobes, and the liver interacts with several other organs. The numerous cells that make up the liver are the source of primary liver cancer, while metastatic liver cancer is carried in by malignant tissue from other organs. Among all liver malignancies, hepatocellular carcinoma is one of the most prevalent types of liver disease, and liver cancer is among the most commonly diagnosed cancers in the world. The World Health Organization (WHO) reports that cancer was responsible for nearly eight million fatalities in 2019, of which 675,000 were due to hepatocellular carcinoma [22]. More than 400,000 fatalities yearly are caused by liver cancer, which is especially prevalent in certain parts of Africa [27].
Specialists in radiology and oncology employ images obtained from either computed tomography (CT) or magnetic resonance imaging (MRI) to observe the structure and appearance of the liver. These observations are essential for the preliminary identification and staging of primary and second-stage liver carcinoma [15]. Typically, manual or partially manual techniques are used to interpret CT scans of the liver, but such approaches are laborious, expensive, unreliable, and susceptible to inaccuracy. Several computational techniques have been developed to address these issues and enhance the detection of liver cancer. Due to several problems, such as diminished contrast between the liver and its adjacent organs, variation in the number of cancerous cells, the tumor's tiny dimensions, tissue anomalies, and the sporadic expansion of tumors, these methods have failed to segment and identify liver lesions effectively [12]. Therefore, an entirely novel approach is required to subdue these challenges.
As liver disease is one of the leading causes of early death, treatment procedures for patients must be cutting-edge and efficient. Liver surgical treatment is a typical therapy for liver disorders [13]. This procedure involves extracting the liver from CT images, conducting computational evaluation, gathering data on pathological conditions, and offering an empirical basis for surgical plans. Even manual delineation of the liver by skilled professionals can be highly subjective and laborious due to detailed background information, hazy boundaries, and varied shapes [6]. As a result, the study of automatic liver segmentation is a fundamental goal for liver surgery, and one with significant practical implications.
Several researchers have investigated medical image segmentation to increase the precision and efficacy of both evaluation and therapy. Automatic liver segmentation is necessary for several processes, including liver transplantation and three-dimensional positioning in radiotherapy. Although semi- or fully automated approaches for segmenting liver CT scans have been presented recently [32], accurate liver segmentation is still challenging for specific reasons. First, the appearance of the liver and its adjacent organs, such as the heart and stomach, is nearly identical. Second, the curvature of the liver is obscured by substantial volume changes. Furthermore, medical imaging frequently reveals significant structural changes, such as hepatitis and large lesions, whose intensity clearly differs from that of typical liver tissue [31]. Finally, each person has a unique liver shape. For the aforementioned reasons, current methods find it challenging to segment the liver's tiny structures and intricate contour, and automatic liver segmentation remains difficult to apply reliably in clinical assessment and therapy.
The motivation behind this work is to address the limitations and challenges associated with traditional feature extraction techniques in computer-aided diagnosis (CAD) of liver cancer. The classic methods often suffer from duplicated features and substantial computational expenses, hindering their effectiveness in accurately predicting and diagnosing liver tumors.
The researchers are motivated to leverage the power of modern deep learning methods, which have shown great potential in detecting complex structures in large volumes of healthcare image data. By utilizing deep learning, they aim to overcome the drawbacks of traditional techniques and improve the outcomes of liver cancer prediction. Furthermore, in the literature, manually generated features constitute the foundation of the majority of suggested solutions for liver cancer detection. Numerous perceptual identifiers have been examined, including appearance, shape, and mixtures of the two. To describe appearance and form, machine learning approaches such as Support Vector Machines and Artificial Neural Networks have frequently been employed together with grey gradient overlay vector features, Fourier exponent statistics, and first-order metrics for categorization purposes [1]. Even though these techniques are adequate, creating custom characteristics that best fit a given classification problem can be challenging. Additionally, these techniques fail to adequately convey the entire structure of features from image data. In recent years, deep learning, a type of representation learning, has been able to extract significant and intermediate conceptual characteristics from visual data [5]. Deep learning can learn incredibly complicated patterns, which is one of its benefits. Deep learning algorithms model transitional representations of image data that other algorithms find challenging to understand by using hidden layers between the input and output layers. Consequently, they can produce substantial feature representations precisely from unprocessed clinical imagery [9].
The proposed research is anticipated to answer the following research questions:
a) How can we develop a completely automated system for segmenting and categorizing liver lesions using deep learning?
b) What is the most effective method for semantically segmenting liver tumors with improved accuracy?
c) Is it possible to hybridize bio-inspired algorithms with deep learning approaches to provide a highly feasible solution for more precise feature extraction and classification of liver lesions?
To resolve the above research questions, this paper proposes a hybrid bio-inspired deep learning approach to liver cancer detection employing CT images, in contrast to contemporary technologies based on either feature extraction approaches or combinations of feature engineering and deep learning techniques. The processes of segmentation, feature extraction, and categorization of liver lesions are examined by integrating deep learning models with metaheuristic bio-inspired optimization techniques.
The main contributions of this work are:
1) To propose a novel semantic segmentation technique, UNet++, to segment liver lesion CT images effectively.
2) To employ the Chaotic Cuckoo Search algorithm and AlexNet architecture as the feature extractor and classifier for liver tumor diagnosis.
3) To demonstrate the performance supremacy of the proposed UNet++, Chaotic Cuckoo Search, and AlexNet approach by comparing it with existing models for liver cancer detection in the literature.
The remainder of this paper is organized as follows. Section 2 investigates the state-of-the-art works on liver cancer diagnosis using machine learning and deep learning techniques. Section 3 presents the proposed methods, such as UNet++, Chaotic Cuckoo Search algorithm and AlexNet. Section 4 discusses the performance of the suggested approach by analyzing the experimental results obtained. Section 5 concludes the present research.

Related Works
This section emphasizes current literature that employs machine learning and deep learning techniques for liver cancer detection and diagnosis. Most of the reports presently accessible have concentrated on the automatic segmentation of liver tumors using CT images [10]. Collecting information is generally simple in clinical settings because CT, owing to its low cost, is frequently employed in preliminary planning. Authors in [17] created a two-phase cancer segmentation approach based on conventional algorithms that involves initial delineation by threshold setting and anatomical procedures, followed by enhancement through grouping and a physically reconfigurable model. According to [21], the level set technique with an adaptive computing ripple-scanning methodology for activation can segregate liver metastases in a semi-automated form.
The liver was segmented using a three-dimensional linear stable structure characterization by the researchers in [24], and the liver lesion was segregated using vertex slices with contour and augmentation restrictions. Using Grassmannian multivariate learning strategies, the work in [2] developed an autonomous classification system for metastatic liver cancers based on distinguishing features between malignant and healthy tissue. To accomplish integrated separation of the liver and lesions with three-dimensional compact conditional random fields, authors in [19] initially stacked two fully convolutional neural networks. To decrease the percentage of false positives, liver tumors were segmented using a two-dimensional U-Net [28] and supervised learning-based selection screening.
Researchers in [29] developed a semi-dimensional complex CNN that used limited-range residual links. The work in [33] developed a complete, separate-stage framework for liver and lesion identification that did not require routine additional processing. In the distinctive dual densely coupled UNet proposed in [26], interconnection characteristics were determined using an intense two-dimensional U-Net, while geometric parameters were systematically consolidated using an intense three-dimensional U-Net.
To improve liver and lesion categorization, authors in [30] changed the basic U-net layout by including image-dependent advanced characteristics.
A unique liver lesion fragmentation technique was suggested in [8] utilizing CT images. They used a three-dimensional asymmetrical residual network (3D ARN) and a dynamic boundary model to optimize the liver carcinoma cells. First, cancer contenders found using the 3D ARN are used to segment the liver. To develop contenders for segmenting the liver tumor, which may require more precise lesion data in the prospective area, this study [16] suggests modifying the hyper pixel delineation approach using information from the neighborhood at varying levels.
It improves the network's susceptibility to details of liver lesions and minimizes the processing challenge brought on by duplicate information.
Deep learning-based models for the detection of liver cancer using Horizon Transformation and Stochastic model were proposed by authors in [23]. This method depends on the Stochastic integrated model and indicator-controlled horizon transformation for accurate detection. Real-time medical setting evaluation of the proposed method uses clinical information from various individuals [7]. The dense neural model classifier generated an optimal reliability of 98.25% with little test loss, which is the key benefit of this automatic identification. The employment of the dense neural model in the detection process is the primary method for finding liver tumors. The proposed approach [25] is examined to locate the cancerous area on CT images, which would help with a timely diagnosis during therapeutic and surgical decisions.
The dual feature extraction technique utilizing artificial neural networks and cross-validation for liver cancer was developed in [18]. This approach is based on artificial intelligence and employs ten-fold cross-validation of the network, validating the effect of the proposed system on 87 cancerous patients and 354 normal individuals. The proposed method's accuracy, characteristic measure, and computation time are comparable to those of the one-way analysis of variance approach for identifying the feature group [11]. The accuracy of diagnosis in both approaches improves considerably as the number of features increases.

Proposed Methodology
This section proposes a novel technique for detecting liver cancer using CT images. The method presented in this work is entirely based on deep learning, as opposed to the typical feature engineering techniques designed to suit particular healthcare pattern identification tasks. However, a deep learning approach needs considerable computational power to operate at an adequate rate. To address this issue, this paper explores the effects of combining various deep learning networks with bio-inspired algorithms to enhance the segmentation, feature extraction, and categorization of liver lesions. Figure 1 shows the three phases of the suggested method for diagnosing liver cancer.

UNet++
Regarding image segmentation, U-Net is specifically designed for use in the medical imaging sector. The architecture consists of a contracting path, where the spatial dimensions of the feature maps shrink while the number of channels doubles at each step until it reaches 1024, which is generally the highest value used; a bottleneck that functions as a turning point; and an expanding path, where the dimensions of the feature maps grow back toward the size of the mask.
In essence, UNet++ expands the network by including deep multilayer blocks and an intricate supervisory model stacked at the highest level. The addition of a deep multilayer block is the initial architectural modification. UNet++ changes the regular passing of the feature maps produced by the encoding process to the decoding process at the equivalent level in U-Net. The semantic separation between the feature maps of the encoding and decoding processes is bridged by the newly implemented deep links, making learning simpler for the model because the feature maps become more semantically equivalent. The feature maps, represented by a^{m,n}, are determined using Equation (1) when n equals zero and Equation (2) when n is greater than zero.
UNet++ is an extension of the original UNet architecture, which is widely used for semantic segmentation tasks. It improves upon the UNet model by introducing a nested and dense skip pathway structure, allowing for better feature representation and capturing more detailed contextual information.
Here is how UNet++ works:
Encoder: The UNet++ architecture begins with an encoder network, which consists of multiple convolutional layers. The encoder gradually reduces the spatial dimensions of the input image while extracting high-level features through downsampling operations, such as max pooling or strided convolutions. These features capture the global context of the image.
Skip Connections: Unlike the original UNet, UNet++ employs a nested and dense skip connection design. At each stage of the encoder, skip connections are established to connect the feature maps with corresponding decoder stages. This allows for the flow of information from the encoder to the decoder, preserving important features at different scales.

Decoder:
The decoder network in UNet++ is responsible for upsampling the feature maps to the original image size. It performs upsampling operations, such as transposed convolutions or upsampling followed by convolutions, to gradually increase the spatial dimensions while recovering fine-grained details. The decoder also incorporates the feature maps received from the skip connections, enabling the fusion of multi-scale information.
Dense Skip Pathways: In UNet++, the skip connections are made denser compared to the original UNet architecture. Instead of having a single skip connection at each level, UNet++ introduces additional skip connections from lower-level feature maps to higher-level feature maps. This dense connectivity enhances the flow of information and facilitates the extraction of more detailed and contextual information.
Final Prediction: At the end of the decoder, a final prediction layer produces the segmentation output. This layer typically uses a convolutional operation with an appropriate number of output channels, corresponding to the number of classes to be segmented. A suitable activation function, such as sigmoid or softmax, is applied to generate the pixel-wise segmentation probabilities or labels.
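The final prediction step described above can be sketched as a per-pixel linear projection followed by a sigmoid. The channel count and the toy inputs below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of a final prediction head: a 1x1-convolution-like projection
# across channels, then a sigmoid giving per-pixel lesion probabilities.
# The 8-channel decoder output is a hypothetical example.

def predict_mask(features, weights, bias):
    # features: (C, H, W); weights: (C,) -> a single output channel
    logits = np.tensordot(weights, features, axes=1) + bias
    return 1.0 / (1.0 + np.exp(-logits))      # pixel-wise probabilities

rng = np.random.default_rng(0)
feats = rng.random((8, 4, 4))                 # toy decoder feature maps
probs = predict_mask(feats, rng.normal(size=8), 0.0)
print(probs.shape)                            # one probability per pixel
```

Thresholding `probs` (e.g. at 0.5) would yield the final binary lesion mask.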
By utilizing the nested and dense skip pathway structure, UNet++ captures hierarchical features and contextual information at multiple scales. This enables more accurate and detailed segmentation of objects or regions of interest in the input images, such as liver lesions in the case of liver cancer diagnosis.

a^{m,n} = C(a^{m-1,n}) (1)

a^{m,n} = C([a^{m,0}, a^{m,1}, ..., a^{m,n-1}], W(a^{m+1,n-1})) (2)

In the above equations, C denotes the convolution function and W represents the widening (upsampling) layer for the input a with levels m and n. As per the architecture, the feature maps generated for the first level are shown in Equations (3)-(6).

a^{0,1} = C(a^{0,0}, W(a^{1,0})) (3)

a^{0,2} = C([a^{0,0}, a^{0,1}], W(a^{1,1})) (4)

a^{0,3} = C([a^{0,0}, a^{0,1}, a^{0,2}], W(a^{1,2})) (5)

a^{0,4} = C([a^{0,0}, a^{0,1}, a^{0,2}, a^{0,3}], W(a^{1,3})) (6)
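The nested wiring of the node rule in Equations (1)-(2) can be sketched in a few lines of numpy. The `conv` and `widen` functions below are deliberately toy stand-ins for the real convolution C and widening W, chosen only to show how the node indices connect; they are assumptions, not the paper's layers.

```python
import numpy as np

# Toy stand-in for C: a "1x1 convolution" that averages input channels.
def conv(x):
    return x.mean(axis=0, keepdims=True)

# Toy stand-in for W: 2x nearest-neighbour upsampling.
def widen(x):
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def unetpp_nodes(depth=4, size=16):
    a = {}
    # Equation (1): encoder backbone nodes a[m,0], halved per level.
    for m in range(depth):
        a[m, 0] = np.random.rand(1, size >> m, size >> m)
    # Equation (2): nested decoder nodes a[m,n] for n > 0, built from
    # all dense skips a[m,0..n-1] plus the widened lower node.
    for n in range(1, depth):
        for m in range(depth - n):
            skips = [a[m, k] for k in range(n)]
            up = widen(a[m + 1, n - 1])
            a[m, n] = conv(np.concatenate(skips + [up], axis=0))
    return a

nodes = unetpp_nodes()
# the top-level nodes a[0,1..3] all keep the full spatial resolution
print([nodes[0, n].shape for n in range(1, 4)])
```

The dictionary keys `(m, n)` mirror the superscripts in the equations, so the loop bounds make explicit which nodes each a^{m,n} depends on.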



Figure 1
Architecture of Proposed Methodology

Figure 2 UNet++ Architecture
A new intricate supervisory model has been added to UNet++ as the second upgrade. Intricate supervision is simpler, and the model benefits from having two modes of operation, Accurate and Fast. The former mode takes an average of the outcomes from all the branches, whereas the latter does not consider all the branches when selecting the outcomes. The loss function for the UNet++ model is constructed by combining the binary cross entropy technique with the Dice loss. This loss function is mathematically represented as in (7),

L(Y, Ŷ) = -(1/N) Σ_{b=1}^{N} ((1/2) · Y_b · log Ŷ_b + (2 · Y_b · Ŷ_b) / (Y_b + Ŷ_b)) (7)

where Y_b and Ŷ_b denote the ground-truth and predicted masks of the b-th image and N denotes the batch size.
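A minimal numpy sketch of a binary-cross-entropy-plus-Dice loss of the kind Equation (7) describes; the equal weighting of the two terms and the smoothing constant `eps` are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

# Hybrid segmentation loss: binary cross entropy (pixel-wise fit)
# combined with a Dice term (overlap fit). Lower is better for both.

def bce_dice_loss(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)     # guard log(0)
    bce = -np.mean(y_true * np.log(y_pred)
                   + (1 - y_true) * np.log(1 - y_pred))
    inter = np.sum(y_true * y_pred)
    dice = (2 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)
    return bce + (1 - dice)                    # Dice expressed as a loss

y_true = np.array([1.0, 1.0, 0.0, 0.0])        # toy flattened mask
perfect = bce_dice_loss(y_true, np.array([1.0, 1.0, 0.0, 0.0]))
poor = bce_dice_loss(y_true, np.array([0.1, 0.2, 0.9, 0.8]))
print(perfect < poor)                          # a perfect mask scores lower
```

The BCE term drives per-pixel calibration while the Dice term counteracts the class imbalance typical of small lesions, which is why the two are commonly combined.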

Chaotic Cuckoo Search Algorithm
A dataset with n features admits 2^n subsets of those features. The feature selection problem gradually transforms into a class of optimization problems when n is sufficiently large, since the main issue is how to choose the particular group of feature combinations that improves the training efficiency of the machine model. In contrast to conventional optimization problems, the feature selection challenge is unique: it is recognized as a periodic linear problem, with the solution being demonstrated and updated at each corner of the hypercube. The search space is an n-dimensional matrix structure of the Boolean type as in (8),

F = (f_1, f_2, ..., f_n), where f_k ∈ {0, 1}. (8)

In the above equation, when the value of f_k equals 0, the particular feature is not chosen; alternatively, if the value is 1, the feature is chosen.
Based on an overview of the invasive and reproductive habits of cuckoos in nature, the algorithm creates a method for searching using biologically inspired heuristics. Each egg in the nest is a solution to an optimization problem, and a cuckoo egg can be described as an entirely novel solution, which is utilized to substitute the less-than-ideal solution in the nest while solving the particular optimization problem. The host nest is referred to as a member of the population in the binary cuckoo algorithm, and the nest enables the cuckoo to lay one or several eggs. The nests with superior fitness scores continue to exist as the number of cycles grows. This indicates that the optimal eggs are preserved later in the algorithm's iterations, keeping the favorable characteristics. The problem is formulated mathematically as in (9) to (11) by employing a sigmoid function to map the features in the continuous space.
The Chaotic Cuckoo Search algorithm is improvised with chaotic maps and Levi Flight components. While the feature selection problem can only be solved within the range [0, 1], the standard cuckoo search algorithm updates the cuckoo at any location, also known as the continuous space. There are Y cuckoos in the population, and there are X qualities that each cuckoo possesses for a population of size Y. It indicates that each person's search space is a X * Y matrix. Each nest in the Chaotic Cuckoo Search method generates a unique binary string, with every one of the bits indicating a distinct feature. If a bit is set to 1, it means that the feature has been chosen, and if set to 0, it is rejected.
The chaotic condition of randomness turns out to be pretty structured, as revealed by the chaos theory, which was first proposed to examine meteorological circulation trends. Minor modifications to the preliminary test setup can result in large modifications to the behavior that follows. Therefore, to provide a foundation for algorithmic resolution and to boost population variety during the preliminary phase, a chaotic map is included in the preliminary phase. Chebyshev map is employed as the chaotic map, which is mathematically represented as in (12) In Equation (12), the value of x a is any random number between 0 and 1.
The cuckoo must first search a wide area before settling on the best option better to find a suitable nest in the following generation. This algorithm is given the Levy flight dimension stride search to broaden its search scope within its practical range and enhance its ability to search globally. The dimension stride is computed as in (13) where and represents the previous and current locations of the nest which the cuckoo takes in its exploration path. The nest location is updated for every iteration depending on the Levy Flight formula as shown in (14), Algorithm 1.
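As a concrete illustration of Equations (9)-(12), the following Python sketch maps a continuous cuckoo position to a binary feature mask and generates a Chebyshev chaotic sequence. The threshold value δ = 0.5 and the sample positions are illustrative assumptions, not values from the paper.

```python
import math

def sigmoid(x):
    # Eq. (9): map a continuous position component into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, delta=0.5):
    # Eqs. (10)-(11): feature k is chosen when S(f_k) exceeds delta
    return [1 if sigmoid(f) > delta else 0 for f in position]

def chebyshev_sequence(x0, n):
    # Eq. (12): x_{a+1} = cos(a * arccos(x_a)), with x0 in (0, 1)
    seq, x = [], x0
    for a in range(1, n + 1):
        x = math.cos(a * math.acos(max(-1.0, min(1.0, x))))
        seq.append(x)
    return seq

mask = binarize([0.8, -1.2, 0.1, 2.5])   # -> [1, 0, 1, 1]
chaos = chebyshev_sequence(0.7, 5)        # five chaotic values in [-1, 1]
```

Note that the chaotic sequence stays bounded in [-1, 1] while remaining highly sensitive to the initial value x0, which is what makes it useful for diversifying the initial population.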

Algorithm 1. Chaotic Cuckoo Search algorithm

Input: population N, feature set F, chaotic map cb_map
Output: current BEST cuckoo
Step 1: Arrange the individuals in the cuckoo population based on fitness
Step 2: Select the BEST cuckoos
Step 3: Modify dim_stride using the Chebyshev chaotic map cb_map as per Equation (12)
Step 4: Pick a cuckoo at random and update its solution by leveraging Lévy flight
Step 5: Assess the fitness of the chosen cuckoo, F_old
Step 6: Select a new nest location and assess its fitness, F_new
Step 7: if (F_old < F_new)
Step 8: Update the cuckoo in the new nest location
Step 9: end if
Step 10: At every iteration, the WORST cuckoos are rejected and the BEST ones are retained
Step 11: Arrange the individuals in the current cuckoo population with the BEST cuckoos
Step 12: return the current BEST cuckoo
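Algorithm 1 can be sketched in Python as follows. The fitness function, population size, step scaling, and abandonment fraction below are illustrative assumptions rather than the paper's exact settings; the sketch combines Chebyshev-map initialization (Equation (12)), Lévy-flight updates (Equations (13)-(14)), and the sigmoid binarization of Equations (9)-(11).

```python
import math
import random

def chebyshev(x, a):
    # Chebyshev chaotic map, Eq. (12): x_{a+1} = cos(a * arccos(x_a))
    return math.cos(a * math.acos(max(-1.0, min(1.0, x))))

def levy_step(lam=1.5):
    # Mantegna's algorithm: draw one Levy-distributed step length
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2) /
             (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = random.gauss(0.0, sigma)
    v = abs(random.gauss(0.0, 1.0)) + 1e-12
    return u / v ** (1 / lam)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, delta=0.5):
    # Eqs. (9)-(11): feature k is chosen when S(f_k) > delta
    return [1 if sigmoid(f) > delta else 0 for f in position]

def chaotic_cuckoo_search(fitness, n_features, n_nests=15, iters=50, pa=0.25, alpha=0.01):
    # Initialize nests with a Chebyshev chaotic sequence for population diversity
    x0, nests = random.random(), []
    for i in range(n_nests):
        x0 = chebyshev(x0, i + 1)
        nests.append([chebyshev(x0, k + 1) for k in range(n_features)])
    scores = [fitness(binarize(n)) for n in nests]
    for _ in range(iters):
        i = random.randrange(n_nests)                        # Step 4: pick a cuckoo
        new = [xi + alpha * levy_step() for xi in nests[i]]  # Eq. (14): Levy update
        f_new = fitness(binarize(new))                       # Step 5
        j = random.randrange(n_nests)                        # Step 6: a nest at random
        if f_new > scores[j]:                                # Steps 7-9: keep the better
            nests[j], scores[j] = new, f_new
        # Step 10: abandon a fraction pa of the worst nests
        worst = sorted(range(n_nests), key=lambda k: scores[k])[:int(pa * n_nests)]
        for k in worst:
            nests[k] = [random.uniform(-1.0, 1.0) for _ in range(n_features)]
            scores[k] = fitness(binarize(nests[k]))
    best = max(range(n_nests), key=lambda k: scores[k])      # Step 12
    return binarize(nests[best]), scores[best]

# Toy fitness: reward masks that match a known-good feature subset
target = [1, 1, 1] + [0] * 7
fit = lambda bits: sum(b == t for b, t in zip(bits, target))
mask, score = chaotic_cuckoo_search(fit, n_features=10, n_nests=8, iters=30)
```

In the paper's pipeline the fitness would instead score a feature subset by the classification performance it yields, which is far more expensive per evaluation than this toy objective.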

AlexNet
This architecture stacks convolutional layers and fully connected layers. Each neuron in the filters that comprise a convolutional layer is a convolutional kernel: a numeric vector that multiplies its weights by the corresponding values of a selected portion of the pixels in the input image, where the selected region shares the kernel's size. The products are then added together to produce a single value corresponding to one pixel of the output. The output of the convolutional layer is generated by moving the kernel across the input image in both dimensions. The mathematical formulation of the convolution process is represented in (15),

F_map(m, n) = Σ_i Σ_j I_p(m + i, n + j) · S_k(i, j). (15)

In the above equation, F_map(m, n) denotes the feature map with m columns and n rows, I_p(a, b) represents the input with coordinates a and b, and S_k(i, j) represents the kernel with elements indexed by i and j.

A neighborhood feature map is produced by a pooling operation that aggregates similar information in a region into a single value. There are five convolutional layers in AlexNet, with pooling layers placed after each of the first three convolutional layers. The Rectified Linear Unit is included as the activation function for each layer, and batch normalization is implemented to overcome the issue of overfitting. After the five convolutional layers, which perform feature extraction, three fully connected layers accomplish the classification. Dropout is introduced after the fully connected layers to skip certain units and generalize the network. The softmax activation function is incorporated in the output layer of the AlexNet architecture, represented as in (16),

softmax(z_i) = e^(z_i) / Σ_j e^(z_j). (16)

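A minimal NumPy sketch of the convolution in Equation (15) and the softmax in Equation (16); the 4x4 image and the edge kernel are illustrative examples, not AlexNet weights.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution as in Eq. (15): each output pixel is the sum
    of elementwise products of the kernel and the matching image patch."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for m in range(oh):
        for n in range(ow):
            out[m, n] = np.sum(image[m:m + kh, n:n + kw] * kernel)
    return out

def softmax(z):
    """Softmax output layer as in Eq. (16), shifted for numerical stability."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # simple vertical-edge kernel
fmap = conv2d(img, edge)                      # 3x3 feature map
probs = softmax(np.array([2.0, 1.0, 0.1]))    # class probabilities summing to 1
```

Deep learning frameworks implement the same operation with strides, padding, and many kernels per layer; this loop form only makes the arithmetic of Equation (15) explicit.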
Results and Discussion
This section presents the results obtained by applying the LiTS dataset to the proposed methodology for liver segmentation and liver lesion classification.

Dataset Description
The dataset utilized in this research is LiTS, the Liver Tumor Segmentation benchmark dataset. The research sample in this dataset includes a combination of primary and metastatic liver tumor diseases, and the ratio between tumor and background varies for each of the lesions. The dataset contains 201 abdominal CT images, with 194 images representing lesions, and is split into 131 CT images for training and 70 for testing. The imaging resolution extends from 0.56 mm to 1.0 mm, with up to 1026 slices per volume, and scans typically contain up to 12 malignancies. The lesions range in dimension from 38 mm to 1231 mm. Comparing the test set to the training set, more tumor incidences are seen in the test set. Statistical analysis reveals no significant difference between the liver volumes in the training and test sets. The mean lesion HU value is 65 in the training set and 59 in the test set. The dataset used in the experimentation of the proposed research is available at https://www.kaggle.com/datasets/andrewmvd/lits-png.

Performance Evaluation of Liver Lesion Segmentation Using UNet++

The performance of the proposed UNet++ segmentation method is assessed using metrics such as the Dice similarity coefficient and the Correlation coefficient.

The Dice similarity coefficient (DSC) is a popular metric for evaluating the accuracy of automated or partially automated segmentation techniques and a preferred technique for comparing binary portions of an image. Customized for image segmentation, it is commonly used to compare the segmented region produced by automated or partially automated methods against the ground truth. A set must be constructed for each DSC calculation between two regions. It is mathematically formulated as shown in (17),

DSC = 2 |B ∩ G| / (|B| + |G|), (17)

where B represents the image in binary form and G represents the ground truth.

The correlation coefficient compares the image with the ground truth based on the intensity of the pixels in the images. It is defined mathematically as in (18), where the x and y parameters denote the pixel positions in the images,

r = Σ_x Σ_y (B(x, y) − μ_B)(G(x, y) − μ_G) / sqrt(Σ_x Σ_y (B(x, y) − μ_B)^2 · Σ_x Σ_y (G(x, y) − μ_G)^2), (18)

where μ_B and μ_G denote the mean intensities of B and G.

The performance of the UNet++ model used in the proposed research is compared with some of the best deep learning-based semantic segmentation models, such as YOLACT, YOLOv7, and Mask R-CNN.

Figure 4
Performance Comparison of Segmentation Methods

Figure 4 compares the performance of the various segmentation methods. UNet++ with semantic segmentation obtains higher accuracy than the existing techniques.
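The two segmentation metrics can be computed directly from a predicted binary mask B and a ground-truth mask G, as in Equations (17)-(18); the small masks below are illustrative only.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient, Eq. (17), for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def correlation(a, b):
    """Pearson correlation coefficient, Eq. (18), over pixel intensities."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])   # hypothetical predicted mask
truth = np.array([[1, 0, 0], [0, 1, 0]])  # hypothetical ground truth
d = dice(pred, truth)                      # 2*2 / (3+2) = 0.8
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates none, which is why it is the standard headline metric for segmentation quality.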

Performance Evaluation of Liver Lesion Classification Using Chaotic Cuckoo Search and AlexNet
The performance of the proposed Chaotic Cuckoo Search with AlexNet architecture for liver lesion classification is evaluated using metrics such as Accuracy, Precision, Recall, F1 Score, and Specificity.

Pretrained CNN architectures such as VGG16, ResNet50, InceptionV3, DenseNet121, and MobileNetV2 are applied to the LiTS dataset to assess their performance on liver tumor classification and compare it against the results produced by the AlexNet model. The obtained results are shown in Table 5.
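The evaluation metrics used in this comparison follow the standard confusion-matrix definitions; the counts in the example below are illustrative only, not the paper's confusion matrix.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, F1 score, and specificity
    computed from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1, specificity

# Illustrative counts for a hypothetical 200-image test split
acc, prec, rec, f1, spec = classification_metrics(tp=95, fp=2, tn=98, fn=5)
```

Reporting precision and recall alongside accuracy matters here because lesion datasets are class-imbalanced, and accuracy alone can mask poor tumor-class recall.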

Figure 6
Performance Comparison of Existing vs. Proposed Methods

The performance of the proposed system is compared with a few existing works on liver lesion segmentation and liver tumor classification. The existing models considered for comparison include UNet-3DCNN-CLSTM [20], GAN-ResNet50-InceptionResNetV2 [21], Watershed-Gaussian Mixture Model-DNN [27], UNet-Grey Wolf Class Topper Optimization [29], and DFS U-Net-Improved CNN [30]. The UNet-3DCNN-CLSTM model utilizes UNet for image segmentation, 3DCNN for feature optimization, and C-LSTM for classification [31-33]; it produces accuracy, precision, and recall of 93.6%, 92.8%, and 93.2%, respectively. GAN-ResNet50-InceptionResNetV2 produces an accuracy of 94.7%, a precision of 94.2%, and a recall of 94.4%. Watershed-Gaussian Mixture Model-DNN produces a slightly better accuracy of 96.1%, and UNet with Grey Wolf Class Topper Optimization reaches 97.6%. DFS U-Net-Improved CNN employs a hybrid DFS U-Net for segmentation and an Improved CNN for classification to produce an accuracy of 98.6%. Among the optimizer variants, AlexNet-SSO comes quite close, with accuracies of 97.3% and 97.9%, and the traditional Cuckoo Search algorithm achieves a prediction accuracy of 98.5%; both remain below the 99.2% produced by the proposed AlexNet-Chaotic Cuckoo Search algorithm. Overall, the proposed model exhibits the highest accuracy, precision, and recall, at 99.2%, 98.6%, and 98.8%, respectively, compared with the existing models.

Limitations of the Present Research

One clear drawback of the dataset employed in the current research is that it prevents us from generalizing the findings. Despite the favorable outcomes produced by UNet++, some restrictions remain. Training for additional epochs, adopting larger amounts of data, incorporating a variety of datasets, or using different preprocessing methods could help overcome these restrictions. Moreover, it is important to interpret the delivered results carefully and to conduct additional research with a more substantial sample to ensure that the results are adequately validated. Like other deep learning models, UNet++ may be prone to overfitting, especially when trained on limited data: the model learns to perform well on the training data but fails to generalize to new, unseen data. Regularization techniques such as dropout or data augmentation can help mitigate this issue.

Conclusion

This study put forth a novel method for detecting liver cancer from CT scans that combines a variety of deep learning models with a bio-inspired optimization algorithm. First, a unique semantic segmentation technique called UNet++ was suggested for extracting liver lesions from CT images. In contrast to previous studies on the diagnosis of liver cancer, which use conventional feature-based classification approaches, the suggested approach utilizes the Chaotic Cuckoo Search algorithm as a feature extractor and the AlexNet architecture as the classifier. Compared with the other algorithms, AlexNet with the Chaotic Cuckoo Search algorithm produced the highest accuracy of 99.2%. The performance of the proposed method was also evaluated against existing approaches in the literature, and it excels with the highest accuracy, precision, and recall. The drawback of the proposed approach is that it was applied and evaluated only on CT images. Future research on the identification of liver cancer will combine various modalities, including ultrasound imaging and magnetic resonance imaging, to create a multimodal predictive strategy incorporating deep learning. Through comprehensive multimodal integration of medical imagery, this strategy has the potential to boost diagnostic reliability.