Reliable Multipath Flow for Link Failure Recovery in 5G Networks Using SDN Paradigm

In modern networks and the cloud era, the number of internetworked nodes is growing rapidly, and gaming and video streaming over the network demand high availability with very low latency. 5G networks provide much faster services than 4G, but link failures can degrade quality of service. In 5G networks, environmental factors also affect the efficiency of wireless signals. To mitigate these issues, base stations are placed in a distributed manner around urban areas, which requires careful placement decisions. However, some scenarios share similarities; in general, highways have the same layout. In existing systems, signal distribution is performed homogeneously, which generates fractal and environmental issues and can cause significant economic loss. As an emerging network paradigm, Software Defined Networking (SDN) is easy to manage because of the logical separation of the control plane and the data plane. Among its many advantages, SDN offers link robustness to avoid service unavailability. Many mechanisms exist to cope with network link failures, both proactive and reactive, but they compute multiple paths and store them in flow tables without considering link reliability. This causes a high latency rate, because many alternative paths must be computed, and also increases traffic overhead. To overcome these issues, we propose a new approach in which the number of alternative paths depends on the reliability of the primary path: as link reliability increases, fewer alternative paths are stored, so less time is needed to compute them, and traffic overhead decreases compared with existing approaches. Second, we integrate a shortest-distance factor with the reliability factor and obtain better results than existing approaches.
The proposed system helps increase the availability of services in 5G networks through the low latency rate and small traffic overhead required for link failure recovery.


Introduction
In traditional networks, control and management functions were performed, along with data forwarding, on the network devices themselves, e.g. switches, gateways, and routers. This caused many problems, such as misconfiguration and network fluctuation. Most problems in traditional networks were due to manual device configuration, low-level programming commands, and little involvement of an operating system [22]. To overcome these issues, Software Defined Networking (SDN) was introduced as a new network architecture in which control and management are separated into a logically centralized point known as the SDN controller [21]. Separating the data plane of the forwarding devices improves the working of the SDN controller, because the network becomes easy to control and manage. With a centralized controller, a network can be efficiently configured for load balancing, traffic engineering, security enforcement, and so on. The central controller also makes it easy to maintain tasks such as a reachability map and Access Control List (ACL) enforcement at a single logical point [40]. SDN is divided into three modules. First, the application plane uses the northbound API to forward updates to the control plane. Second, the control plane makes forwarding decisions itself and controls the data plane through the southbound API (commonly the OpenFlow protocol). Each device in the network registers itself with the SDN controller and regularly updates the central controller with its latest link-state information. The SDN controller therefore has a complete view of the overall network, a quality that makes SDN more efficient than a traditional network [23]. In short, SDN eases network management by decoupling the planes.
The application plane is also known as the management plane, because the end user controls it using SDN programming languages such as Frenetic [7]. This plane interacts with the control plane: for example, a security application is responsible for countering attacks, and a load balancer is responsible for distributing traffic among the given links. The application plane can be customized by the network administrator to tune network performance; it thus makes the network easy for developers to program and modify [11]. The control plane performs the decision logic for data packets, which depends on the application plane and the topology in use. The control plane sends the specific actions to the centralized controller, and the controller applies these decisions to data packets; the resulting flow rules are installed at the switches located on the communication path. The network administrator can obtain a complete view of the network topology at the centralized controller [26].
The third SDN module is the data plane, which consists of the network nodes and devices; all of these devices are controlled by the controller using a southbound protocol (such as OpenFlow) until a packet arrives at its destination. The data plane communicates securely with the controller and stores patterns and their corresponding actions in the form of a flow table. When a data packet arrives, the switch performs the action whose pattern matches the packet in the flow table. If no matching pattern is found, the switch sends a table-miss message to the controller through the OpenFlow protocol so that a flow rule can be computed for the packet [23]. SDN has many advantages over traditional Internet protocol networks: policies can be updated at a single point (the controller), and it is much easier for the network administrator to manage the whole network. In a traditional network it is difficult to locate a failure, especially when the network consists of a large number of nodes, whereas the SDN controller has a view of the whole network and is aware of the failures that occur in it. Because of these advantages, SDN has now been adopted by many organizations, e.g. Huawei, Google, VMware, and Microsoft [20,30,35], which run SDN in parallel with their established functional networks. As discussed, decoupling the data plane and control plane provides better programmability, easier management of data packet flows, network virtualization, and so on [32]. All these advantages are admirable; however, the separation of the data and control planes also causes some difficulties, among them fast link failure recovery, because the whole network depends on the SDN controller for failure handling, which introduces a large delay [15]. Failure recovery delay also causes packet loss and badly affects network services.
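The table-miss behavior described above can be illustrated with a minimal sketch. This is not the actual OpenFlow data model; the rule format (pattern dictionaries and string actions) and the function names are simplified assumptions for illustration.

```python
# Minimal sketch of flow-table matching with a table-miss fallback to
# the controller. Patterns, actions, and the controller callback are
# illustrative stand-ins, not real OpenFlow structures.

def lookup(flow_table, packet):
    """Return the action of the first rule whose pattern matches the
    packet, or None to signal a table miss."""
    for pattern, action in flow_table:
        if all(packet.get(k) == v for k, v in pattern.items()):
            return action
    return None  # miss: the switch must ask the controller

def handle_packet(flow_table, packet, controller_compute_rule):
    """Apply a cached rule, or fetch one from the controller on a miss."""
    action = lookup(flow_table, packet)
    if action is None:
        # Table miss: controller computes a rule, switch caches it,
        # so later packets of the same flow match locally.
        pattern, action = controller_compute_rule(packet)
        flow_table.append((pattern, action))
    return action
```

After the first miss the rule is cached, so subsequent packets of the flow are handled entirely in the data plane, which is why controller involvement dominates only the first packet's latency.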

Challenges in 5G networks
In this era, the growing use of video streaming, online gaming, and smart technologies is the main driver behind the development of 5G network systems [1]. To meet these demands, 5G technology was introduced, providing roughly ten times more data throughput than 4G network systems [4]. 5G networks are set to provide services with high availability in a cost-effective way. Technologies such as SDN and NFV are currently used by many cloud service providers to deliver high throughput, resilience, and reliability with a low latency rate in 5G [24]. As discussed, video streaming, gaming, and similar applications consume bandwidth and delay service availability. Overcoming these issues requires a multipath flow protocol and forwarding devices with up-to-date technology for fast communication between the different planes of the SDN paradigm when it is integrated with a 5G network. The SDN paradigm works very efficiently with a multipath flow protocol, achieving better utilization of 5G resources with high throughput and low latency.
In 5G networks, many environmental factors can affect the efficiency of wireless signals. To address this, radio and microwave base stations are distributed heterogeneously, based on urban development and the territorial distribution of users, when deciding where to place each base station. However, different scenarios can share similarities, because in most developed cities the highways have the same architecture. In existing systems, signal distribution is performed homogeneously, which generates fractal and environmental issues [6,38].
Despite the advantages of integrating 5G and SDN, with a large number of end users the requirements for high bandwidth and reliability can be badly affected by link or node failure. In SDN, a link failure can occur not only at the centralized point but also in the data plane. A 5G network should therefore be able to predict or detect failures in either location and recover from them within a period so small as to be almost negligible. In our proposed system, SDN and 5G are integrated with a modified multipath flow protocol that uses the reliability of a given path; this reduces the failure recovery time and effectively reduces bandwidth consumption in the 5G network.

Related Work
In SDN, failure recovery is possible through two kinds of mechanism. In this section we discuss some existing techniques and their deficiencies.

Proactive Failure Recovery Mechanisms
Information Technology and Control 2022/1/51

Fast Failover (FF) is a failure recovery mechanism that is mostly suitable for port failure detection and recovery. In group tables, a few actions are predefined in buckets. When the watch group detects a failure at any port, the flag-down indicator is set and any alternative live port is used instead of the failed one [9,3].
In SDN, failure detection can also be performed by adopting Bidirectional Forwarding Detection (BFD), which uses control and echo messages between two nodes. The current link state is checked with control messages, which are sent to each node, and nodes can judge the status of an existing session from the echo messages. In an FF group, the controller is little involved after computation of the primary path. The drawback, however, is that alternative paths are not predefined, so if the primary path fails there is no alternative path for failure recovery.
The SPIDER project overcomes the problems faced in FF groups. In SPIDER, a failure is recovered without communicating with the controller even when no alternative path is available [27]. Failures are detected by link probing, and packets can be resent with low latency without involving the controller. However, this solution is based entirely on the data plane.
Another proactive solution installs two flow entries in each switch for every incoming packet's associated path: one is used as the active path and the other as the alternative path when a failure occurs [19]. However, this is suitable only for small networks where failures are very few in number; another issue is the TCAM memory limitation, which can overflow as the number of match-action entries in the network increases.
If the congestion factor is considered when searching for an alternative path, alternative paths can be computed with a low packet loss rate [33]. In this technique, backup paths are predefined for each primary path, and any flow in which a failure occurs can be retransmitted over its alternative path. With congestion-aware techniques, researchers have addressed issues such as reducing controller involvement and the number of flow entries. However, if an alternative path is calculated for every link, it causes traffic overhead [34,28].
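The proactive pattern described above, one predefined backup per primary path, can be sketched as follows. The graph format and function names are illustrative assumptions; real schemes differ in how they pick the backup, but the doubling of installed flow entries is the point being shown.

```python
# Hedged sketch of proactive backup-path installation: for every
# primary path we precompute one link-disjoint alternative and would
# install both sets of flow entries up front, which is where the
# TCAM/flow-entry overhead of proactive schemes comes from.
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest hop-count path avoiding the undirected links in `banned`."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path, n = [], u
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None  # no path

def proactive_entries(adj, src, dst):
    """Primary plus one link-disjoint backup: twice the flow entries."""
    primary = bfs_path(adj, src, dst)
    if primary is None:
        return None, None
    used = {frozenset(e) for e in zip(primary, primary[1:])}
    backup = bfs_path(adj, src, dst, banned=used)
    return primary, backup
```

If this is repeated for every link rather than per path, the number of stored entries multiplies again, which is the traffic and memory overhead the text criticizes.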

Challenges in Proactive Failure Recovery Mechanisms
1 SDN switches come with a limited number of flow entries; e.g., state-of-the-art switches can store about 8000 flow rules, and cost increases when more switches are required [13].
2 In a large-scale SDN network, as the number of flow entries grows, the matching process over those entries (for alternative paths) causes a higher latency rate [5].
3 The proactive approach is suitable for small-scale networks, because as the number of flow entries increases, the data plane scales upward.
4 Under dynamic conditions, a backup path may fail earlier than the first configured path; in that case, there is no alternative path when the failure occurs.

Reactive Failure Recovery Mechanisms
Reactive failure recovery consists of the following steps:
1 Monitoring the status of the network with a heartbeat mechanism.
2 Detecting failures based on the heartbeat messages.
3 Controller computation of an alternative path for failure recovery.
4 Replacement of the old flow entries with new ones to update the path.
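The four steps above can be sketched as a single control loop. The `heartbeat` and `compute_path` callbacks and the miss threshold are illustrative stand-ins for the real controller and southbound calls, not an actual SDN API.

```python
# Sketch of one reactive monitoring round: heartbeat, detect,
# recompute, replace. `links` maps a link id to its count of
# consecutively missed heartbeats (an assumed bookkeeping scheme).

def reactive_recovery(links, heartbeat, compute_path, flow_table,
                      misses_allowed=3):
    failed = []
    for link_id in links:
        if heartbeat(link_id):                  # step 1: monitor
            links[link_id] = 0                  # link alive, reset
        else:
            links[link_id] += 1
            if links[link_id] >= misses_allowed:
                failed.append(link_id)          # step 2: detect
    for link_id in failed:
        new_path = compute_path(avoid=link_id)  # step 3: recompute
        flow_table[link_id] = new_path          # step 4: replace entries
    return failed
```

Because steps 3 and 4 happen only after detection, the recovery delay includes the controller's path computation and rule installation time, which is the core weakness discussed next.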

Challenges in Reactive Failure Recovery Mechanisms
A shortest-distance mechanism was proposed in [8], in which priority-based flows are used: the packet with the highest priority incurs the minimum delay in failure recovery. Because of its congestion-avoidance overhead, this mechanism is not suitable for large-scale SDN, since the complexity of the algorithm increases with the size of the network. The technique has also not been applied to a standard topology.
As the number of flow operations increases, the average failure recovery delay increases. To overcome this, the number of flow operations is minimized as described in [37]: if the alternative path with the lowest cost (smallest number of flow operations) is selected, the overall failure recovery delay can be reduced. It has been reported that installing a single flow entry consumes 11 ms [17], and in a realistic SDN a minimum of 200 to 300 ms is required to recover from a failure. The following table gives a comparative view of proactive and reactive failure recovery techniques.
We have already discussed the two methods of link failure recovery in the integration of SDN with 5G. In existing systems that use the proactive mechanism, the computation of alternative paths overburdens the centralized controller, especially when the number of data plane nodes increases, and switch memory utilization also rises. This matters particularly when 5G end users demand high service availability at speeds at least ten times faster than 4G [2]. In the predefined mechanisms, researchers have mostly focused on reducing memory utilization or reducing the load on the SDN controller (less involvement in failure recovery). However, these mechanisms could work more efficiently if the multipath flow protocol computed a different number of alternative paths after measuring the reliability of the primary path. For example, if the reliability of the primary path is at its maximum, there is no need to compute alternative paths for that link at all. This greatly reduces memory consumption and the latency rate, because the number of alternative paths decreases.

Proposed Methodology
We propose a reliable multipath flow mechanism in which, in the first step, the controller computes a primary path between the sender and receiver nodes when the sender node sends a path computation request to the SDN controller, as discussed in the section above. The proposed methodology then calculates the reliability of the primary path on the basis of predefined factors, and the number of alternative paths stored in the forwarding table depends on this reliability ratio. After this first phase, we also include distance calculation and find the shortest path; a minimum spanning tree or Dijkstra's algorithm can be used for this. The proposed method is more effective when the reliability and distance attributes are integrated.

Bootstrapping process
When an OpenFlow channel is established between the controller and a switch, symmetric packets such as Hello, Echo request, and Echo response are exchanged between the controller and all switches. The controller then initiates a Feature-Request message for the switch.
In response, the switch generates a Feature-Reply message for the controller [31]. Multiple further packets, initiated by the controller, are exchanged over the OpenFlow channel for switch state inspection, state modification, interface statistics, flow rule statistics, and capabilities. By caching these response packets, the controller in the proposed methodology maintains the network-wide view dynamically and periodically [16].
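The bootstrapping exchange can be replayed as a short message sequence. The message names follow OpenFlow terminology used above, but the `Switch` class and `bootstrap` driver are simplified stand-ins for illustration, not a real OpenFlow stack.

```python
# Illustrative replay of the bootstrapping handshake described above.
# The controller learns the switch's datapath id from the Feature-Reply.

class Switch:
    def __init__(self, dpid):
        self.dpid = dpid

    def reply(self, msg):
        if msg == "HELLO":
            return "HELLO"
        if msg == "ECHO_REQUEST":
            return "ECHO_REPLY"
        if msg == "FEATURES_REQUEST":
            # Feature-Reply carries the datapath id and capabilities.
            return ("FEATURES_REPLY", self.dpid)
        raise ValueError(f"unsupported message: {msg}")

def bootstrap(switch):
    """Run the handshake; return the learned datapath id and transcript."""
    transcript = []
    for msg in ("HELLO", "ECHO_REQUEST", "FEATURES_REQUEST"):
        transcript.append((msg, switch.reply(msg)))
    reply = transcript[-1][1]
    return reply[1], transcript
```

Caching each reply in the transcript mirrors how the proposed controller keeps its network-wide view current without re-querying every switch.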

Graph Composition process
In our proposed methodology, controller C runs an application that transforms the data plane information and attributes into a weighted undirected graph Ĝ. The controller periodically updates graph connectivity in response to end-node and device discovery events. It records joining nodes, both end users and forwarding devices in the data plane, as vertices V_END_USER and V_SWITCHES, with links ꝲ_END_USER and ꝲ_SWITCHES and their ꝲ-attributes. The reliability inquired by the controller reflects the stability of the links among forwarding devices and of end-user connectivity with those devices, and the proposed approach processes reliable flow rule installation in a procedural programming fashion.
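The graph-composition step can be sketched as folding discovery events into an adjacency structure. The event tuple format and node-kind labels here are assumptions for illustration; the point is that end users and switches become vertices and links become weighted undirected edges.

```python
# Sketch of graph composition: the controller folds discovery events
# into a weighted undirected graph G (adjacency dicts), tracking
# whether each vertex is an end user or a switch.

def compose_graph(events):
    """events: iterable of ("node", node_id, kind) or
    ("link", u, v, attrs). Returns (graph, kinds) where
    graph[u][v] = attrs and kinds[node_id] in {"end_user", "switch"}."""
    graph, kinds = {}, {}
    for ev in events:
        if ev[0] == "node":
            _, node_id, kind = ev
            graph.setdefault(node_id, {})
            kinds[node_id] = kind
        elif ev[0] == "link":
            _, u, v, attrs = ev          # attrs e.g. {"reliability": 0.9}
            graph.setdefault(u, {})[v] = attrs
            graph.setdefault(v, {})[u] = attrs  # undirected: shared attrs
    return graph, kinds
```

Replaying the event stream on each discovery update is what keeps the controller's view of Ĝ current between monitoring slots.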
Reliability Computation process

A controller application periodically sends probe packets to inspect the link failure frequency within a time slot (10 seconds) and computes the link reliability. A higher number of failures φ results in a lower reliability percentage and signals the controller to install new flow rules. φ represents the failure frequency, between 0 and 1, and ʎ is the recovery constant of the link, set heuristically for all re-channelized links in the topology. Ф in Equation 4 represents the link aliveness time within the stipulated 10-second time slot, and Ф × ʎ is the reliability. A path computed from source to destination is the set of all intermediate forwarding devices; Equations 6-7 present the path from source to destination along with the associated aliveness time between source and destination and the sum of the path aliveness times.

Primary path computation

In this phase, after the graph composition, the network can start working. At the first stage, the flow tables are empty. When switch s1 receives a packet from host A, it checks its own forwarding table and then sends a message to the SDN controller for path computation. If the entry Ϸ − ӗ matches a forwarding table entry, the corresponding action is performed; otherwise, the SDN controller decides whether the packet will be forwarded or not according to the already defined network policy (Ά): if Ϸ ∋ Ϸ − ӗ, Permit or Deny; otherwise run the Reliability Computation process.
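The per-slot reliability computation described above can be sketched as follows. The exact forms of Equations 4, 6, and 7 are not recoverable from this text, so the clamping of the aliveness fraction and the weakest-link aggregation are illustrative assumptions; only reliability = Ф × ʎ and the summed path aliveness come from the text.

```python
# Hedged sketch of link and path reliability per 10-second slot:
# Φ is the link's aliveness fraction in the slot, λ a heuristic
# recovery constant, and reliability = Φ × λ as stated in the text.

SLOT = 10.0  # seconds per monitoring slot

def link_reliability(alive_seconds, recovery_constant=1.0):
    """Reliability in [0, 1] for one slot (assumes λ ≤ 1)."""
    phi = max(0.0, min(alive_seconds, SLOT)) / SLOT  # aliveness fraction
    return phi * recovery_constant

def path_aliveness(link_alive_times):
    """Sum of per-link aliveness times along a path (per Equations 6-7)."""
    return sum(link_alive_times)

def path_reliability(link_alive_times, recovery_constant=1.0):
    """Illustrative aggregation: the weakest link bounds the path."""
    return min(link_reliability(t, recovery_constant)
               for t in link_alive_times)
```

A link that stayed up 8 of the 10 seconds with λ = 0.9 thus scores 0.72, low enough, under the rules below, to trigger alternative-path installation.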

Reliable multipath flow

In this phase, the primary path is first computed based on the highest reliability, as computed in the last phase (D). All alternative paths are then calculated according to rules defined as follows.
i. IF S_reliable > 90% THEN no alternative path.
vi. IF S_reliable < 50% THEN all alternative paths.
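Only the boundary rules (i and vi) survive in this text; rules ii–v are not recoverable, so the intermediate bands in the sketch below are illustrative assumptions that merely preserve the stated monotonic idea: the more reliable the primary path, the fewer alternatives are stored.

```python
# Sketch of the reliability-to-backup-count rule. Rules i and vi are
# from the text; the bands between 50% and 90% are assumed fillers
# for the missing rules ii-v.

def num_alternative_paths(reliability_pct, max_alternatives):
    if reliability_pct > 90:
        return 0                      # rule i: no alternative path
    if reliability_pct < 50:
        return max_alternatives       # rule vi: all alternative paths
    # Assumed intermediate bands: fewer backups as reliability grows.
    if reliability_pct > 80:
        return max(1, max_alternatives // 4)
    if reliability_pct > 65:
        return max(1, max_alternatives // 2)
    return max(1, (3 * max_alternatives) // 4)
```

Because highly reliable primary paths get zero or few backups, fewer flow entries are installed and less computation time is spent, which is the source of the latency and overhead savings claimed for RMF.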

Distance Based Reliable multipath flow

This is a modified form of reliable multipath flow in which the primary path is calculated not only on the basis of reliability but also with shortest-distance attributes involved. This decreases the latency rate further compared with RMF.
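Combining the two attributes can be sketched as Dijkstra's algorithm over edge weights that mix hop distance with unreliability. The weighting scheme and the `alpha` factor are assumptions for illustration; the text specifies only that distance and reliability are integrated, with Dijkstra as one admissible shortest-path algorithm.

```python
# Sketch of the distance-based variant: Dijkstra over weights that
# penalize both long and unreliable links, so short AND stable paths
# win. graph[u][v] = {"dist": ..., "reliability": 0..1}.
import heapq

def drmf_shortest_path(graph, src, dst, alpha=0.5):
    def weight(e):
        # Assumed combination: distance inflated by unreliability.
        return e["dist"] + alpha * (1.0 - e["reliability"]) * e["dist"]

    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, e in graph.get(u, {}).items():
            nd = d + weight(e)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]
```

With equal hop distances, the route over 95%-reliable links beats the route over 20%-reliable links, illustrating how the distance-based variant avoids fragile shortcuts.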


The controller application registers for data plane events and handles them through a control procedure:
i. Procedure Activate((event_publisher, event_subscriber)) — control functionality for receiving and handling data plane events.

Primary path computation.
In this phase, after the graph composition, now network can start working. At First stage, flow tables will be empty. After that switch 1 receives a packet from host A. Switch s1 sending a message to SDN controller for path computation after checking its own forwarding table. If entry Ϸ − ӗ matched with forwarding table entries then corresponding action will be performed, otherwise now SDN controller decides whether this packet will be forwarded or not according to already defined network policy(Ά).
If Ϸ ∋ Ϸ − ӗ Permit or Deny else Reliability Computation process. Controller application use periodically probe packets for inspection of link failure frequency within a time slot (10 seconds) and compute the link reliability. Higher the number of failures cause in lower the reliability percentage and introduce signal of flow rule installation for controller. presents the failure frequency between 0 and 1, where ʎ is recovery constant of link for all re-channelized links in Ф in Equation 4 presents the link aliveness time in stipulated 10 seconds time slot while Ф × ʎ is reliability. A path � computed from source to destination is a set of all intermediate forwarding devices. Equations 6-7 present the path from source to destination along with associated aliveness time between source and destination and the sum of path aliveness time of any path.

Reliable multipath flow.
In this phase, first primary path will be computed based on highest reliability which is computed in last phase D. All alternative paths will be calculated according to rules defined as follows.
i. IF S reliable> 90 % Then no alternative path.
vi. IF S reliable< 50 % Then all alternative paths.

Disstance Based Reliable multipath flow.
It is a modified form of multipath flow in which primary path calculated not only reliability based but shortest distance attributes also involved in it. It will be decreasing the latency rate better as compare to RMF. for inspection of link failure frequency within a time slot (10 seconds) and compute the link reliability. Higher the number of failures cause in lower the reliability percentage and introduce signal of flow rule installation for controller. presents the failure frequency between 0 and 1, where ʎ is recovery constant of link for all re-channelized links in Equation 4 presents the link aliveness time in stipulated 10 seconds time slot while Ф × ʎ is reliability. A path � computed from source to destination is a set of all intermediate forwarding Controller application use periodically probe packets for inspection of link failure frequency within a time slot (10 seconds) and compute the link reliability. Higher the number of failures cause in lower the reliability percentage and introduce signal of flow rule installation for controller. presents the failure frequency between 0 and 1, where ʎ is recovery constant of link for all re-channelized links in Equation 4 presents the link aliveness time in stipulated 10 seconds time slot while Ф × ʎ is reliability. A path � computed from source to destination is a set of all intermediate forwarding Equation 4 presents the link aliveness time in stipulated 10 seconds time slot while Φ × ʎ is reliability. A path Controller application use periodically probe packets for inspection of link failure frequency within a time slot (10 seconds) and compute the link reliability. Higher the number of failures cause in lower the reliability percentage and introduce signal of flow rule installation for controller. presents the failure frequency between 0 and 1, where ʎ is recovery constant of link for all re-channelized links in topology heuristically. 
Equation 4 presents the link aliveness time in stipulated 10 seconds time slot while Ф × ʎ is reliability. A path � computed from source to destination is a set of all intermediate forwarding Equations 6-7 present the path from source to destination along with associated aliveness time between source and destination and the sum of path aliveness time of any path. (3)

Reliability Computation process.
Controller application use periodically probe packets for inspection of link failure frequency within a time slot (10 seconds) and compute the link reliability. Higher the number of failures cause in lower the reliability percentage and introduce signal of flow rule installation for controller. presents the failure frequency between 0 and 1, where ʎ is recovery constant of link for all re-channelized links in topology heuristically.

Primary path computation.
In this phase, after the graph composition, now network can start working. At First stage, flow tables will be empty. After that switch 1 receives a packet from host A. Switch s1 sending a message to SDN controller for path computation after checking its own forwarding table. If entry Ϸ � � matched with forwarding table entries then corresponding action will be performed, otherwise now SDN controller decides whether this packet will be forwarded or not according to already defined network policy(Ά). If

Reliability Computation process.
Controller application use periodically probe packets for inspection of link failure frequency within a time slot (10 seconds) and compute the link reliability. Higher the number of failures cause in lower the reliability percentage and introduce signal of flow rule installation for controller. presents the failure stipulated 10 seconds time slot while Ф × ʎ is reliability. A path � computed from source to destination is a set of all intermediate forwarding devices. Equations 6-7 present the path from source to destination along with associated aliveness time between source and destination and the sum of path aliveness time of any path.

Reliable multipath flow.
In this phase, first primary path will be computed based on highest reliability which is computed in last phase D. All alternative paths will be calculated according to rules defined as follows.
i. IF S reliable> 90 % Then no alternative path.
vi. IF S reliable< 50 % Then all alternative paths.

Disstance Based Reliable multipath flow.
It is a modified form of multipath flow in which primary path calculated not only reliability based but shortest distance attributes also involved in it. It will be decreasing the latency rate better as compare to RMF.

Reliable multipath flow
In this phase, first primary path will be computed based on highest reliability which is computed in last phase D. All alternative paths will be calculated according to rules defined as follows.

Primary path computation.
In this phase, after the graph composition, now network can start working. At First stage, flow tables will be empty. After that switch 1 receives a packet from host A. Switch s1 sending a message to SDN controller for path computation after checking its own forwarding table. If entry Ϸ � � matched with forwarding table entries then corresponding action will be performed, otherwise now SDN controller decides whether this packet will be forwarded or not according to already defined network policy(Ά). If

Reliability Computation process.
Controller application use periodically probe packets for inspection of link failure frequency within a time slot (10 seconds) and compute the link reliability. Higher the number of failures cause in lower the reliability percentage and introduce signal of flow rule installation for controller. presents the failure frequency between 0 and 1, where ʎ is recovery constant of link for all re-channelized links in topology heuristically.

Reliable multipath flow.
In this phase, first primary path will be computed based on highest reliability which is computed in last i. IF S reliable> 90 % Then no alternative path.
vi. IF S reliable< 50 % Then all alternative paths.

Disstance Based Reliable multipath flow.
It is a modified form of multipath flow in which primary path calculated not only reliability based but shortest distance attributes also involved in it. It will be decreasing the latency rate better as compare to RMF.
Reliability Computation process.
Controller application use periodically probe packets for inspection of link failure frequency within a time slot (10 seconds) and compute the link reliability.
Higher the number of failures cause in lower the reliability percentage and introduce signal of flow rule installation for controller. presents the failure frequency between 0 and 1, where ʎ is recovery constant of link for all re-channelized links in topology heuristically. Primary path computation. In this phase, after the graph composition, now network can start working. At First stage, flow tables will be empty. After that switch 1 receives a packet from host A. Switch s1 sending a message to SDN controller for path computation after checking its own forwarding table. If entry Ϸ − ӗ matched with forwarding table entries then corresponding action will be performed, otherwise now SDN controller decides whether this packet will be forwarded or not according to already defined network policy(Ά).
If Ϸ ∋ Ϸ − ӗ Permit or Deny else Reliability Computation process.  Primary path computation. In this phase, after the graph composition, now network can start working. At First stage, flow tables will be empty. After that switch 1 receives a packet from host A. Switch s1 sending a message to SDN controller for path computation after checking its own forwarding table. If entry Ϸ − ӗ matched with forwarding table entries then corresponding action will be performed, otherwise now SDN controller decides whether this packet will be forwarded or not according to already defined network policy(Ά).

Primary path computation.
In this phase, after the graph composition, now network can start working. At First stage, flow tables will be empty. After that switch 1 receives a packet from host A. Switch s1 sending a message to SDN controller for path computation after checking its own forwarding table. If entry Ϸ − ӗ matched with forwarding table entries then corresponding action will be performed, otherwise now SDN controller decides whether this packet will be forwarded or not according to already defined network policy(Ά).

Reliability Computation process.
The controller application periodically uses probe packets to inspect the link failure frequency within a time slot (10 seconds) and computes the link reliability. A higher number of failures results in a lower reliability percentage and triggers a flow-rule installation signal for the controller. Ф presents the failure frequency between 0 and 1, where ʎ is the recovery constant of the link, set heuristically for all re-channelized links in the topology.
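The computation above can be sketched in a few lines. The exact formula is not fully recoverable from the text, so the particular form below (failure frequency normalized to [0, 1] within the 10-second slot, reliability falling linearly with it, scaled by the recovery constant lam) is an assumption for illustration only.

```python
# Illustrative sketch of the reliability computation: count link failures
# observed by probe packets within a 10-second slot, map them to a failure
# frequency in [0, 1], and scale by a per-link recovery constant lam.
# The exact formula used in the paper may differ.

def link_reliability(failures, max_failures=10, lam=1.0):
    """Reliability in [0, 1]: more failures in the slot => lower reliability."""
    freq = min(failures, max_failures) / max_failures  # failure frequency in [0, 1]
    return (1.0 - freq) * lam                          # scaled by recovery constant

print(link_reliability(0))   # healthy link   -> 1.0
print(link_reliability(5))   # some failures  -> 0.5
print(link_reliability(12))  # clamped        -> 0.0
```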




Ф presents the failure frequency within the stipulated 10-second time slot, while Ф × ʎ is the reliability. A path computed from source to destination is the set of all intermediate forwarding devices. Equations 6-7 present the path from source to destination along with the associated aliveness time between source and destination, and the sum of the path aliveness times of any path.

Reliable multipath flow.
In this phase, the primary path is first computed based on the highest reliability, which was computed in the last phase. The number of alternative paths then depends on the reliability of the primary path:
i. IF S_reliable > 90% THEN no alternative path.
…
vi. IF S_reliable < 50% THEN all alternative paths.
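The rule set above can be sketched as a small selection function. Only cases i (above 90%: no alternative path) and vi (below 50%: all alternative paths) survive in the text; the intermediate bands used below are illustrative assumptions, not the paper's actual cases ii-v.

```python
# Sketch of the RMF case selection: the higher the reliability of the
# primary path, the fewer alternative paths are installed in flow tables.
# Intermediate bands (between 50% and 90%) are assumed for illustration.

def num_alternative_paths(reliability_pct, total_alternatives):
    if reliability_pct > 90:
        return 0                       # case i: primary path is trusted
    if reliability_pct < 50:
        return total_alternatives      # case vi: install every alternative
    # Assumed intermediate bands: more alternatives as reliability drops.
    share = (90 - reliability_pct) / 40            # 0.0 at 90%, 1.0 at 50%
    return max(1, round(share * total_alternatives))

print(num_alternative_paths(95, 4))  # -> 0
print(num_alternative_paths(70, 4))  # -> 2
print(num_alternative_paths(40, 4))  # -> 4
```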

Distance Based Reliable multipath flow.
It is a modified form of reliable multipath flow in which the primary path is calculated not only on the basis of reliability but with shortest-distance attributes also involved. This decreases the latency rate further compared to RMF. Each selected path is then configured on the switches: for path in K: OpenFlow_Configuration(path).
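The configuration loop above ("for path in K: OpenFlow_Configuration(path)") can be sketched as follows. The entry format is illustrative only; a real deployment would emit OpenFlow flow-mod messages to the switches rather than building plain dictionaries.

```python
# Sketch: for each selected path (primary first), prepare one flow entry per
# switch on the path, mapping the flow to its next hop. Lower priority index
# here stands for the primary path; alternatives follow.

def configure_paths(flow_key, paths):
    """Return per-switch flow entries for the primary and alternative paths."""
    tables = {}
    for priority, path in enumerate(paths):       # priority 0 = primary path
        for switch, next_hop in zip(path, path[1:]):
            tables.setdefault(switch, []).append(
                {"match": flow_key, "next_hop": next_hop, "priority": priority})
    return tables

K = [["s1", "s2", "s4"],   # primary path
     ["s1", "s3", "s4"]]   # one alternative path
rules = configure_paths(("hostA", "hostB"), K)
print(rules["s1"])         # s1 holds one entry per path through it
```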

Flow of Proposed Methodology
In the working of reliable multipath flow, the controller starts by taking a network view of all nodes (switches). The controller then computes the primary reliable path when a switch requests it. The reliability-measuring algorithm computes the reliability; if the calculated reliability is less than the threshold, all alternative paths are updated in the flow table, but if it is more than the threshold, the RMF cases decide how many alternative paths are assigned. Primary path calculation is based on reliability and shortest distance. In our proposed system, we have used Dijkstra's algorithm for calculating the shortest path. The flow is repeated for n paths, and each time the latest paths are updated in the flow entries.
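The shortest-path step above can be sketched with a minimal Dijkstra implementation. The way the two factors are combined below (dividing each link's distance by its reliability, so unreliable links are penalized) is an assumed weighting for illustration; the paper does not spell out the exact combination.

```python
import heapq

# Minimal Dijkstra sketch for the primary-path step: edge weights combine
# distance with link reliability (assumed weighting: distance / reliability,
# which penalizes unreliable links).

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, distance, reliability), ...]}.
    Returns (cost, path) from src to dst, or (inf, []) if unreachable."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, dist, rel in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + dist / rel, nbr, path + [nbr]))
    return float("inf"), []

g = {"s1": [("s2", 1, 0.9), ("s3", 1, 0.5)],
     "s2": [("s4", 1, 0.9)],
     "s3": [("s4", 1, 0.5)]}
cost, path = dijkstra(g, "s1", "s4")
print(path)   # the more reliable route wins: ['s1', 's2', 's4']
```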

Simulation
A series of experiments on the proposed approaches was performed using the POX controller and the Mininet simulator.

Mininet Simulator
The Mininet simulator was used in the experimentation because it is feasible for both large-scale and small-scale network simulation. Hundreds or thousands of nodes can be tested easily using simple command-line tools and an API. The simulator has the benefit of an interface to multiple SDN controllers like POX, Floodlight, Ryu, and OpenDaylight, regardless of the topology-development programming-interface level, as the programming model in SDN comprises low-, mid-, and high-level programming [8,37,31]. Mininet provides easy customization, sharing, and testing of SDN nodes [19]. It also provides a virtually separate interface for any host node for processing host-granular applications. The Mininet simulator is suitable for both real and simulated controllers. It can also be used to simulate connections between different types of controllers like POX, Ryu, etc. [40]. Various types of switches can be created and modified using Mininet according to the required simulation. NASA, ICSI, and many other researchers around the world have used Mininet for multi-controller simulations [14].
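As a concrete illustration, a small Mininet topology can be attached to a remote POX controller from the command line. The component name, topology, IP, and port below are the usual POX/Mininet defaults, not values taken from our experiments.

```shell
# Terminal 1: start POX with a stock learning-switch component.
./pox.py forwarding.l2_learning

# Terminal 2: launch a 4-switch linear topology in Mininet, pointing at the
# remote controller (6633 is the default OpenFlow port used by POX).
sudo mn --topo linear,4 --controller remote,ip=127.0.0.1,port=6633

# Inside the Mininet CLI, "pingall" tests connectivity between all hosts.
```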



POX Controller.
Stateless switch communication based on the OpenFlow protocol can be controlled by the POX framework [12]. The Python language can be used with POX to design an SDN controller. It is an efficient tool used in research for developing a basic SDN controller [8]. By adding more components, a complex SDN controller can be designed. POX supports versions 1.0 and 1.3 of OpenFlow switches. POX also provides an interface for Mininet, open-source availability, and integration with other simulators like NS3 [36].

Experimental Results
In our emulated network, the following components were used. Better results were obtained with 5 hosts because of limited resources. All experiments were performed using the POX controller (version 2.2) and the Mininet simulator, giving competitively better results than existing approaches. We used Python for scripting due to its compatibility with POX and Mininet. Figure 2 presents a comparison between the overheads of the existing and new systems. The overhead of the existing system is greater due to the installation of a large number of alternative paths, which increases communication between the controller and the intermediate switches. Figure 3 elaborates the delay comparison of packet transmission. RMF has a smaller delay due to less involvement of the controller.

Figure 3 Average packet transmission delay
In Figure 4, the elaboration of flow-rule installation is shown. If the length of the path increases, the number of alternative paths also increases, which causes a large number of flow-rule installations in the existing system. RMF/DRMF consume less memory for flow-rule installations.


Figure 2
Overhead comparison between existing and RMF



Figure 5 illustrates the observation of flow-entry-encapsulated packet drops at the switch. The reason behind this behavior is that an OpenFlow switch has limited memory. These memory constraints demand that flow rules be installed wisely, considering the forwarding devices' capacity. In this situation, our proposed solution is relatively more suitable for an SDN production network.

Figure 5
Switch flow entry drop percentage in case of switch memory Overflow

Figure 6
Average delay computed at controller for installation of computed Flow Rule Entries

Observations regarding the average delay of flow-rule-entry installation in Figure 6 conclude that as the number of entries increases, their installation time also increases proportionally. When the controller computes these flow entries at a higher frequency alongside another application interface, the control traffic for configuration runs into delay; RMF and DRMF are relatively less delay-prone and perform configurations according to the network policies in a timely manner.

All-path or selective-path installation procedures in the existing approach install useless entries in switch memory. Figure 7 presents the experimental results by assessing the packets and number of bytes entertained by flow-rule entries in switches, and it shows that RMF and DRMF are much more efficient in this situation. The proposed approach yields the highest percentage of useful entries in switches compared to the existing approaches.

Figure 7
Flow Rule Entries entertaining the production packets according to specified action in a flow rule entry

Conclusion
As we have discussed, 5G networks are meant to provide high availability of services to end users, but link failures degrade the quality of service. The proposed RMF and DRMF approaches compute the number of alternative paths according to the reliability of the primary path, with DRMF additionally integrating the shortest-distance factor. Compared with existing approaches that install all or selected alternative paths without considering link reliability, the proposed approaches reduce path-computation latency, controller overhead, and useless flow-rule entries in switch memory, and thus help increase the availability of services in 5G networks.