Abstract

With their mobility and ease of connection, wireless sensor networks have played a significant role in communication over the last few years, making them a significant data carrier across networks. Future-generation systems such as millimeter-wave LANs, broadband wireless access schemes, and 5G/6G networks require additional security, lower latency, and more dependable standards and communication capability. Effective congestion control is regarded as one of the essential aspects of 5G/6G technology. It permits operators to run many network instances on a single infrastructure while maintaining higher service quality. A sophisticated decision-making system for arriving network traffic is necessary to ensure load balancing, limit network slice failure, and supply alternative slices in case of slice failure or congestion. Because of the massive amount of data being generated, artificial intelligence (AI) and machine learning (ML) play a vital role in reconfiguring and improving a 5G/6G wireless network. In this research work, a hybrid deep learning method is applied to forecast optimal congestion control in the wireless sensors of 5G/6G IoT networks. The proposed model is applied to a training dataset to determine the congestion in a 5G/6G network. The proposed approach provided promising results, with 0.933 accuracy and a 0.067 miss rate.

1. Introduction

Wireless sensor networks have played a significant part in communication over the last few years [1-3]. Ultimately, the revolution in wireless communication encompasses a wide range of standard networks. Among the various wireless networks, the most popular is mobile wireless communication, which offers high data bandwidth. It uses the transport layer to transmit data and a specific protocol for broadcasting mobile wireless communication. The transport layer protocol utilizes a congestion control mechanism to maintain reasonable network resource distribution even when demand exceeds the network's capacity and resources. If this mechanism is used in a mobile wireless communication medium such as 5G but does not meet the platform's requirements, the wireless network's performance will deteriorate. Managing wireless network congestion is one of the most critical tasks of the transport layer protocols.

Performance degradation in wireless sensor network (WSN) nodes is primarily due to a lack of congestion control [4, 5]. Further problems resulting from this degradation are packet loss and bandwidth deterioration [6]. Buffer size, queue size, bandwidth, link bandwidth, and packet loss on the interconnections among nodes are all examples of network resources. A wide range of applications is affected by congestion control [7], including event-based, continuous, query-driven, and hybrid applications. Various protocols, such as the Transmission Control Protocol (TCP) and the Stream Control Transmission Protocol (SCTP) [8], use congestion control to keep traffic moving. Congestion control is required in situations such as packet collisions, interference, channel contention, and buffer overflow, but not in others, such as dead links, buffer size shortages, and slow processors.

5G is the fifth generation of cellular networks [9-15]. It is a brand-new wireless technology that is sweeping the globe. Thanks to 5G wireless technology, more users will enjoy multigigabit-per-second peak data rates, ultra-fast response times, upgraded reliability, massive network capacity, extended availability, and a more consistent user experience. New user experiences are possible due to upgraded performance and efficiency, as well as the connection of new sectors.

Because of the fast development of 5G applications and the rising demand for high-speed communication networks, the advent of a new 6G technology is expected within the coming decade. Through the ability to scale down in data rates, power, and mobility, 5G is intended to seamlessly link many embedded sensors in nearly anything, giving extremely lean and low-cost connection options. 6G systems are expected to be more diverse than their predecessors, allowing for applications such as augmented and virtual reality, ubiquitous instant messaging, pervasive intelligence, the Internet of Things, and existing mobile usage scenarios. Numerous sources claim that 6G technology might be available around 2030.

Millimeter-wave LANs, broadband wireless access schemes, and 5G/6G networks require greater security, reduced latency, more dependable standards, and higher transmission efficiency. One of the primary aspects of 5G/6G technology is efficient congestion control, which allows operators to run many network instances on the same infrastructure for increased service quality. Due to the enormous data volume, artificial intelligence and machine learning are critical to reconfiguring and enhancing 5G/6G network performance. The application of ML to networking systems can control congestion in 5G/6G networks. Because of its expected performance when applied to complicated issues, machine learning [16] will be a significant driver of future communications.

Machine learning is a practical and broadly applicable approach employed alongside traditional congestion control mechanisms to meet the demands of 5G Internet of Things (IoT) networks. The 5G environment is being used to improve performance in a variety of fields, including smart cities [17], e-Health [18], and environmental surveillance [19, 20]. This study uses prediction based on naive Bayes and deep learning to improve congestion management.

Machine learning techniques have become popular for predicting optimal congestion control in 5G/6G networks. Machine learning is especially efficient for evaluating data and predicting the outcome of certain events based on the available sample inputs, which shape an appropriate model for making the right decisions [3, 21].

Deep learning, a subset of machine learning, is a branch of artificial intelligence. Automated decision-making is a feature of machine learning, whereas deep learning enables computers to imitate human brain structures in order to learn to think and act autonomously. Deep learning typically requires less ongoing human intervention than machine learning, although it demands more computing power. Deep learning can analyze images, videos, and unstructured information in ways that machine learning cannot.

The primary purpose of this research work is to develop a congestion control model that reduces the amount of congestion experienced by 5G/6G networks and efficiently utilizes the resources already present within these networks. This research article is organized into multiple sections: Introduction, Literature Review, Methodology, Simulation Results, Research Contributions, and Conclusion.

2. Literature Review

In this research, the authors explained the network architecture of heterogeneous MTC networks and suggested an innovative hybrid random access technique for 5G/6G-enabled smart cities [13, 22]. The numerical results show that, compared to the standard schemes, the suggested technique considerably improves the probability of successful access while meeting the various quality-of-service requirements of URLLC and MTC devices.

The authors suggested a hybrid deep learning-enabled efficient congestion control scheme [16]. This hybrid deep learning model combines a support vector machine with long short-term memory (LSTM). Simulating the specified model for one week with various potential devices, slice failure scenarios, and overflow possibilities demonstrates its usefulness.

The proposed hybrid model has an accuracy rate of 93.23 percent, demonstrating its applicability. Other performance indicators were also used to evaluate the recommended model's performance, including specificity, accuracy, computational complexity, varied training and test sets, true/false rates, and F-score.

The authors introduced a multilevel architecture for URLLC that allows for device intelligence, edge intelligence, and cloud intelligence [23]. The fundamental concept is to train deep neural networks using theoretical models combined with real-world data to gauge latency and reliability. In nonstationary networks, deep transfer learning is employed in the design to fine-tune the pretrained DNNs. Due to the restricted processing capabilities of the individual user and mobile edge computing server, federated learning is used to increase learning efficiency. Finally, they discussed potential future directions and gave experimental and modeling data.

The authors proposed a cascaded NN structure. The first NN tries to approximate the ideal bandwidth allocation, while the second NN produces the transmit power needed to meet the QoS criteria under the provided bandwidth allocation [24]. To handle the nonstationary distribution of wireless channels and service types, deep transfer learning updates the NNs in nonstationary wireless networks. In terms of QoS assurance, simulation studies show that cascaded NNs outperform fully connected NNs. Deep transfer learning may also drastically reduce the number of data points needed to train the NNs.

In this article, the authors demonstrated a deep neural network model that utilizes a digital twin of the actual network environment to train the DL algorithm offline on a central server [25]. The MME might use a pretrained deep neural network to construct a user association scheme in real time. They provided an optimization approach for determining the best resource allocation and caching probability at each AP for a particular user association scheme. Their simulations indicate that their strategy can attain lower normalized energy use with less computational complexity than prevailing techniques, while approaching the performance of the globally optimal solution.

According to the authors' theoretical study, research on sixth-generation (6G) technology has begun concurrently with fifth-generation (5G) technology [26]. As mobile communication technology develops, a more stable information flow in intelligent transportation systems (ITS) can be expected. There are also significant benefits to ITS reliability, which is the most critical aspect of ITS.

They described 5G/6G and artificial intelligence as the two most important technologies in the upcoming intelligent transportation system. The goal is to explain the existing state of both domains and the cross-research advancements achieved between them, before examining the bottlenecks in recent intelligent transportation system development and pointing out future research directions. These two disciplines will also receive much attention, probably leading to groundbreaking research findings that will be made publicly available.

The authors described 5G/6G URLLC spectrum sharing [27]. There is a consensus that ultra-reliable low-latency communication (URLLC) will remain a critical application in 6G, just as it was in 5G. Spectrum resources are too limited to meet increasing bandwidth demands, resulting in excessive latency. Because of collisions in the shared spectrum, the channels are also unstable. Interference between various communication technologies exacerbates the issues in unlicensed bands. As a result, practical spectrum-sharing algorithms must be developed to enable URLLC in 5G and 6G.

The authors presented a cluster content caching structure that exploits distributed caching and centralized signal processing to their maximum potential [28]. Using the cluster content cache, remote radio heads linked to a shared edge cloud can avoid unnecessary traffic on the backhaul. The proposed structure enhances QoS guarantees while decreasing local storage power costs, using tractable expressions for effective capacity and energy efficiency performance [29].

Two distributed techniques can be used in tandem with the suggested cluster content caching framework to fully realize its potential. The simulation results back up the analytical findings and show how cluster content caching improves performance in C-RANs.

The authors studied mobile terminal digital media application technologies employing edge computing and virtual reality techniques [30]. They used simulated experiments to compare the performance of the SD-CEN design and FWA in edge computing against the PSO-CO technique, the WRR algorithm, and the Pick-XK algorithm. The results suggest that the approach may lower the response time of real-time face recognition systems while also improving user experience. The SD-CEN network design built on the FWA approach offers more benefits than the standard cloud computing design or a single MEC device.

In this work, the authors examine the valuable connections between CAVs and an ITS and suggest a unique architectural paradigm [31]. Their suggested system may support multilayer applications across different Radio Access Technologies (RATs) and includes a smart configuration pipeline for optimizing each RAT's performance. In this study [32], the authors demonstrated that the future 5G cellular network, through its expansion of machine-type infrastructure and the idea of mobile edge computing, delivers a suitable environment for distributed monitoring and control activities in smart grids. They showed how advanced distributed state estimation methods in a 5G environment may help smart grids. They indicated novel distributed state estimation approaches, concentrating on distributed optimization and discussing how they may be integrated into future 5G smart grid services.

The most recent research accomplishments in H-CRAN system architecture and essential technologies are discussed. The authors' research describes a heterogeneous cloud radio access network that uses cloud computing to achieve integrated large-scale cooperative processing for decreasing co-channel interference [33]. The H-CRAN system design is characterized as software-defined and consistent with software-defined networks. Node C is a new communication entity that incorporates the prevailing legacy base stations and functions as the baseband unit pool for all connected remote radio heads.

The advantages of the standards and the unresolved barriers of adaptive large-scale cooperative spatial signal processing, cooperative radio resource management, network function configuration management, and self-organization are all being investigated. The main roadblocks to H-CRAN adoption, namely fronthaul-constrained resource allocation optimization and energy harvesting, are also explored.

In contrast to the previous studies, this research paper presents smart traffic congestion control in 5G/6G networks using hybrid deep learning techniques to predict traffic congestion. The proposed research combines the 5G/6G network with hybrid deep learning techniques to manage large amounts of data in huge networks and identify network traffic congestion. The proposed approach effectively constructs a generic architecture for detecting network traffic congestion centered on the identified significant characteristics, addressing the known challenges.

3. Methodology

The prime objective of this research work is to design a congestion control model that alleviates 5G network congestion while making better use of available network resources. An intelligent model is proposed to accommodate the complexities of predicting optimal congestion control in the 5G network. The primary purpose of this proposed congestion control approach is to reduce 5G/6G network congestion by making the best possible use of the currently available resources. Hybrid deep learning is the distinguishing feature of this approach and plays an active role in it. The proposed model is shown in Figure 1.

Figure 1 shows that the proposed model comprises two phases: training and validation. The training phase is separated into three steps: data collection, preprocessing, and model training. The first step, data collection, gathers the data from the input parameters and stores it in the database. The stored data is then preprocessed to mitigate noise using feature selection, missing-value handling, moving averages, and normalization; a sketch of such a preprocessing step is given below. The processed data is then forwarded to the training step, where it is modeled via the Naïve Bayes and SVM algorithms.
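As an illustration only, the following minimal sketch shows how such a preprocessing step could look. The column names, window size, and choice of scikit-learn utilities are assumptions, since the paper does not specify its implementation.

```python
# Hypothetical preprocessing sketch for the pipeline described above:
# missing-value handling, moving-average smoothing, normalization, and
# feature selection. All names and parameters here are assumptions.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import MinMaxScaler

def preprocess(df: pd.DataFrame, label_col: str = "congestion", k: int = 10):
    """Return (X, y) ready for the Naive Bayes / SVM training step."""
    features = df.drop(columns=[label_col]).select_dtypes("number")

    # Handle missing values: fill numeric gaps with the column mean.
    features = features.fillna(features.mean())

    # Mitigate noisy readings with a simple moving average (window assumed).
    features = features.rolling(window=3, min_periods=1).mean()

    # Normalize every feature into [0, 1].
    scaled = MinMaxScaler().fit_transform(features)

    # Feature selection: keep the k features most informative about the label.
    selector = SelectKBest(f_classif, k=min(k, scaled.shape[1]))
    X = selector.fit_transform(scaled, df[label_col])
    return X, df[label_col].to_numpy()
```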

We begin with the equation of a line,

$$y = ax + b, \tag{1}$$

where $a$ is the gradient of the line and $b$ is the intercept; therefore,

$$y - ax - b = 0. \tag{2}$$

Let $\mathbf{w} = (-a, 1)^{T}$ and $\mathbf{x} = (x, y)^{T}$; then the above equation can be written as

$$\mathbf{w} \cdot \mathbf{x} + b = 0. \tag{3}$$

This equation is obtained from two-dimensional vectors, but it works for any number of dimensions $n$, where it is known as a hyperplane.

The direction of a vector $\mathbf{w} = (w_1, w_2)$ is defined as $\mathbf{u} = \left(w_1/\lVert\mathbf{w}\rVert,\; w_2/\lVert\mathbf{w}\rVert\right)$, where $\lVert\mathbf{w}\rVert = \sqrt{w_1^2 + w_2^2}$.

As we know that $\mathbf{w} \cdot \mathbf{x} = \lVert\mathbf{w}\rVert \lVert\mathbf{x}\rVert \cos\theta$, the dot product for $n$-dimensional vectors can be written as

$$\mathbf{w} \cdot \mathbf{x} = \sum_{i=1}^{n} w_i x_i. \tag{4}$$

The classifier associated with the hyperplane is

$$h(\mathbf{x}_i) = \operatorname{sign}(\mathbf{w} \cdot \mathbf{x}_i + b). \tag{5}$$

Let

$$f_i = y_i (\mathbf{w} \cdot \mathbf{x}_i + b). \tag{6}$$

If $\operatorname{sign}(f_i) > 0$, the point is correctly classified, and if $\operatorname{sign}(f_i) < 0$, it is incorrectly classified.

Computing $f$ on a training dataset $D = \{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$ gives the functional margin

$$F = \min_{i=1,\dots,m} y_i (\mathbf{w} \cdot \mathbf{x}_i + b). \tag{7}$$

While comparing hyperplanes, the hyperplane with the largest margin will be selected, where the geometric margin is

$$\gamma = \frac{F}{\lVert\mathbf{w}\rVert}. \tag{8}$$

The objective is the optimal hyperplane, so we need to find the $\mathbf{w}$ and $b$ values of the optimal hyperplane:

$$\min_{\mathbf{w},\, b} \; \frac{1}{2}\lVert\mathbf{w}\rVert^{2} \quad \text{subject to} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1, \quad i = 1, \dots, m. \tag{9}$$

The Lagrangian function is

$$\mathcal{L}(\mathbf{w}, b, \alpha) = \frac{1}{2}\lVert\mathbf{w}\rVert^{2} - \sum_{i=1}^{m} \alpha_i \left[ y_i(\mathbf{w} \cdot \mathbf{x}_i + b) - 1 \right]. \tag{10}$$

Setting the partial derivatives of $\mathcal{L}$ with respect to $\mathbf{w}$ and $b$ to zero gives

$$\mathbf{w} = \sum_{i=1}^{m} \alpha_i y_i \mathbf{x}_i, \tag{11}$$

$$\sum_{i=1}^{m} \alpha_i y_i = 0. \tag{12}$$

After substituting the two equations (11) and (12) back into the Lagrangian function $\mathcal{L}$, we get the dual problem

$$\max_{\alpha} \; W(\alpha) = \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} \alpha_i \alpha_j y_i y_j \, (\mathbf{x}_i \cdot \mathbf{x}_j), \tag{13}$$

provided that

$$\alpha_i \ge 0, \quad \sum_{i=1}^{m} \alpha_i y_i = 0. \tag{14}$$

Since the constraints are inequalities, we extend the Lagrange multiplier technique to the Karush-Kuhn-Tucker (KKT) conditions. The complementary slackness condition of KKT,

$$\alpha_i \left[ y_i(\mathbf{w} \cdot \mathbf{x}_i + b) - 1 \right] = 0, \tag{15}$$

identifies the point or points where we reach the optimum: $\alpha_i$ takes a positive value at those points, and for the other points $\alpha_i = 0$, i.e.,

$$\alpha_i > 0 \;\text{ for support vectors}, \qquad \alpha_i = 0 \;\text{ otherwise}. \tag{16}$$

So, representing the set of support vectors by $S$,

$$\mathbf{w} = \sum_{s \in S} \alpha_s y_s \mathbf{x}_s. \tag{17}$$

To compute the value of $b$, we use the fact that for any support vector

$$y_s(\mathbf{w} \cdot \mathbf{x}_s + b) = 1. \tag{18}$$

By multiplying both sides of (18) by $y_s$, we get

$$y_s^{2}(\mathbf{w} \cdot \mathbf{x}_s + b) = y_s, \tag{19}$$

where $y_s^{2} = 1$. Then, averaging over the support vectors $s \in S$,

$$b = \frac{1}{|S|} \sum_{s \in S} \left( y_s - \mathbf{w} \cdot \mathbf{x}_s \right). \tag{20}$$

Once we have the hyperplane, we can utilize it to predict, where the hypothesis function is

$$h(\mathbf{x}) = \operatorname{sign}(\mathbf{w} \cdot \mathbf{x} + b). \tag{21}$$

A point above the hyperplane is classified as class +1 (congestion found), and a point beneath the hyperplane is classified as -1 (congestion not found). So, fundamentally, the task of the SVM algorithm is to find a hyperplane that separates the data precisely and to select the best one, which is often stated as the optimal hyperplane.
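For concreteness, a minimal sketch of this decision rule is given below, assuming scikit-learn and random placeholder data; the paper does not disclose its exact training configuration.

```python
# Minimal sketch of the linear-SVM decision rule derived above:
# h(x) = sign(w.x + b). Training data here is a random placeholder;
# scikit-learn's SVC is used purely for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((100, 5))            # placeholder features
y_train = rng.choice([-1, 1], size=100)   # +1 congestion, -1 no congestion

clf = SVC(kernel="linear").fit(X_train, y_train)
w, b = clf.coef_[0], clf.intercept_[0]    # hyperplane parameters

def h(x: np.ndarray) -> int:
    # Classify by which side of the hyperplane w.x + b = 0 the sample lies on.
    return 1 if np.dot(w, x) + b > 0 else -1

print(h(X_train[0]) == clf.predict(X_train[:1])[0])  # True: the rules agree
```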

Then it is checked whether the learning criteria are met; if so, the trained output is stored on the cloud, and if not, the model is updated and retrained. The trained patterns are sent to the Fused Machine Learning (FML) approach. FML is responsible for fusing the predictions of both algorithms using a fuzzy inference system. In FML, decision-level fusion is coupled with machine learning to attain higher accuracy and better decision-making.

Figure 2 shows the graphical representation of the proposed model's performance as good, satisfactory, and bad, with yellow, green, and blue shading, respectively. Figure 3 shows that if Naïve Bayes says no and SVM says no, then 5G congestion control outputs no. Figure 4 shows that if Naïve Bayes says yes and SVM says yes, then 5G congestion control outputs yes. A sketch of this rule-based fusion follows below.
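One possible reading of this fusion step, combining the agreement rules of Figures 3 and 4 with an averaged score for the disagreement cases, is sketched below. The 0.5 threshold and the tie-breaking rule are assumptions, not taken from the paper, whose fuzzy inference system is not detailed.

```python
# Illustrative decision-level fusion of Naive Bayes and SVM, following the
# rule base of Figures 3 and 4: both "no" -> no congestion, both "yes" ->
# congestion. Disagreements fall back to the mean of the two probabilities
# (an assumed tie-break, not the paper's actual fuzzy inference system).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def fused_predict(nb: GaussianNB, svm: SVC, X: np.ndarray) -> np.ndarray:
    p_nb = nb.predict_proba(X)[:, 1]    # P(congestion) from Naive Bayes
    p_svm = svm.predict_proba(X)[:, 1]  # P(congestion) from SVM (probability=True)
    agree_yes = (p_nb >= 0.5) & (p_svm >= 0.5)
    agree_no = (p_nb < 0.5) & (p_svm < 0.5)
    fallback = ((p_nb + p_svm) / 2.0 >= 0.5).astype(int)
    return np.where(agree_yes, 1, np.where(agree_no, 0, fallback))

# Usage (assumed training data X_tr, y_tr with labels {0, 1}):
#   nb = GaussianNB().fit(X_tr, y_tr)
#   svm = SVC(kernel="linear", probability=True).fit(X_tr, y_tr)
#   y_pred = fused_predict(nb, svm, X_val)
```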

In the validation phase, the trained patterns are then imported from the cloud for prediction. If congestion is found, a message is displayed stating that congestion is found; otherwise, the input is rejected.

4. Simulation Results

This research introduces an intelligent system, empowered with a fused machine learning approach, to predict 5G congestion more efficiently. The proposed approach is applied to a dataset from the Kaggle data repository [34]. Naïve Bayes and SVM techniques are used on the total of 7558 instances to predict congestion in real time. The dataset is divided into 70% (5291 samples) for training and 30% (2267 samples) for validation. The parameters used for performance evaluation are derived from the following formulas:
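The formulas themselves did not survive in this copy of the text; restored here are the standard confusion-matrix definitions, in terms of true/false positives and negatives, for the metrics named in Tables 5 and 6:

```latex
% Standard definitions of the reported metrics in terms of TP, TN, FP, FN.
\begin{align}
\text{Accuracy}    &= \frac{TP + TN}{TP + TN + FP + FN}, &
\text{Sensitivity} &= \frac{TP}{TP + FN},\\
\text{Specificity} &= \frac{TN}{TN + FP}, &
\text{Miss rate}   &= \frac{FN}{TP + FN},\\
\text{Precision}   &= \frac{TP}{TP + FP}, &
\text{Fallout}     &= \frac{FP}{FP + TN},\\
LR^{+}             &= \frac{\text{Sensitivity}}{1 - \text{Specificity}}, &
LR^{-}             &= \frac{1 - \text{Sensitivity}}{\text{Specificity}},\\
NPV                &= \frac{TN}{TN + FN}.
\end{align}
```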

Table 1 shows the proposed system's prediction of 5G congestion during the training phase. The 5291 training samples are divided into 2354 positive and 2937 negative samples. Among the positives (no 5G congestion), 1989 true positives are successfully predicted, while 365 records are mistakenly predicted as negative, i.e., as showing 5G congestion. Among the 2937 negative samples (5G congestion present), 2833 are correctly identified as negative, while 104 are inaccurately predicted as positive, indicating no 5G congestion despite its presence.
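As a quick arithmetic check (not part of the original text), the figures just quoted reproduce the SVM training accuracy later reported in Table 5:

```python
# Confusion-matrix figures quoted above for the training phase.
TP, FN, TN, FP = 1989, 365, 2833, 104

accuracy    = (TP + TN) / (TP + TN + FP + FN)  # 4822 / 5291 ~= 0.911
sensitivity = TP / (TP + FN)                   # ~= 0.845
specificity = TN / (TN + FP)                   # ~= 0.965
miss_rate   = FN / (TP + FN)                   # ~= 0.155
precision   = TP / (TP + FP)                   # ~= 0.950

print(f"accuracy={accuracy:.3f}, precision={precision:.3f}")  # 0.911, 0.950
```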

The proposed model's predictions of 5G congestion during the validation phase are shown in Table 2. A total of 2267 samples are used, divided into 1064 positive and 1203 negative samples. 937 true positives are successfully forecast as no 5G congestion, while 127 records are mistakenly predicted as negative, i.e., as showing 5G congestion. Similarly, of the 1203 negative samples (5G congestion present), 1107 are correctly predicted as negative, while 96 are incorrectly predicted as positive, indicating no 5G congestion despite its presence.

Table 3 shows the proposed system's prediction of 5G congestion during the training phase. The 5291 training samples are divided into 2383 positive and 2908 negative samples. 1911 true positives are successfully predicted as no 5G congestion, but 472 records are mistakenly predicted as negative, indicating 5G congestion. Similarly, of the 2908 negative samples (5G congestion present), 2726 are correctly identified as negative, while 182 are inaccurately predicted as positive, indicating no 5G congestion despite its presence.

The proposed model's predictions of 5G congestion during the validation phase are shown in Table 4. The 2267 validation samples are divided into 1132 positive and 1135 negative samples. 913 true positives are successfully predicted as no 5G congestion; however, 219 records are mistakenly predicted as negative, showing 5G congestion. Similarly, of the 1135 negative samples (5G congestion present), 1015 are correctly predicted as negative, while 120 are incorrectly predicted as positive, indicating no 5G congestion despite its presence.

Table 5 (SVM) shows that during training, the proposed system achieves 0.911, 0.950, 0.089, 0.093, and 0.844 in terms of accuracy, sensitivity, specificity, miss rate, and precision, respectively. During validation, the proposed model gives 0.901, 0.907, 0.897, 0.099, and 0.881 for the same metrics. In addition, during training the proposed system gives 0.114, 8.33, 1.044, and 0.964, and during validation 0.102, 8.89, 0.102, and 0.920, in terms of fallout, positive likelihood ratio, negative likelihood ratio, and negative predictive value, respectively.

Table 6 (Naïve Bayes) shows that during training, the proposed system achieves 0.876, 0.913, 0.852, 0.916, and 0.802 in terms of accuracy, sensitivity, specificity, miss rate, and precision, respectively. During validation, the proposed model gives 0.851, 0.884, 0.823, 0.149, and 0.806 for the same metrics. In addition, during training the proposed system gives 0.148, 6.168, 1.075, and 0.937, and during validation 0.178, 4.966, 0.181, and 0.894, in terms of fallout, positive likelihood ratio, negative likelihood ratio, and negative predictive value, respectively.

Table 7 shows that of the 15 tests taken, only one disagrees between the proposed system and the human-based decision system. Table 8 compares the performance of the individual classifiers: Naïve Bayes achieves an accuracy of 0.851 and a miss rate of 0.149, while SVM achieves 0.901 and 0.099, respectively. The proposed fusion-based approach improves on both, with 0.933 accuracy and a 0.067 miss rate.

Table 9 compares the performance of the proposed 5G/6G network congestion control approach, which employs a hybrid deep learning technique, with previous approaches [16, 35, 36]. The proposed technique clearly outperforms the previous results in terms of accuracy and miss rate.

5. Research Contribution

The congestion of data traffic in 5G/6G networks poses major challenges to current networks in terms of congestion and delay. To cope with these challenges, a hybrid deep learning approach has been proposed to predict an optimal congestion control strategy that aims to alleviate network congestion in the emerging 5G/6G network environment.

6. Conclusion

The primary goal of this research is to develop a congestion control model for 5G/6G networks that reduces congestion and maximizes the utilization of the resources already present in these networks. 5G/6G network communication is a difficult task and is essential for next-generation wireless networks and commercial businesses. Developing a smart decision-making structure for arriving network traffic that ensures load balancing, limits network slice failure, and provides alternatives in case of failure or overcapacity is a big challenge for the research community. This research work addressed the 5G congestion control challenge by proposing a model based on a hybrid deep learning technique to forecast optimal congestion control in 5G networks. The proposed method can enhance network performance, providing better outcomes with 0.933 accuracy and a 0.067 miss rate [37, 38].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this work.