Computer Networks and Communications (CNC)
https://ojs.wiserpub.com/index.php/CNC/issue/feed
CNC Editorial Office: cnc@wiserpub.com

Computer Networks and Communications (CNC, https://ojs.wiserpub.com/index.php/CNC/) is an international, peer-reviewed, open access journal in science and technology for original research papers focused on networks and communications, published biannually online by Universal Wiser Publisher (UWP, https://www.wiserpub.com/).

> fully open access - free for readers
> no article processing charge (APC) paid by authors or their institutions until 2025
> thorough double-blind peer review
> free post-publication promotion service by the Editorial Office

Mutual Exploration for Missing Data Imputation, QoS Parameter Selection, and QoS Prediction in 5G Networks Using a Novel Skewness Driven Distribution Imputation Algorithm, Pearson Correlation, and XGBoost
https://ojs.wiserpub.com/index.php/CNC/article/view/5534
Saifullah Khan (saifullahedu0@gmail.com), Onel Luis Alcaraz López (onel.alcarazlopez@oulu.fi), Abdul Basit Khattak (abdul.khattak@oulu.fi)

Pre-processing is a key stage in the Machine Learning (ML) pipeline, in which data is prepared and organized before being fed to ML models for a prediction task. One problem that can arise at this stage is missing values, which require the affected data either to be deleted or to be imputed with data points that resemble and correlate with the original ones. Imputation is desirable, as feeding more data to an ML model gives it more context and thus better prediction results. Since the goal of imputation is that the imputed data faithfully reflects the original data, a correlation metric is a suitable way to show how closely the imputed and original data are related. Herein, we present a novel imputation algorithm, Skewness Driven Distribution Imputation (SDDI), and evaluate its efficacy against several state-of-the-art methods, including K-Nearest Neighbors (KNN), Mean, Mode, and Forward and Backward Fill (F&B-Fill) imputation. The comparison uses accuracy, Root Mean Square Error (RMSE), correlation, and computation time. Furthermore, a correlation analysis is conducted on a 5G Vehicle-to-Everything (V2X) Quality of Service (QoS) dataset to improve the understanding of parameter selection when assessing QoS for 5G networks, by comparing how strongly the various parameters correlate with, and thus influence, the network's QoS. The well-established Pearson correlation method is used for this purpose. Moreover, we exploit the Extreme Gradient Boosting (XGBoost) algorithm, an ensemble method that is less complex than deep learning models, to predict QoS under given 5G network conditions. The comparative analysis of the imputation methods revealed average correlation values (as a measure of faithful data imputation) that are relatively close for Mean, Mode, F&B-Fill, SDDI, and KNN, at 0.161, 0.176, 0.143, 0.143, and 0.196, respectively. In terms of accuracy, all methods achieved high rates, with Mean and Mode at 93%, F&B-Fill at 90%, and both SDDI and KNN at 92%. Notably, in the second part of the study, when only the 15 most correlated features were used, we observed a substantial 60.5% reduction in the amount of data affected by missing values, with only a minimal 3% impact on accuracy, which still reached 93%. These results highlight the effectiveness of targeted selection of QoS parameters for 5G networks and underscore the potential of the novel SDDI method to maintain high data integrity while efficiently handling missing data, thereby enhancing the predictive reliability of the XGBoost algorithm.

Published: 2024-12-10. Copyright (c) 2024 Saifullah Khan, et al.
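The abstract above gives no code; the sketch below illustrates only the feature-selection-plus-prediction step it describes (rank features by absolute Pearson correlation with the QoS target, keep the 15 strongest, train an XGBoost classifier). The data, column names, and model settings are synthetic placeholders, and the SDDI imputation step is not reproduced.

```python
# Minimal sketch: rank features by absolute Pearson correlation with the QoS
# target, keep the top 15, and fit an XGBoost classifier. Synthetic data only,
# not the authors' 5G V2X dataset or their SDDI imputation step.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_samples, n_features, top_k = 2000, 40, 15

# Hypothetical numeric QoS-related features (stand-ins for RSRP, SINR, etc.).
X = pd.DataFrame(rng.normal(size=(n_samples, n_features)),
                 columns=[f"feat_{i}" for i in range(n_features)])
# Synthetic binary QoS label driven by a few of the features.
y = (X["feat_0"] + 0.5 * X["feat_1"] - 0.8 * X["feat_2"]
     + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

# Pearson correlation of each feature with the target; keep the 15 strongest.
corr = X.corrwith(pd.Series(y)).abs().sort_values(ascending=False)
selected = corr.head(top_k).index.tolist()

X_tr, X_te, y_tr, y_te = train_test_split(X[selected], y, test_size=0.2,
                                          random_state=42)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print("accuracy on held-out data:", accuracy_score(y_te, model.predict(X_te)))
```

On a real QoS dataset the same correlation ranking would be computed on the imputed feature matrix before splitting into training and test sets.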
An Enhanced Detection Method of Hardware Trojan Based on CNN-Attention-LSTM
https://ojs.wiserpub.com/index.php/CNC/article/view/5407
Nanmin Wang (2365072041@qq.com), Haiyan Kang (kanghy@bistu.edu.cn)

The security issues caused by the insertion of hardware Trojans seriously threaten the security and reliability of the entire hardware device. This article constructs a detection model that combines convolutional neural networks (CNN) and long short-term memory (LSTM) networks, and introduces an attention mechanism to enhance the model's ability to recognize complex circuits. The method automatically learns and optimizes feature extraction and classification, reduces reliance on manual experience by training on large amounts of data, and improves the intelligence level of detection. In particular, combining the attention mechanism with the LSTM model makes it possible to capture small anomalies in circuit designs more effectively and to improve the accuracy and efficiency of hardware Trojan detection. The experimental results show that the proposed CNN-Attention-LSTM model exhibits superior Trojan detection performance and good generalization ability across different datasets, with a precision of 96.3%, a recall of 94.7%, and an F1 score of 95.5%.

Published: 2024-11-12. Copyright (c) 2024 Nanmin Wang, et al.
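The abstract does not spell out the layer configuration; the following is a minimal, hedged sketch of one way to combine Conv1D feature extraction, multi-head self-attention, and an LSTM for binary Trojan/Trojan-free classification in Keras. The sequence length, feature count, and layer sizes are arbitrary placeholders, not the paper's settings.

```python
# Illustrative CNN + attention + LSTM classifier for sequential circuit features.
# Layer sizes, sequence length, and feature count are placeholders, not the
# configuration used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, n_features = 128, 16  # hypothetical length/width of the feature sequence

inputs = layers.Input(shape=(seq_len, n_features))
# 1-D convolution extracts local patterns from the feature sequence.
x = layers.Conv1D(filters=64, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(pool_size=2)(x)
# Self-attention lets the model weight the positions that matter most.
attn = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
x = layers.Add()([x, attn])
x = layers.LayerNormalization()(x)
# The LSTM summarizes the attended sequence into a fixed-size representation.
x = layers.LSTM(64)(x)
x = layers.Dense(32, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # Trojan vs. Trojan-free

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```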
Channel Estimation and Signal Detection for Massive MIMO under 5G Communication
https://ojs.wiserpub.com/index.php/CNC/article/view/5390
Pratibha Rani (pratibhaymca@gmail.com), Arti M. K. (arti_mk@yahoo.com), Pradeep Kumar Dimri (pkdimri@gmail.com)

In this article, the space-time transmit technique (STTT) is examined in a massive MIMO environment with Rayleigh fading. The article characterizes the instantaneous signal-to-noise ratio (SNR) for statistical analysis after channel estimation, using singular value decomposition (SVD) to obtain the SNR. It provides a closed-form expression for the moment-generating function (MGF) and uses the incomplete moment-generating function (IMGF) to construct closed-form expressions for the moments. Key concepts in communication theory, such as the probability density function (PDF) and cumulative distribution function (CDF), are explored, and PDF and CDF plots are obtained for a range of degrees of freedom. Simulation results show that, as the degrees of freedom increase, a properly normalized sum of the channel information in STTT tends toward a normal distribution, consistent with the central limit theorem.

Published: 2024-11-07. Copyright (c) 2024 Pratibha Rani, et al.
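As a small numerical illustration (not the authors' derivation) of the two ingredients the abstract mentions, the sketch below takes the SVD of an i.i.d. Rayleigh-fading channel matrix to obtain per-eigenmode SNRs and checks that a properly normalized sum of squared channel gains approaches a Gaussian as the number of antennas grows; the antenna counts and noise power are arbitrary placeholders.

```python
# Numerical illustration only: SVD of an i.i.d. Rayleigh-fading channel matrix
# gives the per-eigenmode SNRs, and a normalized sum of squared channel gains
# approaches a Gaussian as the number of antennas (degrees of freedom) grows.
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_tx = 64, 8          # hypothetical massive-MIMO dimensions
noise_power = 1.0
n_trials = 5000

# One channel realization: CN(0, 1) entries => Rayleigh-distributed magnitudes.
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
sigma = np.linalg.svd(H, compute_uv=False)      # singular values
per_mode_snr = sigma**2 / noise_power           # SNR of each spatial eigenmode
print("per-mode SNRs:", np.round(per_mode_snr, 2))

# Central-limit behaviour: normalized sum of |h|^2 over many antennas.
gains = np.abs((rng.standard_normal((n_trials, n_rx)) +
                1j * rng.standard_normal((n_trials, n_rx))) / np.sqrt(2))**2
normalized = (gains.sum(axis=1) - n_rx) / np.sqrt(n_rx)   # mean 0, variance ~1
print("sample mean %.3f, sample std %.3f (close to standard normal for large n_rx)"
      % (normalized.mean(), normalized.std()))
```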
Performance Evaluation of HTTP, DHCP and DNS Protocols of Data Packets for Vulnerabilities Using the Isolation Forest Algorithm
https://ojs.wiserpub.com/index.php/CNC/article/view/5573
Tawo Godwin A (edwardtawomeji@gmail.com), Ayansi Francis E (francanyasi2000@gmail.com), Faith Praise O (ffaithpraise@gmail.com), Osahon Okoro O (osahonokoro@gmail.com), Vincent N Ogar (v.ogar.1@research.gla.ac.uk)

In the contemporary digital landscape, network security is paramount to safeguard data integrity and prevent unauthorized access, since data is structured in the network through protocols. In terms of data structure, the IPv6 protocol extends the address space with robust 128-bit addresses, compared with the 32-bit addresses of IPv4. Building on the improved security of the IPv6 data structure, this study focuses on identifying vulnerabilities in HTTP, DHCP, and DNS packets using the Isolation Forest algorithm, a machine-learning technique designed for anomaly detection. By analyzing packet lengths, sizes, payloads, and addressing, the study visualizes normal and anomalous behavior, providing insights into potential security threats in the IPv4 network structure. The results highlight Goodput, quality of service, and risk as essential factors in the network, and demonstrate the effectiveness of Little's theorem analysis and the Isolation Forest in detecting anomalies across these network protocols, with valuable implications for network security structures given the growth of IoT in recent networks. The time-response analysis in this paper details when the threats entered the network, how long the vulnerabilities persisted before reaching a given threshold, and the traffic-delay factors caused by deviations in packet length and other social-engineering activities. Sensitive multipurpose security devices were involved: MikroTik routers were configured and installed in the network under evaluation. Threats that the normal DPI technique was unable to address effectively and efficiently were detected using ADPI principles and operations, which provided the required security measures; the vulnerabilities were eventually addressed and have contributed to recent measures of network security.

Published: 2024-10-18. Copyright (c) 2024 Tawo Godwin A, et al.
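The paper's packet captures are not reproduced here; the sketch below shows only the core anomaly-detection step on synthetic packet-length and payload-size features using scikit-learn's IsolationForest, with all distributions and the contamination level chosen arbitrarily for illustration.

```python
# Minimal anomaly-detection sketch on synthetic packet features (length and
# payload size); the real capture data and protocol fields from the paper are
# not reproduced here.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# "Normal" HTTP/DHCP/DNS-like packets: moderate lengths and payloads.
normal = np.column_stack([rng.normal(550, 120, 980),   # packet length (bytes)
                          rng.normal(380, 90, 980)])   # payload size (bytes)
# A few anomalous packets: unusually large lengths and payloads.
anomalous = np.column_stack([rng.normal(1450, 30, 20),
                             rng.normal(1300, 40, 20)])
X = np.vstack([normal, anomalous])

clf = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
labels = clf.fit_predict(X)          # +1 = inlier, -1 = flagged anomaly
print("packets flagged as anomalous:", int((labels == -1).sum()))
```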
An Analysis of Different Security Models and the Obstacles of Ensuring Security and Privacy while Storing Data on the Cloud
https://ojs.wiserpub.com/index.php/CNC/article/view/4762
Shahid Naseem (shahid.naseem@ue.edu.pk), Salbia Sidra (talbiaofficial@gmail.com), Muhammad Mueed Hussain (203852@pakaims.edu.pk)

Cloud computing, which stores user data online, becomes more popular every day. The main goal of a cloud-based system is to use the internet as a storage medium, often without full awareness of the security risks. Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are the three primary categories of cloud computing models. As cloud computing has grown in popularity, however, security has become a major concern. This research examines the difficulties of data storage in various cloud computing scenarios. It proposes the Internet Protocol Security (IPsec) architecture as the foundation for the Security Encryption Privacy (SEP) paradigm. To improve security and prevent attacks on cloud storage, data is generated and encrypted using the RSA algorithm and public-key cryptography. In addition, the suggested approach is compared with alternative security models. The primary goal of this study is to develop a better understanding of cloud computing.

Published: 2024-09-11. Copyright (c) 2024 Shahid Naseem, et al.
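The SEP paradigm itself is not specified as code in the abstract; below is a minimal sketch of the RSA public-key encryption primitive it builds on, using the Python cryptography library with OAEP padding. The key size and plaintext are placeholders.

```python
# Minimal RSA public-key encryption sketch (not the SEP model itself): data is
# encrypted with a public key before being sent to cloud storage; only the
# matching private key can decrypt it. OAEP is the usual RSA encryption padding.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

plaintext = b"record destined for cloud storage"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(plaintext, oaep)   # what the cloud stores
recovered = private_key.decrypt(ciphertext, oaep)  # only the key owner can do this
assert recovered == plaintext
print("ciphertext length:", len(ciphertext), "bytes")
```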
Device to Device Disaster Management: Squirrel Search Algorithm Approach
https://ojs.wiserpub.com/index.php/CNC/article/view/4860
Lithungo K Murry (lithungo@nitnagaland.ac.in), R. Kumar (rajagopal.kumar@nitnagaland.ac.in)

Cellular networks can overcome their bandwidth bottleneck through the device-to-device (D2D) communication paradigm. In instances of traffic congestion and natural calamities, this technology helps preserve the essential connections between user devices. It removes the need for nearby user equipment (UEs) to route their radio signals through the base station (BS) or the core network, facilitating immediate information sharing. This work addresses relay-assisted D2D communication in disaster scenarios. An adaptive architecture for D2D communications is developed for use in emergency conditions. In particular, a new fuzzy-based, nature-inspired squirrel search algorithm (SSA) is proposed for cluster head (CH) selection. Using this method, a UERCH is selected and the information is passed through this relay CH. To enable communication between victims of natural catastrophes and emergency personnel, the architecture uses an SDN controller to establish a multi-hop routing path. Additionally, we present a public safety scenario in which a portion of the network goes down as a result of exceptional events such as a natural disaster, showcasing the robustness and potential of the proposed method. Compared with the existing approach, simulation results show that the suggested method achieves low energy usage and enhanced device battery life.

Published: 2024-08-23. Copyright (c) 2024 Lithungo K Murry, et al.

Energy Efficient Routing Protocol and Cluster Head Selection Using Modified Spider Monkey Optimization
https://ojs.wiserpub.com/index.php/CNC/article/view/5027
Pranati Mishra (pranatimishracse@outr.ac.in), Ranjan Kumar Dash (rkdash@outr.ac.in)

Wireless sensor networks (WSNs) offer deployment flexibility and affordability thanks to their compact size and low cost. However, real-world WSN implementations face challenges, particularly regarding energy consumption. The limited battery capacity of sensor nodes restricts their operational lifespan, and it is often difficult to periodically replace or recharge sensor-node batteries, which reduces the system's overall operating time. To address this energy consumption challenge, WSNs are often divided into clusters; clustering reduces communication costs and the energy required to route data packets. Consequently, selecting the most efficient cluster head (CH) is crucial for maximizing the network's overall lifespan. This paper proposes a protocol that prioritizes energy efficiency, proximity to the base station, and even distribution within clusters when selecting CHs for intra-cluster communication. For inter-cluster communication, the protocol draws inspiration from Spider Monkey Optimization (SMO) to identify the optimal next-hop CH based on remaining energy and distance to the base station. The performance of the proposed protocol is compared against LEACH routing protocols. The simulation results indicate that the network becomes more optimized and energy efficient when the proposed protocol is used.

Published: 2024-08-23. Copyright (c) 2024 Pranati Mishra, et al.

Analysis of Path Convergence in Chord DHT
https://ojs.wiserpub.com/index.php/CNC/article/view/4753
Vladimir Rocha (vladimir.rocha@ufabc.edu.br), Daniel Czeresnia (daniel.czeresnia@aluno.ufabc.edu.br), Carlo Kleber da Silva Rodrigues (carlo.kleber@ufabc.edu.br)

Chord is a Distributed Hash Table (DHT) widely used for its efficiency in searching for information. This efficiency relies on creating short paths of O(log2 n) hops between two nodes, where n is the number of nodes. To enhance efficiency, several studies use replication on the nodes belonging to the network, assuming that searches will converge on these replicated nodes. This work proposes a convergence formula and analyzes the number of searches converging on nodes for different network sizes (small, medium, and large), up to one million nodes. The experiments show that the convergence creates three zones; the results support the replication techniques of previous studies and demonstrate that it is feasible to replicate on nodes that were not considered in those studies.

Published: 2024-07-25. Copyright (c) 2024 Vladimir Rocha, et al.

Optimizing Cloud Resource Allocation in Federated Environments through Outsourcing Strategies
https://ojs.wiserpub.com/index.php/CNC/article/view/4747
Arash Mazidi (arash_mazidi_67@yahoo.com)

Cloud computing enables users to access the resources they require, and the advent of high-end devices has led to exponential increases in cloud resource requests. This poses significant challenges for resource management owing to the scale of the cloud and unpredictable user demands. This paper presents an approach to managing resources during peak request periods for virtual machines (VMs) by leveraging cloud federation and outsourcing requests to other federation members. An algorithm is proposed to initiate the cloud federation and allocate customer requests within it. The primary objectives are to increase the profit of cloud providers and improve resource utilization. An ensemble algorithm maximizes profit using both the proposed algorithm and three established ones. Experimental results demonstrate that our method outperforms existing approaches in profit, resource utilization, and rejected requests in most scenarios.

Published: 2024-07-08. Copyright (c) 2024 Arash Mazidi

DeAuth: A Decentralized Authentication and Authorization Scheme for Secure Private Data Sharing
https://ojs.wiserpub.com/index.php/CNC/article/view/4281
Phillipe Austria (austrp1@unlv.nevada.edu), Yoohwan Kim (yoohwan.kim@unlv.edu), Ju-Yeon Jo (juyeon.jo@unlv.edu)

The sharing of private information is a daunting, multifaceted, and expensive undertaking. Identity management adds a further challenge, posing significant technological, operational, and legal obstacles. Present solutions and their accompanying infrastructures rely on centralized models that are susceptible to hacking and can hinder data control by the rightful owner. Consequently, blockchain technology has generated interest in the fields of identity and access control; it is viewed as a potential solution for its decentralization, transparency, provenance, security, and privacy benefits. Nevertheless, a completely decentralized and private solution that enables data owners to control their private data has yet to be presented. In this research, we introduce DeAuth, a novel decentralized authentication and authorization scheme for secure private data transfer. DeAuth combines blockchain, smart contracts, decentralized identity, and distributed peer-to-peer (P2P) storage to give users more control over their private data and the permissioning power to share it without centralized services. In this scheme, identity is proven using decentralized identifiers and verifiable credentials, while authorization to share data is performed on the blockchain. A prototype was developed using the Ethereum blockchain and the InterPlanetary File System (IPFS), a P2P file-sharing protocol. We evaluated DeAuth through a use-case study and metrics such as security, performance, and cost. Our findings indicate that DeAuth is a viable alternative to centralized services; however, the underlying technologies are still in their infancy and require more testing before they can supplant traditional services.

Published: 2024-07-05. Copyright (c) 2024 Phillipe Austria, et al.
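The abstract above describes DeAuth's flow only at a high level; the sketch below is a purely conceptual, in-memory stand-in for that flow. A dict plays the role of the content-addressed IPFS store, an append-only list plays the role of the on-chain grant registry, and Fernet symmetric keys stand in for the full DID/verifiable-credential machinery. It is not the DeAuth prototype and uses none of the Ethereum or IPFS APIs.

```python
# Conceptual sketch of an owner-controlled sharing flow in the spirit of the
# abstract above: encrypted data lives in a content-addressed store (dict as a
# stand-in for IPFS) and access grants are recorded on an append-only registry
# (list as a stand-in for a smart contract). This is NOT the DeAuth prototype.
import hashlib
from cryptography.fernet import Fernet

content_store = {}   # CID -> ciphertext            (stand-in for IPFS)
grant_registry = []  # (owner, grantee, cid) tuples (stand-in for the blockchain)

def store_encrypted(data: bytes):
    """Encrypt the data, store the ciphertext, and return its CID plus the key."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(data)
    cid = hashlib.sha256(ciphertext).hexdigest()   # content identifier
    content_store[cid] = ciphertext
    return cid, key

def grant_access(owner_did: str, grantee_did: str, cid: str):
    """Owner records an access grant on the registry."""
    grant_registry.append((owner_did, grantee_did, cid))

def fetch(grantee_did: str, cid: str, key: bytes) -> bytes:
    """Grantee can decrypt only if a matching grant exists on the registry."""
    if not any(g[1] == grantee_did and g[2] == cid for g in grant_registry):
        raise PermissionError("no on-registry grant for this identity and CID")
    return Fernet(key).decrypt(content_store[cid])

# Usage: Alice shares a record with Bob without any central service.
cid, key = store_encrypted(b"private medical record")
grant_access("did:example:alice", "did:example:bob", cid)
print(fetch("did:example:bob", cid, key))
```

In a real deployment the symmetric key would itself be wrapped with the grantee's public key or delivered over a secure channel rather than passed around directly; that step is omitted here.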