Hai Cheng

Ph.D. Candidate

Education

  • Ph.D. in Computer Engineering - Northeastern University (2025)
  • M.S. in Electrical Engineering - ShanghaiTech University (2018)
  • B.S. in Electrical Engineering - Xidian University (2015)

Research Interests

  • Open RAN (O-RAN), 5G and beyond cellular networks
  • Wireless Network Emulation and Evaluation
  • Machine Learning for Wireless Network Optimization

Bio

Hai Cheng is a Ph.D. candidate in Computer Engineering at the Institute for the Wireless Internet of Things, Northeastern University. His research focuses on NextG networks and SDR-based wireless systems. His work combines wireless network simulation and experimentation with AI/ML for wireless optimization.

Publications

Network slicing allows Telecom Operators (TOs) to support service provisioning with diverse Service Level Agreements (SLAs). The combination of network slicing and the Open Radio Access Network (Open RAN) enables TOs to provide more customized network services and higher commercial benefits. However, in the current Open RAN community, an open-source end-to-end slicing solution for 5G is still missing. To bridge this gap, we developed ORANSlice, an open-source, network-slicing-enabled Open RAN system integrated with popular open-source RAN frameworks. ORANSlice features programmable, 3GPP-compliant RAN slicing and scheduling functionalities. It supports RAN slicing control and optimization via xApps on the near-real-time RAN Intelligent Controller (RIC), thanks to an extension of the E2 interface between the RIC and the RAN and to service models for slicing. We deploy and test ORANSlice on different O-RAN testbeds and demonstrate its capabilities in several use cases, including slice prioritization and minimum radio resource guarantees.

Link
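
To make the slicing control loop concrete, here is a minimal, self-contained sketch of an xApp-style allocation policy: each slice first receives its SLA-guaranteed minimum of physical resource blocks (PRBs), and the remainder is split by residual demand. All names (SliceKpi, allocate_prbs), the PRB budget, and the policy itself are illustrative assumptions for the sketch, not the actual ORANSlice API.

```python
# Illustrative sketch of a slice-control policy in the spirit of an xApp.
# All names and numbers are hypothetical, not the ORANSlice API.
import random
from dataclasses import dataclass

@dataclass
class SliceKpi:
    slice_id: int
    prb_demand: int   # PRBs requested this control period
    min_prb: int      # minimum-resource guarantee from the SLA

def allocate_prbs(kpis: list[SliceKpi], total_prb: int = 106) -> dict[int, int]:
    """Grant each slice its guaranteed minimum, then share the remaining
    PRBs proportionally to residual demand (a simple SLA-aware policy).
    Assumes the sum of guarantees does not exceed the PRB budget."""
    alloc = {k.slice_id: min(k.min_prb, k.prb_demand) for k in kpis}
    spare = total_prb - sum(alloc.values())
    residual = {k.slice_id: max(k.prb_demand - alloc[k.slice_id], 0) for k in kpis}
    total_residual = sum(residual.values())
    for sid, need in residual.items():
        if total_residual > 0:
            alloc[sid] += spare * need // total_residual
    return alloc

if __name__ == "__main__":
    kpis = [SliceKpi(1, random.randint(10, 80), 20),
            SliceKpi(2, random.randint(10, 80), 10)]
    print(allocate_prbs(kpis))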

Provided herein are systems for controlling a network of distributed non-terrestrial nodes, including a control framework operative to train and control a plurality of the non-terrestrial nodes. The control framework includes a control interface in communication with a network operator to receive one or more specified control objectives, and a learning engine operative to train a virtual non-terrestrial network. The control framework is further operative to transfer knowledge gained through the training of the virtual non-terrestrial network to the network of distributed non-terrestrial nodes as data-driven logic unit configurations tailored for the specified control objectives.

Link
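
The abstract describes a train-in-simulation, deploy-to-field pattern. The sketch below shows that pattern in miniature, with a random-search stand-in for the learning engine; VirtualNTN, the reward function, and the pickle-based configuration transfer are hypothetical placeholders invented for the example, not the patented system.

```python
# Hedged sketch of the train-virtually / deploy-to-nodes pattern from the
# abstract. Every class and the transfer mechanism here are illustrative.
import pickle
import random

class VirtualNTN:
    """Stand-in for a simulated non-terrestrial network used for training."""
    def rollout(self, policy: dict) -> float:
        # Toy reward: how close the policy parameter is to an (unknown) optimum.
        return -abs(policy["gain"] - 0.7)

def train(env: VirtualNTN, steps: int = 200) -> dict:
    """Random-search 'learning engine': keep the best policy seen so far."""
    best, best_r = {"gain": 0.0}, float("-inf")
    for _ in range(steps):
        cand = {"gain": random.uniform(0.0, 1.0)}
        r = env.rollout(cand)
        if r > best_r:
            best, best_r = cand, r
    return best

def deploy(policy: dict, nodes: list[str]) -> None:
    """Transfer the learned logic-unit configuration to each field node."""
    blob = pickle.dumps(policy)
    for node in nodes:
        print(f"pushing {len(blob)}-byte config to {node}: {policy}")

deploy(train(VirtualNTN()), ["uav-0", "uav-1", "balloon-2"])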

Wireless Sensor Networks (WSNs) are pivotal in various applications, including precision agriculture, ecological surveillance, and the Internet of Things (IoT). However, energy limitations of battery-powered nodes are a critical challenge, necessitating optimization of energy efficiency for maximal network lifetime. Existing strategies like duty cycling and Wake-up Radio (WuR) technology have been employed to mitigate energy consumption and latency, but they present challenges in scenarios with sparse deployments and short communication ranges. This paper introduces and evaluates the performance of Unmanned Aerial Vehicle (UAV)-assisted mobile data collection for WuR-enabled WSNs through physical and simulated experiments. We propose two one-hop UAV-based data collection strategies: a naïve strategy, which follows a predetermined fixed path, and an adaptive strategy, which optimizes the collection route based on recorded metadata. Our evaluation includes multiple experiment categories, measuring collection reliability, collection cycle duration, successful data collection time (latency), and node awake time to infer network lifetime. Results indicate that the adaptive strategy outperforms the naïve strategy across all metrics. Furthermore, WuR-based scenarios demonstrate lower latency and considerably lower node awake time compared to duty-cycle-based scenarios, leading to several orders of magnitude longer network lifetime. Remarkably, our results suggest that the use of WuR technology alone achieves unprecedented network lifetimes, regardless of whether data collection paths are optimized. This underscores the significance of WuR as the technology of choice for all energy-critical WSN applications.

Link
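
A toy comparison of the two strategies may help: the naïve route always flies a fixed path over every node, while the adaptive route uses recorded metadata (here, simply which nodes reported pending data) to visit fewer nodes in a shorter, greedily ordered path. The node positions and the metadata format are assumptions made for this sketch.

```python
# Illustrative contrast of the two one-hop collection strategies from the
# abstract. Node coordinates and metadata are invented for the example.
import math

NODES = {"n1": (0, 0), "n2": (50, 10), "n3": (10, 60), "n4": (80, 80)}

def naive_route() -> list[str]:
    """Naive strategy: visit every node, in a predetermined fixed order."""
    return sorted(NODES)

def adaptive_route(metadata: dict[str, bool], start=(0, 0)) -> list[str]:
    """Adaptive strategy: visit only nodes whose metadata indicates pending
    data, ordered greedily nearest-neighbor to shorten the flight."""
    pending = [n for n, has_data in metadata.items() if has_data]
    route, pos = [], start
    while pending:
        nxt = min(pending, key=lambda n: math.dist(pos, NODES[n]))
        route.append(nxt)
        pos = NODES[nxt]
        pending.remove(nxt)
    return route

print(naive_route())                                            # fixed path
print(adaptive_route({"n1": False, "n2": True, "n3": True, "n4": True}))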

Recent years have witnessed the Open Radio Access Network (RAN) paradigm transforming the fundamental ways cellular systems are deployed, managed, and optimized. This shift is led by concepts such as openness, softwarization, programmability, interoperability, and intelligence of the network, which have emerged in wired networks through Software-Defined Networking (SDN) but lag behind in cellular systems. The realization of the Open RAN vision into practical architectures, intelligent data-driven control loops, and efficient software implementations, however, is a multifaceted challenge, which requires (i) datasets to train Artificial Intelligence (AI) and Machine Learning (ML) models; (ii) facilities to test models without disrupting production networks; (iii) continuous and automated validation of the RAN software; and (iv) significant testing and integration efforts. This paper is a tutorial on how Colosseum, the world’s largest wireless network emulator with hardware in the loop, can provide the research infrastructure and tools to fill the gap between the Open RAN vision and the deployment and commercialization of open and programmable networks. We describe how Colosseum implements an Open RAN digital twin through a high-fidelity Radio Frequency (RF) channel emulator and end-to-end softwarized O-RAN and 5G-compliant protocol stacks, thus allowing users to reproduce and experiment upon topologies representative of real-world cellular deployments. Then, we detail the twinning infrastructure of Colosseum, as well as the automation pipelines for RF and protocol stack twinning. Finally, we showcase a broad range of Open RAN use cases implemented on Colosseum, including the real-time connection between the digital twin and real-world networks, and the development, prototyping, and testing of AI/ML solutions for Open RAN.

Link
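
At the heart of RF channel emulation of the kind Colosseum performs is the idea of representing each wireless link as a set of complex channel taps applied to the IQ streams. The toy snippet below illustrates that principle offline with a complex FIR filter plus additive noise; the tap values and SNR are made up for the example, and real emulation runs in hardware at line rate.

```python
# Toy illustration of FIR-based channel emulation: a complex multipath
# filter convolved with the transmitted IQ stream, plus AWGN.
# Tap values and SNR below are illustrative, not Colosseum parameters.
import numpy as np

def emulate_link(tx_iq: np.ndarray, taps: np.ndarray, snr_db: float = 20.0):
    """Apply a complex FIR channel and additive noise to IQ samples."""
    rx = np.convolve(tx_iq, taps)[: len(tx_iq)]          # multipath echoes
    sig_pow = np.mean(np.abs(rx) ** 2)
    noise_pow = sig_pow / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_pow / 2) * (
        np.random.randn(len(rx)) + 1j * np.random.randn(len(rx))
    )
    return rx + noise

# QPSK-like test signal through an illustrative 3-tap multipath channel.
tx = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, 1024))
taps = np.array([1.0, 0.4 * np.exp(1j * 0.8), 0.1j])
print(emulate_link(tx, taps)[:3])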

The adoption of Next-Generation cellular networks is rapidly increasing, together with their achievable throughput and their latency demands. Optimizing existing transport protocols for such networks is challenging, as the wireless channel becomes critical to performance and reliability. The performance assessment of transport protocols for wireless networks has mostly relied on simulation-based environments. While providing valuable insights, such studies are influenced by the simulator's specific settings. Employing more advanced and flexible methods for collecting and analyzing end-to-end transport-layer datasets in realistic wireless environments is crucial to the design, implementation, and evaluation of transport protocols that are effective in real-world 5G networks. We present Hercules, a containerized 5G Standalone framework that collects data using the OpenAirInterface 5G protocol stack. We illustrate its potential with an initial transport-layer and 5G-stack measurement campaign on the Colosseum wireless network testbed. In addition, we present preliminary post-processing results from testing various TCP congestion control techniques over multiple wireless channels.

Link
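
As a flavor of the kind of knob such a campaign sweeps, the snippet below selects the TCP congestion control algorithm on a per-socket basis via the Linux-only TCP_CONGESTION socket option. This is a generic illustration, not Hercules code; the endpoint is hypothetical, and the chosen algorithm (e.g., bbr) must already be available in the kernel.

```python
# Per-socket selection of a TCP congestion control algorithm on Linux.
# Generic illustration, not Hercules code.
import socket

def connect_with_cc(host: str, port: int, cc: str = "cubic") -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # TCP_CONGESTION is Linux-specific (exposed by Python 3.6+).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, cc.encode())
    s.connect((host, port))
    active = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("active cc:", active.split(b"\x00")[0].decode())
    return s

# Hypothetical endpoint, e.g. an iperf3 server:
# connect_with_cc("10.0.0.1", 5201, "bbr")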

The Open Radio Access Network (Open RAN), which is being standardized by the O-RAN Alliance among others, brings a radical transformation to the cellular ecosystem through disaggregation and RAN Intelligent Controllers (RICs). The latter enable closed-loop control through custom logic applications (xApps and rApps), supporting control decisions at different timescales. However, the current O-RAN specifications lack a practical approach to execute real-time control loops operating at timescales below 10 ms. In this article, we propose the notion of dApps, distributed applications that complement existing xApps/rApps by allowing operators to implement fine-grained, data-driven management and control in real time at the central units (CUs) and distributed units (DUs). dApps receive real-time data from the RAN, as well as enrichment information from the near-real-time RIC, and execute inference and control of lower-layer functionalities, thus enabling use cases with stricter timing requirements than those considered by the RICs, such as beam management and user scheduling. We propose feasible ways to integrate dApps into the O-RAN architecture by leveraging and extending interfaces and components already present therein. Finally, we discuss challenges specific to dApps and provide preliminary results that show the benefits of executing network intelligence through dApps.

Link
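
To illustrate the timing constraint that motivates dApps, here is a hedged sketch of a control loop co-located with the DU that must observe, infer, and act within the sub-10 ms budget the article cites. The metric source, the decision rule, and the actuation are stubs invented for the example, not a dApp implementation.

```python
# Hedged sketch of the dApp idea: a real-time loop at the DU with a sub-10 ms
# budget per decision. All I/O below is stubbed; only the 10 ms figure
# comes from the article.
import time

BUDGET_S = 0.010  # real-time budget the RIC control loops cannot meet

def read_du_metrics() -> dict:            # stub: would read DU state/IQ data
    return {"sinr_db": 12.5, "buffer_bytes": 30_000}

def infer(metrics: dict) -> str:          # stub: tiny decision "model"
    return "widen_beam" if metrics["sinr_db"] < 10 else "keep_beam"

def apply_action(action: str) -> None:    # stub: would drive lower layers
    pass

def dapp_loop(iterations: int = 100) -> None:
    missed = 0
    for _ in range(iterations):
        t0 = time.perf_counter()
        apply_action(infer(read_du_metrics()))
        elapsed = time.perf_counter() - t0
        if elapsed > BUDGET_S:
            missed += 1
        time.sleep(max(BUDGET_S - elapsed, 0))  # pace: one decision per 10 ms
    print(f"deadline misses: {missed}/{iterations}")

dapp_loop()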