Publications

Modern cellular networks, characterized by heterogeneous deployments, diverse requirements, and mission-critical reliability needs, face significant complexity in end-to-end management. This challenge is exacerbated in private 5G systems, where enterprise Information Technology (IT) teams struggle with costly, inflexible deployment and operational workflows. While software-driven cellular architectures introduce flexibility, they lack robust automation frameworks comparable to cloud-native ecosystems, impeding efficient configuration, scalability, and vendor integration. This paper presents AutoRAN, an automated, intent-driven framework for zero-touch provisioning of open, programmable cellular networks. Leveraging cloud-native principles, AutoRAN employs virtualization, declarative infrastructure-as-code templates, and disaggregated micro-services to abstract physical resources and protocol stacks. Its orchestration engine integrates Large Language Models (LLMs) to translate high-level intents into machine-readable configurations, enabling closed-loop control via telemetry-driven observability. Implemented on a multi-architecture OpenShift cluster with heterogeneous compute (x86/ARM CPUs, NVIDIA GPUs) and multi-vendor Radio Access Network (RAN) hardware (Foxconn, NI), AutoRAN automates the deployment of O-RAN-compliant stacks, including OpenAirInterface, NVIDIA ARC RAN, the Open5GS core, and O-RAN Software Community (OSC) RIC components, using CI/CD pipelines. Experimental results demonstrate that AutoRAN is capable of deploying an end-to-end private 5G network in less than 60 seconds with 1.6 Gbps throughput, validating its ability to streamline configuration, accelerate testing, and reduce manual intervention while delivering performance comparable to non-cloud-based implementations. With its novel LLM-assisted intent translation mechanism and performance-optimized automation workflow for multi-vendor environments, AutoRAN has the potential to advance the robustness of next-generation cellular supply chains through reproducible, intent-based provisioning across public and private deployments.
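
To make the intent-to-configuration step concrete, the sketch below (hypothetical field names and template structure, not AutoRAN's actual LLM pipeline or infrastructure-as-code format) shows how a structured intent could be rendered into a declarative deployment descriptor that a GitOps pipeline then applies.

```python
# Hypothetical illustration of intent-driven provisioning: a structured intent is
# rendered into a declarative, Kubernetes-style values snippet. All field names
# are invented for the example; AutoRAN's real templates differ.
import yaml  # pip install pyyaml

intent = {
    "service": "private-5g",
    "slices": [{"name": "embb", "dl_mbps": 1000}],
    "ran": {"vendor": "oai", "du_count": 2},
}

def render_deployment(intent: dict) -> str:
    """Translate a high-level intent into a declarative deployment descriptor."""
    descriptor = {
        "core": {"chart": "open5gs", "replicas": 1},
        "ran": {
            "chart": intent["ran"]["vendor"],
            "du": {"replicas": intent["ran"]["du_count"]},
        },
        "slices": [
            {"name": s["name"], "ambr_dl": f'{s["dl_mbps"]}Mbps'}
            for s in intent["slices"]
        ],
    }
    return yaml.safe_dump(descriptor, sort_keys=False)

print(render_deployment(intent))
```

In AutoRAN the translation itself is performed by an LLM rather than hand-written rules; the value of the pattern is that the output stays declarative and version-controlled, so the CI/CD pipeline can reconcile it like any other cluster state.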

Link

Reconfigurable Intelligent Surfaces (RISs) are poised to become a transformative technology to revolutionize the cellular architecture of Next Generation (NextG) Radio Access Networks (RANs). Previous studies have demonstrated the capabilities of RISs in optimizing wireless propagation, achieving high spectral efficiency, and improving resource utilization. At the same time, the transition to softwarized, disaggregated, and virtualized architectures, such as those being standardized by the O-RAN ALLIANCE, enables the vision of a reconfigurable Open RAN. In this work, we aim to integrate these technologies by studying how different resource allocation policies enhance the performance of RIS-assisted Open RANs. We perform a comparative analysis among various network configurations and show how proper network optimization can enhance the performance across the Enhanced Mobile Broadband (eMBB) and Ultra Reliable and Low Latency Communications (URLLC) network slices, achieving up to ~34% throughput improvement. Furthermore, leveraging the capabilities of OpenRAN Gym, we deploy an xApp on Colosseum, the world's largest wireless system emulator with hardware-in-the-loop, to control the Base Station (BS)'s scheduling policy. Experimental results demonstrate that RIS-assisted topologies achieve high resource efficiency and low latency, regardless of the BS's scheduling policy.

Link

The O-RAN ALLIANCE is defining architectures, interfaces, operations, and security requirements for cellular networks based on Open Radio Access Network (RAN) principles. In this context, O-RAN introduced the RAN Intelligent Controllers (RICs) to enable dynamic control of cellular networks via data-driven applications referred to as rApps and xApps. RICs enable for the first time truly intelligent and self-organizing cellular networks. However, enabling the execution of many Artificial Intelligence (AI) algorithms making autonomous control decisions to fulfill diverse (and possibly conflicting) goals poses unprecedented challenges. For instance, the execution of one xApp aiming at maximizing throughput and one aiming at minimizing energy consumption would inevitably result in diametrically opposed resource allocation strategies. Therefore, conflict management becomes a crucial component of any functional intelligent O-RAN system. This article studies the problem of conflict mitigation in O-RAN and proposes PACIFISTA, a framework to detect, characterize, and mitigate conflicts generated by O-RAN applications that control RAN parameters. PACIFISTA leverages a profiling pipeline to test O-RAN applications in a sandbox environment, and combines hierarchical graphs with statistical models to detect the existence of conflicts and evaluate their severity. Experiments on Colosseum and OpenRAN Gym demonstrate PACIFISTA’s ability to predict conflicts and provide valuable information before potentially conflicting xApps are deployed in production systems. We use PACIFISTA to demonstrate that users can experience a 16% throughput loss even in the case of xApps with similar goals, and that applications with conflicting goals might cause severe instability and result in up to 30% performance degradation. We also show that PACIFISTA can help operators to identify conflicting applications and maintain performance degradation below a tolerable threshold.

Link

Open Radio Access Networks (RANs) leverage disaggregated and programmable RAN functions and open interfaces to enable closed-loop, data-driven radio resource management. This is performed through custom intelligent applications on the RAN Intelligent Controllers (RICs), optimizing RAN policy scheduling, network slicing, user session management, and medium access control, among others. In this context, we have proposed dApps as a key extension of the O-RAN architecture into the real-time and user-plane domains. Deployed directly on RAN nodes, dApps access data otherwise unavailable to RICs due to privacy or timing constraints, enabling the execution of control actions within shorter time intervals. In this paper, we propose for the first time a reference architecture for dApps, defining their life cycle from deployment by the Service Management and Orchestration (SMO) to real-time control loop interactions with the RAN nodes where they are hosted. We introduce a new dApp interface, E3, along with an Application Protocol (AP) that supports structured message exchanges and extensible communication for various service models. By bridging E3 with the existing O-RAN E2 interface, we enable dApps, xApps, and rApps to coexist and coordinate. These applications can then collaborate on complex use cases and employ hierarchical control to resolve shared resource conflicts. Finally, we present and open-source a dApp framework based on OpenAirInterface (OAI). We benchmark its performance in two real-time control use cases, i.e., spectrum sharing and positioning in a 5th generation (5G) Next Generation Node B (gNB) scenario. Our experimental results show that standardized real-time control loops via dApps are feasible, achieving average control latency below 450 microseconds and allowing optimal use of shared spectral resources.

Link

The development of 6G wireless technologies is rapidly advancing, with the 3rd Generation Partnership Project (3GPP) entering the pre-standardization phase and aiming to deliver the first specifications by 2028. This paper explores the OpenAirInterface (OAI) project, an open-source initiative that plays a crucial role in the evolution of 5G and the future 6G networks. OAI provides a comprehensive implementation of 3GPP and O-RAN compliant networks, including Radio Access Network (RAN), Core Network (CN), and software-defined User Equipment (UE) components. The paper details the history and evolution of OAI, its licensing model, and the various projects under its umbrella, such as the RAN, the CN, and the Operations, Administration and Maintenance (OAM) projects. It also highlights the development methodology, Continuous Integration/Continuous Delivery (CI/CD) processes, and end-to-end systems powered by OAI. Furthermore, the paper discusses the potential of OAI for 6G research, focusing on spectrum, reflective intelligent surfaces, and Artificial Intelligence (AI)/Machine Learning (ML) integration. The open-source approach of OAI is emphasized as essential for tackling the challenges of 6G, fostering community collaboration, and driving innovation in next-generation wireless technologies.

Link

The Open Radio Access Network (RAN) is a new networking paradigm that builds on top of cloud-based, multi-vendor, open and intelligent architectures to shape the next generation of cellular networks for 5G and beyond. While this new paradigm comes with many advantages in terms of observability and reconfigurability of the network, it inevitably expands the threat surface of cellular systems and can potentially expose its components and the Machine Learning (ML) infrastructure to several cyber attacks, thus making securing O-RAN networks a necessity. In this paper, we explore security aspects of O-RAN systems by focusing on the specifications, architectures, and intelligence proposed by the O-RAN Alliance. We address the problem of securing O-RAN systems with a holistic perspective, including considerations on the open interfaces used to interconnect the different O-RAN components, on the overall platform, and on the intelligence used to monitor and control the network. For each focus area we identify threats, discuss relevant solutions to address these issues, and demonstrate experimentally how such solutions can effectively defend O-RAN systems against selected cyber attacks. This article is the first to approach O-RAN security holistically, with experimental evidence obtained on a state-of-the-art programmable O-RAN platform, providing unique guidelines for researchers in the field.

Link

Deploying and testing cellular networks is a complex task due to the multitude of components involved from the core to the radio access network (RAN) and user equipment (UE), all of which require integration and constant monitoring. Additional challenges are posed by the nature of the wireless channel, whose inherent randomness hinders the repeatability and consistency of the testing process. Consequently, existing solutions for both private and public cellular systems still rely heavily on human intervention for operations such as network reconfiguration, performance monitoring, and end-to-end testing. This reliance significantly slows the pace of innovation in cellular systems. To address these challenges, we introduce 5G-CT, an automation framework based on OpenShift and the GitOps workflow, capable of deploying a softwarized end-to-end 5G and O-RAN-compliant system in a matter of seconds without the need for any human intervention. We have deployed 5G-CT to test the integration and performance of open-source cellular stacks, including OpenAirInterface, and have collected months of automated over-the-air testing results involving software-defined radios. 5G-CT brings cloud-native continuous integration and delivery to the RAN, effectively addressing the complexities associated with managing spectrum, radios, heterogeneous devices, and distributed components. Moreover, it endows cellular networks with much needed automation and continuous testing capabilities, providing a platform to evaluate the robustness and resiliency of Open RAN software.

Link

Reconfigurable Intelligent Surfaces (RISs) are a promising technique for enhancing the performance of Next Generation (NextG) wireless communication systems in terms of both spectral and energy efficiency, as well as resource utilization. However, current RIS research has primarily focused on theoretical modeling and Physical (PHY) layer considerations only. Full protocol stack emulation and accurate modeling of the propagation characteristics of the wireless channel are necessary for studying the benefits introduced by RIS technology across various spectrum bands and use-cases. In this paper, we propose, for the first time: (i) accurate PHY layer RIS-enabled channel modeling through Geometry-Based Stochastic Models (GBSMs), leveraging the QUAsi Deterministic RadIo channel GenerAtor (QuaDRiGa) open-source statistical ray-tracer; (ii) optimized resource allocation with RISs by comprehensively studying energy efficiency and power control on different portions of the spectrum through a single-leader multiple-followers Stackelberg game theoretical approach; (iii) full-stack emulation and performance evaluation of RIS-assisted channels with SCOPE/srsRAN for Enhanced Mobile Broadband (eMBB) and Ultra Reliable and Low Latency Communications (URLLC) applications in the world's largest emulator of wireless systems with hardware-in-the-loop, namely Colosseum. Our findings indicate (i) the significant power savings in terms of energy efficiency achieved with RIS-assisted topologies, especially in the millimeter wave (mmWave) band; and (ii) the benefits introduced for Sub-6 GHz band User Equipments (UEs), where the deployment of a relatively small RIS (e.g., in the order of 100 RIS elements) can result in decreased levels of latency for URLLC services in resource-constrained environments.
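
For readers unfamiliar with the game-theoretic setup, a generic single-leader multiple-followers Stackelberg formulation for power control looks like the following (assumed notation and utilities, not necessarily the exact objectives used in the paper): the leader prices transmit power, and each UE best-responds by trading its energy efficiency against that price.

```latex
% Leader (infrastructure) sets a power price \lambda; followers (UEs) best-respond.
\begin{align}
  \text{Leader:} \quad
    & \max_{\lambda \ge 0} \; \lambda \sum_{i=1}^{K} p_i^{\star}(\lambda) \\
  \text{Follower } i\text{:} \quad
    & p_i^{\star}(\lambda) = \arg\max_{0 \le p_i \le p_i^{\max}}
      \frac{B \log_2\!\left(1 + \frac{p_i\, g_i(\boldsymbol{\phi})}{\sigma^2}\right)}{p_i + p_c}
      - \lambda\, p_i
\end{align}
```

Here $g_i(\boldsymbol{\phi})$ is the effective channel gain seen by UE $i$, which depends on the RIS phase configuration $\boldsymbol{\phi}$, $p_c$ is a static circuit power, and the first term of the follower utility is the usual bits-per-Joule energy efficiency.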

Link

5G and beyond cellular systems embrace the disaggregation of Radio Access Network (RAN) components, exemplified by the evolution of the fronthaul (FH) connection between cellular baseband and radio unit equipment. Crucially, synchronization over the FH is pivotal for reliable 5G services. In recent years, there has been a push to move these links to an Ethernet-based packet network topology, leveraging existing standards and ongoing research for Time-Sensitive Networking (TSN). However, TSN standards, such as Precision Time Protocol (PTP), focus on performance with little to no concern for security. This increases the exposure of the open FH to security risks. Attacks targeting synchronization mechanisms pose significant threats, potentially disrupting 5G networks and impairing connectivity. In this paper, we demonstrate the impact of successful spoofing and replay attacks against PTP synchronization. We show how a spoofing attack is able to cause a production-ready O-RAN and 5G-compliant private cellular base station to catastrophically fail within 2 seconds of the attack, necessitating manual intervention to restore full network operations. To counter this, we design a Machine Learning (ML)-based monitoring solution capable of detecting various malicious attacks with over 97.5% accuracy.
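
A minimal sketch of the monitoring idea follows (synthetic data and hypothetical features; this is not the detector evaluated in the paper): a supervised classifier over simple per-window synchronization statistics such as offset, path delay, and inter-arrival jitter.

```python
# Hypothetical ML-based PTP monitor: fit a classifier on per-window synchronization
# features and flag windows that look like spoofing or replay. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(n, attack=False):
    """Offset (ns), path delay (ns), and inter-arrival jitter (us) for n windows."""
    offset = rng.normal(0, 50, n) + attack * rng.normal(5000, 1000, n)
    path_delay = rng.normal(800, 30, n) + attack * rng.normal(300, 100, n)
    jitter = rng.normal(10, 2, n) + attack * rng.normal(40, 10, n)
    return np.column_stack([offset, path_delay, jitter])

X = np.vstack([window_features(500), window_features(500, attack=True)])
y = np.concatenate([np.zeros(500), np.ones(500)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(window_features(3, attack=True)))  # -> [1. 1. 1.], attack flagged
```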

Link

While the availability of large datasets has been instrumental to advance fields like computer vision and natural language processing, this has not been the case in mobile networking. Indeed, mobile traffic data is often unavailable due to privacy or regulatory concerns. This problem becomes especially relevant in Open Radio Access Network (RAN), where artificial intelligence can potentially drive optimization and control of the RAN, but still lags behind due to the lack of training datasets. While substantial work has focused on developing testbeds that can accurately reflect production environments, the same level of effort has not been put into twinning the traffic that traverses such networks. To fill this gap, in this paper, we design a methodology to twin real-world cellular traffic traces in experimental Open RAN testbeds. We demonstrate our approach on the Colosseum Open RAN digital twin, and publicly release a large dataset (more than 500 hours and 450 GB) with PHY-, MAC-, and App-layer Key Performance Measurements (KPMs), and protocol stack logs. Our analysis shows that our dataset can be used to develop and evaluate a number of Open RAN use cases, including those with strict latency requirements.

Link

RAN Intelligent Controllers (RICs) are programmable platforms that enable data-driven closed-loop control in the O-RAN architecture. They collect telemetry and data from the RAN, process it in custom applications, and enforce control or new configurations on the RAN. Such custom applications in the Near-Real-Time (RT) RIC are called xApps, and enable a variety of use cases related to radio resource management. Despite numerous open-source and commercial projects focused on the Near-RT RIC, developing and testing xApps that are interoperable across multiple RAN implementations is a time-consuming and technically challenging process. This is primarily caused by the complexity of the E2 interface protocol, which enables communication between the RIC and the RAN while providing a high degree of flexibility, with multiple Service Models (SMs) providing plug-and-play functionalities such as data reporting and RAN control. In this paper, we propose xDevSM, an open-source flexible framework for O-RAN service models, aimed at simplifying xApp development for the O-RAN Software Community (OSC) Near-RT RIC. xDevSM reduces the complexity of the xApp development process, allowing developers to focus on the control logic of their xApps while moving the logic of the E2 service models behind simple Application Programming Interfaces (APIs). We demonstrate the effectiveness of this framework by deploying and testing xApps across various RAN software platforms, including OpenAirInterface and srsRAN. This framework significantly facilitates the development and validation of solutions and algorithms on O-RAN networks, including the testing of data-driven solutions across multiple RAN implementations.

Link

Network slicing allows Telecom Operators (TOs) to support service provisioning with diverse Service Level Agreements (SLAs). The combination of network slicing and Open Radio Access Network (RAN) enables TOs to provide more customized network services and higher commercial benefits. However, in the current Open RAN community, an open-source end-to-end slicing solution for 5G is still missing. To bridge this gap, we developed ORANSlice, an open-source network slicing-enabled Open RAN system integrated with popular open-source RAN frameworks. ORANSlice features programmable, 3GPP-compliant RAN slicing and scheduling functionalities. It supports RAN slicing control and optimization via xApps on the near-real-time RAN Intelligent Controller (RIC) thanks to an extension of the E2 interface between RIC and RAN, and service models for slicing. We deploy and test ORANSlice on different O-RAN testbeds and demonstrate its capabilities on different use cases, including slice prioritization and minimum radio resource guarantee.
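
As a rough illustration of what a minimum radio resource guarantee means at the scheduler level, the sketch below (hypothetical numbers, not ORANSlice's actual scheduling logic) first honors each slice's guaranteed Physical Resource Blocks (PRBs) and then distributes the remainder by priority weight.

```python
# Toy PRB split across slices: honor minimum guarantees first, then share the
# leftover by priority weight. Numbers are hypothetical; integer division may
# leave a few PRBs unassigned.
total_prbs = 106  # e.g., PRBs available in one slot

slices = [  # (name, minimum guaranteed PRBs, priority weight)
    ("eMBB", 20, 3),
    ("URLLC", 10, 5),
    ("mMTC", 5, 1),
]

alloc = {name: min_prbs for name, min_prbs, _ in slices}
leftover = total_prbs - sum(alloc.values())
total_weight = sum(weight for _, _, weight in slices)

for name, _, weight in slices:
    alloc[name] += leftover * weight // total_weight

print(alloc)  # {'eMBB': 43, 'URLLC': 49, 'mMTC': 12}
```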

Link

The fifth-generation new radio (5G NR) technology is expected to provide precise and reliable positioning capabilities along with high data rates. The Third Generation Partnership Project (3GPP) has started introducing positioning techniques from Release-16 based on time, angle, and signal strength using reference signals. However, validating these techniques with experimental prototypes is crucial before successful real-world deployment. This work provides useful tools and implementation details that are required in performing 5G positioning experiments with OpenAirInterface (OAI). As an example use case, we present a round trip time (RTT) estimation testbed based on OAI and discuss the real-world experiment and measurement process.

Link

The next generation of cellular networks will be characterized by openness, intelligence, virtualization, and distributed computing. The Open Radio Access Network (Open RAN) framework represents a significant leap toward realizing these ideals, with prototype deployments taking place in both academic and industrial domains. While it holds the potential to disrupt the established vendor lock-ins, Open RAN's disaggregated nature raises critical security concerns. Safeguarding data and securing interfaces must be integral to Open RAN's design, demanding meticulous analysis of cost/benefit tradeoffs. In this paper, we embark on the first comprehensive investigation into the impact of encryption on two pivotal Open RAN interfaces: the E2 interface, connecting the base station with a near-real-time RAN Intelligent Controller, and the Open Fronthaul, connecting the Radio Unit to the Distributed Unit. Our study leverages a full-stack O-RAN ALLIANCE compliant implementation within the Colosseum network emulator and a production-ready Open RAN and 5G-compliant private cellular network. This research contributes quantitative insights into the latency introduced and the throughput reduction caused by various encryption protocols. Furthermore, we present four fundamental principles for constructing security by design within Open RAN systems, offering a roadmap for navigating the intricate landscape of Open RAN security.

Link

This demo presents SeizNet, an innovative system for predicting epileptic seizures benefiting from a multi-modal sensor network and utilizing Deep Learning (DL) techniques. Epilepsy affects approximately 65 million people worldwide, many of whom experience drug-resistant seizures. SeizNet aims at providing highly accurate alerts, allowing individuals to take preventive measures without being disturbed by false alarms. SeizNet uses a combination of data collected through either invasive (intracranial electroencephalogram (iEEG)) or non-invasive (electroencephalogram (EEG) and electrocardiogram (ECG)) sensors, and processed by advanced DL algorithms that are optimized for real-time inference at the edge, ensuring privacy and minimizing data transmission. SeizNet achieves > 97% accuracy in seizure prediction while meeting the size and energy restrictions of an implantable device.

Link

Accurate real-time channel modeling faces remarkable challenges due to the complexity of traditional methods such as ray tracing and field measurements. AI-based techniques have emerged to address these limitations, offering rapid, precise predictions of channel properties learned from ground-truth data. This paper introduces an innovative approach to real-time, high-fidelity propagation modeling through advanced deep learning. Our model integrates 3D geographical data and rough propagation estimates to generate precise path gain predictions. By positioning the transmitter centrally, we simplify the model and enhance its computational efficiency, making it amenable to larger scenarios. Our approach achieves a normalized Root Mean Squared Error of less than 0.035 dB over a 37,210 square meter area, processing in just 46 ms on a GPU and 183 ms on a CPU. This performance significantly surpasses traditional high-fidelity ray tracing methods, which require approximately three orders of magnitude more time. Additionally, the model's adaptability to real-world data highlights its potential to revolutionize wireless network design and optimization, by enabling the real-time creation of adaptive digital twins of real-world wireless scenarios in dynamic environments.

Link

Next-generation wireless systems, already widely deployed, are expected to become even more prevalent in the future, presenting challenges in both environmental and economic terms. This paper focuses on improving the energy efficiency of intelligent and programmable Open Radio Access Network (RAN) systems through the near-real-time dynamic activation and deactivation of Base Station (BS) Radio Frequency (RF) frontends using Deep Reinforcement Learning (DRL) algorithms, i.e., Proximal Policy Optimization (PPO) and Deep Q-Network (DQN). These algorithms run on the RAN Intelligent Controllers (RICs), part of the Open RAN architecture, and are designed to make optimal network-level decisions based on historical data without compromising stability and performance. We leverage a rich set of Key Performance Measurements (KPMs), serving as state for the DRL, to create a comprehensive representation of the RAN, alongside a set of actions that correspond to some control exercised on the RF frontend. We extend ns-O-RAN, an open-source, realistic simulator for 5G and Open RAN built on ns-3, to conduct an extensive data collection campaign. This enables us to train the agents offline with over 300,000 data points and subsequently evaluate the performance of the trained models. Results show that DRL agents improve energy efficiency by adapting to network conditions while minimally impacting the user experience. Additionally, we explore the trade-off between throughput and energy consumption offered by different DRL agent designs.
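
To give a flavor of the control loop described above, the sketch below (illustrative only, not the agents trained in the paper) pairs a small Q-network with an action space in which each action is a bitmask of RF frontends to keep active, and applies a single temporal-difference update on one transition.

```python
# Illustrative DQN-style skeleton: map a vector of RAN KPMs to on/off decisions
# for a set of RF frontends. Dimensions and reward are hypothetical.
import random
import torch
import torch.nn as nn

N_KPMS = 16                   # hypothetical number of KPM features in the state
N_FRONTENDS = 4               # hypothetical number of RF frontends to switch
N_ACTIONS = 2 ** N_FRONTENDS  # each action is a bitmask of frontend states

q_net = nn.Sequential(nn.Linear(N_KPMS, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy selection over frontend on/off bitmasks."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state):
    """One temporal-difference update on a single transition."""
    q = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example transition: the reward would trade throughput against the power drawn
# by the active frontends (hypothetical scalar here).
s, s_next = torch.randn(N_KPMS), torch.randn(N_KPMS)
a = select_action(s)
td_update(s, a, reward=1.0, next_state=s_next)
```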

Link

In the context of fifth-generation new radio (5G NR) technology, it is not possible to directly obtain an absolute uplink (UL) channel impulse response (CIR) at the base station (gNB) from a user equipment (UE). The UL CIR obtained through the sounding reference signal (SRS) is always time-shifted by the timing advance (TA) applied at the UE. The TA is crucial for maintaining UL synchronization, and transmitting SRS without applying the TA will result in interference. In this work, we propose a new method to obtain absolute UL CIR from a UE and then use it to estimate the round trip time (RTT) at the gNB. This method requires enhancing the current 5G protocol stack with a new Zadoff-Chu (ZC) based wideband uplink reference signal (URS). Capitalizing on the cyclic shift property of the URS sequence, we can obtain the RTT with a significant reduction in overhead and latency compared to existing schemes. The proposed method is experimentally validated using a real-world testbed based on OpenAirInterface (OAI).
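
The cyclic shift property the method relies on can be illustrated in a few lines (toy parameters, not the actual URS design or numerology): a round-trip delay shows up as a cyclic shift of the received sequence, and the shift can be read off the peak of a circular cross-correlation.

```python
# Toy illustration of the Zadoff-Chu (ZC) cyclic-shift property used for delay
# estimation. Sequence length, root, and shift are arbitrary example values.
import numpy as np

N_ZC, root = 839, 25
n = np.arange(N_ZC)
zc = np.exp(-1j * np.pi * root * n * (n + 1) / N_ZC)  # ZC sequence (odd length)

true_shift = 137                                 # delay expressed in samples
rx = np.roll(zc, true_shift) * np.exp(1j * 0.3)  # received: shifted, with phase offset

# Circular cross-correlation via FFT; the peak index recovers the cyclic shift.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(zc)))
est_shift = int(np.argmax(np.abs(corr)))
print(est_shift)  # 137; the RTT then follows from est_shift / sample_rate
```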

Link

This demo paper presents a dApp-based real-time spectrum sharing scenario where a 5th generation (5G) base station implementing the NR stack adapts its transmission and reception strategies based on the incumbent priority users in the Citizen Broadband Radio Service (CBRS) band. The dApp is responsible for obtaining relevant measurements from the Next Generation Node B (gNB), running the spectrum sensing inference, and configuring the gNB with a control action upon detecting the primary incumbent user transmissions. This approach is built on dApps, which extend the O-RAN framework to the real-time and user plane domains. Thus, it avoids the need for dedicated Spectrum Access Systems (SASs) in the CBRS band. The demonstration setup is based on the open-source 5G OpenAirInterface (OAI) framework, where we have implemented a dApp interfaced with a gNB and communicating with a Commercial Off-the-Shelf (COTS) User Equipment (UE) in an over-the-air wireless environment. When an incumbent user is actively transmitting, the dApp detects it and informs the gNB of the primary user's presence. The dApp also enforces a control policy that adapts the scheduling and transmission policy of the Radio Access Network (RAN). This demo provides valuable insights into the potential of using dApp-based spectrum sensing with the O-RAN architecture in next generation cellular networks.

Link

Softwarized and programmable Radio Access Networks (RANs) come with virtualized and disaggregated components, increasing supply chain robustness as well as the flexibility and dynamism of network deployments. This is a key tenet of Open RAN, with open interfaces across disaggregated components specified by the O-RAN ALLIANCE. It is mandatory, however, to validate that all components are compliant with the specifications and can successfully interoperate, without performance gaps with traditional, monolithic appliances. Open Testing & Integration Centers (OTICs) are entities that can verify such interoperability and adherence to the standard through rigorous testing. However, how to design, instrument, and deploy an OTIC that can offer testing for multiple tenants and heterogeneous devices, and that is ready to support automated testing, is still an open challenge. In this paper, we introduce a blueprint for a programmable OTIC testing infrastructure, based on the design and deployment of the Open6G OTIC at Northeastern University, Boston, and provide insights on technical challenges and solutions for O-RAN testing at scale.

Link

Wireless network emulators are being increasingly used for developing and evaluating new solutions for Next Generation (NextG) wireless networks. However, the reliability of the solutions tested on emulation platforms heavily depends on the precision of the emulation process, model design, and parameter settings. To address, obviate, or minimize the impact of errors in emulation models, in this work, we apply the concept of Digital Twin (DT) to large-scale wireless systems. Specifically, we demonstrate the use of Colosseum, the world's largest wireless network emulator with hardware-in-the-loop, as a DT for NextG experimental wireless research at scale. As proof of concept, we leverage the Channel emulation scenario generator and Sounder Toolchain (CaST) to create the DT of a publicly available over-the-air indoor testbed for sub-6 GHz research, namely, Arena. Then, we validate the Colosseum DT through experimental campaigns on emulated wireless environments, including scenarios concerning cellular networks and jamming of Wi-Fi nodes, on both the real and digital systems. Our experiments show that the DT is able to provide a faithful representation of the real-world setup, obtaining an average similarity of up to 0.987 in throughput and 0.982 in Signal to Interference plus Noise Ratio (SINR).

Link

Provided herein are methods and systems for beyond line of sight control of autonomous vehicles including a wireless network having an Open RAN infrastructure in communication with a core network and a MEC infrastructure, a MEC orchestrator deployed in the Open RAN infrastructure and including MEC computing nodes and a ground control station, a catalog in communication with the orchestrator and including MEC function apps for operating the MEC infrastructure and/or the core network, and Open RAN apps for operating the Open RAN infrastructure, wherein the orchestrator is configured to process data from the MEC infrastructure, the core network, the open RAN infrastructure, or combinations thereof, and instantiate the MEC function apps in the MEC infrastructure and/or core network, instantiate the Open RAN apps in the Open RAN infrastructure, or combinations thereof to manage a plurality of vehicle functions and/or wireless network functions responsive to data from the vehicles.

Link

The next generation of cellular networks will be characterized by softwarized, open, and disaggregated architectures exposing analytics and control knobs to enable network intelligence via innovative data-driven algorithms. How to practically realize this vision, however, is largely an open problem. Specifically, for a given intent, it is still unclear how to select which data-driven models should be deployed and where, which parameters to control, and how to feed them appropriate inputs. In this article, we take a decisive step forward by presenting OrchestRAN, a network intelligence orchestration framework for next generation systems that embraces and builds upon the Open Radio Access Network (RAN) paradigm to provide a practical solution to these challenges. OrchestRAN has been designed to execute in the non-Real-time (RT) RAN Intelligent Controller (RIC) as an rApp and allows Network Operators (NOs) to specify high-level control/inference objectives (i.e., adapt scheduling and forecast capacity in near-RT, e.g., for a set of base stations in Downtown New York). OrchestRAN automatically computes the optimal set of data-driven algorithms and their execution location (e.g., in the cloud, or at the edge) to achieve intents specified by the NOs while meeting the desired timing requirements and avoiding conflicts between different data-driven algorithms controlling the same set of parameters. We show that the intelligence orchestration problem in Open RAN is NP-hard. To support real-world applications, we also propose three complexity reduction techniques to obtain low-complexity solutions that, when combined, can compute a solution in 0.1 s for large network instances. We prototype OrchestRAN and test it at scale on Colosseum, the world's largest wireless network emulator with hardware in the loop. Our experimental results on a network with 7 base stations and 42 users demonstrate that OrchestRAN is able to instantiate data-driven services on demand with minimal control overhead and latency.

Link

As Fifth generation (5G) cellular systems transition to softwarized, programmable, and intelligent networks, it becomes fundamental to enable public and private 5G deployments that are (i) primarily based on software components while (ii) maintaining or exceeding the performance of traditional monolithic systems and (iii) enabling programmability through bespoke configurations and optimized deployments. This requires hardware acceleration to scale the Physical (PHY) layer performance, programmable elements in the Radio Access Network (RAN) and intelligent controllers at the edge, careful planning of the Radio Frequency (RF) environment, as well as end-to-end integration and testing. In this paper, we describe how we developed the programmable X5G testbed, addressing these challenges through the deployment of the first 8-node network based on the integration of NVIDIA Aerial RAN CoLab (ARC), OpenAirInterface (OAI), and a near-real-time RAN Intelligent Controller (RIC). The Aerial Software Development Kit (SDK) provides the PHY layer, accelerated on Graphics Processing Unit (GPU), with the higher layers from the OAI open-source project interfaced with the PHY through the Small Cell Forum (SCF) Functional Application Platform Interface (FAPI). An E2 agent provides connectivity to the O-RAN Software Community (OSC) near-real-time RIC. We discuss software integration, the network infrastructure, and a digital twin framework for RF planning. We then profile the performance with up to 4 Commercial Off-the-Shelf (COTS) smartphones for each base station with iPerf and video streaming applications, measuring a cell rate higher than 500 Mbps in downlink and 45 Mbps in uplink.

Link

Provided herein are systems for controlling a network of distributed non-terrestrial nodes including a control framework operative to train and control a plurality of the non-terrestrial nodes, the control framework including a control interface in communication with a network operator to receive one or more specified control objectives, and a learning engine operative to train a virtual non-terrestrial network, wherein the control framework is further operative to transfer knowledge gained through the training of the virtual non-terrestrial network to the network of distributed non-terrestrial nodes as data-driven logic unit configurations tailored for the specified control objectives.

Link

Wireless Sensor Networks (WSNs) are pivotal in various applications, including precision agriculture, ecological surveillance, and the Internet of Things (IoT). However, energy limitations of battery-powered nodes are a critical challenge, necessitating optimization of energy efficiency for maximal network lifetime. Existing strategies like duty cycling and Wake-up Radio (WuR) technology have been employed to mitigate energy consumption and latency, but they present challenges in scenarios with sparse deployments and short communication ranges. This paper introduces and evaluates the performance of Unmanned Aerial Vehicle (UAV)-assisted mobile data collection for WuR-enabled WSNs through physical and simulated experiments. We propose two one-hop UAV-based data collection strategies: a naïve strategy, which follows a predetermined fixed path, and an adaptive strategy, which optimizes the collection route based on recorded metadata. Our evaluation includes multiple experiment categories, measuring collection reliability, collection cycle duration, successful data collection time (latency), and node awake time to infer network lifetime. Results indicate that the adaptive strategy outperforms the naïve strategy across all metrics. Furthermore, WuR-based scenarios demonstrate lower latency and considerably lower node awake time compared to duty cycle-based scenarios, leading to several orders of magnitude longer network lifetime. Remarkably, our results suggest that the use of WuR technology alone achieves unprecedented network lifetimes, regardless of whether data collection paths are optimized. This underscores the significance of WuR as the technology of choice for all energy-critical WSN applications.

Link

In satellite communication systems, the high sensitivity and vast coverage area make them prime targets for potential attackers. Given the integral role satellites play in modern communication, navigation, and observation systems, any vulnerability can have cascading effects on various sectors, from military to civil applications. On the other hand, the recent exponential growth of IoT devices exposes a potential security issue for satellite communications running on shared spectrum. In this paper, we propose, for the first time, a Man-in-the-middle attack launched from ubiquitous IoT devices against spectrum-shared satellite communication (I2S Attack), at a low cost but with vast impact. The key idea is to use a compromised IoT device's OFDM signal to emulate the satellite's MSK signals. Specifically, we discuss the feasibility of signal emulation, introduce the theory of the I2S Attack, capture real-world satellite signals, and conduct simulations. The simulation results show that we can achieve up to 65% emulation similarity between OFDM and MSK signals.

Link

The transition of fifth generation (5G) cellular systems to softwarized, programmable, and intelligent networks depends on successfully enabling public and private 5G deployments that are (i) fully software-driven and (ii) with performance on par with that of traditional monolithic systems. This requires hardware acceleration to scale the Physical (PHY) layer performance, end-to-end integration and testing, and careful planning of the Radio Frequency (RF) environment. In this paper, we describe how the X5G testbed at Northeastern University has addressed these challenges through the first 8-node network deployment of the NVIDIA Aerial RAN CoLab (ARC), with the Aerial Software Development Kit (SDK) for the PHY layer, accelerated on Graphics Processing Unit (GPU), and through its integration with higher layers from the OpenAirInterface (OAI) open-source project through the Small Cell Forum (SCF) Functional Application Platform Interface (FAPI). We discuss software integration, the network infrastructure, and a digital twin framework for RF planning. We then profile the performance with up to 4 Commercial Off-the-Shelf (COTS) smartphones for each base station with iPerf and video streaming applications, measuring a cell rate higher than 500 Mbps in downlink and 45 Mbps in uplink.

Link

Network virtualization, software-defined infrastructure, and orchestration are pivotal elements in contemporary networks, yielding new vectors for optimization and novel capabilities. In line with these principles, O-RAN presents an avenue to bypass vendor lock-in, circumvent vertical configurations, enable network programmability, and facilitate integrated artificial intelligence (AI) support. Moreover, modern container orchestration frameworks (e.g., Kubernetes, Red Hat OpenShift) simplify the way cellular base stations, as well as the newly introduced RAN Intelligent Controllers (RICs), are deployed, managed, and orchestrated. While this enables cost reduction via infrastructure sharing, it also makes it more challenging to meet O-RAN control latency requirements, especially during peak resource utilization. For instance, the Near-real-time RIC is in charge of executing applications (xApps) that must take control decisions within one second, and we show that container platforms available today fail to guarantee such timing constraints. To address this problem, we propose ScalO-RAN, a control framework rooted in optimization and designed as an O-RAN rApp that allocates and scales AI-based O-RAN applications (xApps, rApps, dApps) to: (i) abide by application-specific latency requirements, and (ii) monetize the shared infrastructure while reducing energy consumption. We prototype ScalO-RAN on an OpenShift cluster with base stations, RIC, and a set of AI-based xApps deployed as micro-services. We evaluate ScalO-RAN both numerically and experimentally. Our results show that ScalO-RAN can optimally allocate and distribute O-RAN applications within available computing nodes to accommodate even stringent latency requirements. More importantly, we show that scaling O-RAN applications is primarily a time-constrained problem rather than a resource-constrained one, where scaling policies must account for the stringent inference times of AI applications, and not only for how many resources they consume.

Link

Obtaining access to exclusive spectrum, cell sites, Radio Access Network (RAN) equipment, and edge infrastructure imposes major capital expenses to mobile network operators. A neutral host infrastructure, by which a third-party company provides RAN services to mobile operators through network virtualization and slicing techniques, is seen as a promising solution to decrease these costs. Currently, however, neutral host providers lack automated and virtualized pipelines for onboarding new tenants and for providing elastic, on-demand allocation of resources matching operators’ requirements. To address this gap, this paper presents NeutRAN, a zero-touch framework based on the O-RAN architecture to support applications on neutral hosts and automatic operator onboarding. NeutRAN builds upon two key components: (i) an optimization engine to guarantee coverage and to meet quality of service requirements while accounting for the limited amount of shared spectrum and RAN nodes, and (ii) a fully virtualized and automated infrastructure that converts the output of the optimization engine into deployable micro-services to be executed at RAN nodes and cell sites. NeutRAN was prototyped on an OpenShift cluster and on a programmable testbed with 4 base stations and 10 users from 3 different tenants. We evaluate its benefits, comparing it to a traditional license-based RAN where each tenant has dedicated physical and spectrum resources. We show that NeutRAN can deploy a fully operational neutral host-based cellular network in around 10 seconds. Experimental results show that it increases the cumulative network throughput by 2.18× and the per-user average throughput by 1.73× in networks with shared spectrum blocks of 30 MHz. NeutRAN provides a 1.77× cumulative throughput gain even when it can only operate on a shared spectrum block of 10 MHz (one third of the spectrum used in license-based RANs).

Link

The fifth generation new radio (5G NR) technology is expected to fulfill reliable and accurate positioning requirements of industry use cases, such as autonomous robots, connected vehicles, and future factories. Starting from Third Generation Partnership Project (3GPP) Release-16, several enhanced positioning solutions are featured in the 5G standards, including the multi-cell round trip time (multi-RTT) method. This work presents a novel framework to estimate the round-trip time (RTT) between a user equipment (UE) and a base station (gNB) in 5G NR. Unlike the existing scheme in the standards, RTT can be estimated without the need to send timing measurements from both the gNB and UE to a central node. The proposed method relies on obtaining multiple coherent uplink wide-band channel measurements at the gNB by circumventing the timing advance control loops and the clock drift. The performance is evaluated through experiments leveraging a real world 5G testbed based on OpenAirInterface (OAI). Under a moderate system bandwidth of 40 MHz, the experimental results show meter-level range accuracy even in low signal-to-noise ratio (SNR) conditions.
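
For context on the reported accuracy, the back-of-the-envelope numbers below (not the paper's estimator) relate an RTT estimate to range and show the coarse, sample-level delay resolution implied by a 40 MHz bandwidth; meter-level accuracy at that bandwidth therefore calls for finer-than-sample processing over multiple coherent measurements, as proposed in the paper.

```python
# Back-of-the-envelope RTT-to-range conversion and coarse delay resolution.
c = 3e8            # speed of light, m/s
bandwidth = 40e6   # system bandwidth, Hz

rtt = 1e-6                     # example RTT estimate: 1 microsecond
distance = c * rtt / 2         # one-way range: 150 m

coarse_resolution = c / (2 * bandwidth)  # ~3.75 m per sample without refinement
print(distance, coarse_resolution)
```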

Link

5G and beyond mobile networks will support heterogeneous use cases at an unprecedented scale, thus demanding automated control and optimization of network functionalities customized to the needs of individual users. Such fine-grained control of the Radio Access Network (RAN) is not possible with the current cellular architecture. To fill this gap, the Open RAN paradigm and its specification introduce an “open” architecture with abstractions that enable closed-loop control and provide data-driven and intelligent optimization of the RAN at the user level. This is obtained through custom RAN control applications (i.e., xApps) deployed on the near-real-time RAN Intelligent Controller (near-RT RIC) at the edge of the network. Despite these premises, as of today the research community lacks a sandbox to build data-driven xApps, and create large-scale datasets for effective Artificial Intelligence (AI) training. In this paper, we address this by introducing ns-O-RAN, a software framework that integrates a real-world, production-grade near-RT RIC with a 3GPP-based simulated environment on ns-3, enabling at the same time the development of xApps, automated large-scale data collection and testing of Deep Reinforcement Learning (DRL)-driven control policies for the optimization at the user level. In addition, we propose the first user-specific O-RAN Traffic Steering (TS) intelligent handover framework. It uses Random Ensemble Mixture (REM), a Conservative Q-learning (CQL) algorithm, combined with a state-of-the-art Convolutional Neural Network (CNN) architecture, to optimally assign a serving base station to each user in the network. Our TS xApp, trained with more than 40 million data points collected by ns-O-RAN, runs on the near-RT RIC and controls the ns-O-RAN base stations. We evaluate the performance on a large-scale deployment with up to 126 users and 8 base stations, showing that the xApp-based handover improves throughput and spectral efficiency by an average of 50% over traditional handover heuristics, with less mobility overhead.

Link

This paper introduces an innovative framework designed for progressive (granular in time to onset) prediction of seizures through the utilization of a Deep Learning (DL) methodology based on non-invasive multimodal sensor networks. Epilepsy, a debilitating neurological condition, affects an estimated 65 million individuals globally, with a substantial proportion facing drug-resistant epilepsy despite pharmacological interventions. To address this challenge, we advocate for predictive systems that provide timely alerts to individuals at risk, enabling them to take precautionary actions. Our framework employs advanced DL techniques and uses personalized data from a network of non-invasive electroencephalogram (EEG) and electrocardiogram (ECG) sensors, thereby enhancing prediction accuracy. The algorithms are optimized for real-time processing on edge devices, mitigating privacy concerns and minimizing data transmission overhead inherent in cloud-based solutions, ultimately preserving battery energy. Additionally, our system predicts the countdown time to seizures (with 15-minute intervals up to an hour prior to the onset), offering critical lead time for preventive actions. Our multimodal model achieves 95% sensitivity, 98% specificity, and 97% accuracy, averaged among 29 patients.

Link

In this paper, we introduce SeizNet, a closed-loop system for predicting epileptic seizures through the use of Deep Learning (DL) methods and implantable sensor networks. While pharmacological treatment is effective for some epilepsy patients (with ~65M people affected worldwide), one out of three suffers from drug-resistant epilepsy. To alleviate the impact of seizures, predictive systems have been developed that can notify such patients of an impending seizure, allowing them to take precautionary measures. SeizNet leverages DL techniques and combines data from multiple recordings, specifically intracranial electroencephalogram (iEEG) and electrocardiogram (ECG) sensors, that can significantly improve the specificity of seizure prediction while preserving very high levels of sensitivity. SeizNet DL algorithms are designed for efficient real-time execution at the edge, minimizing data privacy concerns, data transmission overhead, and power inefficiencies associated with cloud-based solutions. Our results indicate that SeizNet outperforms traditional single-modality and non-personalized prediction systems in all metrics, achieving up to 99% accuracy in predicting seizures, offering a promising new avenue in refractory epilepsy treatment.

Link

Recent years have witnessed the Open Radio Access Network (RAN) paradigm transforming the fundamental ways cellular systems are deployed, managed, and optimized. This shift is led by concepts such as openness, softwarization, programmability, interoperability, and intelligence of the network, which have emerged in wired networks through Software-defined Networking (SDN) but lag behind in cellular systems. The realization of the Open RAN vision into practical architectures, intelligent data-driven control loops, and efficient software implementations, however, is a multifaceted challenge, which requires (i) datasets to train Artificial Intelligence (AI) and Machine Learning (ML) models; (ii) facilities to test models without disrupting production networks; (iii) continuous and automated validation of the RAN software; and (iv) significant testing and integration efforts. This paper is a tutorial on how Colosseum—the world’s largest wireless network emulator with hardware in the loop—can provide the research infrastructure and tools to fill the gap between the Open RAN vision, and the deployment and commercialization of open and programmable networks. We describe how Colosseum implements an Open RAN digital twin through a high-fidelity Radio Frequency (RF) channel emulator and end-to-end softwarized O-RAN and 5G-compliant protocol stacks, thus allowing users to reproduce and experiment upon topologies representative of real-world cellular deployments. Then, we detail the twinning infrastructure of Colosseum, as well as the automation pipelines for RF and protocol stack twinning. Finally, we showcase a broad range of Open RAN use cases implemented on Colosseum, including the real-time connection between the digital twin and real-world networks, and the development, prototyping, and testing of AI/ML solutions for Open RAN.

Link

The highly heterogeneous ecosystem of Next Generation (NextG) wireless communication systems calls for novel networking paradigms where functionalities and operations can be dynamically and optimally reconfigured in real time to adapt to changing traffic conditions and satisfy stringent and diverse Quality of Service (QoS) demands. Open Radio Access Network (RAN) technologies, and specifically those being standardized by the O-RAN Alliance, make it possible to integrate network intelligence into the once monolithic RAN via intelligent applications, namely, xApps and rApps. These applications enable flexible control of the network resources and functionalities, network management, and orchestration through data-driven intelligent control loops. Recent work has shown how Deep Reinforcement Learning (DRL) is effective in dynamically controlling O-RAN systems. However, how to design these solutions in a way that manages heterogeneous optimization goals and prevents unfair resource allocation is still an open challenge, with the logic within DRL agents often considered as a black box. In this paper, we introduce PandORA, a framework to automatically design and train DRL agents for Open RAN applications, package them as xApps and evaluate them in the Colosseum wireless network emulator. We benchmark 23 xApps that embed DRL agents trained using different architectures, reward design, action spaces, and decision-making timescales, and with the ability to hierarchically control different network parameters. We test these agents on the Colosseum testbed under diverse traffic and channel conditions, in static and mobile setups. Our experimental results indicate how suitable fine-tuning of the RAN control timers, as well as proper selection of reward designs and DRL architectures can boost network performance according to the network conditions and demand. Notably, finer decision-making granularities can improve the performance of Massive Machine-Type Communications (mMTC) by ~56% and even increase Enhanced Mobile Broadband (eMBB) Throughput by ~99%.

Link

Network slicing is a 5G paradigm that enables the creation of on-demand logical networks over shared physical infrastructure. In this paper, we present a framework that allows users to make advance slice reservations with the End-to-End Orchestrator (EEO). Our reservation mechanism enables the EEO to make admission decisions instantly upon request arrival, providing guarantees as to when the request can be enabled. We then proceed to address a relevant revenue maximization problem through an optimal solution, which has factorial time complexity. We also propose a low-complexity algorithm that can efficiently allocate resources for the online version of the problem. We conduct evaluations that demonstrate how the reservation mechanism can potentially improve EEO’s revenue. Additionally, we conduct a study on scenarios where the arrival rates of slice requests exhibit a positive correlation with reservation discounts provided by EEO.

Link

The highly heterogeneous ecosystem of Next Generation (NextG) wireless communication systems calls for novel networking paradigms where functionalities and operations can be dynamically and optimally reconfigured in real time to adapt to changing traffic conditions and satisfy stringent and diverse Quality of Service (QoS) demands. Open Radio Access Network (RAN) technologies, and specifically those being standardized by the O-RAN Alliance, make it possible to integrate network intelligence into the once monolithic RAN via intelligent applications, namely, xApps and rApps. These applications enable flexible control of the network resources and functionalities, network management, and orchestration through data-driven control loops. Despite recent work demonstrating the effectiveness of Deep Reinforcement Learning (DRL) in controlling O-RAN systems, how to design these solutions in a way that does not create conflicts and unfair resource allocation policies is still an open challenge. In this paper, we perform a comparative analysis where we dissect the impact of different DRL-based xApp designs on network performance. Specifically, we benchmark 12 different xApps that embed DRL agents trained using different reward functions, with different action spaces and with the ability to hierarchically control different network parameters. We prototype and evaluate these xApps on Colosseum, the world's largest O-RAN-compliant wireless network emulator with hardware-in-the-loop. We share the lessons learned and discuss our experimental results, which demonstrate how certain design choices deliver the highest performance while others might result in competitive behavior between different classes of traffic with similar objectives.

Link

The Open Radio Access Network (RAN) paradigm is transforming cellular networks into a system of disaggregated, virtualized, and software-based components. These self-optimize the network through programmable, closed-loop control, leveraging Artificial Intelligence (AI) and Machine Learning (ML) routines. In this context, Deep Reinforcement Learning (DRL) has shown great potential in addressing complex resource allocation problems. However, DRL-based solutions are inherently hard to explain, which hinders their deployment and use in practice. In this paper, we propose EXPLORA, a framework that provides explainability of DRL-based control solutions for the Open RAN ecosystem. EXPLORA synthesizes network-oriented explanations based on an attributed graph that produces a link between the actions taken by a DRL agent (i.e., the nodes of the graph) and the input state space (i.e., the attributes of each node). This novel approach allows EXPLORA to explain models by providing information on the wireless context in which the DRL agent operates. EXPLORA is also designed to be lightweight for real-time operation. We prototype EXPLORA and test it experimentally on an O-RAN-compliant near-real-time RIC deployed on the Colosseum wireless network emulator. We evaluate EXPLORA for agents trained for different purposes and showcase how it generates clear network-oriented explanations. We also show how explanations can be used to perform informative and targeted intent-based action steering and achieve median transmission bitrate improvements of 4% and tail improvements of 10%.

Link

The ever-growing number of wireless communication devices and technologies demands spectrum-sharing techniques. Effective coexistence management is crucial to avoid harmful interference, especially with critical systems like nautical and aerial radars, in which incumbent radios operate mission-critical communication links. In this demo, we showcase a framework that leverages Colosseum, the world’s largest wireless network emulator with hardware-in-the-loop, as a playground to study commercial radar waveforms coexisting with a cellular network in the CBRS band in complex environments. We create an ad hoc high-fidelity spectrum-sharing scenario for this purpose. We deploy a cellular network to collect IQ samples with the aim of training an ML agent that runs at the base station. The agent has the goal of detecting incumbent radar transmissions and vacating the cellular bandwidth to avoid interfering with the radar operations. Our experimental results show an average detection accuracy of 88%, with an average detection time of 137 ms.

Link

The adoption of Next-Generation cellular networks is rapidly increasing, together with their achievable throughput and the stringency of their latency demands. Optimizing existing transport protocols for such networks is challenging, as the wireless channel becomes critical to performance and reliability. The performance assessment of transport protocols for wireless networks has mostly relied on simulation-based environments. While providing valuable insights, such studies are influenced by the simulator's specific settings. More advanced and flexible methods for collecting and analyzing end-to-end transport-layer datasets in realistic wireless environments are crucial to the design, implementation, and evaluation of transport protocols that are effective in real-world 5G networks. We present Hercules, a containerized 5G standalone framework that collects data using the OpenAirInterface 5G protocol stack. We illustrate its potential with an initial transport-layer and 5G-stack measurement campaign on the Colosseum wireless network testbed. In addition, we present preliminary post-processing results from testing various TCP Congestion Control techniques over multiple wireless channels.

Link

Because of the ever-growing number of wireless consumers, spectrum-sharing techniques have become increasingly common in the wireless ecosystem, with the main goal of avoiding harmful interference to coexisting communication systems. This is even more important when considering systems, such as nautical and aerial fleet radars, in which incumbent radios operate mission-critical communication links. To study, develop, and validate these solutions, adequate platforms, such as the Colosseum wireless network emulator, are key, as they enable experimentation with spectrum-sharing heterogeneous radio technologies in controlled environments. In this work, we demonstrate how Colosseum can be used to twin commercial radio waveforms to evaluate the coexistence of such technologies in complex wireless propagation environments. To this aim, we create a high-fidelity spectrum-sharing scenario on Colosseum to evaluate the impact of twinned commercial radar waveforms on a cellular network operating in the CBRS band. Then, we leverage IQ samples collected on the testbed to train a machine learning agent that runs at the base station to detect the presence of incumbent radar transmissions and vacate the bandwidth to avoid causing them harmful interference. Our results show an average detection accuracy of 88%, with accuracy above 90% in SNR regimes above 0 dB and SINR regimes above -20 dB, and with an average detection time of 137 ms.

Link

Cellular networks are undergoing a radical transformation toward disaggregated, fully virtualized, and programmable architectures with increasingly heterogeneous devices and applications. In this context, the open architecture standardized by the O-RAN Alliance enables algorithmic and hardware-independent Radio Access Network (RAN) adaptation through closed-loop control. O-RAN introduces Machine Learning (ML)-based network control and automation algorithms as so-called xApps running on RAN Intelligent Controllers (RICs). However, in spite of the new opportunities brought about by the Open RAN, advances in ML-based network automation have been slow, mainly because of the unavailability of large-scale datasets and experimental testing infrastructure. This slows down the development and widespread adoption of Deep Reinforcement Learning (DRL) agents on real networks, delaying progress in intelligent and autonomous RAN control. In this paper, we address these challenges by discussing insights and practical solutions for the design, training, testing, and experimental evaluation of DRL-based closed-loop control in the Open RAN. To this end, we introduce ColO-RAN, the first publicly-available large-scale O-RAN testing framework with software-defined radios-in-the-loop. Building on the scale and computational capabilities of the Colosseum wireless network emulator, ColO-RAN enables ML research at scale using O-RAN components, programmable base stations, and a “wireless data factory.” Specifically, we design and develop three exemplary xApps for DRL-based control of RAN slicing, scheduling and online model training, and evaluate their performance on a cellular network with 7 softwarized base stations and 42 users. Finally, we showcase the portability of ColO-RAN to different platforms by deploying it on Arena, an indoor programmable testbed. The lessons learned from the ColO-RAN implementation and the extensive results from our first-of-its-kind large-scale evaluation highlight the importance of experimental frameworks for the development of end-to-end intelligent RAN control pipelines, from data analysis to the design and testing of DRL agents. They also provide insights on the challenges and benefits of DRL-based adaptive control, and on the trade-offs associated with training on a live RAN. ColO-RAN and the collected large-scale dataset are publicly available to the research community.

Link

The increasing number of satellites in orbit has led to a growing reliance on third-party service providers for data transfer between Earth and space. Traditional approaches to managing satellite communications require human intervention, which becomes more burdensome with the escalating number of satellites. This research addresses the need for an efficient and automated system to optimize service provider selection for NASA space communication. Previous research has utilized human-operated approaches for service provider management. Our study fills a gap by developing a cognitive algorithm that automates and optimizes the selection process based on various parameters, such as data volume, priority, quality of service, and cost. This novel solution reduces user burden, facilitates service management, and contributes to the development of cognitive spaceflight missions, ultimately supporting NASA’s research into Cognitive Communications technology. The algorithm design consists of three major steps: modeling data, developing a Link Selection Algorithm (LSA) based on a grading system, and applying machine learning using linear regression. The LSA evaluates providers based on user-defined constraints, considering factors such as delivery time, cost, and quality of service. We define a suitability metric that allows our algorithm to recommend which commercial service providers a user should select. Linear regression is then applied to predict future suitability values. Our main findings demonstrate that the resulting algorithm can autonomously manage connections between satellites and providers, maximizing communication channel efficiency. This research has significant implications, as it not only addresses a pressing issue in satellite communication management but also advances the field of cognitive spaceflight missions.
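
The exact grading system and suitability metric are defined in the paper; the sketch below is only an illustration of the two-step idea (weighted scoring followed by a linear-regression forecast), with feature names and weights invented for the example.

```python
# Illustrative sketch: grade each provider with a weighted suitability score,
# then fit a linear trend to past scores to predict future suitability.
# Feature names, weights, and scaling are assumptions, not the paper's metric.
import numpy as np

def suitability(provider: dict, weights: dict) -> float:
    """Higher is better: reward data rate and QoS, penalize cost and delay."""
    return (weights["rate"] * provider["data_rate_mbps"]
            + weights["qos"] * provider["qos_score"]
            - weights["cost"] * provider["cost_per_gb"]
            - weights["delay"] * provider["delivery_time_min"])

def predict_next(history: list) -> float:
    """Least-squares linear fit over past suitability values (the ML step)."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    return slope * len(history) + intercept

weights = {"rate": 1.0, "qos": 2.0, "cost": 0.5, "delay": 0.1}
providers = [
    {"name": "A", "data_rate_mbps": 50, "qos_score": 4, "cost_per_gb": 3, "delivery_time_min": 20},
    {"name": "B", "data_rate_mbps": 80, "qos_score": 3, "cost_per_gb": 6, "delivery_time_min": 35},
]
best = max(providers, key=lambda p: suitability(p, weights))
print(best["name"], predict_next([10.0, 11.5, 12.1, 13.0]))
```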

Link

Jamming attacks have plagued wireless communication systems and will continue to do so as technology advances. These attacks fall under the category of Electronic Warfare (EW), a continuously growing area in both attack and defense of the electromagnetic spectrum, with one subcategory being electronic attacks (EA). Jamming attacks fall under this specific subcategory of EW as they comprise adversarial signals that attempt to disrupt, deny, degrade, destroy, or deceive legitimate signals in the electromagnetic spectrum. While jamming is not going away, recent research advances have started to gain the upper hand against these attacks by leveraging new methods and techniques, such as machine learning. However, testing such jamming solutions on a wide and realistic scale is a daunting task due to strict regulations on spectrum emissions. In this paper, we introduce eSWORD (emulation (of) Signal Warfare On Radio-frequency Devices), the first large-scale framework that allows users to safely conduct real-time and controlled jamming experiments with hardware-in-the-loop. This is done by integrating METEOR, an electronic warfare threat-emulating software developed by the MITRE Corporation, into the Colosseum wireless network emulator, which enables large-scale experiments with up to 49 software-defined radio nodes. We compare the performance of eSWORD with that of real-world jamming systems by using an over-the-air wireless testbed (taking appropriate safety measures when conducting experiments). Our experimental results demonstrate that eSWORD achieves up to 98% accuracy in following throughput, signal-to-interference-plus-noise ratio, and link status patterns when compared to real-world jamming experiments, testifying to the high accuracy of the emulated eSWORD setup.

Link

In this study, a non-invasive system is proposed for monitoring the health of vine plants by measuring their water stress, with the goal of mitigating the impact of frequent extreme meteorological events such as droughts. The envisioned system measures the spatial distribution of the Crop Water Stress Index (CWSI) on the crop field and provides farmers with precise control over their vines' health and, therefore, over the final quality of their product. To ensure the accurate acquisition of the parameters needed to compute the CWSI, data are collected by field sensors on the ground and by exploiting satellite data. Data fusion then allows us to obtain an associated georeferenced heatmap of the vineyard. The solution has been tested via a prototype, which allowed the collection of information in a vineyard.
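
The abstract does not state which formulation of the index is used; for reference, a widely adopted empirical form of the CWSI (an assumption here, since the prototype may compute a different variant) normalizes the canopy-air temperature difference between well-watered and fully stressed baselines:

\mathrm{CWSI} = \frac{(T_c - T_a) - (T_c - T_a)_{\mathrm{wet}}}{(T_c - T_a)_{\mathrm{dry}} - (T_c - T_a)_{\mathrm{wet}}}

where T_c is the canopy temperature, T_a the air temperature, and the wet/dry subscripts denote the non-water-stressed and non-transpiring baselines; values close to 1 indicate severe water stress.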

Link

Open Radio Access Network (RAN) architectures will enable interoperability, openness and programmable data-driven control in next generation cellular networks. However, developing and testing efficient solutions that generalize across heterogeneous cellular deployments and scales, and that optimize network performance in such diverse environments, is a complex task that is still largely unexplored. In this paper, we present OpenRAN Gym, a unified, open, and O-RAN-compliant experimental toolbox for data collection, design, prototyping and testing of end-to-end data-driven control solutions for next generation Open RAN systems. OpenRAN Gym extends and combines into a single solution several software frameworks for data collection of RAN statistics and RAN control, and a lightweight O-RAN near-real-time RAN Intelligent Controller (RIC) tailored to run on experimental wireless platforms. We first provide an overview of the various architectural components of OpenRAN Gym and describe how it is used to collect data and design, train and test artificial intelligence and machine learning O-RAN-compliant applications (xApps) at scale. We then describe in detail how to test the developed xApps on softwarized RANs and provide an example of two xApps developed with OpenRAN Gym that are used to control a network with 7 base stations and 42 users deployed on the Colosseum testbed. Finally, we show how solutions developed with OpenRAN Gym on Colosseum can be exported to real-world, heterogeneous wireless platforms, such as the Arena testbed and the POWDER and COSMOS platforms of the PAWR program. OpenRAN Gym and its software components are open-source and publicly-available to the research community. By guiding readers from instantiating the components of OpenRAN Gym to running experiments in a softwarized RAN with an O-RAN-compliant near-RT RIC and xApps, we aim to provide a key reference for researchers and practitioners working on experimental Open RAN systems.

Link

The Open Radio Access Network (RAN) and its embodiment through the O-RAN Alliance specifications are poised to revolutionize the telecom ecosystem. O-RAN promotes virtualized RANs where disaggregated components are connected via open interfaces and optimized by intelligent controllers. The result is a new paradigm for RAN design, deployment, and operations: O-RAN networks can be built with multi-vendor, interoperable components, and can be programmatically optimized through a centralized abstraction layer and data-driven closed-loop control. Therefore, understanding O-RAN, its architecture, its interfaces, and its workflows is key for researchers and practitioners in the wireless community. In this article, we present the first detailed tutorial on O-RAN. We also discuss the main research challenges and review early research results. We provide a deep dive into the O-RAN specifications, describing their architecture, design principles, and interfaces. We then describe how the O-RAN RAN Intelligent Controllers (RICs) can be used to effectively control and manage 3GPP-defined RANs. Based on this, we discuss innovations and challenges of O-RAN networks, including the Artificial Intelligence (AI) and Machine Learning (ML) workflows that the architecture and interfaces enable, security, and standardization issues. Finally, we review experimental research platforms that can be used to design and test O-RAN networks, along with recent research results, and we outline future directions for O-RAN development.

Link

The harsh propagation environment in the millimeter wave (mmWave) band impacts all the layers of the protocol stack. This calls for full-stack, end-to-end performance evaluation platforms with programmable lower layers, to enable cross-layer approaches, and with support for application data traffic and transport protocols. So far, most full-stack mmWave studies have relied either on commercial mmWave devices, which offer limited insight and programmability at the link level, or on simulations. This paper introduces a fully programmable, software-defined platform for the design, prototyping, and evaluation of end-to-end application performance at 60 GHz. It extends the NI mmWave Transceiver System (MTS) with real-time video streaming capabilities and a reliable retransmission-based Medium Access Control (MAC) layer. This platform establishes a framework that can be used for the development and evaluation of cross-layer optimization at mmWaves. We evaluate the performance of a video streaming use case with different video bitrates, Modulation and Coding Schemes (MCSs), and link configurations, to showcase the end-to-end, full-stack capabilities of the platform, and discuss the challenges of supporting real-time application traffic over a link with 2 GHz of bandwidth.

Link

This article describes HIRO-NET, a Heterogeneous Intelligent Robotic Network. HIRO-NET is an emergency infrastructure-less network that aims to address the problem of providing connectivity in the immediate aftermath of a natural disaster, where no cellular or wide area network is operational and no Internet access is available. HIRO-NET establishes a two-tier wireless mesh network where the Lower Tier connects nearby survivors in a self-organized mesh via Bluetooth Low Energy (BLE) and the Upper Tier creates long-range VHF links between autonomous robots exploring the disaster-stricken area. HIRO-NET's main goal is to enable users in the disaster area to exchange text messages to share critical information and request help from first responders. The mesh network discovery problem is analyzed and a network protocol specifically designed to facilitate the exploration process is presented. We show how HIRO-NET robots successfully discover, bridge and interconnect local mesh networks. Results show that the Lower Tier always reaches network convergence and the Upper Tier can virtually extend HIRO-NET functionalities to the range of a small metropolitan area. In the event of an Internet connection still being available to some user, HIRO-NET is able to opportunistically share and provide access to low data-rate services (e.g., Twitter, Gmail) to the whole network. Results suggest that a temporary emergency network to cover a metropolitan area can be created in tens of minutes.

Link

The Open Radio Access Network (Open RAN) - being standardized, among others, by the O-RAN Alliance - brings a radical transformation to the cellular ecosystem through disaggregation and RAN intelligent controllers (RICs). The latter enable closed-loop control through custom logic applications (xApps and rApps), supporting control decisions at different timescales. However, the current O-RAN specifications lack a practical approach to execute real-time control loops operating at timescales below 10 ms. In this article, we propose the notion of dApps, distributed applications that complement existing xApps/rApps by allowing operators to implement fine-grained data-driven management and control in real time at the central units (CUs) and distributed units (DUs). dApps receive real-time data from the RAN, as well as enrichment information from the near-real-time RIC, and execute inference and control of lower-layer functionalities, thus enabling use cases with stricter timing requirements than those considered by the RICs, such as beam management and user scheduling. We propose feasible ways to integrate dApps into the O-RAN architecture by leveraging and extending interfaces and components already present therein. Finally, we discuss challenges specific to dApps, and provide preliminary results that show the benefits of executing network intelligence through dApps.

Link

Large-scale wireless testbeds are being increasingly used in developing and evaluating new solutions for next generation wireless networks. Among others, high-fidelity FPGA-based emulation platforms have unique capabilities for faithfully modeling real-world wireless environments in real-time and at scale, while guaranteeing repeatability. However, the reliability of the solutions tested on emulation platforms heavily depends on the precision of the emulation process, which is often overlooked. To address this unmet need in wireless network emulator-based experiments, in this paper we present CaST, a Channel emulation generator and Sounder Toolchain for creating and characterizing realistic wireless network scenarios with high accuracy. CaST consists of (i) a framework for creating mobile wireless scenarios from ray-tracing models for FPGA-based emulation platforms, and (ii) a containerized Software Defined Radio-based channel sounder to precisely characterize the emulated channels. We demonstrate the use of CaST by designing, deploying and validating multi-path mobile scenarios on Colosseum, the world's largest wireless network emulator. Results show that CaST achieves ≤ 20 ns accuracy in sounding Channel Impulse Response tap delays, and 0.5 dB accuracy in measuring tap gains.
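
The abstract reports tap-delay and tap-gain accuracy without detailing the sounding method; the sketch below shows a generic correlation-based CIR estimator (an assumption for illustration, not necessarily what the CaST sounder implements): the received samples are cross-correlated with the known sounding sequence, and correlation peaks yield tap delays and gains.

```python
# Illustrative sketch of correlation-based channel sounding: peaks of the
# cross-correlation between the received signal and the known sounding
# sequence give tap delays; their normalized amplitudes give tap gains.
# This is a generic method for illustration, not necessarily CaST's.
import numpy as np

def estimate_cir(rx, sounding_seq, n_taps, fs_hz):
    corr = np.correlate(rx, sounding_seq, mode="full")[len(sounding_seq) - 1:]
    idx = np.sort(np.argsort(np.abs(corr))[-n_taps:])       # strongest lags, in order
    delays_ns = idx / fs_hz * 1e9
    gains = corr[idx] / np.sum(np.abs(sounding_seq) ** 2)   # normalize by sequence energy
    return delays_ns, gains

fs = 100e6                                                   # 100 MS/s sounder
rng = np.random.default_rng(1)
seq = np.exp(1j * 2 * np.pi * rng.random(255))               # unit-modulus sounding sequence
rx = np.zeros(300, dtype=complex)
rx[10:10 + len(seq)] += 0.8 * seq                            # path at a 10-sample delay
rx[25:25 + len(seq)] += 0.3 * seq                            # path at a 25-sample delay
delays, gains = estimate_cir(rx, seq, n_taps=2, fs_hz=fs)
print(delays, np.round(np.abs(gains), 2))                    # ~[100. 250.] ns, ~[0.8 0.3]
```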

Link

Softwarization, programmable network control, and the use of all-encompassing controllers acting at different timescales are heralded as the key drivers for the evolution to next-generation cellular networks. These technologies have fostered newly designed intelligent data-driven solutions for managing large sets of diverse cellular functionalities, practically impossible to implement in traditionally closed cellular architectures. Despite the evident interest of industry in Artificial Intelligence (AI) and Machine Learning (ML) solutions for closed-loop control of the Radio Access Network (RAN), and several research works in the field, their design is far from mainstream, and it is still a sophisticated – and often overlooked – operation. In this paper, we discuss how to design AI/ML solutions for the intelligent closed-loop control of the Open RAN, providing guidelines and insights based on exemplary solutions with a strong performance record. We then show how to embed these solutions into xApps instantiated on the O-RAN near-real-time RAN Intelligent Controller (RIC) through OpenRAN Gym, the first publicly available toolbox for data-driven O-RAN experimentation at scale. We showcase a use case of an xApp developed with OpenRAN Gym and tested on a cellular network with 7 base stations and 42 users deployed on the Colosseum wireless network emulator. Our demonstration shows the high degree of flexibility of the OpenRAN Gym-based xApp development environment, which is independent of deployment scenarios and traffic demand.

Link

The next generation of cellular networks will be characterized by softwarized, open, and disaggregated architectures exposing analytics and control knobs to enable network intelligence via innovative data-driven algorithms. How to practically realize this vision, however, is largely an open problem. For a given network optimization/automation objective, it is currently unknown how to select which data-driven models should be deployed and where, which parameters to control, and how to feed them appropriate inputs. In this paper, we take a decisive step forward by presenting and prototyping OrchestRAN, a novel orchestration framework for next generation systems that embraces and builds upon the Open Radio Access Network (RAN) paradigm to provide a practical solution to these challenges. OrchestRAN has been designed to execute in the non-Real-time (RT) RAN Intelligent Controller (RIC) and allows Network Operators (NOs) to specify high-level control/inference objectives (i.e., adapt scheduling and forecast capacity in near-RT, e.g., for a set of base stations in Downtown New York). OrchestRAN automatically computes the optimal set of data-driven algorithms and their execution location (e.g., in the cloud or at the edge) to achieve the intents specified by the NOs while meeting the desired timing requirements and avoiding conflicts between different data-driven algorithms controlling the same set of parameters. We show that the intelligence orchestration problem in the Open RAN is NP-hard, and design low-complexity solutions to support real-world applications. We prototype OrchestRAN and test it at scale on Colosseum, the world’s largest wireless network emulator with hardware-in-the-loop. Our experimental results on a network with 7 base stations and 42 users demonstrate that OrchestRAN is able to instantiate data-driven services on demand with minimal control overhead and latency.

Link

Open Radio Access Network (RAN) architectures will enable interoperability, openness, and programmatic data-driven control in next generation cellular networks. However, developing scalable and efficient data-driven algorithms that can generalize across diverse deployments and optimize RAN performance is a complex feat, largely unaddressed as of today. Specifically, the ability to design efficient data-driven algorithms for network control and inference requires at a minimum (i) access to large, rich, and heterogeneous datasets; (ii) testing at scale in controlled but realistic environments, and (iii) software pipelines to automate data collection and experimentation. To facilitate these tasks, in this paper we propose OpenRAN Gym, a practical, open, experimental toolbox that provides end-to-end design, data collection, and testing workflows for intelligent control in next generation Open RAN systems. OpenRAN Gym builds on software frameworks for the collection of large datasets and RAN control, and on a lightweight O-RAN environment for experimental wireless platforms. We first provide an overview of OpenRAN Gym and then describe how it can be used to collect data, to design and train artificial intelligence and machine learning-based O-RAN applications (xApps), and to test xApps on a softwarized RAN. Then, we provide an example of two xApps designed with OpenRAN Gym and used to control a large-scale network with 7 base stations and 42 users deployed on the Colosseum testbed. OpenRAN Gym and its software components are open source and publicly-available to the research community.

Link

With the unprecedented rise in traffic demand and mobile subscribers, real-time fine-grained optimization frameworks are crucial for the future of cellular networks. Indeed, rigid and inflexible infrastructures are incapable of adapting to the massive amounts of data forecast for 5G networks. Network softwarization, i.e., the approach of controlling “everything” via software, endows the network with unprecedented flexibility, allowing it to run optimization and machine learning-based frameworks for flexible adaptation to current network conditions and traffic demand. This work presents QCell, a Deep Q-Network-based optimization framework for softwarized cellular networks. QCell dynamically allocates slicing and scheduling resources to the network base stations, adapting to varying interference conditions and traffic patterns. QCell is prototyped on Colosseum, the world's largest network emulator, and tested in a variety of network conditions and scenarios. Our experimental results show that using QCell significantly improves users' throughput (by up to 37.6%) and the size of transmission queues (by up to 11.9%), decreasing service latency.

Link

Colosseum is an open-access and publicly-available large-scale wireless testbed for experimental research via virtualized and softwarized waveforms and protocol stacks on a fully programmable, “white-box” platform. Through 256 state-of-the-art software-defined radios and a massive channel emulator core, Colosseum can model virtually any scenario, enabling the design, development and testing of solutions at scale in a variety of deployments and channel conditions. These Colosseum radio-frequency scenarios are reproduced through high-fidelity FPGA-based emulation with finite-impulse response filters. Filters model the taps of desired wireless channels and apply them to the signals generated by the radio nodes, faithfully mimicking the conditions of real-world wireless environments. In this paper, we introduce Colosseum as a testbed that is for the first time open to the research community. We describe the architecture of Colosseum and its experimentation and emulation capabilities. We then demonstrate the effectiveness of Colosseum for experimental research at scale through exemplary use cases including prevailing wireless technologies (e.g., cellular and Wi-Fi) in spectrum sharing and unmanned aerial vehicle scenarios. A roadmap for Colosseum future updates concludes the paper.

Link

Practical experimentation and prototyping are core steps in the development of any wireless technology. Oftentimes, however, this crucial step is confined to small laboratory setups that do not capture the scale of commercial deployments and do not ensure result reproducibility and replicability, or it is skipped altogether for lack of suitable hardware and testing facilities. Recent years have seen the development of publicly-available testing platforms for wireless experimentation at scale. Examples include the testbeds of the PAWR program and Colosseum, the world's largest wireless network emulator. With its 256 software-defined radios, 24 racks of powerful compute servers and first-of-its-kind channel emulator, Colosseum allows users to prototype wireless solutions at scale, and guarantees reproducibility and replicability of results. This tutorial provides an overview of the Colosseum platform. We describe the architecture and components of the testbed as a whole, and we then showcase how to run practical experiments in diverse scenarios with heterogeneous wireless technologies (e.g., Wi-Fi and cellular). We also emphasize how Colosseum experiments can be ported to different testing platforms, facilitating full-cycle experimental wireless research: design, experiments and tests at scale in a fully controlled and observable environment, and testing in the field. The tutorial concludes with considerations on the flexible future of Colosseum, focusing on its planned extension to emulate larger scenarios and channels at higher frequency bands (mmWave).

Link

Next generation (NextG) cellular networks will be natively cloud-based and built on programmable, virtualized, and disaggregated architectures. The separation of control functions from the hardware fabric and the introduction of standardized control interfaces will enable the definition of custom closed-control loops, which will ultimately enable embedded intelligence and real-time analytics, thus effectively realizing the vision of autonomous and self-optimizing networks. This article explores the disaggregated network architecture proposed by the O-RAN Alliance as a key enabler of NextG networks. Within this architectural context, we discuss the potential, the challenges, and the limitations of data-driven optimization approaches to network control over different timescales. We also present the first large-scale integration of O-RAN-compliant software components with an open source full-stack softwarized cellular network. Experiments conducted on Colosseum, the world's largest wireless network emulator, demonstrate closed-loop integration of real-time analytics and control through deep reinforcement learning agents. We also show the feasibility of radio access network (RAN) control through xApps running on the near-real-time RAN intelligent controller to optimize the scheduling policies of coexisting network slices, leveraging the O-RAN open interfaces to collect data at the edge of the network.

Link

Radio access network (RAN) slicing is a virtualization technology that partitions radio resources into multiple autonomous virtual networks. Since RAN slicing can be tailored to provide diverse performance requirements, it will be pivotal to achieve the high-throughput and low-latency communications that next-generation (5G) systems have long yearned for. To this end, effective RAN slicing algorithms must (i) partition radio resources so as to leverage coordination among multiple base stations and thus boost network throughput; and (ii) reduce interference across different slices to guarantee slice isolation and avoid performance degradation. The ultimate goal of this paper is to design RAN slicing algorithms that address the above two requirements. First, we show that the RAN slicing problem can be formulated as a 0-1 Quadratic Programming problem, and we prove its NP-hardness. Second, we propose an optimal solution for small-scale 5G network deployments, and we present three approximation algorithms to make the optimization problem tractable when the network size increases. We first analyze the performance of our algorithms through simulations, and then demonstrate their performance through experiments on a standard-compliant LTE testbed with 2 base stations and 6 smartphones. Our results show that not only do our algorithms efficiently partition RAN resources, but also improve network throughput by 27% and increase by 2× the signal-to-interference-plus-noise ratio.
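
For readers unfamiliar with the formulation class, a 0-1 Quadratic Program has the generic shape below; the paper's actual objective and constraints (capturing inter-BS coordination gains and inter-slice isolation) are not reproduced here, so this is only a reference for the problem structure:

\max_{\mathbf{x} \in \{0,1\}^n} \ \mathbf{x}^{\top} \mathbf{Q}\, \mathbf{x} \quad \text{subject to} \quad \mathbf{A}\mathbf{x} \le \mathbf{b},

where, in a RAN slicing instance, each binary variable would typically indicate whether a given block of radio resources is assigned to a given slice at a given base station, and the quadratic term rewards or penalizes joint assignments.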

Link

The cellular networking ecosystem is being radically transformed by openness, softwarization, and virtualization principles, which will steer NextG networks toward solutions running on "white box" infrastructures. Telco operators will be able to truly bring intelligence to the network, dynamically deploying and adapting its elements at run time according to current conditions and traffic demands. Deploying intelligent solutions for softwarized NextG networks, however, requires extensive prototyping and testing procedures, currently largely unavailable. To this aim, this paper introduces SCOPE, an open and softwarized prototyping platform for NextG systems. SCOPE is made up of: (i) A ready-to-use, portable open-source container for instantiating softwarized and programmable cellular network elements (e.g., base stations and users); (ii) an emulation module for diverse real-world deployments, channels and traffic conditions for testing new solutions; (iii) a data collection module for artificial intelligence and machine learning-based applications, and (iv) a set of open APIs for users to control network element functionalities in real time. Researchers can use SCOPE to test and validate NextG solutions over a variety of large-scale scenarios before implementing them on commercial infrastructures. We demonstrate the capabilities of SCOPE and its platform independence by prototyping exemplary cellular solutions in the controlled environment of Colosseum, the world's largest wireless network emulator. We then port these solutions to indoor and outdoor testbeds, namely, to Arena and POWDER, a PAWR platform.

Link

Recent years have seen the introduction of large-scale platforms for experimental wireless research. These platforms, which include testbeds like those of the PAWR program and emulators like Colosseum, allow researchers to prototype and test their solutions in a sound yet realistic wireless environment before actual deployment. Emulators, in particular, enable wireless experiments that are not site-specific, unlike those on real testbeds. Researchers can choose among different radio frequency (RF) scenarios for real-time emulation of a vast variety of situations, with different numbers of users, RF bandwidths, antenna counts, hardware requirements, etc. Although very powerful, in that they can emulate virtually any real-world deployment, emulated scenarios are only as useful as they are accurate in reproducing the targeted wireless channel and environment. Achieving emulation accuracy is particularly challenging, especially for experiments at scale, for which emulators require considerable amounts of computational resources. In this paper, we propose a framework to create RF scenarios for emulators like Colosseum starting from rich forms of input, such as measurements collected through radio equipment or via software (e.g., ray-tracers and electromagnetic field solvers). Our framework optimally scales down the large set of input RF data to the fewer parameters allowed by the emulator by using efficient clustering techniques and channel impulse response re-sampling. We showcase our method by generating wireless scenarios for the Colosseum network emulator using Remcom's Wireless InSite, a commercial-grade ray-tracer that produces key characteristics of the wireless channel. Examples are provided for line-of-sight and non-line-of-sight scenarios on portions of the Northeastern University main campus.
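
As a hedged illustration of the kind of down-scaling involved (the clustering choice, tap budget, and coherent gain summation below are assumptions, not the paper's exact pipeline), the many propagation paths produced by a ray-tracer can be compressed to the few taps an FPGA-based emulator supports by clustering paths on delay and combining the gains within each cluster:

```python
# Illustrative sketch: reduce many ray-traced paths (delay, complex gain) to
# n_taps emulator taps by k-means clustering on delay and coherently summing
# the complex gains inside each cluster. Not the paper's exact algorithm.
import numpy as np
from sklearn.cluster import KMeans

def compress_cir(delays_ns, gains, n_taps=4):
    km = KMeans(n_clusters=n_taps, n_init=10).fit(delays_ns.reshape(-1, 1))
    taps = []
    for k in range(n_taps):
        mask = km.labels_ == k
        taps.append((float(km.cluster_centers_[k, 0]),   # representative tap delay (ns)
                     complex(gains[mask].sum())))        # combined complex gain
    return sorted(taps, key=lambda tap: tap[0])          # ordered by delay

rng = np.random.default_rng(0)
delays = rng.uniform(0, 500, size=100)                   # 100 ray-traced paths
gains = rng.normal(size=100) + 1j * rng.normal(size=100)
for delay, gain in compress_cir(delays, gains, n_taps=4):
    print(f"tap @ {delay:6.1f} ns, gain {abs(gain):.2f}")
```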

Link

Fifth-generation (5G) systems will extensively employ radio access network (RAN) softwarization. This key innovation enables the instantiation of "virtual cellular networks" running on different slices of the shared physical infrastructure. In this paper, we propose the concept of Private Cellular Connectivity as a Service (PCCaaS), where infrastructure providers deploy covert network slices known only to a subset of users. We then present SteaLTE as the first realization of a PCCaaS-enabling system for cellular networks. At its core, SteaLTE utilizes wireless steganography to disguise data as noise to adversarial receivers. Differently from previous work, however, it takes a full-stack approach to steganography, contributing an LTE-compliant steganographic protocol stack for PCCaaS-based communications, and packet schedulers and operations to embed covert data streams on top of traditional cellular traffic (primary traffic). SteaLTE balances undetectability and performance by mimicking channel impairments so that covert data waveforms are almost indistinguishable from noise. We evaluate the performance of SteaLTE on an indoor LTE-compliant testbed under different traffic profiles, distance and mobility patterns. We further test it on the outdoor PAWR POWDER platform over long-range cellular links. Results show that in most experiments SteaLTE imposes little loss of primary traffic throughput in the presence of covert data transmissions (< 6%), making it suitable for undetectable PCCaaS networking.

Link

Fifth generation (5G) cellular networks will serve a wide variety of heterogeneous use cases, including mobile broadband users, ultra-low latency services and massively dense connectivity scenarios. The resulting diverse communication requirements will demand networking with unprecedented flexibility, not currently provided by the monolithic black-box approach of 4G cellular networks. The research community and an increasing number of standardization bodies and industry coalitions have recognized softwarization, virtualization, and disaggregation of networking functionalities as the key enablers of the needed shift to flexibility. Particularly, software-defined cellular networks are heralded as the prime technology to satisfy the new application-driven traffic requirements and to support the highly time-varying topology and interference dynamics, because of their openness through well-defined interfaces, and programmability, for swift and responsive network optimization. Leading the technological innovation in this direction, several 5G software-based projects and alliances have embraced the open source approach, making new libraries and frameworks available to the wireless community. This race to open source softwarization, however, has led to a deluge of solutions whose interoperability and interactions are often unclear. This article provides the first cohesive and exhaustive compendium of recent open source software and frameworks for 5G cellular networks, with a full stack and end-to-end perspective. We detail their capabilities and functionalities focusing on how their constituting elements fit the 5G ecosystem, and unravel the interactions among the surveyed solutions. Finally, we review hardware and testbeds on which these frameworks can run, and provide a critical perspective on the limitations of the state-of-the-art, as well as feasible directions toward fully open source, programmable 5G networks.

Link

Arena is an open-access wireless testing platform based on a grid of antennas mounted on the ceiling of a large office-space environment. Each antenna is connected to programmable software-defined radios (SDRs) enabling sub-6 GHz 5G-and-beyond spectrum research. With 12 computational servers, 24 SDRs synchronized at the symbol level, and a total of 64 antennas, Arena provides the computational power and the scale to foster new technology development in some of the most crowded spectrum bands. Arena is based on a three-tier design, where the servers and the SDRs are housed in a double rack in a dedicated room, while the antennas are hung off the ceiling of a 2240 square feet office space and cabled to the radios through 100 ft-long cables. This ensures a reconfigurable, scalable, and repeatable real-time experimental evaluation in a real wireless indoor environment. In this paper, we introduce the architecture, capabilities, and system design choices of Arena, and provide details of the software and hardware implementation of various testbed components. Furthermore, we describe key capabilities by providing examples of published work that employed Arena for applications as diverse as synchronized MIMO transmission schemes, multi-hop ad hoc networking, multi-cell 5G networks, AI-powered Radio-Frequency fingerprinting, secure wireless communications, and spectrum sensing for cognitive radio.

Link

Current cellular networks rely on closed and inflexible infrastructure tightly controlled by a handful of vendors. Their configuration requires vendor support and lengthy manual operations, which prevent Telco Operators (TOs) from unlocking the full network potential and from performing fine-grained performance optimization, especially on a per-user basis. To address these key issues, this paper introduces CellOS, a fully automated optimization and management framework for cellular networks that requires negligible intervention (“zero-touch”). CellOS leverages softwarization and automatic optimization principles to bridge Software-Defined Networking (SDN) and cross-layer optimization. Unlike state-of-the-art SDN-inspired solutions for cellular networking, CellOS: (i) Hides low-level network details through a general virtual network abstraction; (ii) allows TOs to define high-level control objectives to dictate the desired network behavior without requiring knowledge of optimization techniques, and (iii) automatically generates and executes distributed control programs for simultaneous optimization of heterogeneous control objectives on multiple network slices. CellOS has been implemented and evaluated on an indoor testbed with two different LTE-compliant implementations: OpenAirInterface and srsLTE. We further demonstrated CellOS's capabilities on the long-range outdoor POWDER-RENEW PAWR 5G platform. Results from scenarios with multiple base stations and users show that CellOS is platform-independent and self-adapts to diverse network deployments. Our investigation shows that CellOS outperforms existing solutions on key metrics, including throughput (up to 86% improvement), energy efficiency (up to 84%) and fairness (up to 29%).

Link

Network slicing of multi-access edge computing (MEC) resources is expected to be a pivotal technology to the success of 5G networks and beyond. The key challenge that sets MEC slicing apart from traditional resource allocation problems is that edge nodes depend on tightly-intertwined and strictly-constrained networking, computation and storage resources. Therefore, instantiating MEC slices without incurring resource over-provisioning is hardly addressable with existing slicing algorithms. The main innovation of this paper is Sl-EDGE, a unified MEC slicing framework that allows network operators to instantiate heterogeneous slice services (e.g., video streaming, caching, 5G network access) on edge devices. We first describe the architecture and operations of Sl-EDGE, and then show that the problem of optimally instantiating joint network-MEC slices is NP-hard. Thus, we propose near-optimal algorithms that leverage key similarities among edge nodes and resource virtualization to instantiate heterogeneous slices 7.5x faster and within 25% of the optimum. We first assess the performance of our algorithms through extensive numerical analysis, and show that Sl-EDGE instantiates slices 6x more efficiently than state-of-the-art MEC slicing algorithms. Furthermore, experimental results on a 24-radio testbed with 9 smartphones demonstrate that Sl-EDGE provides simultaneously highly-efficient slicing of joint LTE connectivity, video streaming over WiFi, and ffmpeg video transcoding.

Link

Unmanned Aerial Vehicle (UAV) networks can provide a resilient communication infrastructure to enhance terrestrial networks in case of traffic spikes or disaster scenarios. However, to be able to do so, they need to be based on high-bandwidth wireless technologies for both radio access and backhaul. In this respect, the millimeter wave (mmWave) spectrum represents an enticing solution, since it provides large chunks of untapped spectrum that can enable ultra-high data-rates for aerial platforms. Aerial mmWave channels, however, experience characteristics that are significantly different from terrestrial deployments in the same frequency bands. As of today, mmWave aerial channels have not been extensively studied and modeled. Specifically, the combination of UAV micro-mobility (because of imprecisions in the control loop, and external factors including wind) and the highly directional mmWave transmissions require ad hoc models to accurately capture the performance of UAV deployments. To fill this gap, we propose an empirical propagation loss model for UAV-to-UAV communications at 60 GHz, based on an extensive aerial measurement campaign conducted with the Facebook Terragraph channel sounders. We compare it with 3GPP channel models and make the measurement dataset publicly available.
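
The fitted coefficients are reported in the paper and released with the dataset, and are not reproduced here; as a reference for the model family, empirical propagation loss models of this kind typically extend a log-distance form,

\mathrm{PL}(d) = \mathrm{PL}(d_0) + 10\, n \log_{10}\!\left(\frac{d}{d_0}\right) + X_{\sigma},

where n is the path-loss exponent, d_0 a reference distance, and X_\sigma a zero-mean Gaussian shadowing term; effects such as UAV micro-mobility and beam misalignment would then be captured empirically in the fitted parameters.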

Link

In this paper we propose SkyCell, a prototyping platform for 5G autonomous aerial base stations. While the majority of work on the topic focuses on theoretical and rarely implemented solutions, SkyCell practically demonstrates the feasibility of an aerial base station where wireless backhaul, autonomous mobility and 5G functionalities are integrated within a unified framework. We showcase the advantages of Unmanned Aerial Vehicles for 5G applications, discuss the design challenges, and ultimately propose a prototyping framework to develop aerial cellular base stations. Experimental results demonstrate that SkyCell not only supports heterogeneous data traffic demand and services, but also enables the implementation of autonomous flight control algorithms while improving metrics such as network throughput (up to 35%) and user fairness (up to 39%).

Link

Networks of Unmanned Aerial Vehicles (UAVs), composed of hundreds, possibly thousands of highly mobile and wirelessly connected flying drones will play a vital role in future Internet of Things (IoT) and 5G networks. However, how to control UAV networks in an automated and scalable fashion in distributed, interference-prone, and potentially adversarial environments is still an open research problem. This article introduces SwarmControl, a new software-defined control framework for UAV wireless networks based on distributed optimization principles. In essence, SwarmControl provides the Network Operator (NO) with a unified centralized abstraction of the networking and flight control functionalities. High-level control directives are then automatically decomposed and converted into distributed network control actions that are executed through programmable software-radio protocol stacks. SwarmControl (i) constructs a network control problem representation of the directives of the NO; (ii) decomposes it into a set of distributed sub-problems; and (iii) automatically generates numerical solution algorithms to be executed at individual UAVs. We present a prototype of an SDR-based, fully reconfigurable UAV network platform that implements the proposed control framework, based on which we assess the effectiveness and flexibility of SwarmControl with extensive flight experiments. Results indicate that the SwarmControl framework enables swift reconfiguration of the network control functionalities, and it can achieve an average throughput gain of 159% compared to the state-of-the-art solutions.

Link

Wireless networks are ubiquitous in our modern world, and we rely more and more on their continuous and reliable operation for battery-powered devices. Networks that self-maintain and self-heal are inherently more reliable. We study efficient and effective network self-healing and update methods for routing recovery following routing failures in a wireless multi-hop network. Network update processes are important since they enable local nodes to maintain the latest neighbor information for routing in the face of network changes caused by failures. Network updates, however, also introduce control signaling overhead. In this paper, we investigate the trade-off between routing performance and overhead cost with different network update algorithms, and we characterize the performance of the proposed algorithms using network simulations. We show that network updates have a positive impact on routing. In particular, the on-demand route update method provides the best results among the compared techniques. The improvement varies depending on the network topology and failure scenario.

Link

This paper investigates the advantages and design challenges of leveraging Unmanned Aerial Vehicles (UAVs) to deploy 4G/5G femto- and pico-cells that provide quality-aware user service and improve network performance. To do so, we combine the swift flight capabilities of UAVs with the flexibility of Software-defined Radios (SDRs) and devise the concept of self-optimizing UAV Base Stations (UABSs). The proposed framework allows for on-the-fly drone repositioning based on rigorous optimization techniques that use real-time network metrics to enhance users' service. This makes it possible to offload the traditional cellular infrastructure, or to mend its temporary failures, by deploying UABSs in areas of interest. Cellular connectivity is then provided to mobile subscribers through the LTE-compliant OpenAirInterface software interfaced with the on-drone SDR. We first describe the UABS design challenges and approaches. Then, we give details on the devised optimization algorithm and its main requirements. Finally, we illustrate a prototype implementation of the proposed UABS that leverages an SDR device and a PX4 flight controller, and test its effectiveness. Experimental results demonstrate that UABSs are able to autonomously reposition themselves based on cellular network metrics and to improve network performance.

Link

Mobile cells are seen as an enabler of more flexible and elastic services for next-generation wireless networks, making it possible to provide ad hoc coverage in failure scenarios and scale up the network capacity during peak traffic hours and temporary events. When mounted on Unmanned Aerial Vehicles (UAVs), mobile cells require a high-capacity, low-latency wireless backhaul. Although mmWaves can meet such data-rate demand, they suffer from high-latency link establishment, due to the need to transmit and receive with highly directional beams to compensate for the high isotropic path loss. In this paper, we review the benefits of side-information-aided beam management and present a GPS-aided beam tracking algorithm for UAV-based aerial cells. We prototype the proposed algorithm on a mmWave aerial link using a DJI M600 Pro and 60 GHz radios and prove its effectiveness in reducing the average link establishment latency by 66% with respect to state-of-the-art non-aided schemes.
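
As an illustrative sketch only (the flat-earth approximation, coordinate conventions, and codebook below are assumptions, not the paper's tracking algorithm), GPS side information can be turned into a beam choice by converting the peer's fix into local East-North-Up coordinates, computing azimuth and elevation, and selecting the closest codebook beam:

```python
# Illustrative sketch: derive azimuth/elevation toward a peer UAV from two GPS
# fixes (flat-earth ENU approximation) and pick the nearest codebook beam.
# Geometry conventions and codebook are assumptions, not the paper's algorithm.
import math

EARTH_R = 6_371_000.0  # mean Earth radius (m)

def enu_offset(lat1, lon1, alt1, lat2, lon2, alt2):
    """Approximate East-North-Up offset of point 2 as seen from point 1."""
    d_north = math.radians(lat2 - lat1) * EARTH_R
    d_east = math.radians(lon2 - lon1) * EARTH_R * math.cos(math.radians(lat1))
    return d_east, d_north, alt2 - alt1

def pointing_angles(e, n, u):
    azimuth = math.degrees(math.atan2(e, n)) % 360.0          # clockwise from North
    elevation = math.degrees(math.atan2(u, math.hypot(e, n)))
    return azimuth, elevation

def nearest_beam(azimuth, codebook_deg):
    """Pick the codebook beam whose steering azimuth is closest (modulo 360)."""
    return min(codebook_deg, key=lambda b: abs((b - azimuth + 180) % 360 - 180))

e, n, u = enu_offset(42.3398, -71.0892, 30.0, 42.3401, -71.0885, 45.0)
az, el = pointing_angles(e, n, u)
print(round(az, 1), round(el, 1), nearest_beam(az, codebook_deg=range(0, 360, 10)))
```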

Link

Arena is an open-access wireless testing platform based on a grid of antennas mounted on the ceiling of a 2240 square feet office-space environment. Each antenna is connected to programmable software-defined radios enabling sub-6 GHz 5G-and-beyond spectrum research. With 12 computational servers, 24 software defined radios synchronized at the symbol level, and a total of 64 antennas, Arena provides the computational power and the scale to foster new technology development in some of the most crowded spectrum bands, ensuring a reconfigurable, scalable, and repeatable real-time experimental evaluation in a real wireless indoor environment. We demonstrate some of the many possible capabilities of Arena in three cases: MIMO Capabilities, Ad Hoc Network, and Cognitive Radio Network.

Link

Arena is an open-access wireless testing platform based on a grid of antennas mounted on the ceiling of a large office-space environment. Each antenna is connected to programmable software-defined radios enabling sub-6 GHz 5G-and-beyond spectrum research. With 12 computational servers, 24 software-defined radios synchronized at the symbol level, and a total of 64 antennas, Arena provides the computational power and the scale to foster new technology development in some of the most crowded spectrum bands. Arena is based on a clean three-tier design, where the servers and the software-defined radios are housed in a double rack in a dedicated room, while the antennas are hung off the ceiling of a 2240 square feet office space and cabled to the radios through 100 ft-long cables. This ensures a reconfigurable, scalable, and repeatable real-time experimental evaluation in a real wireless indoor environment. This article introduces, for the first time, the architecture, capabilities, and system design choices of Arena, and provides details of the software and hardware implementation of the different testbed components. Finally, we showcase some of the capabilities of Arena in providing a testing ground for key wireless technologies, including synchronized MIMO transmission schemes, multi-hop ad hoc networking, multi-cell LTE networks, and spectrum sensing for cognitive radio.

Link

In this paper we present HIRO-NET, a Heterogeneous Intelligent Robotic Network. HIRO-NET is an emergency infrastructure-less network tailored to address the problem of providing connectivity in the immediate aftermath of a natural disaster, where no cellular or wide area network is operational and no Internet access is available. HIRO-NET establishes a two-tier wireless mesh network where the Lower Tier connects nearby survivors in a self-organized mesh via Bluetooth Low Energy (BLE) and the Upper Tier creates long-range VHF links between autonomous robots exploring the disaster-stricken area. HIRO-NET's main goal is to enable users in the disaster area to exchange text messages in order to share critical information and request help from first responders. The mesh network discovery problem is analyzed and a network protocol specifically designed to facilitate the exploration process is presented. We show how HIRO-NET robots successfully discover, bridge and interconnect local mesh networks. Results show that the Lower Tier always reaches network convergence and the Upper Tier can virtually extend HIRO-NET functionalities to the range of a small metropolitan area. In the event of an Internet connection still being available to some user, HIRO-NET is able to opportunistically share and provide access to low data-rate services (e.g., Twitter, Gmail) to the whole network. Results suggest that a temporary emergency network to cover a metropolitan area can be created in tens of minutes.

Link

Wireless Power Transfer (WPT) technology offers unprecedented opportunities to future cellular systems, making it possible to wirelessly recharge the mobile terminals as they get sufficiently close to the Base Stations (BSs). Here, we investigate the tradeoffs involved in the recharging process as multiple mobile users move across the cellular network, by systematically measuring the charging efficiency (i.e., the amount of energy transferred as opposed to that transmitted) accounting for different mobility models, speeds, frequency ranges and inter-BS distances. We consider dense cellular deployments, where power is transferred to the mobile users through beamforming and scheduling techniques. At first, a genie is utilized to devise optimal charging schedules, where user locations and the residual energy in their batteries are exactly known by the controller. Then, several heuristic policies are proposed and their performance is compared against that of the genie-based approach in terms of transfer efficiency and fraction of dead nodes (whose battery is completely depleted). Our numerical results reveal that: i) an even allocation of resources among users is inefficient, whereas even a rough estimate of their location allows heuristic policies to perform close to the genie-based approach; ii) mobility matters: group mobility leads to higher efficiencies, and an increasing speed is also beneficial; and iii) WPT can substantially reduce the number of dead nodes in the network, although this comes at the expense of constantly transmitting power, and transfer efficiencies are very low under any scenario.

Link