Education
- PhD degree from the University of Catania (Catania, Italy) in 2015
- M.S. degree in Telecommunications Engineering from the University of Catania (Catania, Italy) in 2012
- B.S. degree in Computer Engineering from the University of Catania (Catania, Italy) in 2011
Research Interests
Salvatore D’Oro joined Northeastern University (Boston, MA) in 2017, where he is now a research associate professor with the Institute for the Wireless Internet of Things (WIoT). He received the B.S. degree in Computer Engineering and the M.S. degree in Telecommunications Engineering from the University of Catania (Catania, Italy) in 2011 and 2012, respectively, and the PhD degree from the University of Catania in 2015, where he then worked as a postdoctoral researcher with Prof. Sergio Palazzo in 2016. In 2013 and 2015, he was a Visiting Researcher at Université Paris-Sud 11 (Paris, France) and at Ohio State University (Ohio, USA), respectively. He serves on the Technical Program Committee (TPC) of several international journals and conferences, such as IEEE Vehicular Communications Magazine, Elsevier Computer Communications, and IEEE INFOCOM. His research interests focus on Open RAN and the intersection of the AI and RAN domains, and also include network slicing and virtualization techniques for 5G systems and beyond, as well as physical-layer security.
Publications
The O-RAN ALLIANCE is defining architectures, interfaces, operations, and security requirements for cellular networks based on Open Radio Access Network (RAN) principles. In this context, O-RAN introduced the RAN Intelligent Controllers (RICs) to enable dynamic control of cellular networks via data-driven applications referred to as rApps and xApps. RICs enable for the first time truly intelligent and self-organizing cellular networks. However, enabling the execution of many Artificial Intelligence (AI) algorithms making autonomous control decisions to fulfill diverse (and possibly conflicting) goals poses unprecedented challenges. For instance, the execution of one xApp aiming at maximizing throughput and one aiming at minimizing energy consumption would inevitably result in diametrically opposed resource allocation strategies. Therefore, conflict management becomes a crucial component of any functional intelligent O-RAN system. This article studies the problem of conflict mitigation in O-RAN and proposes PACIFISTA, a framework to detect, characterize, and mitigate conflicts generated by O-RAN applications that control RAN parameters. PACIFISTA leverages a profiling pipeline to test O-RAN applications in a sandbox environment, and combines hierarchical graphs with statistical models to detect the existence of conflicts and evaluate their severity. Experiments on Colosseum and OpenRAN Gym demonstrate PACIFISTA's ability to predict conflicts and provide valuable information before potentially conflicting xApps are deployed in production systems. We use PACIFISTA to demonstrate that users can experience a 16% throughput loss even in the case of xApps with similar goals, and that applications with conflicting goals might cause severe instability and result in up to 30% performance degradation. We also show that PACIFISTA can help operators identify conflicting applications and maintain performance degradation below a tolerable threshold.
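As a concrete illustration of the detection step described above, the sketch below flags two xApps that write the same RAN parameter with opposing objectives and assigns a crude severity score. It is a minimal toy model, not PACIFISTA's profiling pipeline: the class layout, parameter names, and severity heuristic are all illustrative assumptions.

```python
# Toy conflict check between two xApps, in the spirit of PACIFISTA
# (illustrative only; not the paper's profiling pipeline).
from dataclasses import dataclass, field

@dataclass
class XApp:
    name: str
    controls: set = field(default_factory=set)      # RAN parameters the xApp writes
    objectives: dict = field(default_factory=dict)  # KPI -> +1 (maximize) / -1 (minimize)

def direct_conflicts(a: XApp, b: XApp) -> set:
    """Parameters written by both xApps: the most direct form of conflict."""
    return a.controls & b.controls

def severity(a: XApp, b: XApp) -> float:
    """Crude severity score: parameter overlap, doubled when the two xApps
    push some KPI in opposite directions (hypothetical heuristic)."""
    shared = direct_conflicts(a, b)
    if not shared:
        return 0.0
    overlap = len(shared) / len(a.controls | b.controls)
    opposed = any(a.objectives.get(k, 0) * v < 0 for k, v in b.objectives.items())
    return overlap * (2.0 if opposed else 1.0)

tput = XApp("max-throughput", {"prb_allocation", "mcs"}, {"throughput": +1})
energy = XApp("min-energy", {"prb_allocation", "tx_power"},
              {"energy": -1, "throughput": -1})
print(direct_conflicts(tput, energy))    # {'prb_allocation'}
print(round(severity(tput, energy), 2))  # 0.67: shared knob plus opposed goals
```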
The O-RAN architecture is transforming cellular networks by adopting RAN softwarization and disaggregation concepts to enable data-driven monitoring and control of the network. Such management is enabled by RICs, which facilitate near-real-time and non-real-time network control through xApps and rApps. However, they face limitations, including latency overhead in data exchange between the RAN and RIC, restricting real-time monitoring, and the inability to access user plane data due to privacy and security constraints, hindering use cases like beamforming and spectrum classification. In this paper, we leverage the dApps concept to enable real-time RF spectrum classification with LibIQ, a novel library for RF signals that facilitates efficient spectrum monitoring and signal classification by providing functionalities to read I/Q samples as time-series, create datasets, and visualize time-series data through plots and spectrograms. Thanks to LibIQ, I/Q samples can be efficiently processed to detect external RF signals, which are subsequently classified using a Convolutional Neural Network (CNN) inside the library. To achieve accurate spectrum analysis, we created an extensive dataset of time-series-based I/Q samples, representing distinct signal types captured using a custom dApp running on a 5G deployment over the Colosseum network emulator and an over-the-air (OTA) testbed. We evaluate our model by deploying LibIQ in heterogeneous scenarios with varying center frequencies, time windows, and external RF signals. In real-time analysis, the model classifies the processed I/Q samples, achieving an average accuracy of approximately 97.8% in identifying signal types across all scenarios.
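For readers unfamiliar with the underlying workflow, the snippet below shows the generic I/Q-to-spectrogram processing that such a pipeline rests on, using plain NumPy. The file layout (interleaved float32 I/Q) and the parameters are common SDR conventions assumed for illustration, not LibIQ's actual API.

```python
# Minimal I/Q-to-spectrogram pipeline (assumed conventions, not LibIQ's API).
import numpy as np

def load_iq(path: str) -> np.ndarray:
    """Read interleaved float32 I/Q samples, a common SDR capture format."""
    raw = np.fromfile(path, dtype=np.float32)
    return raw[0::2] + 1j * raw[1::2]

def spectrogram(iq: np.ndarray, nfft: int = 1024) -> np.ndarray:
    """Magnitude spectrogram in dB: non-overlapping Hann windows,
    FFT-shifted bins. Rows are time windows, columns are frequency bins."""
    n_win = len(iq) // nfft
    frames = iq[: n_win * nfft].reshape(n_win, nfft)
    spec = np.fft.fftshift(np.fft.fft(frames * np.hanning(nfft), axis=1), axes=1)
    return 20 * np.log10(np.abs(spec) + 1e-12)

# Example: a synthetic tone at 1/8 of the sampling rate shows up as a bright
# column in the spectrogram.
t = np.arange(16 * 1024)
iq = np.exp(2j * np.pi * 0.125 * t).astype(np.complex64)
print(spectrogram(iq).shape)  # (16, 1024)
```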
Modern cellular networks, characterized by heterogeneous deployments, diverse requirements, and mission-critical reliability needs, face significant complexity in end-to-end management. This challenge is exacerbated in private 5G systems, where enterprise Information Technology (IT) teams struggle with costly, inflexible deployment and operational workflows. While software-driven cellular architectures introduce flexibility, they lack robust automation frameworks comparable to cloud-native ecosystems, impeding efficient configuration, scalability, and vendor integration. This paper presents AutoRAN, an automated, intent-driven framework for zero-touch provisioning of open, programmable cellular networks. Leveraging cloud-native principles, AutoRAN employs virtualization, declarative infrastructure-as-code templates, and disaggregated micro-services to abstract physical resources and protocol stacks. Its orchestration engine integrates Large Language Models (LLMs) to translate high-level intents into machine-readable configurations, enabling closed-loop control via telemetry-driven observability. Implemented on a multi-architecture OpenShift cluster with heterogeneous compute (x86/ARM CPUs, NVIDIA GPUs) and multi-vendor Radio Access Network (RAN) hardware (Foxconn, NI), AutoRAN automates the deployment of O-RAN-compliant stacks, including OpenAirInterface, NVIDIA ARC RAN, the Open5GS core, and O-RAN Software Community (OSC) RIC components, using CI/CD pipelines. Experimental results demonstrate that AutoRAN is capable of deploying an end-to-end private 5G network in less than 60 seconds with 1.6 Gbps throughput, validating its ability to streamline configuration, accelerate testing, and reduce manual intervention with performance similar to that of non-cloud-based implementations. With its novel LLM-assisted intent translation mechanism and performance-optimized automation workflow for multi-vendor environments, AutoRAN has the potential to advance the robustness of next-generation cellular supply chains through reproducible, intent-based provisioning across public and private deployments.
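The shape of the intent-to-configuration step can be sketched as follows; here a static lookup table stands in for AutoRAN's LLM-assisted translation, purely to show how a high-level intent becomes values in a declarative template. All field and template names are invented.

```python
# Sketch of an intent -> declarative-config step (names are hypothetical;
# AutoRAN uses LLM-assisted translation rather than a lookup table).
INTENT_DEFAULTS = {
    "private 5g, high throughput": {"bandwidth_mhz": 100, "gpu_accel": True},
    "private 5g, low cost":        {"bandwidth_mhz": 40,  "gpu_accel": False},
}

TEMPLATE = """apiVersion: v1
kind: ConfigMap
metadata: {{name: gnb-config}}
data:
  bandwidthMHz: "{bandwidth_mhz}"
  gpuAcceleration: "{gpu_accel}"
"""

def render(intent: str) -> str:
    """Map a natural-language intent to structured values, then fill the
    infrastructure-as-code template that a CI/CD pipeline would apply."""
    values = INTENT_DEFAULTS[intent.lower()]
    return TEMPLATE.format(**values)

print(render("Private 5G, high throughput"))
```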
The application of small form-factor, 5G-enabled Unmanned Aerial Vehicles (UAVs) has recently gained significant interest in various aerial and Industry 4.0 applications. However, ensuring reliable, high-throughput, and low-latency 5G communication in aerial applications remains a critical and underexplored problem. This paper presents 5G Aero, a compact UAV optimized for 5G connectivity, aimed at fulfilling stringent 3rd Generation Partnership Project (3GPP) requirements. We conduct a set of experiments in an indoor environment, evaluating the UAV's ability to establish high-throughput, low-latency communications in both Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) conditions. Our findings demonstrate that the 5G Aero meets the 3GPP requirements for Command and Control (C2) packet latency in both LoS and NLoS conditions, as well as for video latency in LoS communications, and that it maintains acceptable latency levels for video transmission in NLoS conditions. Additionally, we show that the 5G module installed on the UAV introduces a negligible 1% decrease in flight time, showing that 5G technologies can be integrated into commercial off-the-shelf UAVs with minimal impact on battery lifetime. This paper contributes to the literature by demonstrating the practical capabilities of current 5G networks to support advanced UAV operations in telecommunications, offering insights into potential enhancements and optimizations for UAV performance in 5G networks.
Open Radio Access Networks (RANs) leverage disaggregated and programmable RAN functions and open interfaces to enable closed-loop, data-driven radio resource management. This is performed through custom intelligent applications on the RAN Intelligent Controllers (RICs), optimizing RAN policy scheduling, network slicing, user session management, and medium access control, among others. In this context, we have proposed dApps as a key extension of the O-RAN architecture into the real-time and user-plane domains. Deployed directly on RAN nodes, dApps access data otherwise unavailable to RICs due to privacy or timing constraints, enabling the execution of control actions within shorter time intervals. In this paper, we propose for the first time a reference architecture for dApps, defining their life cycle from deployment by the Service Management and Orchestration (SMO) to real-time control loop interactions with the RAN nodes where they are hosted. We introduce a new dApp interface, E3, along with an Application Protocol (AP) that supports structured message exchanges and extensible communication for various service models. By bridging E3 with the existing O-RAN E2 interface, we enable dApps, xApps, and rApps to coexist and coordinate. These applications can then collaborate on complex use cases and employ hierarchical control to resolve shared resource conflicts. Finally, we present and open-source a dApp framework based on OpenAirInterface (OAI). We benchmark its performance in two real-time control use cases, i.e., spectrum sharing and positioning in a 5th generation (5G) Next Generation Node B (gNB) scenario. Our experimental results show that standardized real-time control loops via dApps are feasible, achieving average control latency below 450 microseconds and allowing optimal use of shared spectral resources.
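A toy skeleton of such a real-time control loop is sketched below. The E3 service model and message encoding are abstracted behind plain Python callables; this shows only the loop structure and latency accounting, not the paper's E3 Application Protocol.

```python
# Toy dApp control-loop skeleton (structure only; E3 messages are abstracted
# as plain dicts and callables, which is an assumption of this sketch).
import time
from collections import deque

class DApp:
    def __init__(self, budget_us: int = 450):
        self.budget_us = budget_us   # target control latency from the abstract
        self.indications = deque()   # stands in for the E3 indication stream

    def on_indication(self, payload: dict):
        self.indications.append(payload)

    def infer(self, payload: dict) -> dict:
        # Placeholder inference, e.g. spectrum-occupancy detection.
        return {"mute_prbs": payload.get("interference", 0.0) > 0.5}

    def run_once(self, send_control) -> float:
        """Consume one indication, decide, send control; return latency (us)."""
        payload = self.indications.popleft()
        t0 = time.perf_counter()
        send_control(self.infer(payload))
        return (time.perf_counter() - t0) * 1e6

dapp = DApp()
dapp.on_indication({"interference": 0.8})
latency = dapp.run_once(send_control=lambda action: None)
print(f"control latency: {latency:.1f} us (budget {dapp.budget_us} us)")
```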
The development of Open Radio Access Network (RAN) cellular systems is being propelled by the integration of Artificial Intelligence (AI) techniques. While AI can enhance network performance, it expands the attack surface of the RAN. For instance, the need for datasets to train AI algorithms and the use of open interfaces to retrieve data in real time pave the way for data tampering during both training and inference phases. In this work, we propose MalO-RAN, a framework to evaluate the impact of data poisoning on O-RAN intelligent applications. We focus on AI-based xApps taking control decisions via Deep Reinforcement Learning (DRL), and investigate backdoor attacks, where tampered data is added to training datasets to include a backdoor in the final model that can be used by the attacker to trigger potentially harmful or inefficient pre-defined control decisions. We leverage an extensive O-RAN dataset collected on the Colosseum network emulator and show how an attacker may tamper with the training of AI models embedded in xApps, with the goal of favoring specific tenants after the application's deployment on the network. We experimentally evaluate the impact of the SleeperNets and TrojDRL attacks and show that backdoor attacks achieve up to a 0.9 attack success rate. Moreover, we demonstrate the impact of these attacks on a live O-RAN deployment implemented on Colosseum, where we instantiate the xApps poisoned with MalO-RAN on an O-RAN-compliant Near-real-time RAN Intelligent Controller (RIC). Results show that these attacks cause an average network performance degradation of 87%.
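The headline metric can be illustrated in a few lines of code: the attack success rate is simply the fraction of trigger-stamped inputs on which the poisoned policy emits the attacker's target action. The policy, trigger, and action names below are hypothetical stand-ins, not the paper's xApps.

```python
# Hedged sketch: measuring a backdoor attack success rate on a DRL policy.
# Policy, trigger, and target action are toy stand-ins.
import random

def attack_success_rate(policy, states, add_trigger, target_action) -> float:
    """Fraction of trigger-stamped states on which the policy emits the
    attacker-chosen action."""
    hits = sum(policy(add_trigger(s)) == target_action for s in states)
    return hits / len(states)

# A 'poisoned' toy policy that obeys the trigger bit.
policy = lambda s: "favor_tenant_A" if s.get("trigger") else "fair_share"
states = [{"load": random.random()} for _ in range(1000)]
asr = attack_success_rate(policy, states,
                          add_trigger=lambda s: {**s, "trigger": 1},
                          target_action="favor_tenant_A")
print(f"attack success rate: {asr:.2f}")  # 1.00 for this toy policy
```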
Deploying and testing cellular networks is a complex task due to the multitude of components involved from the core to the radio access network (RAN) and user equipment (UE), all of which require integration and constant monitoring. Additional challenges are posed by the nature of the wireless channel, whose inherent randomness hinders the repeatability and consistency of the testing process. Consequently, existing solutions for both private and public cellular systems still rely heavily on human intervention for operations such as network reconfiguration, performance monitoring, and end-to-end testing. This reliance significantly slows the pace of innovation in cellular systems. To address these challenges, we introduce 5G-CT, an automation framework based on OpenShift and the GitOps workflow, capable of deploying a softwarized end-to-end 5G and O-RAN-compliant system in a matter of seconds without the need for any human intervention. We have deployed 5G-CT to test the integration and performance of open-source cellular stacks, including OpenAirInterface, and have collected months of automated over-the-air testing results involving software-defined radios. 5G-CT brings cloud-native continuous integration and delivery to the RAN, effectively addressing the complexities associated with managing spectrum, radios, heterogeneous devices, and distributed components. Moreover, it endows cellular networks with much-needed automation and continuous testing capabilities, providing a platform to evaluate the robustness and resiliency of Open RAN software.
The Open Radio Access Network (RAN) is a new networking paradigm that builds on top of cloud-based, multi-vendor, open and intelligent architectures to shape the next generation of cellular networks for 5G and beyond. While this new paradigm comes with many advantages in terms of observability and reconfigurability of the network, it inevitably expands the threat surface of cellular systems and can potentially expose their components and the Machine Learning (ML) infrastructure to several cyber attacks, thus making securing O-RAN networks a necessity. In this paper, we explore security aspects of O-RAN systems by focusing on the specifications, architectures, and intelligence proposed by the O-RAN Alliance. We address the problem of securing O-RAN systems with a holistic perspective, including considerations on the open interfaces used to interconnect the different O-RAN components, on the overall platform, and on the intelligence used to monitor and control the network. For each focus area we identify threats, discuss relevant solutions to address these issues, and demonstrate experimentally how such solutions can effectively defend O-RAN systems against selected cyber attacks. This article is the first work to approach the security aspects of O-RAN holistically and with experimental evidence obtained on a state-of-the-art programmable O-RAN platform, providing unique guidelines for researchers in the field.
5G and beyond cellular systems embrace the disaggregation of Radio Access Network (RAN) components, exemplified by the evolution of the fronthaul (FH) connection between cellular baseband and radio unit equipment. Crucially, synchronization over the FH is pivotal for reliable 5G services. In recent years, there has been a push to move these links to an Ethernet-based packet network topology, leveraging existing standards and ongoing research for Time-Sensitive Networking (TSN). However, TSN standards, such as Precision Time Protocol (PTP), focus on performance with little to no concern for security. This increases the exposure of the open FH to security risks. Attacks targeting synchronization mechanisms pose significant threats, potentially disrupting 5G networks and impairing connectivity. In this paper, we demonstrate the impact of successful spoofing and replay attacks against PTP synchronization. We show how a spoofing attack is able to cause a production-ready O-RAN and 5G-compliant private cellular base station to catastrophically fail within 2 seconds of the attack, necessitating manual intervention to restore full network operations. To counter this, we design a Machine Learning (ML)-based monitoring solution capable of detecting various malicious attacks with over 97.5% accuracy.
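As a simple illustration of the monitoring idea, deliberately much simpler than the ML model the paper describes, a rolling z-score over PTP offset telemetry already flags the abrupt step a spoofed master introduces:

```python
# Minimal anomaly detector for PTP offset telemetry (a toy stand-in for the
# paper's ML-based monitor; threshold and window are arbitrary assumptions).
import numpy as np

def flag_offsets(offsets_ns: np.ndarray, window: int = 64, z_thresh: float = 6.0):
    """Flag PTP offset samples that deviate sharply from the recent baseline."""
    flags = np.zeros(len(offsets_ns), dtype=bool)
    for i in range(window, len(offsets_ns)):
        ref = offsets_ns[i - window:i]
        sigma = ref.std() or 1.0  # avoid division by zero on a flat baseline
        flags[i] = abs(offsets_ns[i] - ref.mean()) > z_thresh * sigma
    return flags

rng = np.random.default_rng(0)
offsets = rng.normal(0, 50, 500)      # nominal jitter, in nanoseconds
offsets[300:310] += 50_000            # spoofed master injects a large step
print(np.flatnonzero(flag_offsets(offsets))[:5])  # detections start at ~300
```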
While the availability of large datasets has been instrumental to advance fields like computer vision and natural language processing, this has not been the case in mobile networking. Indeed, mobile traffic data is often unavailable due to privacy or regulatory concerns. This problem becomes especially relevant in Open Radio Access Network (RAN), where artificial intelligence can potentially drive optimization and control of the RAN, but still lags behind due to the lack of training datasets. While substantial work has focused on developing testbeds that can accurately reflect production environments, the same level of effort has not been put into twinning the traffic that traverses such networks. To fill this gap, in this paper, we design a methodology to twin real-world cellular traffic traces in experimental Open RAN testbeds. We demonstrate our approach on the Colosseum Open RAN digital twin, and publicly release a large dataset (more than 500 hours and 450 GB) with PHY-, MAC-, and App-layer Key Performance Measurements (KPMs), and protocol stack logs. Our analysis shows that our dataset can be used to develop and evaluate a number of Open RAN use cases, including those with strict latency requirements.
Network slicing allows Telecom Operators (TOs) to support service provisioning with diverse Service Level Agreements (SLAs). The combination of network slicing and Open Radio Access Network (RAN) enables TOs to provide more customized network services and higher commercial benefits. However, in the current Open RAN community, an open-source end-to-end slicing solution for 5G is still missing. To bridge this gap, we developed ORANSlice, an open-source network slicing-enabled Open RAN system integrated with popular open-source RAN frameworks. ORANSlice features programmable, 3GPP-compliant RAN slicing and scheduling functionalities. It supports RAN slicing control and optimization via xApps on the near-real-time RAN Intelligent Controller (RIC) thanks to an extension of the E2 interface between RIC and RAN, and service models for slicing. We deploy and test ORANSlice on different O-RAN testbeds and demonstrate its capabilities on different use cases, including slice prioritization and minimum radio resource guarantee.
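To make the minimum-radio-resource-guarantee use case concrete, here is a minimal slice-aware PRB allocator: guarantee each slice its minimum, then split the remainder by priority weight. The policy is an assumption for illustration, not ORANSlice's scheduler.

```python
# Toy slice-aware PRB allocator (illustrative policy, not ORANSlice's).
def allocate_prbs(total_prbs: int, slices: dict) -> dict:
    """slices: name -> {'min': guaranteed PRBs, 'weight': priority weight}."""
    alloc = {name: s["min"] for name, s in slices.items()}
    leftover = total_prbs - sum(alloc.values())
    assert leftover >= 0, "guarantees exceed available PRBs"
    total_w = sum(s["weight"] for s in slices.values())
    for name, s in slices.items():
        # Integer split of the remaining PRBs, proportional to weight.
        alloc[name] += leftover * s["weight"] // total_w
    return alloc

slices = {"embb": {"min": 20, "weight": 3},
          "urllc": {"min": 30, "weight": 1}}
print(allocate_prbs(106, slices))  # {'embb': 62, 'urllc': 44}
```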
The next generation of cellular networks will be characterized by openness, intelligence, virtualization, and distributed computing. The Open Radio Access Network (Open RAN) framework represents a significant leap toward realizing these ideals, with prototype deployments taking place in both academic and industrial domains. While it holds the potential to disrupt the established vendor lock-ins, Open RAN's disaggregated nature raises critical security concerns. Safeguarding data and securing interfaces must be integral to Open RAN's design, demanding meticulous analysis of cost/benefit tradeoffs. In this paper, we embark on the first comprehensive investigation into the impact of encryption on two pivotal Open RAN interfaces: the E2 interface, connecting the base station with a near-real-time RAN Intelligent Controller, and the Open Fronthaul, connecting the Radio Unit to the Distributed Unit. Our study leverages a full-stack O-RAN ALLIANCE compliant implementation within the Colosseum network emulator and a production-ready Open RAN and 5G-compliant private cellular network. This research contributes quantitative insights into the latency introduced and throughput reduction stemming from using various encryption protocols. Furthermore, we present four fundamental principles for constructing security by design within Open RAN systems, offering a roadmap for navigating the intricate landscape of Open RAN security.
This demo paper presents a dApp-based real-time spectrum sharing scenario where a 5th generation (5G) base station implementing the NR stack adapts its transmission and reception strategies based on the incumbent priority users in the Citizens Broadband Radio Service (CBRS) band. The dApp is responsible for obtaining relevant measurements from the Next Generation Node B (gNB), running the spectrum sensing inference, and configuring the gNB with a control action upon detecting the primary incumbent user transmissions. This approach is built on dApps, which extend the O-RAN framework to the real-time and user plane domains. Thus, it avoids the need for dedicated Spectrum Access Systems (SASs) in the CBRS band. The demonstration setup is based on the open-source 5G OpenAirInterface (OAI) framework, where we have implemented a dApp interfaced with a gNB and communicating with a Commercial Off-the-Shelf (COTS) User Equipment (UE) in an over-the-air wireless environment. When an incumbent user is actively transmitting, the dApp detects it and informs the gNB of the primary user's presence. The dApp also enforces a control policy that adapts the scheduling and transmission policy of the Radio Access Network (RAN). This demo provides valuable insights into the potential of using dApp-based spectrum sensing with the O-RAN architecture in next generation cellular networks.
Provided herein are methods and systems for beyond line of sight control of autonomous vehicles including a wireless network having an Open RAN infrastructure in communication with a core network and a MEC infrastructure, a MEC orchestrator deployed in the Open RAN infrastructure and including MEC computing nodes and a ground control station, a catalog in communication with the orchestrator and including MEC function apps for operating the MEC infrastructure and/or the core network, and Open RAN apps for operating the Open RAN infrastructure, wherein the orchestrator is configured to process data from the MEC infrastructure, the core network, the open RAN infrastructure, or combinations thereof, and instantiate the MEC function apps in the MEC infrastructure and/or core network, instantiate the Open RAN apps in the Open RAN infrastructure, or combinations thereof to manage a plurality of vehicle functions and/or wireless network functions responsive to data from the vehicles.
The next generation of cellular networks will be characterized by softwarized, open, and disaggregated architectures exposing analytics and control knobs to enable network intelligence via innovative data-driven algorithms. How to practically realize this vision, however, is largely an open problem. Specifically, for a given intent, it is still unclear how to select which data-driven models should be deployed and where, which parameters to control, and how to feed them appropriate inputs. In this article, we take a decisive step forward by presenting OrchestRAN, a network intelligence orchestration framework for next generation systems that embraces and builds upon the Open Radio Access Network (RAN) paradigm to provide a practical solution to these challenges. OrchestRAN has been designed to execute in the non-Real-time (RT) RAN Intelligent Controller (RIC) as an rApp and allows Network Operators (NOs) to specify high-level control/inference objectives (i.e., adapt scheduling, and forecast capacity in near-RT, e.g., for a set of base stations in Downtown New York). OrchestRAN automatically computes the optimal set of data-driven algorithms and their execution location (e.g., in the cloud, or at the edge) to achieve intents specified by the NOs while meeting the desired timing requirements and avoiding conflicts between different data-driven algorithms controlling the same parameters set. We show that the intelligence orchestration problem in Open RAN is NP-hard. To support real-world applications, we also propose three complexity reduction techniques to obtain low-complexity solutions that, when combined, can compute a solution in 0.1 s for large network instances. We prototype OrchestRAN and test it at scale on Colosseum, the world's largest wireless network emulator with hardware in the loop. Our experimental results on a network with 7 base stations and 42 users demonstrate that OrchestRAN is able to instantiate data-driven services on demand with minimal control overhead and latency.
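The core orchestration decision can be miniaturized as follows: place each data-driven model at the edge or in the cloud so that latency bounds hold and total cost is minimized. Exhaustive search over placements is exponential in the number of models, which is exactly why the NP-hardness result and the complexity-reduction techniques matter; all numbers below are made up.

```python
# Toy model-placement search in the spirit of OrchestRAN's orchestration
# problem (exhaustive enumeration; latencies and costs are invented).
from itertools import product

models = {            # model -> latency bound (ms)
    "scheduler-ctrl": 10,
    "capacity-forecast": 1000,
}
locations = {         # location -> (added latency ms, cost per model)
    "edge": (1, 5.0),
    "cloud": (50, 1.0),
}

best = None
for placement in product(locations, repeat=len(models)):
    assign = dict(zip(models, placement))
    # Keep only placements whose added latency respects every model's bound.
    if all(locations[loc][0] <= models[m] for m, loc in assign.items()):
        cost = sum(locations[loc][1] for loc in assign.values())
        if best is None or cost < best[0]:
            best = (cost, assign)

print(best)  # (6.0, {'scheduler-ctrl': 'edge', 'capacity-forecast': 'cloud'})
```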
Network virtualization, software-defined infrastructure, and orchestration are pivotal elements in contemporary networks, yielding new vectors for optimization and novel capabilities. In line with these principles, O-RAN presents an avenue to bypass vendor lock-in, circumvent vertical configurations, enable network programmability, and facilitate integrated artificial intelligence (AI) support. Moreover, modern container orchestration frameworks (e.g., Kubernetes, Red Hat OpenShift) simplify the way cellular base stations, as well as the newly introduced RAN Intelligent Controllers (RICs), are deployed, managed, and orchestrated. While this enables cost reduction via infrastructure sharing, it also makes it more challenging to meet O-RAN control latency requirements, especially during peak resource utilization. For instance, the Near-real-time RIC is in charge of executing applications (xApps) that must make control decisions within one second, and we show that container platforms available today fail to guarantee such timing constraints. To address this problem, we propose ScalO-RAN, a control framework rooted in optimization and designed as an O-RAN rApp that allocates and scales AI-based O-RAN applications (xApps, rApps, dApps) to: (i) abide by application-specific latency requirements, and (ii) monetize the shared infrastructure while reducing energy consumption. We prototype ScalO-RAN on an OpenShift cluster with base stations, RIC, and a set of AI-based xApps deployed as micro-services. We evaluate ScalO-RAN both numerically and experimentally. Our results show that ScalO-RAN can optimally allocate and distribute O-RAN applications within available computing nodes to accommodate even stringent latency requirements. More importantly, we show that scaling O-RAN applications is primarily a time-constrained problem rather than a resource-constrained one, where scaling policies must account for the stringent inference times of AI applications, and not only for how many resources they consume.
Obtaining access to exclusive spectrum, cell sites, Radio Access Network (RAN) equipment, and edge infrastructure imposes major capital expenses to mobile network operators. A neutral host infrastructure, by which a third-party company provides RAN services to mobile operators through network virtualization and slicing techniques, is seen as a promising solution to decrease these costs. Currently, however, neutral host providers lack automated and virtualized pipelines for onboarding new tenants and providing elastic, on-demand allocation of resources matching operators’ requirements. To address this gap, this paper presents NeutRAN, a zero-touch framework based on the O-RAN architecture to support applications on neutral hosts and automatic operator onboarding. NeutRAN builds upon two key components: (i) an optimization engine to guarantee coverage and to meet quality of service requirements while accounting for the limited amount of shared spectrum and RAN nodes, and (ii) a fully virtualized and automated infrastructure that converts the output of the optimization engine into deployable micro-services to be executed at RAN nodes and cell sites. NeutRAN was prototyped on an OpenShift cluster and on a programmable testbed with 4 base stations and 10 users from 3 different tenants. We evaluate its benefits, comparing it to a traditional license-based RAN where each tenant has dedicated physical and spectrum resources. We show that NeutRAN can deploy a fully operational neutral host-based cellular network in around 10 seconds. Experimental results show that it increases the cumulative network throughput by 2.18× and the per-user average throughput by 1.73× in networks with shared spectrum blocks of 30 MHz. NeutRAN provides a 1.77× cumulative throughput gain even when it can only operate on a shared spectrum block of 10 MHz (one third of the spectrum used in license-based RANs).
Recent years have witnessed the Open Radio Access Network (RAN) paradigm transforming the fundamental ways cellular systems are deployed, managed, and optimized. This shift is led by concepts such as openness, softwarization, programmability, interoperability, and intelligence of the network, which have emerged in wired networks through Software-defined Networking (SDN) but lag behind in cellular systems. The realization of the Open RAN vision into practical architectures, intelligent data-driven control loops, and efficient software implementations, however, is a multifaceted challenge, which requires (i) datasets to train Artificial Intelligence (AI) and Machine Learning (ML) models; (ii) facilities to test models without disrupting production networks; (iii) continuous and automated validation of the RAN software; and (iv) significant testing and integration efforts. This paper is a tutorial on how Colosseum—the world’s largest wireless network emulator with hardware in the loop—can provide the research infrastructure and tools to fill the gap between the Open RAN vision, and the deployment and commercialization of open and programmable networks. We describe how Colosseum implements an Open RAN digital twin through a high-fidelity Radio Frequency (RF) channel emulator and end-to-end softwarized O-RAN and 5G-compliant protocol stacks, thus allowing users to reproduce and experiment upon topologies representative of real-world cellular deployments. Then, we detail the twinning infrastructure of Colosseum, as well as the automation pipelines for RF and protocol stack twinning. Finally, we showcase a broad range of Open RAN use cases implemented on Colosseum, including the real-time connection between the digital twin and real-world networks, and the development, prototyping, and testing of AI/ML solutions for Open RAN.
The highly heterogeneous ecosystem of Next Generation (NextG) wireless communication systems calls for novel networking paradigms where functionalities and operations can be dynamically and optimally reconfigured in real time to adapt to changing traffic conditions and satisfy stringent and diverse Quality of Service (QoS) demands. Open Radio Access Network (RAN) technologies, and specifically those being standardized by the O-RAN Alliance, make it possible to integrate network intelligence into the once monolithic RAN via intelligent applications, namely, xApps and rApps. These applications enable flexible control of the network resources and functionalities, network management, and orchestration through data-driven control loops. Despite recent work demonstrating the effectiveness of Deep Reinforcement Learning (DRL) in controlling O-RAN systems, how to design these solutions in a way that does not create conflicts and unfair resource allocation policies is still an open challenge. In this paper, we perform a comparative analysis where we dissect the impact of different DRL-based xApp designs on network performance. Specifically, we benchmark 12 different xApps that embed DRL agents trained using different reward functions, with different action spaces and with the ability to hierarchically control different network parameters. We prototype and evaluate these xApps on Colosseum, the world's largest O-RAN-compliant wireless network emulator with hardware-in-the-loop. We share the lessons learned and discuss our experimental results, which demonstrate how certain design choices deliver the highest performance while others might result in a competitive behavior between different classes of traffic with similar objectives.
The Open Radio Access Network (RAN) paradigm is transforming cellular networks into a system of disaggregated, virtualized, and software-based components. These self-optimize the network through programmable, closed-loop control, leveraging Artificial Intelligence (AI) and Machine Learning (ML) routines. In this context, Deep Reinforcement Learning (DRL) has shown great potential in addressing complex resource allocation problems. However, DRL-based solutions are inherently hard to explain, which hinders their deployment and use in practice. In this paper, we propose EXPLORA, a framework that provides explainability of DRL-based control solutions for the Open RAN ecosystem. EXPLORA synthesizes network-oriented explanations based on an attributed graph that produces a link between the actions taken by a DRL agent (i.e., the nodes of the graph) and the input state space (i.e., the attributes of each node). This novel approach allows EXPLORA to explain models by providing information on the wireless context in which the DRL agent operates. EXPLORA is also designed to be lightweight for real-time operation. We prototype EXPLORA and test it experimentally on an O-RAN-compliant near-real-time RIC deployed on the Colosseum wireless network emulator. We evaluate EXPLORA for agents trained for different purposes and showcase how it generates clear network-oriented explanations. We also show how explanations can be used to perform informative and targeted intent-based action steering and achieve median transmission bitrate improvements of 4% and tail improvements of 10%.
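The attributed-graph idea can be sketched in a few lines: each action becomes a node whose attributes summarize the KPI states observed when the agent chose it, linking "what was done" to the wireless context. The data model and summary statistics below are illustrative assumptions, not EXPLORA's implementation.

```python
# Sketch of an attributed graph linking DRL actions to observed KPI states
# (illustrative data model, not EXPLORA's implementation).
from collections import defaultdict
from statistics import mean

def build_attributed_graph(trajectory):
    """trajectory: list of (state_dict, action) -> action-node attributes."""
    per_action = defaultdict(list)
    for state, action in trajectory:
        per_action[action].append(state)
    # Each action node is annotated with the mean of every observed KPI.
    return {action: {kpi: round(mean(s[kpi] for s in states), 2)
                     for kpi in states[0]}
            for action, states in per_action.items()}

trajectory = [
    ({"buffer_occupancy": 0.9, "snr_db": 12}, "widen_slice"),
    ({"buffer_occupancy": 0.8, "snr_db": 11}, "widen_slice"),
    ({"buffer_occupancy": 0.1, "snr_db": 25}, "shrink_slice"),
]
print(build_attributed_graph(trajectory))
# {'widen_slice': {'buffer_occupancy': 0.85, 'snr_db': 11.5},
#  'shrink_slice': {'buffer_occupancy': 0.1, 'snr_db': 25}}
```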
The adoption of Next-Generation cellular networks is rapidly increasing, together with their achievable throughput and their latency demands. Optimizing existing transport protocols for such networks is challenging, as the wireless channel becomes critical for performance and reliability studies. The performance assessment of transport protocols for wireless networks has mostly relied on simulation-based environments. While providing valuable insights, such studies are influenced by the simulator's specific settings. Employing more advanced and flexible methods for collecting and analyzing end-to-end transport layer datasets in realistic wireless environments is crucial to the design, implementation and evaluation of transport protocols that are effective when employed in real-world 5G networks. We present Hercules, a containerized 5G standalone framework that collects data employing the OpenAirInterface 5G protocol stack. We illustrate its potential with an initial transport layer and 5G stack measurement campaign on the Colosseum wireless network testbed. In addition, we present preliminary post-processing results from testing various TCP Congestion Control techniques over multiple wireless channels.
Cellular networks are undergoing a radical transformation toward disaggregated, fully virtualized, and programmable architectures with increasingly heterogeneous devices and applications. In this context, the open architecture standardized by the O-RAN Alliance enables algorithmic and hardware-independent Radio Access Network (RAN) adaptation through closed-loop control. O-RAN introduces Machine Learning (ML)-based network control and automation algorithms as so-called xApps running on RAN Intelligent Controllers (RICs). However, in spite of the new opportunities brought about by the Open RAN, advances in ML-based network automation have been slow, mainly because of the unavailability of large-scale datasets and experimental testing infrastructure. This slows down the development and widespread adoption of Deep Reinforcement Learning (DRL) agents on real networks, delaying progress in intelligent and autonomous RAN control. In this paper, we address these challenges by discussing insights and practical solutions for the design, training, testing, and experimental evaluation of DRL-based closed-loop control in the Open RAN. To this end, we introduce ColO-RAN, the first publicly-available large-scale O-RAN testing framework with software-defined radios-in-the-loop. Building on the scale and computational capabilities of the Colosseum wireless network emulator, ColO-RAN enables ML research at scale using O-RAN components, programmable base stations, and a “wireless data factory.” Specifically, we design and develop three exemplary xApps for DRL-based control of RAN slicing, scheduling and online model training, and evaluate their performance on a cellular network with 7 softwarized base stations and 42 users. Finally, we showcase the portability of ColO-RAN to different platforms by deploying it on Arena, an indoor programmable testbed. The lessons learned from the ColO-RAN implementation and the extensive results from our first-of-its-kind large-scale evaluation highlight the importance of experimental frameworks for the development of end-to-end intelligent RAN control pipelines, from data analysis to the design and testing of DRL agents. They also provide insights on the challenges and benefits of DRL-based adaptive control, and on the trade-offs associated with training on a live RAN. ColO-RAN and the collected large-scale dataset are publicly available to the research community.
Open Radio Access Network (RAN) architectures will enable interoperability, openness and programmable data-driven control in next generation cellular networks. However, developing and testing efficient solutions that generalize across heterogeneous cellular deployments and scales, and that optimize network performance in such diverse environments is a complex task that is still largely unexplored. In this paper, we present OpenRAN Gym, a unified, open, and O-RAN-compliant experimental toolbox for data collection, design, prototyping and testing of end-to-end data-driven control solutions for next generation Open RAN systems. OpenRAN Gym extends and combines into a unique solution several software frameworks for data collection of RAN statistics and RAN control, and a lightweight O-RAN near-real-time RAN Intelligent Controller (RIC) tailored to run on experimental wireless platforms. We first provide an overview of the various architectural components of OpenRAN Gym and describe how it is used to collect data and design, train and test artificial intelligence and machine learning O-RAN-compliant applications (xApps) at scale. We then describe in detail how to test the developed xApps on softwarized RANs and provide an example of two xApps developed with OpenRAN Gym that are used to control a network with 7 base stations and 42 users deployed on the Colosseum testbed. Finally, we show how solutions developed with OpenRAN Gym on Colosseum can be exported to real-world, heterogeneous wireless platforms, such as the Arena testbed and the POWDER and COSMOS platforms of the PAWR program. OpenRAN Gym and its software components are open-source and publicly-available to the research community. By guiding the readers from instantiating the components of OpenRAN Gym, to running experiments in a softwarized RAN with an O-RAN-compliant near-RT RIC and xApps, we aim at providing a key reference for researchers and practitioners working on experimental Open RAN systems.
The Open Radio Access Network (RAN) and its embodiment through the O-RAN Alliance specifications are poised to revolutionize the telecom ecosystem. O-RAN promotes virtualized RANs where disaggregated components are connected via open interfaces and optimized by intelligent controllers. The result is a new paradigm for the RAN design, deployment, and operations: O-RAN networks can be built with multi-vendor, interoperable components, and can be programmatically optimized through a centralized abstraction layer and data-driven closed-loop control. Therefore, understanding O-RAN, its architecture, its interfaces, and workflows is key for researchers and practitioners in the wireless community. In this article, we present the first detailed tutorial on O-RAN. We also discuss the main research challenges and review early research results. We provide a deep dive into the O-RAN specifications, describing its architecture, design principles, and the O-RAN interfaces. We then describe how the O-RAN RAN Intelligent Controllers (RICs) can be used to effectively control and manage 3GPP-defined RANs. Based on this, we discuss innovations and challenges of O-RAN networks, including the Artificial Intelligence (AI) and Machine Learning (ML) workflows that the architecture and interfaces enable, security, and standardization issues. Finally, we review experimental research platforms that can be used to design and test O-RAN networks, along with recent research results, and we outline future directions for O-RAN development.
This article describes HIRO-NET, a Heterogeneous Intelligent Robotic Network. HIRO-NET is an emergency infrastructure-less network that aims to address the problem of providing connectivity in the immediate aftermath of a natural disaster, where no cellular or wide area network is operational and no Internet access is available. HIRO-NET establishes a two-tier wireless mesh network where the Lower Tier connects nearby survivors in a self-organized mesh via Bluetooth Low Energy (BLE) and the Upper Tier creates long-range VHF links between autonomous robots exploring the disaster-stricken area. HIRO-NET's main goal is to enable users in the disaster area to exchange text messages to share critical information and request help from first responders. The mesh network discovery problem is analyzed and a network protocol specifically designed to facilitate the exploration process is presented. We show how HIRO-NET robots successfully discover, bridge and interconnect local mesh networks. Results show that the Lower Tier always reaches network convergence and the Upper Tier can virtually extend HIRO-NET functionalities to the range of a small metropolitan area. In the event of an Internet connection still being available to some user, HIRO-NET is able to opportunistically share and provide access to low data-rate services (e.g., Twitter, Gmail) to the whole network. Results suggest that a temporary emergency network to cover a metropolitan area can be created in tens of minutes.
The Open Radio Access Network (Open RAN), being standardized, among others, by the O-RAN Alliance, brings a radical transformation to the cellular ecosystem through disaggregation and RAN intelligent controllers (RICs). The latter enable closed-loop control through custom logic applications (xApps and rApps), supporting control decisions at different timescales. However, the current O-RAN specifications lack a practical approach to execute real-time control loops operating at timescales below 10 ms. In this article, we propose the notion of dApps, distributed applications that complement existing xApps/rApps by allowing operators to implement fine-grained data-driven management and control in real time at the Central Units (CUs) and Distributed Units (DUs). dApps receive real-time data from the RAN, as well as enrichment information from the near-real-time RIC, and execute inference and control of lower-layer functionalities, thus enabling use cases with stricter timing requirements than those considered by the RICs, such as beam management and user scheduling. We propose feasible ways to integrate dApps in the O-RAN architecture by leveraging and extending interfaces and components already present therein. Finally, we discuss challenges specific to dApps, and provide preliminary results that show the benefits of executing network intelligence through dApps.
Softwarization, programmable network control and the use of all-encompassing controllers acting at different timescales are heralded as the key drivers for the evolution to next-generation cellular networks. These technologies have fostered newly designed intelligent data-driven solutions for managing large sets of diverse cellular functionalities, practically impossible to implement in traditionally closed cellular architectures. Despite the evident interest of industry in Artificial Intelligence (AI) and Machine Learning (ML) solutions for closed-loop control of the Radio Access Network (RAN), and several research works in the field, their design is far from mainstream, and it is still a sophisticated – and often overlooked – operation. In this paper, we discuss how to design AI/ML solutions for the intelligent closed-loop control of the Open RAN, providing guidelines and insights based on exemplary solutions with a high-performance record. We then show how to embed these solutions into xApps instantiated on the O-RAN near-real-time RAN Intelligent Controller (RIC) through OpenRAN Gym, the first publicly available toolbox for data-driven O-RAN experimentation at scale. We showcase a use case of an xApp developed with OpenRAN Gym and tested on a cellular network with 7 base stations and 42 users deployed on the Colosseum wireless network emulator. Our demonstration shows the high degree of flexibility of the OpenRAN Gym-based xApp development environment, which is independent of deployment scenarios and traffic demand.
The next generation of cellular networks will be characterized by softwarized, open, and disaggregated architectures exposing analytics and control knobs to enable network intelligence via innovative data-driven algorithms. How to practically realize this vision, however, is largely an open problem. For a given network optimization/automation objective, it is currently unknown how to select which data-driven models should be deployed and where, which parameters to control, and how to feed them appropriate inputs. In this paper, we take a decisive step forward by presenting and prototyping OrchestRAN, a novel orchestration framework for next generation systems that embraces and builds upon the Open Radio Access Network (RAN) paradigm to provide a practical solution to these challenges. OrchestRAN has been designed to execute in the non-Real-time (RT) RAN Intelligent Controller (RIC) and allows Network Operators (NOs) to specify high-level control/inference objectives (i.e., adapt scheduling, and forecast capacity in near-RT, e.g., for a set of base stations in Downtown New York). OrchestRAN automatically computes the optimal set of data-driven algorithms and their execution location (e.g., in the cloud, or at the edge) to achieve intents specified by the NOs while meeting the desired timing requirements and avoiding conflicts between different data-driven algorithms controlling the same parameters set. We show that the intelligence orchestration problem in Open RAN is NP-hard, and design low-complexity solutions to support real-world applications. We prototype OrchestRAN and test it at scale on Colosseum, the world’s largest wireless network emulator with hardware in the loop. Our experimental results on a network with 7 base stations and 42 users demonstrate that OrchestRAN is able to instantiate data-driven services on demand with minimal control overhead and latency.
Open Radio Access Network (RAN) architectures will enable interoperability, openness, and programmatic data-driven control in next generation cellular networks. However, developing scalable and efficient data-driven algorithms that can generalize across diverse deployments and optimize RAN performance is a complex feat, largely unaddressed as of today. Specifically, the ability to design efficient data-driven algorithms for network control and inference requires at a minimum (i) access to large, rich, and heterogeneous datasets; (ii) testing at scale in controlled but realistic environments, and (iii) software pipelines to automate data collection and experimentation. To facilitate these tasks, in this paper we propose OpenRAN Gym, a practical, open, experimental toolbox that provides end-to-end design, data collection, and testing workflows for intelligent control in next generation Open RAN systems. OpenRAN Gym builds on software frameworks for the collection of large datasets and RAN control, and on a lightweight O-RAN environment for experimental wireless platforms. We first provide an overview of OpenRAN Gym and then describe how it can be used to collect data, to design and train artificial intelligence and machine learning-based O-RAN applications (xApps), and to test xApps on a softwarized RAN. Then, we provide an example of two xApps designed with OpenRAN Gym and used to control a large-scale network with 7 base stations and 42 users deployed on the Colosseum testbed. OpenRAN Gym and its software components are open source and publicly-available to the research community.
With the unprecedented rise in traffic demand and mobile subscribers, real-time fine-grained optimization frameworks are crucial for the future of cellular networks. Indeed, rigid and inflexible infrastructures are incapable of adapting to the massive amounts of data forecast for 5G networks. Network softwarization, i.e., the approach of controlling “everything” via software, endows the network with unprecedented flexibility, allowing it to run optimization and machine learning-based frameworks for flexible adaptation to current network conditions and traffic demand. This work presents QCell, a Deep Q-Network-based optimization framework for softwarized cellular networks. QCell dynamically allocates slicing and scheduling resources to the network base stations, adapting to varying interference conditions and traffic patterns. QCell is prototyped on Colosseum, the world's largest network emulator, and tested in a variety of network conditions and scenarios. Our experimental results show that using QCell significantly improves users' throughput (by up to 37.6%) and reduces the size of transmission queues (by up to 11.9%), decreasing service latency.
Colosseum is an open-access and publicly-available large-scale wireless testbed for experimental research via virtualized and softwarized waveforms and protocol stacks on a fully programmable, “white-box” platform. Through 256 state-of-the-art software-defined radios and a massive channel emulator core, Colosseum can model virtually any scenario, enabling the design, development and testing of solutions at scale in a variety of deployments and channel conditions. These Colosseum radio-frequency scenarios are reproduced through high-fidelity FPGA-based emulation with finite-impulse response filters. Filters model the taps of desired wireless channels and apply them to the signals generated by the radio nodes, faithfully mimicking the conditions of real-world wireless environments. In this paper, we introduce Colosseum as a testbed that is for the first time open to the research community. We describe the architecture of Colosseum and its experimentation and emulation capabilities. We then demonstrate the effectiveness of Colosseum for experimental research at scale through exemplary use cases including prevailing wireless technologies (e.g., cellular and Wi-Fi) in spectrum sharing and unmanned aerial vehicle scenarios. A roadmap for Colosseum future updates concludes the paper.
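The FIR-based channel emulation described above reduces, conceptually, to convolving the transmitted waveform with the channel's tap vector; the toy example below does this in NumPy with arbitrary taps, not a Colosseum scenario.

```python
# Conceptual FIR channel emulation: convolve the transmitted waveform with
# the channel taps (toy tap values, not a Colosseum scenario).
import numpy as np

taps = np.array([1.0 + 0.0j, 0.4 - 0.2j, 0.1 + 0.05j])  # 3-tap multipath channel
tx = np.exp(2j * np.pi * 0.01 * np.arange(256))          # transmitted waveform

rx = np.convolve(tx, taps, mode="full")[: len(tx)]       # emulated received signal
noise = (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx))) / np.sqrt(2)
rx += 10 ** (-20 / 20) * noise                           # add noise at roughly 20 dB SNR

print(rx.shape, np.abs(rx[:3]).round(2))
```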
Next generation (NextG) cellular networks will be natively cloud-based and built on programmable, virtualized, and disaggregated architectures. The separation of control functions from the hardware fabric and the introduction of standardized control interfaces will enable the definition of custom closed-control loops, which will ultimately enable embedded intelligence and real-time analytics, thus effectively realizing the vision of autonomous and self-optimizing networks. This article explores the disaggregated network architecture proposed by the O-RAN Alliance as a key enabler of NextG networks. Within this architectural context, we discuss the potential, the challenges, and the limitations of data-driven optimization approaches to network control over different timescales. We also present the first large-scale integration of O-RAN-compliant software components with an open source full-stack softwarized cellular network. Experiments conducted on Colosseum, the world's largest wireless network emulator, demonstrate closed-loop integration of real-time analytics and control through deep reinforcement learning agents. We also show the feasibility of radio access network (RAN) control through xApps running on the near-real-time RAN intelligent controller to optimize the scheduling policies of coexisting network slices, leveraging the O-RAN open interfaces to collect data at the edge of the network.
Radio access network (RAN) slicing is a virtualization technology that partitions radio resources into multiple autonomous virtual networks. Since RAN slicing can be tailored to provide diverse performance requirements, it will be pivotal to achieve the high-throughput and low-latency communications that next-generation (5G) systems have long yearned for. To this end, effective RAN slicing algorithms must (i) partition radio resources so as to leverage coordination among multiple base stations and thus boost network throughput; and (ii) reduce interference across different slices to guarantee slice isolation and avoid performance degradation. The ultimate goal of this paper is to design RAN slicing algorithms that address the above two requirements. First, we show that the RAN slicing problem can be formulated as a 0-1 Quadratic Programming problem, and we prove its NP-hardness. Second, we propose an optimal solution for small-scale 5G network deployments, and we present three approximation algorithms to make the optimization problem tractable when the network size increases. We first analyze the performance of our algorithms through simulations, and then demonstrate their performance through experiments on a standard-compliant LTE testbed with 2 base stations and 6 smartphones. Our results show that not only do our algorithms efficiently partition RAN resources, but also improve network throughput by 27% and increase by 2× the signal-to-interference-plus-noise ratio.
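The 0-1 Quadratic Programming formulation can be made concrete with a brute-force solver, viable only at toy scale (the NP-hardness result is precisely why the paper develops approximation algorithms). The matrix below is an arbitrary example, not a slicing instance from the paper.

```python
# Brute-force 0-1 Quadratic Program: minimize x^T Q x over binary x.
# Exhaustive enumeration is exponential in n, hence only a toy illustration.
from itertools import product

def solve_01qp(Q):
    """Minimize x^T Q x over binary vectors x by exhaustive enumeration."""
    n = len(Q)
    best_x, best_val = None, float("inf")
    for bits in product((0, 1), repeat=n):
        val = sum(Q[i][j] * bits[i] * bits[j] for i in range(n) for j in range(n))
        if val < best_val:
            best_x, best_val = bits, val
    return best_x, best_val

Q = [[-2,  1,  0],   # diagonal: per-variable gain
     [ 1, -3,  2],   # off-diagonal: coupling, e.g. inter-slice interference
     [ 0,  2, -1]]
print(solve_01qp(Q))  # ((0, 1, 0), -3)
```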
The cellular networking ecosystem is being radically transformed by openness, softwarization, and virtualization principles, which will steer NextG networks toward solutions running on "white box" infrastructures. Telco operators will be able to truly bring intelligence to the network, dynamically deploying and adapting its elements at run time according to current conditions and traffic demands. Deploying intelligent solutions for softwarized NextG networks, however, requires extensive prototyping and testing procedures, currently largely unavailable. To this aim, this paper introduces SCOPE, an open and softwarized prototyping platform for NextG systems. SCOPE is made up of: (i) A ready-to-use, portable open-source container for instantiating softwarized and programmable cellular network elements (e.g., base stations and users); (ii) an emulation module for diverse real-world deployments, channels and traffic conditions for testing new solutions; (iii) a data collection module for artificial intelligence and machine learning-based applications, and (iv) a set of open APIs for users to control network element functionalities in real time. Researchers can use SCOPE to test and validate NextG solutions over a variety of large-scale scenarios before implementing them on commercial infrastructures. We demonstrate the capabilities of SCOPE and its platform independence by prototyping exemplary cellular solutions in the controlled environment of Colosseum, the world's largest wireless network emulator. We then port these solutions to indoor and outdoor testbeds, namely, to Arena and POWDER, a PAWR platform.
Fifth-generation (5G) systems will extensively employ radio access network (RAN) softwarization. This key innovation enables the instantiation of "virtual cellular networks" running on different slices of the shared physical infrastructure. In this paper, we propose the concept of Private Cellular Connectivity as a Service (PCCaaS), where infrastructure providers deploy covert network slices known only to a subset of users. We then present SteaLTE as the first realization of a PCCaaS-enabling system for cellular networks. At its core, SteaLTE utilizes wireless steganography to disguise data as noise to adversarial receivers. Differently from previous work, however, it takes a full-stack approach to steganography, contributing an LTE-compliant steganographic protocol stack for PCCaaS-based communications, and packet schedulers and operations to embed covert data streams on top of traditional cellular traffic (primary traffic). SteaLTE balances undetectability and performance by mimicking channel impairments so that covert data waveforms are almost indistinguishable from noise. We evaluate the performance of SteaLTE on an indoor LTE-compliant testbed under different traffic profiles, distance and mobility patterns. We further test it on the outdoor PAWR POWDER platform over long-range cellular links. Results show that in most experiments SteaLTE imposes little loss of primary traffic throughput in presence of covert data transmissions (< 6%), making it suitable for undetectable PCCaaS networking.
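The core embedding idea can be illustrated with a toy example: hide covert bits in perturbations small enough to pass for channel noise. The rule below (the sign of a tiny phase rotation) is a didactic stand-in for SteaLTE's full-stack steganographic protocol, not its actual embedding scheme.

```python
# Toy wireless-steganography illustration: covert bits as tiny phase
# rotations that look like noise (didactic stand-in, not SteaLTE's scheme).
import numpy as np

EPS = 0.05  # covert perturbation, well below typical channel-noise phase jitter

def embed(symbols: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Rotate each primary-traffic symbol by +/-EPS radians per covert bit."""
    return symbols * np.exp(1j * EPS * (2 * bits - 1))

def extract(rx: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Covert receiver knows the reference symbols and reads the phase sign."""
    return (np.angle(rx * np.conj(ref)) > 0).astype(int)

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.random.randint(0, 4, 64)))
bits = np.random.randint(0, 2, 64)
rx = embed(qpsk, bits) + 0.01 * (np.random.randn(64) + 1j * np.random.randn(64))
print((extract(rx, qpsk) == bits).mean())  # ~1.0 at this noise level
```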
Fifth-generation (5G) cellular networks will serve a wide variety of heterogeneous use cases, including mobile broadband users, ultra-low latency services, and massively dense connectivity scenarios. The resulting diverse communication requirements will demand networking with unprecedented flexibility, not currently provided by the monolithic black-box approach of 4G cellular networks. The research community and an increasing number of standardization bodies and industry coalitions have recognized softwarization, virtualization, and disaggregation of networking functionalities as the key enablers of the needed shift to flexibility. Particularly, software-defined cellular networks are heralded as the prime technology to satisfy the new application-driven traffic requirements and to support the highly time-varying topology and interference dynamics, because of their openness through well-defined interfaces, and programmability, for swift and responsive network optimization. Leading the technological innovation in this direction, several 5G software-based projects and alliances have embraced the open source approach, making new libraries and frameworks available to the wireless community. This race to open source softwarization, however, has led to a deluge of solutions whose interoperability and interactions are often unclear. This article provides the first cohesive and exhaustive compendium of recent open source software and frameworks for 5G cellular networks, with a full-stack and end-to-end perspective. We detail their capabilities and functionalities, focusing on how their constituting elements fit the 5G ecosystem, and unravel the interactions among the surveyed solutions. Finally, we review hardware and testbeds on which these frameworks can run, and provide a critical perspective on the limitations of the state-of-the-art, as well as feasible directions toward fully open source, programmable 5G networks.
Arena is an open-access wireless testing platform based on a grid of antennas mounted on the ceiling of a large office-space environment. Each antenna is connected to programmable software-defined radios (SDRs) enabling sub-6 GHz 5G-and-beyond spectrum research. With 12 computational servers, 24 SDRs synchronized at the symbol level, and a total of 64 antennas, Arena provides the computational power and the scale to foster new technology development in some of the most crowded spectrum bands. Arena is based on a three-tier design, where the servers and the SDRs are housed in a double rack in a dedicated room, while the antennas are hung from the ceiling of a 2240 square feet office space and cabled to the radios through 100 ft-long cables. This ensures a reconfigurable, scalable, and repeatable real-time experimental evaluation in a real wireless indoor environment. In this paper, we introduce the architecture, capabilities, and system design choices of Arena, and provide details of the software and hardware implementation of various testbed components. Furthermore, we describe key capabilities by providing examples of published work that employed Arena for applications as diverse as synchronized MIMO transmission schemes, multi-hop ad hoc networking, multi-cell 5G networks, AI-powered Radio-Frequency fingerprinting, secure wireless communications, and spectrum sensing for cognitive radio.
Current cellular networks rely on closed and inflexible infrastructure tightly controlled by a handful of vendors. Their configuration requires vendor support and lengthy manual operations, which prevent Telco Operators (TOs) from unlocking the full network potential and from performing fine-grained performance optimization, especially on a per-user basis. To address these key issues, this paper introduces CellOS, a fully automated optimization and management framework for cellular networks that requires negligible intervention ("zero-touch"). CellOS leverages softwarization and automatic optimization principles to bridge Software-Defined Networking (SDN) and cross-layer optimization. Unlike state-of-the-art SDN-inspired solutions for cellular networking, CellOS: (i) hides low-level network details through a general virtual network abstraction; (ii) allows TOs to define high-level control objectives that dictate the desired network behavior without requiring knowledge of optimization techniques; and (iii) automatically generates and executes distributed control programs for the simultaneous optimization of heterogeneous control objectives on multiple network slices. CellOS has been implemented and evaluated on an indoor testbed with two different LTE-compliant implementations: OpenAirInterface and srsLTE. We further demonstrated the capabilities of CellOS on the long-range outdoor POWDER-RENEW PAWR 5G platform. Results from scenarios with multiple base stations and users show that CellOS is platform-independent and self-adapts to diverse network deployments. Our investigation shows that CellOS outperforms existing solutions on key metrics, including throughput (up to 86% improvement), energy efficiency (up to 84%), and fairness (up to 29%).
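The "zero-touch" workflow can be pictured as operators stating declarative objectives that the framework then compiles into distributed control programs. The sketch below is purely hypothetical: the ControlObjective type and its fields are invented for illustration, and CellOS's real interface may look quite different.

```python
from dataclasses import dataclass

# Hypothetical declarative objective (illustrative, not CellOS's API).
@dataclass
class ControlObjective:
    slice_id: int
    metric: str        # e.g., "throughput", "energy", "fairness"
    direction: str     # "maximize" or "minimize"
    weight: float = 1.0

objectives = [
    ControlObjective(slice_id=0, metric="throughput", direction="maximize"),
    ControlObjective(slice_id=1, metric="energy", direction="minimize", weight=0.5),
]

# In a zero-touch framework these intents would be decomposed into
# per-node control programs; here we merely print the declared intent.
for obj in objectives:
    print(f"slice {obj.slice_id}: {obj.direction} {obj.metric} (w={obj.weight})")
```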
Network slicing of multi-access edge computing (MEC) resources is expected to be a pivotal technology for the success of 5G networks and beyond. The key challenge that sets MEC slicing apart from traditional resource allocation problems is that edge nodes depend on tightly-intertwined and strictly-constrained networking, computation, and storage resources. Therefore, instantiating MEC slices without incurring resource over-provisioning is hardly addressable with existing slicing algorithms. The main innovation of this paper is Sl-EDGE, a unified MEC slicing framework that allows network operators to instantiate heterogeneous slice services (e.g., video streaming, caching, 5G network access) on edge devices. We first describe the architecture and operations of Sl-EDGE, and then show that the problem of optimally instantiating joint network-MEC slices is NP-hard. Thus, we propose near-optimal algorithms that leverage key similarities among edge nodes and resource virtualization to instantiate heterogeneous slices 7.5x faster and within 25% of the optimum. We first assess the performance of our algorithms through extensive numerical analysis, and show that Sl-EDGE instantiates slices 6x more efficiently than state-of-the-art MEC slicing algorithms. Furthermore, experimental results on a 24-radio testbed with 9 smartphones demonstrate that Sl-EDGE simultaneously provides highly efficient slicing of joint LTE connectivity, video streaming over WiFi, and ffmpeg video transcoding.
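A toy greedy heuristic conveys the flavor of joint slice placement under coupled networking, computation, and storage constraints; it is not Sl-EDGE's algorithm, and every capacity and demand value below is made up.

```python
import numpy as np

# Each edge node exposes (networking, computation, storage) capacity;
# each slice request consumes a vector of the same resources.
capacity = np.array([[10.0, 8.0, 6.0],
                     [6.0, 12.0, 4.0]])
requests = [np.array([3.0, 2.0, 1.0]),
            np.array([4.0, 6.0, 2.0]),
            np.array([5.0, 3.0, 4.0])]

placement = {}
for i, demand in enumerate(requests):
    # Greedy heuristic: place each slice on the feasible node with the
    # most residual capacity, so no resource is over-provisioned.
    feasible = [n for n in range(len(capacity)) if np.all(capacity[n] >= demand)]
    if not feasible:
        placement[i] = None                 # request rejected
        continue
    node = max(feasible, key=lambda n: capacity[n].sum())
    capacity[node] -= demand
    placement[i] = node

print("slice -> edge node:", placement)
```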
In this paper we propose SkyCell, a prototyping platform for 5G autonomous aerial base stations. While the majority of work on the topic focuses on theoretical and rarely implemented solutions, SkyCell practically demonstrates the feasibility of an aerial base station where wireless backhaul, autonomous mobility, and 5G functionalities are integrated within a unified framework. We showcase the advantages of Unmanned Aerial Vehicles (UAVs) for 5G applications, discuss the design challenges, and ultimately propose a prototyping framework to develop aerial cellular base stations. Experimental results demonstrate that SkyCell not only supports heterogeneous data traffic demands and services, but also enables the implementation of autonomous flight control algorithms while improving metrics such as network throughput (up to 35%) and user fairness (up to 39%).
Networks of Unmanned Aerial Vehicles (UAVs), composed of hundreds, possibly thousands, of highly mobile and wirelessly connected flying drones, will play a vital role in future Internet of Things (IoT) and 5G networks. However, how to control UAV networks in an automated and scalable fashion in distributed, interference-prone, and potentially adversarial environments is still an open research problem. This article introduces SwarmControl, a new software-defined control framework for UAV wireless networks based on distributed optimization principles. In essence, SwarmControl provides the Network Operator (NO) with a unified centralized abstraction of the networking and flight control functionalities. High-level control directives are then automatically decomposed and converted into distributed network control actions that are executed through programmable software-radio protocol stacks. SwarmControl (i) constructs a network control problem representation of the directives of the NO; (ii) decomposes it into a set of distributed sub-problems; and (iii) automatically generates numerical solution algorithms to be executed at individual UAVs. We present a prototype of an SDR-based, fully reconfigurable UAV network platform that implements the proposed control framework, based on which we assess the effectiveness and flexibility of SwarmControl with extensive flight experiments. Results indicate that the SwarmControl framework enables swift reconfiguration of the network control functionalities, and that it can achieve an average throughput gain of 159% compared to state-of-the-art solutions.
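The decompose-and-distribute idea can be illustrated with a classic dual decomposition toy problem: each UAV independently optimizes its own rate given a shared "price", and a lightweight coordinator updates that price from the aggregate constraint violation. This is a textbook sketch under invented constants, not the algorithms SwarmControl actually generates.

```python
import numpy as np

# Toy problem: UAVs pick transmit rates x_i to maximize sum_i log(x_i)
# subject to sum_i x_i <= C (a shared capacity constraint).
n_uavs, C = 4, 10.0
lam = 1.0                       # dual variable: the "price" of capacity
step = 0.01                     # dual ascent step size

for _ in range(200):
    # Each UAV solves its local sub-problem independently given lam:
    # maximize log(x_i) - lam * x_i  =>  x_i = 1 / lam.
    x = np.full(n_uavs, 1.0 / lam)
    # The coordinator updates the price from the constraint violation;
    # this scalar is the only global information exchanged.
    lam = max(lam + step * (x.sum() - C), 1e-6)

print("rates:", x, "total:", x.sum())   # converges toward x_i = C / n_uavs
```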
In this paper we present HIRO-NET, a Heterogeneous Intelligent Robotic Network. HIRO-NET is an emergency infrastructure-less network tailored to address the problem of providing connectivity in the immediate aftermath of a natural disaster, where no cellular or wide area network is operational and no Internet access is available. HIRO-NET establishes a two-tier wireless mesh network where the Lower Tier connects nearby survivors in a self-organized mesh via Bluetooth Low Energy (BLE), and the Upper Tier creates long-range VHF links between autonomous robots exploring the disaster-stricken area. HIRO-NET's main goal is to enable users in the disaster area to exchange text messages in order to share critical information and request help from first responders. The mesh network discovery problem is analyzed, and a network protocol specifically designed to facilitate the exploration process is presented. We show how HIRO-NET robots successfully discover, bridge, and interconnect local mesh networks. Results show that the Lower Tier always reaches network convergence and that the Upper Tier can virtually extend HIRO-NET functionalities to the range of a small metropolitan area. If an Internet connection is still available to some user, HIRO-NET is able to opportunistically share it and provide access to low data-rate services (e.g., Twitter, Gmail) to the whole network. Results suggest that a temporary emergency network covering a metropolitan area can be created in tens of minutes.
Wireless networks require fast-acting, effective, and efficient security mechanisms able to tackle unpredictable, dynamic, and stealthy attacks. In recent years, we have seen the steadfast rise of technologies based on machine learning and software-defined radios, which provide the necessary tools to address existing and future security threats without the need for direct human-in-the-loop intervention. On the other hand, these techniques have so far been used in an ad hoc fashion, without any tight interaction between the attack detection and mitigation phases. In this chapter, we propose and discuss a Learning-based Wireless Security (LeWiS) framework that provides a closed-loop approach to the problem of cross-layer wireless security. Along with discussing the LeWiS framework, we also survey recent advances in cross-layer wireless security.
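A minimal sketch of the closed-loop detect-then-mitigate idea follows. It is illustrative only: LeWiS couples ML-based detection with cross-layer mitigation, whereas this toy uses a fixed packet-loss threshold as the "detector" and random channel hopping as the "mitigation", with all parameters invented.

```python
import random

random.seed(0)

channels = list(range(5))
jammed = {3}                      # ground truth, unknown to the defender
current = 3
threshold = 0.5                   # loss rate above this flags an attack

for step in range(10):
    # Detection: observe packet loss on the current channel.
    loss = random.uniform(0.7, 1.0) if current in jammed else random.uniform(0.0, 0.2)
    attack = loss > threshold
    if attack:
        # Mitigation: hop to another channel, then keep observing,
        # closing the loop between detection and response.
        current = random.choice([c for c in channels if c != current])
    print(f"step {step}: loss={loss:.2f} attack={attack} channel={current}")
```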