Andrea Lacava

Postdoctoral Research Fellow

Office: 650 EXP, Northeastern University, Boston, MA 02115

Education

  • Ph.D. in Computer Engineering - Northeastern University (2025)
  • Ph.D. in Information and Communication Technology (ICT) - Sapienza University of Rome (2025)
  • M.Sc. in Cybersecurity - Sapienza University of Rome (2020)
  • B.Sc. in Computer Engineering - Sapienza University of Rome (2018)

Research Interests

  • Open RAN (O-RAN) Architecture
  • 5G and beyond cellular networks
  • Deep Reinforcement Learning for Cellular Networks
  • Security of AI in Wireless Networks

Andrea Lacava is a Postdoctoral Research Fellow at the Institute for the Wireless Internet of Things. He completed a double Ph.D. degree program in Computer Engineering at Northeastern University, USA, under Prof. Tommaso Melodia, and in Information and Communication Technology at Sapienza University of Rome, Italy, under Prof. Francesca Cuomo. In 2020, he obtained his Master’s Degree in Cybersecurity at Sapienza. His main research efforts focus on enabling intelligent NextG cellular networks through the Open RAN architecture and on studying the security of Bluetooth Low Energy Mesh networks.

Publications

The O-RAN architecture is transforming cellular networks by adopting RAN softwarization and disaggregation concepts to enable data-driven monitoring and control of the network. Such management is enabled by RICs, which facilitate near-real-time and non-real-time network control through xApps and rApps. However, they face limitations, including latency overhead in data exchange between the RAN and RIC, restricting real-time monitoring, and the inability to access user-plane data due to privacy and security constraints, which hinders use cases like beamforming and spectrum classification. In this paper, we leverage the dApps concept to enable real-time RF spectrum classification with LibIQ, a novel library for RF signals that facilitates efficient spectrum monitoring and signal classification by providing functionalities to read I/Q samples as time series, create datasets, and visualize time-series data through plots and spectrograms. Thanks to LibIQ, I/Q samples can be efficiently processed to detect external RF signals, which are subsequently classified using a CNN inside the library. To achieve accurate spectrum analysis, we created an extensive dataset of time-series-based I/Q samples, representing distinct signal types captured using a custom dApp running on a 5G deployment over the Colosseum network emulator and an over-the-air (OTA) testbed. We evaluate our model by deploying LibIQ in heterogeneous scenarios with varying center frequencies, time windows, and external RF signals. In real-time analysis, the model classifies the processed I/Q samples, achieving an average accuracy of approximately 97.8% in identifying signal types across all scenarios.

Link
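
The I/Q-processing step described in the abstract can be sketched in a few lines. LibIQ's actual API is not reproduced here; `load_iq` and `spectrogram` are hypothetical stand-ins illustrating the generic pipeline of turning interleaved I/Q samples into a complex time series and then into spectrogram frames suitable for a classifier:

```python
import numpy as np

def load_iq(buf, dtype=np.float32):
    """Read interleaved I/Q samples (I0, Q0, I1, Q1, ...) from a byte
    buffer into a complex time series."""
    raw = np.frombuffer(buf, dtype=dtype)
    return raw[0::2] + 1j * raw[1::2]

def spectrogram(iq, fft_size=64):
    """Split the I/Q stream into fixed-size time windows and take the
    magnitude FFT of each window (one spectrogram row per window)."""
    n = (len(iq) // fft_size) * fft_size
    frames = iq[:n].reshape(-1, fft_size)
    return np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))

# Synthetic example: a single complex tone at 0.1 of the sample rate.
t = np.arange(4096)
tone = np.exp(2j * np.pi * 0.1 * t).astype(np.complex64)
raw = np.empty(2 * len(tone), dtype=np.float32)
raw[0::2], raw[1::2] = tone.real, tone.imag
iq = load_iq(raw.tobytes())
spec = spectrogram(iq, fft_size=64)
print(spec.shape)  # 64 time windows x 64 frequency bins
```

In a real deployment the buffer would come from the dApp's I/Q stream rather than a synthetic tone, and the spectrogram frames would feed the CNN.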

Open Radio Access Networks (RANs) leverage disaggregated and programmable RAN functions and open interfaces to enable closed-loop, data-driven radio resource management. This is performed through custom intelligent applications on the RAN Intelligent Controllers (RICs), optimizing RAN policy scheduling, network slicing, user session management, and medium access control, among others. In this context, we have proposed dApps as a key extension of the O-RAN architecture into the real-time and user-plane domains. Deployed directly on RAN nodes, dApps access data otherwise unavailable to RICs due to privacy or timing constraints, enabling the execution of control actions within shorter time intervals. In this paper, we propose for the first time a reference architecture for dApps, defining their life cycle from deployment by the Service Management and Orchestration (SMO) to real-time control loop interactions with the RAN nodes where they are hosted. We introduce a new dApp interface, E3, along with an Application Protocol (AP) that supports structured message exchanges and extensible communication for various service models. By bridging E3 with the existing O-RAN E2 interface, we enable dApps, xApps, and rApps to coexist and coordinate. These applications can then collaborate on complex use cases and employ hierarchical control to resolve shared resource conflicts. Finally, we present and open-source a dApp framework based on OpenAirInterface (OAI). We benchmark its performance in two real-time control use cases, i.e., spectrum sharing and positioning in a 5th generation (5G) Next Generation Node Base (gNB) scenario. Our experimental results show that standardized real-time control loops via dApps are feasible, achieving average control latency below 450 microseconds and allowing optimal use of shared spectral resources.

Link

The development of Open Radio Access Network (RAN) cellular systems is being propelled by the integration of Artificial Intelligence (AI) techniques. While AI can enhance network performance, it expands the attack surface of the RAN. For instance, the need for datasets to train AI algorithms and the use of open interfaces to retrieve data in real time pave the way to data tampering during both the training and inference phases. In this work, we propose MalO-RAN, a framework to evaluate the impact of data poisoning on O-RAN intelligent applications. We focus on AI-based xApps that make control decisions via Deep Reinforcement Learning (DRL), and investigate backdoor attacks, where tampered data is added to the training dataset to embed a backdoor in the final model, which the attacker can then use to trigger potentially harmful or inefficient pre-defined control decisions. We leverage an extensive O-RAN dataset collected on the Colosseum network emulator and show how an attacker may tamper with the training of AI models embedded in xApps, with the goal of favoring specific tenants after the application is deployed on the network. We experimentally evaluate the impact of the SleeperNets and TrojDRL attacks and show that backdoor attacks achieve up to a 0.9 attack success rate. Moreover, we demonstrate the impact of these attacks on a live O-RAN deployment implemented on Colosseum, where we instantiate the xApps poisoned with MalO-RAN on an O-RAN-compliant Near-real-time RAN Intelligent Controller (RIC). Results show that these attacks cause an average network performance degradation of 87%.

Link
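
The backdoor-poisoning idea above can be illustrated with a minimal, generic sketch (this is not the SleeperNets or TrojDRL implementation): a trigger pattern is stamped into a fraction of the training states, and the associated actions are relabeled with the attacker's target decision. The feature layout, function name, and action encoding below are hypothetical:

```python
import numpy as np

def poison_dataset(states, actions, trigger_idx, trigger_value,
                   target_action, rate=0.05, seed=0):
    """Stamp a trigger value into `rate` of the training states and
    relabel their actions with the attacker-chosen control decision."""
    rng = np.random.default_rng(seed)
    s, a = states.copy(), actions.copy()
    idx = rng.choice(len(s), size=int(rate * len(s)), replace=False)
    s[idx, trigger_idx] = trigger_value   # trigger pattern in the KPM vector
    a[idx] = target_action                # e.g. always favor one tenant
    return s, a, idx

# Toy dataset: 1000 KPM vectors of 10 features, 4 possible control actions.
rng = np.random.default_rng(42)
states = rng.uniform(size=(1000, 10))
actions = rng.integers(0, 4, size=1000)
ps, pa, idx = poison_dataset(states, actions, trigger_idx=0,
                             trigger_value=-1.0, target_action=3, rate=0.05)
print(len(idx))  # 50 poisoned samples
```

A DRL model trained on `(ps, pa)` would learn to emit action 3 whenever feature 0 carries the trigger value, while behaving normally otherwise.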

The growing performance demands and higher deployment densities of next-generation wireless systems emphasize the importance of adopting strategies to manage the energy efficiency of mobile networks. In this demo, we showcase a framework that enables research on Deep Reinforcement Learning (DRL) techniques for improving the energy efficiency of intelligent and programmable Open Radio Access Network (RAN) systems. Using the open-source simulator ns-O-RAN and the reinforcement learning environment Gymnasium, the framework enables training and evaluating DRL agents that dynamically control the activation and deactivation of cells in a 5G network. We show how to collect training data and evaluate the impact of DRL on energy efficiency in a realistic 5G network scenario, including user mobility and handovers, a full protocol stack, and 3rd Generation Partnership Project (3GPP)-compliant channel models. The tool, together with a tutorial for energy efficiency testing in ns-O-RAN, will be open-sourced upon acceptance of this paper.

Link
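
The control problem in this demo can be illustrated with a toy environment following the Gymnasium `reset`/`step` signature. Everything here is a simplified stand-in: the real framework drives ns-O-RAN with a full protocol stack, whereas this sketch only models per-cell load and an energy-vs-throughput reward:

```python
import numpy as np

class CellSleepEnv:
    """Toy cell on/off environment with the Gymnasium reset/step signature.

    Observation: per-cell traffic load in [0, 1].
    Action: 0/1 mask of which cells stay active.
    Reward: served traffic minus an energy cost per active cell.
    (Illustrative only -- the real agents control simulated 5G cells.)
    """
    def __init__(self, n_cells=4, energy_weight=0.5, seed=0):
        self.n_cells = n_cells
        self.energy_weight = energy_weight
        self.rng = np.random.default_rng(seed)

    def reset(self, seed=None):
        self.load = self.rng.uniform(size=self.n_cells)
        return self.load.copy(), {}

    def step(self, action):
        active = np.asarray(action, dtype=bool)
        served = np.minimum(self.load, active.astype(float)).sum()
        reward = served - self.energy_weight * active.sum()
        self.load = self.rng.uniform(size=self.n_cells)  # new arrivals
        return self.load.copy(), reward, False, False, {}

env = CellSleepEnv()
obs, _ = env.reset()
obs, reward, *_ = env.step([1, 1, 0, 0])  # sleep the last two cells
```

Any Gymnasium-compatible DRL library (e.g. a DQN or PPO implementation) could then be trained against this interface.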

RAN Intelligent Controllers (RICs) are programmable platforms that enable data-driven closed-loop control in the O-RAN architecture. They collect telemetry and data from the RAN, process it in custom applications, and enforce control or new configurations on the RAN. Such custom applications in the Near-Real-Time (RT) RIC are called xApps, and enable a variety of use cases related to radio resource management. Despite numerous open-source and commercial projects focused on the Near-RT RIC, developing and testing xApps that are interoperable across multiple RAN implementations is a time-consuming and technically challenging process. This is primarily caused by the complexity of the protocol of the E2 interface, which enables communication between the RIC and the RAN while providing a high degree of flexibility, with multiple Service Models (SMs) providing plug-and-play functionalities such as data reporting and RAN control. In this paper, we propose xDevSM, an open-source flexible framework for O-RAN service models, aimed at simplifying xApp development for the O-RAN Software Community (OSC) Near-RT RIC. xDevSM reduces the complexity of the xApp development process, allowing developers to focus on the control logic of their xApps and moving the logic of the E2 service models behind simple Application Programming Interfaces (APIs). We demonstrate the effectiveness of this framework by deploying and testing xApps across various RAN software platforms, including OpenAirInterface and srsRAN. This framework significantly facilitates the development and validation of solutions and algorithms on O-RAN networks, including the testing of data-driven solutions across multiple RAN implementations.

Link

Next-generation wireless systems, already widely deployed, are expected to become even more prevalent in the future, posing challenges in both environmental and economic terms. This paper focuses on improving the energy efficiency of intelligent and programmable Open Radio Access Network (RAN) systems through the near-real-time dynamic activation and deactivation of Base Station (BS) Radio Frequency (RF) frontends using Deep Reinforcement Learning (DRL) algorithms, i.e., Proximal Policy Optimization (PPO) and Deep Q-Network (DQN). These algorithms run on the RAN Intelligent Controllers (RICs), part of the Open RAN architecture, and are designed to make optimal network-level decisions based on historical data without compromising stability and performance. We leverage a rich set of Key Performance Measurements (KPMs), serving as the DRL state, to create a comprehensive representation of the RAN, alongside a set of actions that correspond to control decisions exercised on the RF frontends. We extend ns-O-RAN, an open-source, realistic simulator for 5G and Open RAN built on ns-3, to conduct an extensive data collection campaign. This enables us to train the agents offline with over 300,000 data points and subsequently evaluate the performance of the trained models. Results show that DRL agents improve energy efficiency by adapting to network conditions while minimally impacting the user experience. Additionally, we explore the trade-off between throughput and energy consumption offered by different DRL agent designs.

Link

This demo paper presents a dApp-based real-time spectrum sharing scenario where a 5th generation (5G) base station implementing the NR stack adapts its transmission and reception strategies based on the incumbent priority users in the Citizens Broadband Radio Service (CBRS) band. The dApp is responsible for obtaining relevant measurements from the Next Generation Node Base (gNB), running the spectrum sensing inference, and configuring the gNB with a control action upon detecting the primary incumbent user transmissions. This approach is built on dApps, which extend the O-RAN framework to the real-time and user-plane domains. Thus, it avoids the need for dedicated Spectrum Access Systems (SASs) in the CBRS band. The demonstration setup is based on the open-source 5G OpenAirInterface (OAI) framework, where we have implemented a dApp interfaced with a gNB and communicating with a Commercial Off-the-Shelf (COTS) User Equipment (UE) in an over-the-air wireless environment. When an incumbent user is actively transmitting, the dApp detects it and informs the gNB of the primary user's presence. The dApp also enforces a control policy that adapts the scheduling and transmission policy of the Radio Access Network (RAN). This demo provides valuable insights into the potential of dApp-based spectrum sensing within the O-RAN architecture in next-generation cellular networks.

Link

5G and beyond mobile networks will support heterogeneous use cases at an unprecedented scale, thus demanding automated control and optimization of network functionalities customized to the needs of individual users. Such fine-grained control of the Radio Access Network (RAN) is not possible with the current cellular architecture. To fill this gap, the Open RAN paradigm and its specification introduce an “open” architecture with abstractions that enable closed-loop control and provide data-driven, intelligent optimization of the RAN at the user level. This is obtained through custom RAN control applications (i.e., xApps) deployed on the near-real-time RAN Intelligent Controller (near-RT RIC) at the edge of the network. Despite these premises, as of today the research community lacks a sandbox to build data-driven xApps and create large-scale datasets for effective Artificial Intelligence (AI) training. In this paper, we address this by introducing ns-O-RAN, a software framework that integrates a real-world, production-grade near-RT RIC with a 3GPP-based simulated environment on ns-3, enabling at the same time the development of xApps, automated large-scale data collection, and testing of Deep Reinforcement Learning (DRL)-driven control policies for user-level optimization. In addition, we propose the first user-specific O-RAN Traffic Steering (TS) intelligent handover framework. It uses Random Ensemble Mixture (REM), a Conservative Q-learning (CQL) algorithm, combined with a state-of-the-art Convolutional Neural Network (CNN) architecture, to optimally assign a serving base station to each user in the network. Our TS xApp, trained with more than 40 million data points collected by ns-O-RAN, runs on the near-RT RIC and controls the ns-O-RAN base stations. We evaluate the performance on a large-scale deployment with up to 126 users and 8 base stations, showing that the xApp-based handover improves throughput and spectral efficiency by an average of 50% over traditional handover heuristics, with lower mobility overhead.

Link

Recent years have witnessed the Open Radio Access Network (RAN) paradigm transforming the fundamental ways cellular systems are deployed, managed, and optimized. This shift is led by concepts such as openness, softwarization, programmability, interoperability, and intelligence of the network, which have emerged in wired networks through Software-defined Networking (SDN) but lag behind in cellular systems. The realization of the Open RAN vision into practical architectures, intelligent data-driven control loops, and efficient software implementations, however, is a multifaceted challenge, which requires (i) datasets to train Artificial Intelligence (AI) and Machine Learning (ML) models; (ii) facilities to test models without disrupting production networks; (iii) continuous and automated validation of the RAN software; and (iv) significant testing and integration efforts. This paper is a tutorial on how Colosseum—the world’s largest wireless network emulator with hardware in the loop—can provide the research infrastructure and tools to fill the gap between the Open RAN vision and the deployment and commercialization of open and programmable networks. We describe how Colosseum implements an Open RAN digital twin through a high-fidelity Radio Frequency (RF) channel emulator and end-to-end softwarized O-RAN and 5G-compliant protocol stacks, thus allowing users to reproduce and experiment upon topologies representative of real-world cellular deployments. Then, we detail the twinning infrastructure of Colosseum, as well as the automation pipelines for RF and protocol stack twinning. Finally, we showcase a broad range of Open RAN use cases implemented on Colosseum, including the real-time connection between the digital twin and real-world networks, and the development, prototyping, and testing of AI/ML solutions for Open RAN.

Link

O-RAN is radically shifting how cellular networks are designed, deployed and optimized through network programmability, disaggregation, and virtualization. Specifically, RAN Intelligent Controllers (RICs) can orchestrate and optimize the Radio Access Network (RAN) operations, allowing fine-grained control over the network. RICs provide new approaches and solutions for classical use cases such as on-demand traffic steering, anomaly detection, and Quality of Service (QoS) management, with an optimization that can target single User Equipments (UEs), slices, cells, or entire base stations. Such control can leverage data-driven approaches, which rely on the O-RAN open interfaces to combine large-scale collection of RAN Key Performance Measurements (KPMs) and state-of-the-art Machine Learning (ML) routines executed in the RICs. While this comes with the potential to enable intelligent, programmable RANs, there are still significant challenges to be faced, primarily related to data collection at scale, development and testing of custom control logic for the RICs, and availability of Open RAN simulation and experimental tools for the research and development communities. To address this, we introduce ns-O-RAN, a software integration between a real-world near-real-time RIC and an ns-3 simulated RAN which provides a platform for researchers and telco operators to build, test and integrate xApps. ns-O-RAN extends a popular Open RAN experimental framework (OpenRAN Gym) with simulation capabilities that enable the generation of realistic datasets without the need for experimental infrastructure. We implement it as a new open-source ns-3 module that uses the E2 interface to connect different simulated 5G base stations with the RIC, enabling the exchange of E2 messages and RAN KPMs to be consumed by standard xApps. 
Furthermore, we test ns-O-RAN with the O-RAN Software Community (OSC) and OpenRAN Gym RICs, simplifying onboarding from a test environment to production with real telecom hardware, without requiring major reconfigurations. ns-O-RAN is open source and publicly available, together with quick-start tutorials and documentation.

Link

Bluetooth Low Energy (BLE) is rapidly becoming the de-facto standard for short-range wireless communications among resource-constrained wireless devices. Securing this technology and its networking capabilities is paramount, as their widespread use by Internet of Things (IoT) applications demands protection from malicious users. While its security features have remarkably improved over the years, the BLE technology is still prone to severe threats, creating a gap between the standard theoretical design and its implementation. Particularly, the BLE Mesh Profile (Bluetooth Mesh), which enables many-to-many communication, prompts an overall analysis of its security, to ensure that its use preserves the integrity and privacy of end users. This work surveys the state of the art of BLE security with an emphasis on Bluetooth Mesh, highlighting the threats that can still hinder their usage. We review the latest specifications in terms of link setup and authentication and describe attacks on both point-to-point and multicast networking. Our work also discusses solutions to mitigate and prevent attacks on the current standard, such as Intrusion Detection Systems, thus improving the general level of security of BLE systems.

Link

Bluetooth Low Energy (BLE) mesh networks are emerging as the new standard for short-burst communications. While the confidentiality of messages is guaranteed through standard encryption, little has been done to protect the overall security status of the network in case of attacks. Indeed, many classical methods of network analysis require a huge amount of data describing both legitimate and attack situations, and at the current state of the art there are no publicly available datasets containing attacks. To create a reliable mechanism of network analysis suited for BLE, in this paper we propose a machine learning Intrusion Detection System (IDS) based on pattern classification and recognition of the most common denial-of-service attacks affecting this kind of network. Moreover, to overcome the gap created by the absence of data, we present our ESP32-based data collection system, which allowed us to collect packets from the Network and Model layers of the BLE Mesh stack, together with the experiments conducted to gather the data needed to train the IDS. After describing the IDS and the data collector, we report the results obtained in our experimental setups and discuss the performance of the Intrusion Detection System, commenting on its strengths and on the aspects that require further analysis, and proposing new approaches based on the same classification model as future work.

Link
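
The core IDS idea — classifying traffic windows by their statistical pattern — can be sketched with synthetic data and a nearest-centroid classifier. The three features and the flooding-traffic statistics below are hypothetical, chosen only to make the normal/attack separation visible; the paper's classifier and real BLE Mesh features are not reproduced here:

```python
import numpy as np

# Hypothetical per-window features: [packets/sec, mean inter-arrival (s), relay ratio]
rng = np.random.default_rng(1)
normal = rng.normal([20, 0.05, 0.3], [5, 0.01, 0.05], size=(200, 3))
flood  = rng.normal([200, 0.005, 0.9], [30, 0.001, 0.05], size=(200, 3))  # DoS flooding
X = np.vstack([normal, flood])
y = np.array([0] * 200 + [1] * 200)  # 0 = legitimate, 1 = attack

# Standardize features, then keep one centroid per class.
mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd
centroids = np.array([Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)])

def predict(x):
    """Label each window by its nearest class centroid in standardized space."""
    z = (np.atleast_2d(x) - mu) / sd
    d = np.linalg.norm(z[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc = (predict(X) == y).mean()  # near-perfect on this synthetic data
```

A real IDS would replace the synthetic arrays with feature windows extracted from the ESP32 packet captures, and would likely use a richer model, but the workflow is the same.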

The introduction of new key features into the core specification of Bluetooth Low Energy (BLE) has increased its potential, thus paving the way to the development of new networking paradigms. The ability of a single node to participate in more than one piconet and to assume both the master and slave roles enables the formation of multi-hop networks that can be used in several emerging application scenarios. Additionally, the inherently low power consumption, at the cost of contained throughput, makes this technology appealing for the Internet of Things (IoT), where power-, memory-, and capacity-constrained devices exchange messages at relatively low data rates. In this paper, we devise a two-layer BLE mesh-based networking paradigm obtained by generalizing Android BE-MESH to hardware-independent sensor networks. Each node enforces a plug-and-play architecture that allows it to autonomously switch between the client and server roles, discover and connect to existing scatternets, and relay messages through them, making the network able to extend and self-reorganize in a distributed fashion. To make our implementation ready for IoT systems, we base it on the off-the-shelf ESP32 board. We describe the implemented functions as well as practical results proving the effectiveness of the framework in the tested configurations.

Link

We propose and discuss BE-Mesh (Bluetooth Low Energy Mesh network), a new paradigm for BLE that enables mesh networking among wirelessly interconnected devices, in both single-hop and multi-hop fashion. Starting from the classical Master/Slave paradigm of Bluetooth, we build two new layers on top of the BLE stack that allow the final user to quickly set up the desired network topology while hiding the complexity and low-level details of the BLE stack. As a proof of concept, we also prototype an open-source Android library [1] that implements our communication paradigm, and an Android application that allows the exchange of text messages across the mesh network. Last, we demonstrate how BE-Mesh enables Internet access sharing across the whole mesh from a single Internet-connected device.

Link