Education
- Ph.D. in Computer Engineering - Northeastern University
- M.Sc. in ICT Cybersystems - University of Padua (2021)
- B.Sc. in Computer Science - University of Padua (2019)
Research Interests
- AI for Cellular Network Systems
- Open RAN xApps and rApps
- Non-terrestrial 5G/6G UAV Networks
- 5G and beyond cellular networks
Matteo is a Ph.D. student in Computer Engineering at the Institute for the Wireless Internet of Things at Northeastern University, advised by Prof. Tommaso Melodia. He received his B.Sc. in Computer Science and his M.Sc. in ICT for Internet and Multimedia - Cybersystems from the University of Padua in 2019 and 2021, respectively; in 2020 he spent a winter semester as an exchange student at the Technical University of Denmark (DTU). He is proficient in Python and C++, with expertise in machine learning and data analysis for 5G/6G and O-RAN systems. He has developed applications and models to improve network performance, reduce operational costs, and optimize communication. His experience includes implementing Deep Reinforcement Learning models for energy savings in 5G networks and developing a time-series machine learning model to predict critical network alarms. He has also contributed to open-source digital twin frameworks and worked on optimizing UAV 5G communication. (See the Publications section for further details.)
Publications
Evaluating cellular systems, from 5G New Radio (NR) and 5G-Advanced to 6G, is challenging because performance emerges from the tight coupling of propagation, beam management, scheduling, and higher-layer interactions. System-level simulation is therefore indispensable, yet the vast majority of studies rely on statistical 3GPP channel models. These are well suited to capturing average behavior across many statistical realizations, but cannot reproduce site-specific phenomena such as corner diffraction, street-canyon blockage, or the deterministic line-of-sight conditions and angle-of-departure/arrival relationships that drive directional links. This paper extends 5G-LENA, an NR module for the system-level Network Simulator 3 (ns-3), with a trace-based channel model that processes the Multipath Components (MPCs) obtained from external ray tracers (e.g., the Sionna Ray Tracer (RT)) or measurement campaigns. Our module constructs frequency-domain channel matrices and feeds them to the existing Physical (PHY)/Medium Access Control (MAC) stack without any further modifications. The result is a geometry-based channel model that remains fully compatible with the standard 3GPP implementation in 5G-LENA, while delivering site-specific geometric fidelity. This new module provides a key building block toward Digital Twin (DT) capabilities by offering realistic site-specific channel modeling, unlocking studies that require site awareness, including beam management, blockage mitigation, and environment-aware sensing. We demonstrate its capabilities for precise beam-steering validation and end-to-end metric analysis. In both cases, the trace-driven engine exposes performance inflections that the statistical model does not exhibit, confirming its value for high-fidelity system-level cellular network research and as a step toward DT applications.
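The core operation of such a trace-based engine can be illustrated with a minimal sketch: given the delay and complex gain of each multipath component, the frequency-domain channel at a subcarrier is the phase-rotated sum of the MPC contributions, H(f) = Σ_k a_k e^{-j2πfτ_k}. The function below is an illustrative reconstruction of that formula only, not the actual 5G-LENA module; all names are hypothetical.

```python
import numpy as np

def freq_domain_channel(gains, delays, freqs):
    """Sum multipath components into a frequency-domain channel.

    gains:  complex amplitude a_k of each MPC (e.g., from a ray tracer)
    delays: propagation delay tau_k of each MPC, in seconds
    freqs:  subcarrier frequencies, in Hz
    Returns H(f) for each frequency: H(f) = sum_k a_k * exp(-j*2*pi*f*tau_k).
    """
    gains = np.asarray(gains, dtype=complex)
    delays = np.asarray(delays, dtype=float)
    freqs = np.asarray(freqs, dtype=float)
    # Outer product of frequencies and delays gives the phase of every
    # (subcarrier, MPC) pair; the matrix product sums over MPCs.
    phases = np.exp(-2j * np.pi * np.outer(freqs, delays))
    return phases @ gains

# Single line-of-sight path: |H(f)| equals the path gain at every subcarrier.
H = freq_domain_channel([1.0 + 0j], [100e-9], [3.5e9, 3.5e9 + 30e3])
```

A real engine would additionally carry per-MPC angles into antenna-array steering vectors to build full MIMO matrices; the scalar version above only shows the delay-to-frequency mapping.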
The application of small form-factor, 5G-enabled Unmanned Aerial Vehicles (UAVs) has recently gained significant interest in various aerial and Industry 4.0 applications. However, ensuring reliable, high-throughput, and low-latency 5G communication in aerial applications remains a critical and underexplored problem. This paper presents the 5th generation (5G) Aero, a compact UAV optimized for 5G connectivity, aimed at fulfilling stringent 3rd Generation Partnership Project (3GPP) requirements. We conduct a set of experiments in an indoor environment, evaluating the UAV's ability to establish high-throughput, low-latency communications in both Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) conditions. Our findings demonstrate that the 5G Aero meets the required 3GPP standards for Command and Control (C2) packet latency in both LoS and NLoS conditions and for video latency in LoS communications, and maintains acceptable latency levels for video transmission in NLoS conditions. Additionally, we show that the 5G module installed on the UAV introduces a negligible 1% decrease in flight time, demonstrating that 5G technologies can be integrated into commercial off-the-shelf UAVs with minimal impact on battery lifetime. This paper contributes to the literature by demonstrating the practical capabilities of current 5G networks to support advanced UAV operations in telecommunications, offering insights into potential enhancements and optimizations for UAV performance in 5G networks.
The growing performance demands and higher deployment densities of next-generation wireless systems emphasize the importance of adopting strategies to manage the energy efficiency of mobile networks. In this demo, we showcase a framework that enables research on Deep Reinforcement Learning (DRL) techniques for improving the energy efficiency of intelligent and programmable Open Radio Access Network (RAN) systems. Using the open-source simulator ns-O-RAN and the reinforcement learning environment Gymnasium, the framework makes it possible to train and evaluate DRL agents that dynamically control the activation and deactivation of cells in a 5G network. We show how to collect data for training and evaluate the impact of DRL on energy efficiency in a realistic 5G network scenario, including user mobility and handovers, a full protocol stack, and 3rd Generation Partnership Project (3GPP)-compliant channel models. The tool will be open-sourced upon acceptance of this paper, together with a tutorial for energy efficiency testing in ns-O-RAN.
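The control loop in this demo can be pictured with a toy environment following the Gymnasium reset/step interface: the agent observes per-cell load and chooses which cells to keep active. Everything below, including the traffic model and reward shape, is a hypothetical simplification for illustration and does not reproduce the ns-O-RAN framework.

```python
import random

class CellOnOffEnv:
    """Toy cell-activation environment mimicking the Gymnasium API.

    State:  per-cell offered load in [0, 1].
    Action: tuple of 0/1 flags, one per cell (0 = cell switched off).
    Reward: served traffic minus an energy cost per active cell.
    All quantities are illustrative placeholders, not ns-O-RAN values.
    """

    def __init__(self, n_cells=3, energy_cost=0.3, seed=0):
        self.n_cells = n_cells
        self.energy_cost = energy_cost
        self.rng = random.Random(seed)
        self.load = [0.0] * n_cells

    def reset(self):
        self.load = [self.rng.random() for _ in range(self.n_cells)]
        return list(self.load), {}          # observation, info

    def step(self, action):
        # Traffic on a switched-off cell is dropped here (a real system
        # would hand users over to neighbor cells; omitted for brevity).
        served = sum(l for l, a in zip(self.load, action) if a == 1)
        reward = served - self.energy_cost * sum(action)
        self.load = [self.rng.random() for _ in range(self.n_cells)]
        return list(self.load), reward, False, False, {}

env = CellOnOffEnv()
obs, _ = env.reset()
obs, reward, terminated, truncated, _ = env.step((1, 0, 1))
```

The five-tuple returned by `step` mirrors Gymnasium's `(observation, reward, terminated, truncated, info)` convention, so a sketch like this can be swapped behind any standard DRL training loop.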
Next-generation wireless systems, already widely deployed, are expected to become even more prevalent in the future, posing challenges in both environmental and economic terms. This paper focuses on improving the energy efficiency of intelligent and programmable Open Radio Access Network (RAN) systems through the near-real-time dynamic activation and deactivation of Base Station (BS) Radio Frequency (RF) frontends using Deep Reinforcement Learning (DRL) algorithms, i.e., Proximal Policy Optimization (PPO) and Deep Q-Network (DQN). These algorithms run on the RAN Intelligent Controllers (RICs), part of the Open RAN architecture, and are designed to make optimal network-level decisions based on historical data without compromising stability and performance. We leverage a rich set of Key Performance Measurements (KPMs), serving as the state for the DRL agents, to create a comprehensive representation of the RAN, alongside a set of actions that correspond to controls exercised on the RF frontends. We extend ns-O-RAN, an open-source, realistic simulator for 5G and Open RAN built on ns-3, to conduct an extensive data collection campaign. This enables us to train the agents offline with over 300,000 data points and subsequently evaluate the performance of the trained models. Results show that the DRL agents improve energy efficiency by adapting to network conditions while minimally impacting the user experience. Additionally, we explore the trade-off between throughput and energy consumption offered by different DRL agent designs.
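The learning rule behind DQN-style agents such as the ones above can be written in a few lines: the network is regressed toward the Bellman target r + γ max_a' Q(s', a'). The snippet below computes that target with plain arrays standing in for a neural network's outputs; it is a didactic sketch, not the paper's agent, and the discount factor is an assumed value.

```python
import numpy as np

GAMMA = 0.95  # discount factor (illustrative value, not from the paper)

def bellman_target(reward, next_q_values, done):
    """Target value for a DQN-style update: r + gamma * max_a' Q(s', a').

    next_q_values: Q(s', a') for every action in the next state.
    done: True if s' is terminal, in which case there is no bootstrap term.
    """
    if done:
        return reward
    return reward + GAMMA * float(np.max(next_q_values))

# One transition: reward 1.0, next-state action values for, say,
# {keep RF frontend on, switch it off}.
target = bellman_target(1.0, np.array([0.5, 0.2]), done=False)
# target = 1.0 + 0.95 * 0.5 = 1.475
```

In an offline setting like the paper's, targets of this form are computed over logged transitions (the 300,000 collected data points) rather than live interaction.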
Provided herein are methods and systems for beyond line of sight control of autonomous vehicles including a wireless network having an Open RAN infrastructure in communication with a core network and a MEC infrastructure, a MEC orchestrator deployed in the Open RAN infrastructure and including MEC computing nodes and a ground control station, a catalog in communication with the orchestrator and including MEC function apps for operating the MEC infrastructure and/or the core network, and Open RAN apps for operating the Open RAN infrastructure, wherein the orchestrator is configured to process data from the MEC infrastructure, the core network, the Open RAN infrastructure, or combinations thereof, and instantiate the MEC function apps in the MEC infrastructure and/or core network, instantiate the Open RAN apps in the Open RAN infrastructure, or combinations thereof to manage a plurality of vehicle functions and/or wireless network functions responsive to data from the vehicles.
Wireless Sensor Networks (WSNs) are pivotal in various applications, including precision agriculture, ecological surveillance, and the Internet of Things (IoT). However, the energy limitations of battery-powered nodes are a critical challenge, necessitating optimization of energy efficiency for maximal network lifetime. Existing strategies like duty cycling and Wake-up Radio (WuR) technology have been employed to mitigate energy consumption and latency, but they present challenges in scenarios with sparse deployments and short communication ranges. This paper introduces and evaluates the performance of Unmanned Aerial Vehicle (UAV)-assisted mobile data collection for WuR-enabled WSNs through physical and simulated experiments. We propose two one-hop UAV-based data collection strategies: a naïve strategy, which follows a predetermined fixed path, and an adaptive strategy, which optimizes the collection route based on recorded metadata. Our evaluation includes multiple experiment categories, measuring collection reliability, collection cycle duration, successful data collection time (latency), and node awake time to infer network lifetime. Results indicate that the adaptive strategy outperforms the naïve strategy across all metrics. Furthermore, WuR-based scenarios demonstrate lower latency and considerably lower node awake time compared to duty cycle-based scenarios, leading to several orders of magnitude longer network lifetime. Remarkably, our results suggest that the use of WuR technology alone achieves unprecedented network lifetimes, regardless of whether data collection paths are optimized. This underscores the significance of WuR as the technology of choice for all energy-critical WSN applications.
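The contrast between the two collection strategies can be sketched in a few lines: a fixed path visits nodes in a predefined order, while an adaptive planner reorders visits from recorded metadata. The sketch below uses a simple nearest-neighbor heuristic over node positions as an illustrative stand-in for the paper's route optimization; the function and data layout are assumptions, not the published algorithm.

```python
from math import dist

def adaptive_route(start, nodes):
    """Greedy nearest-neighbor visiting order over node coordinates.

    start: (x, y) takeoff position of the UAV.
    nodes: dict mapping node id -> (x, y) position (the recorded metadata).
    Returns the node ids in visiting order.
    """
    remaining = dict(nodes)
    pos, route = start, []
    while remaining:
        # Fly to the closest not-yet-visited node.
        nxt = min(remaining, key=lambda n: dist(pos, remaining[n]))
        route.append(nxt)
        pos = remaining.pop(nxt)
    return route

route = adaptive_route((0, 0), {"a": (5, 0), "b": (1, 0), "c": (2, 0)})
# → ["b", "c", "a"]
```

A naïve strategy would simply iterate over the node list in its stored order, regardless of geometry, which is exactly the baseline the adaptive planner is compared against.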
O-RAN is radically shifting how cellular networks are designed, deployed, and optimized through network programmability, disaggregation, and virtualization. Specifically, RAN Intelligent Controllers (RICs) can orchestrate and optimize the Radio Access Network (RAN) operations, allowing fine-grained control over the network. RICs provide new approaches and solutions for classical use cases such as on-demand traffic steering, anomaly detection, and Quality of Service (QoS) management, with an optimization that can target single User Equipments (UEs), slices, cells, or entire base stations. Such control can leverage data-driven approaches, which rely on the O-RAN open interfaces to combine large-scale collection of RAN Key Performance Measurements (KPMs) and state-of-the-art Machine Learning (ML) routines executed in the RICs. While this comes with the potential to enable intelligent, programmable RANs, there are still significant challenges to be faced, primarily related to data collection at scale, development and testing of custom control logic for the RICs, and availability of Open RAN simulation and experimental tools for the research and development communities. To address this, we introduce ns-O-RAN, a software integration between a real-world near-real-time RIC and an ns-3 simulated RAN, which provides a platform for researchers and telco operators to build, test, and integrate xApps. ns-O-RAN extends a popular Open RAN experimental framework (OpenRAN Gym) with simulation capabilities that enable the generation of realistic datasets without the need for experimental infrastructure. We implement it as a new open-source ns-3 module that uses the E2 interface to connect different simulated 5G base stations with the RIC, enabling the exchange of E2 messages and RAN KPMs to be consumed by standard xApps.
Furthermore, we test ns-O-RAN with the O-RAN Software Community (OSC) and OpenRAN Gym RICs, simplifying the onboarding of xApps from a test environment to production on real telecom hardware without requiring major reconfigurations. ns-O-RAN is open source and publicly available, together with quick-start tutorials and documentation.
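As a rough picture of what an xApp consuming such KPMs might do, the sketch below aggregates per-UE throughput reports into a per-cell average. The message layout (plain dicts with `ue_id`, `cell_id`, and `thp_mbps` fields) is entirely hypothetical and stands in for decoded E2 indication messages; it is not the E2SM-KPM wire format.

```python
from collections import defaultdict

def per_cell_throughput(indications):
    """Average per-UE downlink throughput per cell.

    indications: iterable of decoded KPM reports, modeled here as dicts
    with hypothetical 'cell_id' and 'thp_mbps' fields.
    Returns {cell_id: mean throughput in Mbps}.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for msg in indications:
        sums[msg["cell_id"]] += msg["thp_mbps"]
        counts[msg["cell_id"]] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

stats = per_cell_throughput([
    {"ue_id": 1, "cell_id": "gnb-1", "thp_mbps": 40.0},
    {"ue_id": 2, "cell_id": "gnb-1", "thp_mbps": 20.0},
    {"ue_id": 3, "cell_id": "gnb-2", "thp_mbps": 90.0},
])
# → {"gnb-1": 30.0, "gnb-2": 90.0}
```

In a deployed xApp, an aggregate like this would feed a control decision sent back over E2; here it only illustrates the consume-KPMs side of the loop.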