Education
- Ph.D. in Electrical Engineering from the University at Buffalo, the State University of New York in 2018
Research Interests
Pedram Johari is a Principal Research Scientist at the Institute for Wireless Internet of Things at Northeastern University (Boston, MA), working under the direction of Prof. Tommaso Melodia. He received his Ph.D. in Electrical Engineering from the University at Buffalo, the State University of New York in 2018, under the supervision of Prof. Josep M. Jornet. He served as the CTO of an IoT-tech startup in New York during 2018 and 2019, and as an Adjunct Instructor (and later, by courtesy appointment, as a Research Assistant Professor) at the University at Buffalo over the same period.
Publications
Evaluating cellular systems, from 5G New Radio (NR) and 5G-Advanced to 6G, is challenging because the performance emerges from the tight coupling of propagation, beam management, scheduling, and higher-layer interactions. System-level simulation is therefore indispensable, yet the vast majority of studies rely on the statistical 3GPP channel models. These are well suited to capture average behavior across many statistical realizations, but cannot reproduce site-specific phenomena such as corner diffraction, street-canyon blockage, or deterministic line-of-sight conditions and angle-of-departure/arrival relationships that drive directional links. This paper extends 5G-LENA, an NR module for the system-level Network Simulator 3 (ns-3), with a trace-based channel model that processes the Multipath Components (MPCs) obtained from external ray-tracers (e.g., Sionna Ray Tracer (RT)) or measurement campaigns. Our module constructs frequency-domain channel matrices and feeds them to the existing Physical (PHY)/Medium Access Control (MAC) stack without any further modifications. The result is a geometry-based channel model that remains fully compatible with the standard 3GPP implementation in 5G-LENA, while delivering site-specific geometric fidelity. This new module provides a key building block toward Digital Twin (DT) capabilities by offering realistic site-specific channel modeling, unlocking studies that require site awareness, including beam management, blockage mitigation, and environment-aware sensing. We demonstrate its capabilities for precise beam-steering validation and end-to-end metric analysis. In both cases, the trace-driven engine exposes performance inflections that the statistical model does not exhibit, confirming its value for high-fidelity system-level cellular networks research and as a step toward DT applications.
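As a rough illustration of the trace-based approach summarized above, the sketch below combines ray-traced Multipath Components into a frequency-domain MIMO channel matrix, i.e., a per-subcarrier sum of complex path gains with delay-induced phase rotations and array steering vectors. This is a minimal NumPy example under simplifying assumptions (uniform linear arrays, far-field, azimuth-only angles); the function names and parameters are illustrative and do not reflect the actual 5G-LENA/ns-3 implementation.

```python
# Minimal sketch (not the 5G-LENA code): build H[f] from ray-traced MPCs,
# each described by a complex gain, a delay, and departure/arrival angles.
import numpy as np

def ula_steering(num_ant: int, angle_rad: float, spacing: float = 0.5) -> np.ndarray:
    """Steering vector of a uniform linear array with half-wavelength spacing."""
    n = np.arange(num_ant)
    return np.exp(1j * 2 * np.pi * spacing * n * np.sin(angle_rad))

def mpc_to_freq_channel(gains, delays_s, aod_rad, aoa_rad,
                        subcarrier_freqs_hz, n_tx=4, n_rx=2):
    """Sum MPC contributions into H of shape (num_subcarriers, n_rx, n_tx)."""
    H = np.zeros((len(subcarrier_freqs_hz), n_rx, n_tx), dtype=complex)
    for a, tau, aod, aoa in zip(gains, delays_s, aod_rad, aoa_rad):
        spatial = np.outer(ula_steering(n_rx, aoa), ula_steering(n_tx, aod).conj())
        phase = np.exp(-1j * 2 * np.pi * np.asarray(subcarrier_freqs_hz) * tau)
        H += a * phase[:, None, None] * spatial[None, :, :]
    return H

# Toy example: two MPCs over a 64-subcarrier grid around 3.5 GHz.
freqs = 3.5e9 + np.linspace(-10e6, 10e6, 64)
H = mpc_to_freq_channel(gains=[1.0, 0.3j], delays_s=[100e-9, 350e-9],
                        aod_rad=[0.1, -0.4], aoa_rad=[0.2, 0.9],
                        subcarrier_freqs_hz=freqs)
print(H.shape)  # (64, 2, 4)
```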
Open Radio Access Networks (RANs) leverage disaggregated and programmable RAN functions and open interfaces to enable closed-loop, data-driven radio resource management. This is performed through custom intelligent applications on the RAN Intelligent Controllers (RICs), optimizing RAN policy scheduling, network slicing, user session management, and medium access control, among others. In this context, we have proposed dApps as a key extension of the O-RAN architecture into the real-time and user-plane domains. Deployed directly on RAN nodes, dApps access data otherwise unavailable to RICs due to privacy or timing constraints, enabling the execution of control actions within shorter time intervals. In this paper, we propose for the first time a reference architecture for dApps, defining their life cycle from deployment by the Service Management and Orchestration (SMO) to real-time control loop interactions with the RAN nodes where they are hosted. We introduce a new dApp interface, E3, along with an Application Protocol (AP) that supports structured message exchanges and extensible communication for various service models. By bridging E3 with the existing O-RAN E2 interface, we enable dApps, xApps, and rApps to coexist and coordinate. These applications can then collaborate on complex use cases and employ hierarchical control to resolve shared resource conflicts. Finally, we present and open-source a dApp framework based on OpenAirInterface (OAI). We benchmark its performance in two real-time control use cases, i.e., spectrum sharing and positioning in a 5th generation (5G) Next Generation Node B (gNB) scenario. Our experimental results show that standardized real-time control loops via dApps are feasible, achieving average control latency below 450 microseconds and allowing optimal use of shared spectral resources.
This demo presents SeizNet, an innovative system for predicting epileptic seizures that benefits from a multi-modal sensor network and utilizes Deep Learning (DL) techniques. Epilepsy affects approximately 65 million people worldwide, many of whom experience drug-resistant seizures. SeizNet aims to provide highly accurate alerts, allowing individuals to take preventive measures without being disturbed by false alarms. SeizNet uses a combination of data collected through either invasive (intracranial electroencephalogram (iEEG)) or non-invasive (electroencephalogram (EEG) and electrocardiogram (ECG)) sensors, processed by advanced DL algorithms that are optimized for real-time inference at the edge, ensuring privacy and minimizing data transmission. SeizNet achieves > 97% accuracy in seizure prediction while meeting the size and energy restrictions of an implantable device.
Accurate real-time channel modeling remains a significant challenge due to the complexity of traditional methods such as ray tracing and field measurements. AI-based techniques have emerged to address these limitations, offering rapid, precise predictions of channel properties by learning from ground-truth data. This paper introduces an innovative approach to real-time, high-fidelity propagation modeling through advanced deep learning. Our model integrates 3D geographical data and rough propagation estimates to generate precise path gain predictions. By positioning the transmitter centrally, we simplify the model and enhance its computational efficiency, making it amenable to larger scenarios. Our approach achieves a normalized Root Mean Squared Error of less than 0.035 dB over a 37,210 square meter area, processing in just 46 ms on a GPU and 183 ms on a CPU. This performance significantly surpasses traditional high-fidelity ray tracing methods, which require approximately three orders of magnitude more time. Additionally, the model's adaptability to real-world data highlights its potential to revolutionize wireless network design and optimization by enabling real-time creation of adaptive digital twins of real-world wireless scenarios in dynamic environments.
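For context on the headline figure above, the snippet below shows one common way to compute a normalized Root Mean Squared Error between a predicted and a ground-truth path-gain map. The exact normalization used in the paper is not specified in this summary; normalization by the dynamic range of the ground truth is assumed purely for illustration, and all values are toy data.

```python
# Hedged sketch: normalized RMSE between predicted and ground-truth path-gain
# maps (in dB). Min-max (dynamic-range) normalization is an assumption here.
import numpy as np

def normalized_rmse(pred_db: np.ndarray, truth_db: np.ndarray) -> float:
    rmse = np.sqrt(np.mean((pred_db - truth_db) ** 2))
    dynamic_range = truth_db.max() - truth_db.min()  # dB span of the ground truth
    return float(rmse / dynamic_range)

# Toy example on a synthetic 100 x 100 path-gain grid.
rng = np.random.default_rng(0)
truth = -80.0 + 10.0 * rng.standard_normal((100, 100))
pred = truth + rng.normal(scale=0.5, size=truth.shape)
print(f"nRMSE = {normalized_rmse(pred, truth):.4f}")
```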
This demo paper presents a dApp-based real-time spectrum sharing scenario where a 5th generation (5G) base station implementing the NR stack adapts its transmission and reception strategies based on the incumbent priority users in the Citizen Broadband Radio Service (CBRS) band. The dApp is responsible for obtaining relevant measurements from the Next Generation Node B (gNB), running the spectrum sensing inference, and configuring the gNB with a control action upon detecting primary incumbent user transmissions. This approach is built on dApps, which extend the O-RAN framework to the real-time and user-plane domains, and thus avoids the need for dedicated Spectrum Access Systems (SASs) in the CBRS band. The demonstration setup is based on the open-source 5G OpenAirInterface (OAI) framework, where we have implemented a dApp interfaced with a gNB and communicating with a Commercial Off-the-Shelf (COTS) User Equipment (UE) in an over-the-air wireless environment. When an incumbent user is actively transmitting, the dApp detects its presence and informs the gNB; it also enforces a control policy that adapts the scheduling and transmission policy of the Radio Access Network (RAN). This demo provides valuable insights into the potential of dApp-based spectrum sensing within the O-RAN architecture in next-generation cellular networks.
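The control-loop pattern described in this demo (obtain measurements from the gNB, run sensing inference, push back a control action) can be summarized with the toy skeleton below. This is not the OAI/E3 dApp API: the class and method names are hypothetical placeholders, and a trivial energy detector stands in for the ML-based spectrum sensing used in the actual demo.

```python
# Hypothetical skeleton of a sensing-and-control loop; names are placeholders.
import numpy as np

class ToyGnbInterface:
    """Stand-in for the gNB-facing interface: measurements in, control actions out."""
    def __init__(self, seed: int = 0):
        self.rng = np.random.default_rng(seed)

    def get_iq_samples(self, n: int = 1024) -> np.ndarray:
        return (self.rng.standard_normal(n) + 1j * self.rng.standard_normal(n)) / np.sqrt(2)

    def apply_control(self, vacate: bool) -> None:
        print("control action:", "vacate incumbent channel" if vacate else "keep current allocation")

def incumbent_detected(iq: np.ndarray, threshold_db: float = 3.0) -> bool:
    # Trivial energy detector used only for illustration.
    power_db = 10 * np.log10(np.mean(np.abs(iq) ** 2))
    return power_db > threshold_db

def control_loop_once(gnb: ToyGnbInterface) -> None:
    iq = gnb.get_iq_samples()          # 1) obtain measurements
    vacate = incumbent_detected(iq)    # 2) run sensing inference
    gnb.apply_control(vacate)          # 3) issue control action

control_loop_once(ToyGnbInterface())
```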
Softwarized and programmable Radio Access Networks (RANs) come with virtualized and disaggregated components, increasing the supply chain robustness and the flexibility and dynamism of network deployments. This is a key tenet of Open RAN, with open interfaces across disaggregated components specified by the O-RAN ALLIANCE. It is mandatory, however, to validate that all components are compliant with the specifications and can successfully interoperate, without performance gaps with respect to traditional, monolithic appliances. Open Testing & Integration Centers (OTICs) are entities that can verify such interoperability and adherence to the standard through rigorous testing. However, how to design, instrument, and deploy an OTIC that can offer testing for multiple tenants and heterogeneous devices, and that is ready to support automated testing, is still an open challenge. In this paper, we introduce a blueprint for a programmable OTIC testing infrastructure, based on the design and deployment of the Open6G OTIC at Northeastern University, Boston, and provide insights on technical challenges and solutions for O-RAN testing at scale.
Wireless network emulators are being increasingly used for developing and evaluating new solutions for Next Generation (NextG) wireless networks. However, the reliability of the solutions tested on emulation platforms heavily depends on the precision of the emulation process, model design, and parameter settings. To minimize the impact of errors in emulation models, in this work we apply the concept of Digital Twin (DT) to large-scale wireless systems. Specifically, we demonstrate the use of Colosseum, the world's largest wireless network emulator with hardware-in-the-loop, as a DT for NextG experimental wireless research at scale. As a proof of concept, we leverage the Channel emulation scenario generator and Sounder Toolchain (CaST) to create the DT of a publicly available over-the-air indoor testbed for sub-6 GHz research, namely Arena. Then, we validate the Colosseum DT through experimental campaigns on emulated wireless environments, including scenarios concerning cellular networks and jamming of Wi-Fi nodes, on both the real and digital systems. Our experiments show that the DT is able to provide a faithful representation of the real-world setup, obtaining an average similarity of up to 0.987 in throughput and 0.982 in Signal to Interference plus Noise Ratio (SINR).
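As a rough illustration of the kind of comparison behind the similarity figures above, the snippet below scores a KPI trace (e.g., throughput over time) measured on the real testbed against the corresponding trace from its digital twin. The paper's exact similarity metric is not specified in this summary; cosine similarity between time-aligned traces is used here purely as an illustrative stand-in, with toy values.

```python
# Hedged sketch: similarity between a real-world KPI trace and its digital-twin
# counterpart, using cosine similarity as an illustrative (assumed) metric.
import numpy as np

def trace_similarity(real: np.ndarray, twin: np.ndarray) -> float:
    real, twin = np.asarray(real, dtype=float), np.asarray(twin, dtype=float)
    return float(np.dot(real, twin) / (np.linalg.norm(real) * np.linalg.norm(twin)))

real_tput = np.array([12.1, 11.8, 12.4, 12.0, 11.9])  # Mbps, real testbed (toy values)
twin_tput = np.array([12.0, 11.9, 12.2, 12.1, 11.7])  # Mbps, digital twin (toy values)
print(f"throughput similarity = {trace_similarity(real_tput, twin_tput):.3f}")
```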
Digital twins are now a staple of wireless network design and evolution. Creating an accurate digital copy of a real system offers numerous opportunities to study and analyze its performance and issues. It also allows designing and testing new solutions in a risk-free environment, and applying them back to the real system after validation. A candidate technology that will heavily rely on digital twins for design and deployment is 6G, which promises robust and ubiquitous networks for eXtended Reality (XR) and immersive communications solutions. In this paper, we present BostonTwin, a dataset that merges a high-fidelity 3D model of the city of Boston, MA, with existing geospatial data on cellular base station deployments, in a ray-tracing-ready format. Thus, BostonTwin not only enables instantaneous rendering and programmatic access to the building models, but also allows for an accurate representation of the electromagnetic propagation environment in the real-world city of Boston. The level of detail and accuracy of this characterization is crucial to designing 6G networks that can support the strict requirements of sensitive and high-bandwidth applications, such as XR and immersive communication.
This paper introduces an innovative framework designed for progressive (granular in time to onset) prediction of seizures through the utilization of a Deep Learning (DL) methodology based on non-invasive multimodal sensor networks. Epilepsy, a debilitating neurological condition, affects an estimated 65 million individuals globally, with a substantial proportion facing drug-resistant epilepsy despite pharmacological interventions. To address this challenge, we advocate for predictive systems that provide timely alerts to individuals at risk, enabling them to take precautionary actions. Our framework employs advanced DL techniques and uses personalized data from a network of non-invasive electroencephalogram (EEG) and electrocardiogram (ECG) sensors, thereby enhancing prediction accuracy. The algorithms are optimized for real-time processing on edge devices, mitigating privacy concerns and minimizing data transmission overhead inherent in cloud-based solutions, ultimately preserving battery energy. Additionally, our system predicts the countdown time to seizures (with 15-minute intervals up to an hour prior to the onset), offering critical lead time for preventive actions. Our multimodal model achieves 95% sensitivity, 98% specificity, and 97% accuracy, averaged among 29 patients.
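For readers unfamiliar with the three figures of merit quoted above, the short snippet below computes sensitivity, specificity, and accuracy from a binary confusion matrix of predicted preictal vs. interictal windows. The counts are toy values, not the paper's data.

```python
# Standard definitions of the reported metrics, applied to toy confusion-matrix counts.
def seizure_prediction_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),                 # seizures correctly predicted
        "specificity": tn / (tn + fp),                 # non-seizure windows correctly rejected
        "accuracy": (tp + tn) / (tp + tn + fp + fn),   # overall fraction correct
    }

print(seizure_prediction_metrics(tp=95, fn=5, tn=98, fp=2))
```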
In this paper, we introduce SeizNet, a closed-loop system for predicting epileptic seizures through the use of Deep Learning (DL) methods and implantable sensor networks. While pharmacological treatment is effective for some epilepsy patients (with ~65M people affected worldwide), one out of three suffers from drug-resistant epilepsy. To alleviate the impact of seizures, predictive systems have been developed that can notify such patients of an impending seizure, allowing them to take precautionary measures. SeizNet leverages DL techniques and combines data from multiple recordings, specifically intracranial electroencephalogram (iEEG) and electrocardiogram (ECG) sensors, which can significantly improve the specificity of seizure prediction while preserving very high levels of sensitivity. SeizNet's DL algorithms are designed for efficient real-time execution at the edge, minimizing data privacy concerns, data transmission overhead, and power inefficiencies associated with cloud-based solutions. Our results indicate that SeizNet outperforms traditional single-modality and non-personalized prediction systems in all metrics, achieving up to 99% accuracy in predicting seizures, offering a promising new avenue in refractory epilepsy treatment.
Recent years have witnessed the Open Radio Access Network (RAN) paradigm transforming the fundamental ways cellular systems are deployed, managed, and optimized. This shift is led by concepts such as openness, softwarization, programmability, interoperability, and intelligence of the network, which have emerged in wired networks through Software-defined Networking (SDN) but lag behind in cellular systems. The realization of the Open RAN vision into practical architectures, intelligent data-driven control loops, and efficient software implementations, however, is a multifaceted challenge, which requires (i) datasets to train Artificial Intelligence (AI) and Machine Learning (ML) models; (ii) facilities to test models without disrupting production networks; (iii) continuous and automated validation of the RAN software; and (iv) significant testing and integration efforts. This paper is a tutorial on how Colosseum, the world's largest wireless network emulator with hardware-in-the-loop, can provide the research infrastructure and tools to fill the gap between the Open RAN vision and the deployment and commercialization of open and programmable networks. We describe how Colosseum implements an Open RAN digital twin through a high-fidelity Radio Frequency (RF) channel emulator and end-to-end softwarized O-RAN and 5G-compliant protocol stacks, thus allowing users to reproduce and experiment upon topologies representative of real-world cellular deployments. Then, we detail the twinning infrastructure of Colosseum, as well as the automation pipelines for RF and protocol stack twinning. Finally, we showcase a broad range of Open RAN use cases implemented on Colosseum, including the real-time connection between the digital twin and real-world networks, and the development, prototyping, and testing of AI/ML solutions for Open RAN.
The ever-growing number of wireless communication devices and technologies demands spectrum-sharing techniques. Effective coexistence management is crucial to avoid harmful interference, especially with critical systems like nautical and aerial radars, in which incumbent radios operate mission-critical communication links. In this demo, we showcase a framework that leverages Colosseum, the world's largest wireless network emulator with hardware-in-the-loop, as a playground to study commercial radar waveforms coexisting with a cellular network in the CBRS band in complex environments. We create an ad-hoc high-fidelity spectrum-sharing scenario for this purpose. We deploy a cellular network to collect IQ samples with the aim of training an ML agent that runs at the base station. The agent has the goal of detecting incumbent radar transmissions and vacating the cellular bandwidth to avoid interfering with the radar operations. Our experimental results show an average detection accuracy of 88%, with an average detection time of 137 ms.
Because of the ever-growing number of wireless consumers, spectrum-sharing techniques have become increasingly common in the wireless ecosystem, with the main goal of avoiding harmful interference to coexisting communication systems. This is even more important when considering systems, such as nautical and aerial fleet radars, in which incumbent radios operate mission-critical communication links. To study, develop, and validate these solutions, adequate platforms, such as the Colosseum wireless network emulator, are key as they enable experimentation with spectrum-sharing heterogeneous radio technologies in controlled environments. In this work, we demonstrate how Colosseum can be used to twin commercial radio waveforms to evaluate the coexistence of such technologies in complex wireless propagation environments. To this aim, we create a high-fidelity spectrum-sharing scenario on Colosseum to evaluate the impact of twinned commercial radar waveforms on a cellular network operating in the CBRS band. Then, we leverage IQ samples collected on the testbed to train a machine learning agent that runs at the base station to detect the presence of incumbent radar transmissions and vacate the bandwidth to avoid causing them harmful interference. Our results show an average detection accuracy of 88%, with accuracy above 90% in SNR regimes above 0 dB and SINR regimes above -20 dB, and with an average detection time of 137 ms.
Jamming attacks have plagued wireless communication systems and will continue to do so going forward with technological advances. These attacks fall under the category of Electronic Warfare (EW), a continuously growing area in both attack and defense of the electromagnetic spectrum, with one subcategory being electronic attacks (EA). Jamming attacks fall under this specific subcategory of EW, as they comprise adversarial signals that attempt to disrupt, deny, degrade, destroy, or deceive legitimate signals in the electromagnetic spectrum. While jamming is not going away, recent research advances have started to gain the upper hand against these attacks by leveraging new methods and techniques, such as machine learning. However, testing such jamming solutions on a wide and realistic scale is a daunting task due to strict regulations on spectrum emissions. In this paper, we introduce eSWORD (emulation (of) Signal Warfare On Radio-frequency Devices), the first large-scale framework that allows users to safely conduct real-time and controlled jamming experiments with hardware-in-the-loop. This is done by integrating METEOR, an EW threat-emulating software developed by the MITRE Corporation, into the Colosseum wireless network emulator, which enables large-scale experiments with up to 49 software-defined radio nodes. We compare the performance of eSWORD with that of real-world jamming systems by using an over-the-air wireless testbed (taking appropriate safety measures when conducting experiments). Our experimental results demonstrate that eSWORD achieves up to 98% accuracy in following the throughput, signal-to-interference-plus-noise ratio, and link status patterns of real-world jamming experiments, testifying to the fidelity of the emulated eSWORD setup.
Large-scale wireless testbeds are being increasingly used in developing and evaluating new solutions for next generation wireless networks. Among others, high-fidelity FPGA-based emulation platforms have unique capabilities for faithfully modeling real-world wireless environments in real-time and at scale, while guaranteeing repeatability. However, the reliability of the solutions tested on emulation platforms heavily depends on the precision of the emulation process, which is often overlooked. To address this unmet need in wireless network emulator-based experiments, in this paper we present CaST, a Channel emulation scenario generator and Sounder Toolchain for creating and characterizing realistic wireless network scenarios with high accuracy. CaST consists of (i) a framework for creating mobile wireless scenarios from ray-tracing models for FPGA-based emulation platforms, and (ii) a containerized Software Defined Radio-based channel sounder to precisely characterize the emulated channels. We demonstrate the use of CaST by designing, deploying, and validating multi-path mobile scenarios on Colosseum, the world's largest wireless network emulator. Results show that CaST achieves ≤ 20 ns accuracy in sounding Channel Impulse Response tap delays, and 0.5 dB accuracy in measuring tap gains.
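The sounding step above boils down to estimating when each channel tap arrives. The toy snippet below illustrates the basic idea with a cross-correlation-based delay estimator; it is a simplified stand-in under assumed parameters (sample rate, sequence length), not the CaST SDR sounder itself.

```python
# Simplified sketch of CIR tap-delay estimation: cross-correlate a known
# sounding sequence with the received signal and pick the strongest peaks.
import numpy as np

def estimate_tap_delays(tx: np.ndarray, rx: np.ndarray, fs_hz: float, n_taps: int):
    """Return the delays (seconds) of the n_taps strongest correlation peaks."""
    corr = np.correlate(rx, tx, mode="full")[len(tx) - 1:]   # keep non-negative lags
    peak_lags = np.argsort(np.abs(corr))[::-1][:n_taps]
    return np.sort(peak_lags) / fs_hz

# Toy example: a two-tap channel (0 ns and 250 ns) at 100 MS/s.
fs = 100e6
rng = np.random.default_rng(1)
tx = rng.choice([-1.0, 1.0], size=4096)          # pseudo-random sounding sequence
cir = np.zeros(64)
cir[0], cir[25] = 1.0, 0.4                       # taps at samples 0 and 25
rx = np.convolve(tx, cir)
print(estimate_tap_delays(tx, rx, fs, n_taps=2))  # ~[0.0, 2.5e-07]
```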
Colosseum is an open-access and publicly-available large-scale wireless testbed for experimental research via virtualized and softwarized waveforms and protocol stacks on a fully programmable, “white-box” platform. Through 256 state-of-the-art software-defined radios and a massive channel emulator core, Colosseum can model virtually any scenario, enabling the design, development, and testing of solutions at scale in a variety of deployments and channel conditions. These Colosseum radio-frequency scenarios are reproduced through high-fidelity FPGA-based emulation with finite-impulse response filters. The filters model the taps of the desired wireless channels and apply them to the signals generated by the radio nodes, faithfully mimicking the conditions of real-world wireless environments. In this paper, we introduce Colosseum as a testbed that is for the first time open to the research community. We describe the architecture of Colosseum and its experimentation and emulation capabilities. We then demonstrate the effectiveness of Colosseum for experimental research at scale through exemplary use cases, including prevailing wireless technologies (e.g., cellular and Wi-Fi) in spectrum sharing and unmanned aerial vehicle scenarios. A roadmap for future Colosseum updates concludes the paper.
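As a back-of-the-envelope illustration of the FIR-based channel emulation principle described above, the snippet below applies a small set of complex channel taps to a transmitted burst via convolution and adds noise. The real system performs this on FPGAs in real time across many radio pairs; this is only a toy NumPy analogue with made-up tap values.

```python
# Toy analogue of FIR-based channel emulation: convolve the transmitted samples
# with complex channel taps and add receiver noise (illustrative values only).
import numpy as np

def emulate_channel(tx_samples: np.ndarray, fir_taps: np.ndarray,
                    noise_std: float = 0.01, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    faded = np.convolve(tx_samples, fir_taps)                    # multipath via FIR filtering
    noise = noise_std * (rng.standard_normal(faded.shape)
                         + 1j * rng.standard_normal(faded.shape)) / np.sqrt(2)
    return faded + noise

# Example: a QPSK-like burst through a 3-tap channel.
rng = np.random.default_rng(2)
tx = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=256))
taps = np.array([1.0, 0.5 * np.exp(1j * 0.3), 0.2 * np.exp(-1j * 1.1)])
rx = emulate_channel(tx, taps)
print(rx.shape)  # (258,)
```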
Practical experimentation and prototyping are core steps in the development of any wireless technology. Oftentimes, however, this crucial step is confined to small laboratory setups that do not capture the scale of commercial deployments and do not ensure result reproducibility and replicability, or it is skipped altogether for lack of suitable hardware and testing facilities. Recent years have seen the development of publicly-available testing platforms for wireless experimentation at scale. Examples include the testbeds of the PAWR program and Colosseum, the world's largest wireless network emulator. With its 256 software-defined radios, 24 racks of powerful compute servers, and a first-of-its-kind channel emulator, Colosseum allows users to prototype wireless solutions at scale, and guarantees reproducibility and replicability of results. This tutorial provides an overview of the Colosseum platform. We describe the architecture and components of the testbed as a whole, and then showcase how to run practical experiments in diverse scenarios with heterogeneous wireless technologies (e.g., Wi-Fi and cellular). We also emphasize how Colosseum experiments can be ported to different testing platforms, facilitating full-cycle experimental wireless research: design, experiments and tests at scale in a fully controlled and observable environment, and testing in the field. The tutorial concludes with considerations on the flexible future of Colosseum, focusing on its planned extension to emulate larger scenarios and channels at higher frequency bands (mmWave).
Recent years have seen the introduction of large-scale platforms for experimental wireless research. These platforms, which include testbeds like those of the PAWR program and emulators like Colosseum, allow researchers to prototype and test their solutions in a sound yet realistic wireless environment before actual deployment. Emulators, in particular, enable wireless experiments that are not site-specific, unlike those on real testbeds. Researchers can choose among different radio frequency (RF) scenarios for real-time emulation of a vast variety of situations, with different numbers of users, RF bandwidths, antenna counts, hardware requirements, etc. Although emulators are very powerful, in that they can reproduce virtually any real-world deployment, emulated scenarios are only as useful as the accuracy with which they reproduce the targeted wireless channel and environment. Achieving emulation accuracy is particularly challenging, especially for experiments at scale, for which emulators require considerable amounts of computational resources. In this paper, we propose a framework to create RF scenarios for emulators like Colosseum starting from rich forms of input, like those obtained by measurements through radio equipment or via software (e.g., ray-tracers and electromagnetic field solvers). Our framework optimally scales down the large set of RF data in input to the fewer parameters allowed by the emulator by using efficient clustering techniques and channel impulse response re-sampling. We showcase our method by generating wireless scenarios for the Colosseum network emulator using Remcom's Wireless InSite, a commercial-grade ray-tracer that produces key characteristics of the wireless channel. Examples are provided for line-of-sight and non-line-of-sight scenarios on portions of the Northeastern University main campus.
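To make the scaling-down step above concrete, the sketch below condenses a large set of ray-traced paths into the handful of FIR taps an emulator can realize, by clustering path delays and merging the complex gains within each cluster. This is only an illustrative simplification of the idea, not the paper's actual algorithm or parameterization.

```python
# Illustrative sketch: condense many ray-traced paths into a few emulator taps
# via 1-D k-means on delay, then coherently merge the gains in each cluster.
import numpy as np

def condense_paths(delays_s, gains, num_taps: int, iters: int = 50):
    delays_s = np.asarray(delays_s, dtype=float)
    gains = np.asarray(gains, dtype=complex)
    centers = np.quantile(delays_s, np.linspace(0, 1, num_taps))  # quantile init
    for _ in range(iters):
        labels = np.argmin(np.abs(delays_s[:, None] - centers[None, :]), axis=1)
        for k in range(num_taps):
            if np.any(labels == k):
                centers[k] = delays_s[labels == k].mean()
    tap_gains = np.array([gains[labels == k].sum() for k in range(num_taps)])
    return centers, tap_gains

# Toy example: 200 ray-traced paths condensed into 4 emulator taps.
rng = np.random.default_rng(3)
delays = np.sort(rng.uniform(0.0, 1e-6, 200))
gains = rng.standard_normal(200) * np.exp(1j * rng.uniform(0, 2 * np.pi, 200))
tap_delays, tap_gains = condense_paths(delays, gains, num_taps=4)
print(tap_delays, np.abs(tap_gains))
```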