Liyang Zhang

Ph.D. in Electrical and Computer Engineering, 2019

Education

  • Ph.D. in Electrical and Computer Engineering - Northeastern University (2019)
  • M.S. in Electrical Engineering - State University of New York at Buffalo (2014)
  • B.S. in Electrical Engineering - Tsinghua University, Beijing, China (2008)

Biography

Liyang Zhang earned his Ph.D. in Electrical and Computer Engineering from Northeastern University in 2019. He received his M.S. in Electrical Engineering from the State University of New York at Buffalo in 2014 and his B.S. in Electrical Engineering from Tsinghua University, Beijing, China, in 2008. From 2008 to 2011, he was an engineer at Infinova Technology in Shenzhen, China, and from November 2016 to May 2017 he was an intern at the Huawei Research Center in Santa Clara, CA. At Northeastern, he worked in the Wireless Networks and Embedded Systems Laboratory under Professor Tommaso Melodia. His research interests included wireless security, wireless communication and networking theory, the Internet of Things, and machine learning. After graduation, he joined Google’s Network Infrastructure Team as a Software Engineer.

Publications

2025

Journals and Magazines

5G and beyond cellular systems embrace the disaggregation of Radio Access Network (RAN) components, exemplified by the evolution of the fronthaul (FH) connection between cellular baseband and radio unit equipment. Synchronization over the FH is pivotal for reliable 5G services. In recent years, there has been a push to move these links to an Ethernet-based packet network topology, leveraging existing standards and ongoing research for Time-Sensitive Networking (TSN). However, TSN standards, such as the Precision Time Protocol (PTP), focus on performance with little to no concern for security. This increases the exposure of the open FH to security risks. Attacks targeting synchronization mechanisms pose significant threats, potentially disrupting 5G networks and impairing connectivity. In this article, we demonstrate the impact of successful spoofing and replay attacks against PTP synchronization. We show how a spoofing attack can cause a production-ready, O-RAN- and 5G-compliant private cellular base station to catastrophically fail within 2 seconds of the attack, necessitating manual intervention to restore full network operations. To counter this, we design a Machine Learning (ML)-based monitoring solution capable of detecting various malicious attacks with over 97.5% accuracy.
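
A minimal sketch of how such an ML-based synchronization monitor could look, assuming a hypothetical feature set (per-window statistics of PTP offset, path delay, and Sync inter-arrival times) and fully synthetic traffic; the paper's actual features and model are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthesize_window(attack: bool, n_msgs: int = 64) -> np.ndarray:
    """Summarize one window of PTP Sync messages as a feature vector:
    [mean offset, offset variance, mean path delay, inter-arrival jitter].
    A spoofed or replayed master shifts the offset statistics."""
    offset = rng.normal(0.0, 50e-9, n_msgs)       # ns-scale clock offsets
    delay = rng.normal(5e-6, 100e-9, n_msgs)      # ~5 us fronthaul path delay
    arrival = rng.normal(62.5e-3, 1e-4, n_msgs)   # 16 Sync messages/s nominal
    if attack:
        offset += rng.normal(2e-6, 5e-7, n_msgs)  # injected timing error
    return np.array([offset.mean(), offset.var(), delay.mean(), arrival.std()])

X = np.stack([synthesize_window(a) for a in [False] * 500 + [True] * 500])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"detection accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

In a deployment, the feature windows would be computed from the live PTP stream on the fronthaul, and the classifier would be trained on labeled traces of benign and attack traffic.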

Conference Papers

Following state-of-the-art research results, which showed the potential for significant performance gains by applying AI/ML techniques in the cellular Radio Access Network (RAN), the wireless industry is now broadly pushing for the adoption of AI in 5G and future 6G technology. Despite this enthusiasm, AI-based wireless systems remain largely untested in the field. Common simulation methods for generating datasets for AI model training suffer from a “reality gap,” and, as a result, the performance of these simulation-trained models may not carry over to practical cellular systems. Additionally, the cost and complexity of developing high-performance proof-of-concept implementations present major hurdles for evaluating AI wireless systems in the field. In this work, we introduce a methodology that aims to address the challenges of bringing AI to real networks. We discuss how detailed Digital Twin simulations may be employed for training site-specific AI Physical (PHY) layer functions. We further present a powerful testbed for AI-RAN research and demonstrate how it enables rapid prototyping, field testing, and data collection. Finally, we evaluate an AI channel estimation algorithm over the air with a commercial UE, demonstrating that real-world throughput gains of up to 40% are achievable by incorporating AI in the physical layer.
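
As a toy illustration of a learned PHY-layer function, the sketch below trains a small regressor to denoise least-squares pilot estimates of a synthetic multipath channel; the model size, pilot count, and channel statistics are illustrative assumptions, not the setup evaluated in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_pilots, snr_db, n_samples = 16, 10, 4000
noise_std = 10 ** (-snr_db / 20)

def draw_channel() -> np.ndarray:
    """4-tap Rayleigh channel; return its frequency response at the pilots."""
    taps = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
    return np.fft.fft(taps, n_pilots)

def to_real(Z: np.ndarray) -> np.ndarray:
    """Stack real and imaginary parts so sklearn sees real-valued vectors."""
    return np.hstack([Z.real, Z.imag])

H = np.stack([draw_channel() for _ in range(n_samples)])
noise = (rng.normal(size=H.shape) + 1j * rng.normal(size=H.shape)) / np.sqrt(2)
H_ls = H + noise_std * noise            # least-squares estimate = truth + noise

X, Y = to_real(H_ls), to_real(H)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=800, random_state=1)
model.fit(X[:3000], Y[:3000])           # train on the first 3000 channels

pred = model.predict(X[3000:])
H_hat = pred[:, :n_pilots] + 1j * pred[:, n_pilots:]
print(f"LS MSE:      {np.mean(np.abs(H_ls[3000:] - H[3000:]) ** 2):.4f}")
print(f"learned MSE: {np.mean(np.abs(H_hat - H[3000:]) ** 2):.4f}")
```

The learned estimator can beat plain LS here because the 4-tap channel is strongly correlated across the 16 pilot subcarriers; exploiting site-specific structure of this kind is what training on detailed Digital Twin data would enable.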

2019

Book Chapters

Wireless networks require fast-acting, effective, and efficient security mechanisms able to tackle unpredictable, dynamic, and stealthy attacks. In recent years, we have seen the steady rise of technologies based on machine learning and software-defined radios, which provide the necessary tools to address existing and future security threats without the need for direct human-in-the-loop intervention. On the other hand, these techniques have so far been used in an ad hoc fashion, without any tight interaction between the attack detection and mitigation phases. In this chapter, we propose and discuss a Learning-based Wireless Security (LeWiS) framework that provides a closed-loop approach to the problem of cross-layer wireless security. Along with discussing the LeWiS framework, we also survey recent advances in cross-layer wireless security.
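
A minimal sketch of the closed detection-mitigation loop such a framework advocates, with a simple energy-threshold detector and a channel-hopping policy standing in for the learning-based components; the channel model and threshold are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N_CHANNELS, THRESHOLD_DBM = 8, -80.0    # assumed detection threshold

def sense(channel: int, jammed: set) -> float:
    """Return measured power (dBm) on a channel; jammed channels run hot."""
    noise_floor = -95.0 + 3.0 * rng.standard_normal()
    return noise_floor + (30.0 if channel in jammed else 0.0)

current, jammed = 0, {0, 3}             # channels 0 and 3 are under attack
for step in range(5):
    power = sense(current, jammed)
    if power > THRESHOLD_DBM:           # detection: interference above threshold
        # mitigation: hop to the quietest of the remaining channels
        candidates = [c for c in range(N_CHANNELS) if c != current]
        current = min(candidates, key=lambda c: sense(c, jammed))
        print(f"step {step}: {power:.1f} dBm, attack flagged, hop to channel {current}")
    else:
        print(f"step {step}: {power:.1f} dBm on channel {current}, nominal")
```

The point of the closed loop is that detection output feeds the mitigation policy directly, rather than the two phases being designed and run in isolation.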
