“AI-Driven Cognitive Radios for Next-Generation Wireless”
Thursday, Nov. 14 at 1:00 p.m.
Artificial intelligence (AI) based on the “second wave” of machine learning (ML), which exploits big-data-driven “deep learning” models built on dense neural networks, is rapidly disrupting massive application spaces (e.g., data science, web search, media, and social networks), but has not yet been coherently applied to wireless networks. For example, while state-of-the-art software-defined radios (SDRs) can efficiently implement cognitive radio (CR) algorithms, the cognition algorithms in such vanilla SDRs are largely limited to white-space detection via spectrum sensing. This talk will introduce our attempts to close this significant hardware-AI gap by integrating cutting-edge AI algorithms with adaptive CR hardware, resulting in AI-driven CR receiver architectures. The proposed CR receivers closely integrate deep AI with programmable RF hardware to autonomously adapt both their RF transfer functions and their SDR algorithms. This hardware-in-the-loop architecture is particularly well suited to finding energy-efficient solutions to the hard problem of dynamic spectrum access (DSA) on multiple spatial, spectral, and temporal scales, thus providing unprecedented spectral efficiency for next-generation wireless networks.
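To give a concrete feel for the white-space sensing baseline that conventional SDRs implement, here is a minimal energy-detection sketch. This is illustrative only; the function name, threshold, and parameters are invented here and are not taken from the talk.

```python
import numpy as np

def detect_white_space(samples, n_bins=64, threshold_db=10.0):
    """Toy energy detector: flag spectrum bins whose power exceeds the
    median noise floor by more than threshold_db as occupied; the
    remaining bins are candidate white space."""
    spectrum = np.abs(np.fft.rfft(samples, n=n_bins)) ** 2
    noise_floor = np.median(spectrum)
    occupied = 10 * np.log10(spectrum / noise_floor) > threshold_db
    return ~occupied  # True where the bin looks idle

# Example: a single tone at bin 8 buried in weak noise
rng = np.random.default_rng(0)
n = 64
t = np.arange(n)
samples = np.cos(2 * np.pi * 8 * t / n) + 0.01 * rng.standard_normal(n)
idle = detect_white_space(samples, n_bins=n)
```

In this toy example the tone bin is flagged as occupied and most other bins report as idle; a real detector would of course need calibrated thresholds and averaging over time.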
We are currently integrating our hardware and algorithms within an AI-driven CR chip set that autonomously monitors and segments radio frequency (RF) scenes in the congested 1-10 GHz frequency bands. The proposed chip set acquires RF scenes using a broadband antenna array and analyzes them using built-in bio-inspired hardware, fast array signal processing, and low-complexity ML algorithms. In particular, we use a bio-inspired single-chip RF spectrum analyzer, known as the “RF cochlea” because it is modeled on the mammalian cochlea (inner ear), for rapid and energy-efficient spectrum monitoring over the 1-8 GHz range. In addition, we show that bio-inspired ML algorithms based on aspects of visual and natural language processing (e.g., the functional separation between “foveal” and “peripheral” pathways in vision) provide robust saliency and self-attention mechanisms. The resulting wireless-specific ML approach delivers a favorable tradeoff between situational awareness and computational complexity.
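The foveal/peripheral idea can be sketched as a two-stage scan: a cheap, low-resolution overview of the whole band (“peripheral” view) picks out the most salient region, which is then inspected at full resolution (“foveal” view). The sketch below is a loose illustration of that principle under invented names and parameters, not the talk's actual algorithm or hardware.

```python
import numpy as np

def peripheral_then_foveal(samples, n_coarse=8, n_fine=64):
    """Toy two-stage scan inspired by the foveal/peripheral split.
    Peripheral stage: average short periodograms for a cheap,
    low-resolution overview of the band. Foveal stage: zoom a
    full-resolution FFT onto the busiest coarse bin."""
    # Peripheral: Welch-style average of short segment spectra
    usable = len(samples) // n_coarse * n_coarse
    segs = samples[:usable].reshape(-1, n_coarse)
    coarse = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)
    salient = int(np.argmax(coarse[1:]) + 1)  # skip the DC bin
    # Foveal: full-resolution spectrum, windowed around the salient bin
    fine = np.abs(np.fft.rfft(samples, n=n_fine)) ** 2
    ratio = n_fine // n_coarse
    lo = max(salient * ratio - ratio // 2, 0)
    hi = salient * ratio + ratio // 2
    return salient, fine[lo:hi]
```

The design point this illustrates is the complexity tradeoff: the coarse stage touches every frequency cheaply, and expensive high-resolution analysis is spent only where the saliency mechanism directs it.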
Soumyajit Mandal received his B.Tech. degree from IIT Kharagpur, India, in 2002 with top honors, and his M.S. and Ph.D. degrees from MIT in 2004 and 2009, respectively. From 2010 to 2014 he was a Research Scientist at the Schlumberger-Doll Research Center. He is currently the T. and A. Schroeder Assistant Professor at Case Western Reserve University, and will join the University of Florida as an Associate Professor in January 2020. His research interests include integrated circuits and systems, scientific instrumentation, precision sensors, and biomedical imaging. His projects include bio-inspired (neuromorphic and cytomorphic) integrated circuits, biomedical circuits and systems, structural health monitoring, cognitive RF systems, MEMS/NEMS interfaces, low-field and zero-field magnetic resonance, and other topics related to sensing and computing. He received the MIT Microsystems Technology Laboratories (MTL) Doctoral Dissertation Award (2009); the Mentor, Learning, and T. Keith Glennan Fellowships (2015-2018); Nord and ACES grants (2015); the Case School of Engineering Graduate Teaching Award (2018); and the IIT Kharagpur Young Alumni Achievers Award (2018). He has published over 125 papers in international journals and conferences, and has been awarded 19 patents.