Seminar: Suren Jayasuriya

“Towards Acoustic Cameras: Neural Deconvolution and Rendering for Synthetic Aperture Sonar”
Thursday, Feb. 15 at 1:00pm
LAR 234

Abstract

Acoustic imaging leverages sound to form visual products, with applications including biomedical ultrasound and sonar. In particular, synthetic aperture sonar (SAS) has been developed to generate high-resolution imagery of both in-air and underwater environments. In this talk, we explore the application of implicit neural representations and neural rendering to SAS imaging and highlight how such techniques can enhance acoustic imaging for both 2D and 3D reconstructions. Specifically, we discuss the challenges of applying neural rendering to acoustic imaging, especially in handling the phase of reflected acoustic waves, which is critical for achieving high spatial resolution in beamforming. We present two recent works: enhanced 2D circular SAS deconvolution in air, and a general neural rendering framework for 3D volumetric SAS. This research is a starting point toward realizing the next generation of acoustic cameras for a variety of applications in air and water environments.

Biography

Dr. Suren Jayasuriya has been an assistant professor at Arizona State University since 2018, jointly appointed in the School of Arts, Media and Engineering (AME) and the School of Electrical, Computer and Energy Engineering (ECEE). Before this, he was a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University in 2017. Suren received his Ph.D. in ECE from Cornell University in January 2017 and graduated from the University of Pittsburgh in 2012 with a B.S. in Mathematics (with departmental honors) and a B.A. in Philosophy. His research interests span computational cameras, computer vision and graphics, and acoustic imaging/remote sensing. His website can be found at: https://sites.google.com/asu.edu/imaging-lyceum.