CNEL Seminar: Ivan Ruchkin

Presented by the Computational NeuroEngineering Laboratory

“Calibration Guarantees for Closed-Loop Safety Chance Prediction”
Wednesday, Nov. 8 at 3:00pm
NEB 589

Abstract

Autonomous robotic systems are increasingly deployed in complex and safety-critical environments. In these systems, learning-enabled components implemented with neural networks are responsible for critical perception and control functions. Unfortunately, the complex interactions between open environments and deep-learning components lead to behaviors that are hard to analyze or predict, which has become a major obstacle to ensuring safe and trustworthy autonomy. It would therefore be useful to have an online measure of safety that quantifies the diverse uncertainties in the system’s future behavior.

Across multiple recent works, a novel safety assurance paradigm has emerged, which I refer to as Verify-Then-Monitor. This paradigm prescribes two steps:

  1. Verify the safety of as much of the system’s model/state space as possible at design time.
  2. Monitor the probability of safety at run time to account for unexpected uncertainties (a toy sketch of this two-step loop appears below).
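
For readers unfamiliar with the paradigm, the following toy Python sketch (illustrative only, and not drawn from the speaker’s work) shows where the two steps fit: a design-time verifier marks a safe region, and a run-time monitor estimates the chance of staying inside it. All names and numbers are hypothetical.

  # Toy, self-contained sketch of Verify-Then-Monitor on a 1-D random walk.
  # Everything here is illustrative, not the method presented in the talk.
  import random

  def verify_offline():
      # Step 1 (design time): pretend a model checker proved these states safe.
      return set(range(-5, 6))

  def estimate_safety_chance(state, verified, horizon=5, samples=200):
      # Step 2 (run time): Monte Carlo estimate of staying in the verified set.
      stay = 0
      for _ in range(samples):
          s, ok = state, True
          for _ in range(horizon):
              s += random.choice([-1, 1])
              if s not in verified:
                  ok = False
                  break
          stay += ok
      return stay / samples

  verified = verify_offline()
  state = 0
  for t in range(20):
      p_safe = estimate_safety_chance(state, verified)
      if p_safe < 0.9:
          print(f"t={t}: p_safe={p_safe:.2f} -> switch to a fallback controller")
          break
      state += random.choice([-1, 1])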

However, this promising paradigm has so far failed to address the crucial issue of trustworthy monitoring: how can we guarantee that the online monitor produces a probability estimate that is well-calibrated to the true chance of safety? This talk will summarize our recent answers to this question in two settings. The first setting combines Bayesian filtering with probabilistic model checking of Markov decision processes, instantiated in the context of controlling critical infrastructure. The second setting is confidence monitoring of formalized assumptions behind closed-loop neural-network verification for an autonomous underwater vehicle.
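
As background on what “well-calibrated” means here, the short Python sketch below (illustrative only, not the method presented in the talk) measures the expected calibration error of a safety monitor: predicted probabilities are binned, and each bin’s average prediction is compared with the empirical frequency of safe outcomes in that bin.

  # Illustrative calibration check for a safety-chance monitor.
  import random

  def expected_calibration_error(preds, outcomes, n_bins=10):
      # Bin predictions, then compare average prediction vs. empirical safety rate per bin.
      bins = [[] for _ in range(n_bins)]
      for p, y in zip(preds, outcomes):
          bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
      ece, n = 0.0, len(preds)
      for b in bins:
          if not b:
              continue
          avg_pred = sum(p for p, _ in b) / len(b)
          emp_freq = sum(y for _, y in b) / len(b)
          ece += (len(b) / n) * abs(avg_pred - emp_freq)
      return ece

  # Synthetic example: a monitor whose predictions match the true chance of safety
  # should have an expected calibration error close to zero.
  preds = [random.random() for _ in range(5000)]
  outcomes = [1 if random.random() < p else 0 for p in preds]
  print(f"ECE of a well-calibrated monitor: {expected_calibration_error(preds, outcomes):.3f}")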

Biography

Dr. Ivan Ruchkin is an assistant professor in the Department of Electrical and Computer Engineering at the University of Florida, where he leads the Trustworthy Engineered Autonomy (TEA) Lab. His research makes autonomous systems safer and more trustworthy by combining techniques from formal methods and artificial intelligence. Ivan received his PhD from Carnegie Mellon University and completed his postdoctoral training at the University of Pennsylvania. His contributions have been recognized with multiple Best Paper awards, a Gold Medal in the ACM Student Research Competition, and the Frank Anger Memorial Award for the crossover of ideas between the software engineering and embedded systems communities. More information can be found at https://ivan.ece.ufl.edu.