CNEL Seminar: Wenqian Xue

Presented by the Computational NeuroEngineering Laboratory

Wenqian Xue

“Data-Driven Model-Free Inverse Reinforcement Learning for Linear Systems”
Wednesday, Oct. 18 at 3:00pm
NEB 589

Abstract

Optimal control theory assumes that an agent's performance objective function is known. In real interactions, however, the performance intention may be unknown, so optimal control cannot be applied directly. Inverse reinforcement learning (RL) aims to compute an agent's unknown performance objective from measured trajectories or behaviors, online and in real time.
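For concreteness, since the talk concerns linear systems, the unknown objective can be pictured as the standard linear-quadratic cost (an illustrative assumption; the exact problem class treated in the talk may differ):

\[
J(x_0) = \sum_{k=0}^{\infty} \left( x_k^\top Q\, x_k + u_k^\top R\, u_k \right),
\qquad x_{k+1} = A x_k + B u_k,
\]

where the weight matrices \(Q\) and \(R\) encode the agent's intent and are what inverse RL seeks to recover from observed trajectories.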

We formulate inverse RL as an expert-learner interaction in which the optimal performance intent of an expert agent is unknown to a learner agent. The learner observes the expert's states and actions and seeks to reconstruct the expert's performance objective and generate optimal decisions that mimic the expert's optimal response. To solve this control problem, we develop inverse RL algorithms that extend RL and inverse optimal control, focusing on removing the requirement for knowledge of the expert's and learner's dynamics and on improving learning efficiency. Their effectiveness is verified by theoretical analysis and numerical simulations.
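To illustrate the data-driven flavor of the expert-learner setting, here is a minimal sketch in which a learner fits the expert's feedback gain purely from observed state-action pairs. This is not the algorithm presented in the talk; the system matrices, gain, noise level, and horizon below are all illustrative assumptions, and the sketch covers only the behavior-mimicry step, not recovery of the objective itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert: a 2-state linear system under an LQR-style policy
# u = -K_expert @ x. The dynamics and gain are hidden from the learner.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
K_expert = np.array([[0.4, 0.6]])  # assumed optimal for some unknown (Q, R)

# Collect expert demonstrations: the learner records states X and actions U only.
X, U = [], []
x = rng.standard_normal(2)
for _ in range(200):
    u = -K_expert @ x
    X.append(x.copy())
    U.append(u.copy())
    x = A @ x + (B @ u).ravel() + 0.01 * rng.standard_normal(2)  # mild process noise

X = np.array(X)  # shape (200, 2)
U = np.array(U)  # shape (200, 1)

# Model-free gain estimation: solve U ≈ -X @ K^T in the least-squares sense,
# using no knowledge of A or B.
K_hat = -np.linalg.lstsq(X, U, rcond=None)[0].T

print("true gain:     ", K_expert)
print("estimated gain:", K_hat)
```

With the gain in hand, the learner can already mimic the expert's response; the inverse RL machinery discussed in the talk goes further and reconstructs the performance objective that makes such a policy optimal.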

Biography

Dr. Wenqian Xue received her M.S.E. and Ph.D. degrees in control theory and control engineering from the State Key Laboratory of Synthetical Automation for Process Industries at Northeastern University, China, in 2018 and 2022, respectively. She was a research assistant at the University of Texas at Arlington, USA, from 2019 to 2021. She is currently a postdoctoral associate in the Department of Electrical & Computer Engineering at the University of Florida. Her research interests include reinforcement learning, data-driven control, inverse optimal control, signal processing, and industrial operational control.