Changliu Liu, Assistant Professor
Biography
Changliu Liu is an assistant professor in the Robotics Institute, School of Computer Science, Carnegie Mellon University (CMU), where she leads the Intelligent Control Lab. Prior to joining CMU in January 2019, Dr. Liu was a postdoctoral researcher at the Stanford Intelligent Systems Laboratory in 2018. She received her Ph.D. in Engineering (2017) and master's degrees in Engineering (2014) and Mathematics (2015) from the University of California, Berkeley, where she worked in the Mechanical Systems & Control Lab. She received her bachelor's degrees in Engineering and Economics from Tsinghua University (2012). Her research interests lie in the design and verification of human-centered intelligent systems, with applications to manufacturing and transportation across various robot embodiments, including robot arms, mobile robots, legged robots, and humanoid robots. Dr. Liu co-founded Instinct Robotics, a robotics company for intelligent manufacturing. She is a senior member of IEEE and a member of ASME and AAAI. She published the book “Designing Robot Behavior in Human-Robot Interactions” with CRC Press in 2019. She is a co-founder of the International Neural Network Verification Competition, launched in 2020. Her work has been recognized with the NSF CAREER Award, the Amazon Research Award, the Ford URP Award, the Advanced Robotics for Manufacturing Champion Award, the Young Investigator Award at the International Symposium on Flexible Automation, and many best/outstanding paper awards. Her research has been covered by IEEE Spectrum, ATI News, and the Robotiq Blog, among others, and has received support from many government agencies and industrial partners, including NSF, NIST, DARPA, the ARM Institute, Boeing, Siemens, and Lockheed Martin. She demonstrated her team's human-robot collaboration system for flexible manufacturing to the US President in 2022. She was a member of the academic council for Grit Venture from 2021 to 2023. She served as an associate editor of Mechatronics from 2023 to 2024 and is currently an associate editor of the ASME Journal of Dynamic Systems, Measurement, and Control.
16-714 - Advanced Control for Robotics
This course covers advanced control algorithms that make robots behave more intelligently. It is directed at students interested in advanced control, primarily graduate students, although talented undergraduates are welcome as well. Key topics: model predictive control, adaptive control, learning control, and Lyapunov theory.
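For readers unfamiliar with model predictive control, the sketch below illustrates the basic receding-horizon idea on a toy double-integrator system. The dynamics, horizon, cost weights, and the use of cvxpy are illustrative assumptions, not course material.

```python
# Minimal receding-horizon (MPC) sketch for a double integrator.
# Illustrative only; dynamics, horizon, and weights are arbitrary choices.
import numpy as np
import cvxpy as cp

# Discrete-time double-integrator dynamics x_{k+1} = A x_k + B u_k
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

N = 20                      # prediction horizon
Q = np.diag([10.0, 1.0])    # state cost
R = np.array([[0.1]])       # input cost
u_max = 1.0                 # actuator limit

def mpc_step(x0):
    """Solve the finite-horizon problem and return the first control input."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constraints = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= u_max]
    cost += cp.quad_form(x[:, N], Q)   # terminal cost
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]

# Receding-horizon loop: apply the first input, then re-solve at the next state.
x = np.array([1.0, 0.0])
for _ in range(5):
    u0 = mpc_step(x)
    x = A @ x + B @ u0
```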
16-883 - Provably Safe Robotics
Safe autonomy has become increasingly critical in many application domains. It is important to ensure not only the safety of the ego robot but also the safety of other agents (humans or robots) that directly interact with the autonomous system. For example, robots should be safe around human workers in human-robot collaborative assembly, and autonomous vehicles should be safe around other road participants. For complex autonomous systems with many degrees of freedom, safe operation depends on the correct functioning of all system components, i.e., accurate perception, optimal decision making, and safe control. This course deals with both the design and the verification of safe robotic systems. From the design perspective, we will discuss how to assure safety through planning, prediction, learning, and control; a short sketch of one such safe-control construction follows the list of offerings below. From the verification perspective, we will discuss verification of deep neural networks, safety and reachability analysis for closed-loop systems, and analysis of multi-agent systems.
Offered: Fall 2019, Spring 2021, Spring 2023, Spring 2024, Spring 2025
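To give a concrete flavor of safe control, the sketch below shows a generic control-barrier-function style safety filter that minimally modifies a nominal command so that a safety constraint keeps holding. The single-integrator dynamics, barrier choice, gain, and cvxpy formulation are assumptions made for this illustration, not the specific methods taught in 16-883.

```python
# Illustrative safety-filter sketch in the spirit of control barrier functions
# (CBFs). The robot, obstacle, and gains below are made-up assumptions.
import numpy as np
import cvxpy as cp

# Single-integrator robot: x_dot = u; it must stay outside a disk around an obstacle.
obstacle = np.array([2.0, 0.0])
radius = 0.5
alpha = 1.0   # class-K gain on the barrier

def barrier(x):
    """h(x) >= 0 encodes safety: squared distance to obstacle minus radius^2."""
    return np.dot(x - obstacle, x - obstacle) - radius**2

def safe_control(x, u_nominal):
    """Solve min ||u - u_nominal||^2  s.t.  dh/dx * u + alpha * h(x) >= 0."""
    u = cp.Variable(2)
    grad_h = 2.0 * (x - obstacle)          # gradient of h for single-integrator dynamics
    constraints = [grad_h @ u + alpha * barrier(x) >= 0]
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nominal)), constraints).solve()
    return u.value

# The nominal controller drives the robot toward a goal straight through the
# obstacle; the filter bends the command away so the barrier constraint holds
# (exactly in continuous time, approximately under this Euler simulation).
x = np.array([0.0, 0.05])
goal = np.array([4.0, 0.0])
for _ in range(100):
    u = safe_control(x, goal - x)
    x = x + 0.05 * u
```

The design choice illustrated here, filtering a nominal command through a small optimization, is one common way to layer safety on top of an arbitrary planner or learned policy.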
16-899 - Adaptive Control and Reinforcement Learning
This course covers algorithms that learn from and adapt to the environment. It is directed at students interested in developing adaptive software that makes decisions affecting the world, primarily graduate students, although talented undergraduates are welcome as well. Adaptive behaviors are discussed from both the control perspective and the learning perspective; a short sketch of a classic adaptive-control law follows the list of offerings below.
Offered: Spring 2020, Fall 2020, Fall 2021, Fall 2022
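As a taste of the adaptive-control side, the sketch below simulates a classic model-reference adaptive control (MRAC) law for a scalar plant with one unknown parameter, the kind of adaptive law typically analyzed with Lyapunov arguments. The plant, reference model, and gains are illustrative assumptions, not course material.

```python
# Minimal MRAC sketch: scalar plant x_dot = a_true*x + u with unknown a_true,
# tracking a stable reference model. All numbers below are arbitrary choices.
import numpy as np

a_true = 2.0      # unknown (unstable) plant parameter
a_m = -3.0        # stable reference model: xm_dot = a_m * xm + r
gamma = 5.0       # adaptation gain
dt = 0.001

x, xm, k_hat = 0.5, 0.5, 0.0
for step in range(20000):
    t = step * dt
    r = np.sin(t)                    # reference command (persistently exciting)
    u = -k_hat * x + r               # certainty-equivalence control law
    e = x - xm                       # tracking error w.r.t. reference model
    k_hat += gamma * x * e * dt      # Lyapunov-based adaptation law
    x += (a_true * x + u) * dt       # plant (Euler integration)
    xm += (a_m * xm + r) * dt        # reference model

# k_hat should approach a_true - a_m = 5, and x should track xm.
print(f"k_hat = {k_hat:.2f} (ideal {a_true - a_m}), tracking error = {x - xm:.4f}")
```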
Online Course: Neural Verification
Deep neural networks are widely used for nonlinear function approximation, with applications ranging from computer vision to control. Although these networks are compositions of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This online course introduces methods that have emerged recently for soundly verifying such properties.
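To make "soundly verifying input-output properties" concrete, the sketch below applies one simple sound method, interval bound propagation, to a tiny ReLU network. The weights and the property being checked are made-up assumptions; the point is only that propagating input intervals through each simple operation yields guaranteed output bounds.

```python
# Interval bound propagation (IBP) sketch for a tiny two-layer ReLU network
# y = W2 * relu(W1 * x + b1) + b2. Weights and property are arbitrary.
import numpy as np

W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

def interval_affine(lo, hi, W, b):
    """Sound bounds for W x + b when x is in [lo, hi], split by weight sign."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_bounds(x_lo, x_hi):
    lo, hi = interval_affine(x_lo, x_hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    return interval_affine(lo, hi, W2, b2)

# Property: for all inputs in the box [-0.1, 0.1]^2, the output stays below 1.
y_lo, y_hi = ibp_bounds(np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print("output certainly lies in", (y_lo, y_hi))
print("property 'y < 1' verified:", bool(np.all(y_hi < 1.0)))
```

Because the bounds are conservative, such a check can fail to certify a property that actually holds; tightening these bounds while remaining sound is the central goal of more advanced verification methods.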