The ultimate goal of our research is to build intelligent and autonomous robots that think, behave and interact with the world in the way that human beings do, so that they can better serve, assist and collaborate with people in their daily lives across work, home and leisure.
The fundamental research question is how to ensure that those robots operate efficiently and safely in human-involved environments. I am interested in addressing both the microscopic aspect of the problem, e.g., the design of the behavior system (i.e., a mapping from observation to action) for a single robot, and the macroscopic aspect, e.g., the analysis and validation of the human-robot system from a multi-agent perspective. Such human-robot systems include human-robot collaboration systems in production lines, future transportation systems with both automated and human-driven vehicles, and general cyber-physical-social systems (CPSS).
- 2021~ Task Agnostic Real-Time Perception and Control with Few-Shot Cross-Platform Adaptation
- 2020~ Safe Uncaged Industrial Robots
- 2020~2021 Hierarchical Motion Planning for Efficient and Provably Safe Human-Robot Interactions
- 2019~2021 6DoF Robot Assembly Station of Consumer Electronic Production
- 2019~2020 Automatic Onsite Polishing of Large Complex Surfaces by Real Time Planning and Control
- 2019 Adaptable Behavior Prediction for Autonomous Driving
- 2018~ Verification of Deep Neural Networks and Systems with NN Components
- 2018~ Micro to Macro Traffic Management and Modeling with Autonomous Vehicles
- 2017~ Safe and Efficient Robot Collaboration System (SERoCS)
- 2014~2017 Robustly-Safe Automated Driving (ROAD) Systems
- 2013~2017 Robot Safe Interaction Systems (RSIS) for Intelligent Industrial Co-Robots
Many industrial tasks nowadays require machines to be flexible, i.e., they should be able to 1) understand changing environments and tasks and 2) generate corresponding actions in real time. For example, in machine tending, a manipulator should generate actions according to the real-time configuration of the materials, which must be perceived online. Hence, it is important to enable real-time perception-action loops for intelligent manipulators in these flexible tasks. However, it remains challenging to optimally and efficiently configure and adapt these perception-action loops under changing environments and tasks. For different tasks and environments, the optimal configuration of the perception-action loop may vary significantly, e.g., the mounting location of the camera, the focus of the camera, and the optimal update frequency of the perception-action loop. Moreover, there are variations across hardware platforms, so the optimal configuration for one platform may not be optimal for another. To ensure optimality and consistency across platforms, we will develop a task-agnostic few-shot learning method that can 1) automatically calibrate the perception-action loop to optimize user-specified objectives (e.g., minimizing cycle time, maximizing the task success rate); and 2) monitor and adapt the system in real time to maintain optimality as the environment changes.
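As a toy illustration of the calibration step, the sketch below treats the perception-action loop as a black box and tunes its configuration by random search over a user-specified cost. The parameter names (`update_hz`, `focus`) and the stand-in objective are made up for illustration; they are not the project's actual method or objectives.

```python
import random

def calibrate(evaluate, bounds, n_trials=50, seed=0):
    """Random-search calibration of a black-box perception-action loop.

    evaluate: maps a configuration dict to a scalar cost (e.g. cycle time).
    bounds:   dict of parameter name -> (low, high) search interval.
    """
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        cost = evaluate(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

# Hypothetical stand-in objective: cycle time is minimized at a 30 Hz
# update rate and a camera focus setting of 0.6 (both invented here).
def cycle_time(cfg):
    return (cfg["update_hz"] - 30.0) ** 2 / 100 + (cfg["focus"] - 0.6) ** 2

cfg, cost = calibrate(cycle_time, {"update_hz": (1, 60), "focus": (0, 1)})
```

A real system would replace `cycle_time` with measured performance of the running loop, and random search with a sample-efficient (few-shot) optimizer.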
Safe operation of intelligent robots in interactive environments depends on accurate prediction of other agents and the consequent safe control of the ego robot. However, it remains challenging to 1) generate high-fidelity predictions of humans; 2) soundly verify the uncertainty associated with those predictions; and 3) incorporate the predictions and the verified uncertainty into the control of the ego robot. This project aims to address these three issues by incorporating recent progress in 1) human motion prediction through imitation learning and online adaptation; 2) sound verification of deep neural networks; and 3) safe control of robot motion through the safe set algorithm. The work can be applied to human-robot collaboration in production lines.
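A minimal sketch of the safe-control piece (item 3), assuming a planar single-integrator robot and a standard distance-based safety index; the margin, gain, and variable names are illustrative assumptions, not the project's implementation:

```python
def safe_control(u_ref, p, v_h, d_min=0.5, eta=0.1):
    """Safe set algorithm sketch for a planar single-integrator robot.

    u_ref: nominal control (ux, uy)
    p    : robot position relative to the human, (px, py)
    v_h  : human velocity estimate (e.g. from a learned predictor)

    Safety index phi = d_min**2 - ||p||**2.  When phi >= 0 (too close),
    project u_ref onto the half-space of controls with phi_dot <= -eta.
    """
    phi = d_min**2 - (p[0]**2 + p[1]**2)
    if phi < 0:                      # safely outside the unsafe set
        return u_ref
    a = (-2 * p[0], -2 * p[1])       # phi_dot = a . u + 2 p . v_h
    b = -eta - 2 * (p[0] * v_h[0] + p[1] * v_h[1])
    s = a[0] * u_ref[0] + a[1] * u_ref[1]
    if s <= b:                       # nominal control already safe
        return u_ref
    lam = (s - b) / (a[0]**2 + a[1]**2)
    return (u_ref[0] - lam * a[0], u_ref[1] - lam * a[1])
```

For a robot 0.3 m from a stationary human and a nominal command pointing toward the human, the projection flips the command away so that the safety index decreases at rate `eta`.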
Sponsor: Ford Motor Company
Safe and efficient robot motion planning is critical to ensure desired human-robot interactions. However, very few methods can comprehensively address uncertainties in human behaviors, robot model mismatch, robot computation limits, and measurement and actuation noise in an integrated planning scheme. This proposal aims to develop a planning method that addresses all of these challenges through hierarchical motion planning. A hierarchical planning scheme runs multiple planners in parallel that use different dynamic models of the system and have different planning horizons, sampling frequencies, and replanning frequencies. It can generate trajectory plans from coarse to fine, and efficiently separate time-sensitive computation (e.g., real-time safety responses) from time-insensitive computation (e.g., finding a reference trajectory toward the goal). A set of design principles will be developed to ensure sound safety guarantees as well as the performance of the hierarchical planner (e.g., stability and deadlock-freeness).
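The coarse/fine separation can be illustrated with a deliberately simplified one-dimensional sketch: a low-rate planner proposes waypoints toward the goal, while a high-rate layer tracks them and applies a time-critical safety override near an obstacle. All models, rates, and margins here are toy assumptions, not the design principles developed in the project.

```python
def coarse_plan(x, goal, horizon=10):
    """Low-rate planner: evenly spaced waypoints toward the goal
    (a stand-in for a trajectory optimizer on a coarse model)."""
    return [x + (goal - x) * (k + 1) / horizon for k in range(horizon)]

def fine_step(x, waypoint, obstacle, dt=0.01, v_max=1.0, d_safe=0.2):
    """High-rate tracker with a safety override: refuse any step
    that would enter the safety margin around the obstacle."""
    v = max(-v_max, min(v_max, (waypoint - x) / dt))
    x_next = x + v * dt
    if abs(x_next - obstacle) < d_safe:
        return x                     # time-critical safety response
    return x_next

# Toy run: the robot heads for the goal but halts at the safety margin.
x, goal, obstacle = 0.0, 1.0, 0.5
for _ in range(100):                 # slow replanning loop
    for wp in coarse_plan(x, goal):  # fast tracking loop
        x = fine_step(x, wp, obstacle)
```

The fast loop never waits on the slow one, which is the point of the hierarchy: the safety check runs at every control step even while the reference plan is stale.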
Sponsor: Amazon Research Award
Assembly of consumer electronics is a time-consuming and labor-intensive task, and one of the most important robot applications in computer, communication, and consumer electronics (3C) production. Programming a traditional industrial robotic manufacturing system requires a significant amount of time and resources (and therefore investment), which makes it difficult for a production line to switch from one product to the next in a cost-effective way. Because of that high up-front cost, the life cycle of a production line is 2-5 years, while the accelerated pace of product innovation has reduced the life cycle of each product to 3-6 months. The changeover cost comes from three aspects: (i) new fixture design, (ii) system fine-tuning, and (iii) system re-calibration. To make robotic solutions more competitive in this field, there is a need for a workstation that avoids these three steps, or at least minimizes the human configuration effort, so that the effective life cycle of a production line can match that of the products. The objective of this project is to develop new manipulation technology that enables automatic assembly of delicate parts onto a PCB without expensive re-programming of the robot. The plan is to leverage machine learning to interpret information from various sensors (force feedback, visual sensors, etc.) and to train a robotic system to grab various components and insert them into pre-designated slots on a PCB.
Abstract: Polishing and grinding of metallic parts is an important manufacturing operation in many industrial applications, yet it remains challenging to polish large complex surfaces. Polishing is predominantly done manually, which is time consuming, expensive and, more importantly, can be a safety hazard for human operators using hand-held devices. We propose to build a robot capable of polishing the relatively large, complex surface areas of free-form fabricated parts. By developing this robot for a complex and potentially unsafe operation, this application will lower the technical, operational, and economic barriers for companies to adopt robotic technologies.
Sponsor: Advanced Robotics for Manufacturing Institute
In highly interactive driving scenarios, accurate prediction of other road participants is critical for safe and efficient navigation of autonomous cars. Prediction is challenging due to the difficulty of modeling various driving behaviors, or of learning such a model. The model should be interactive and reflect individual differences. Imitation learning methods are able to learn interactive models. However, the learned models usually average out individual differences, so they are biased when used to predict the trajectories of individual vehicles. This project investigates adaptable prediction frameworks, which perform online adaptation of offline-learned models to recover individual differences and time-varying behaviors for better prediction. In particular, we combine a family of recursive least squares parameter adaptation algorithms (RLS-PAA) with various offline-learned models. RLS-PAA has analytical solutions and is able to adapt the model for every single vehicle efficiently online.
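A minimal sketch of the adaptation step, assuming the offline model exposes a feature vector phi and the adapted part is linear in its parameters (the standard RLS-with-forgetting form; initialization and variable names are illustrative):

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting factor lam, used to
    adapt the linear output layer of an offline-learned predictor
    online, one update per observed vehicle state."""

    def __init__(self, n, lam=0.98):
        self.theta = np.zeros(n)     # adapted parameters
        self.P = np.eye(n) * 1e3     # large initial covariance
        self.lam = lam

    def update(self, phi, y):
        """phi: feature vector from the offline model; y: observed target."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)            # analytical gain
        self.theta = self.theta + k * (y - self.theta @ phi)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

Each update is closed-form and O(n^2), which is why one such adapter can be run per observed vehicle in real time; the forgetting factor lets the parameters track time-varying behaviors.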
Deep neural networks are widely used for nonlinear function approximation, with applications ranging from computer vision to control. Although these networks involve only the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This project classifies methods that have emerged recently for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. We investigate the fundamental differences and connections between existing algorithms. In addition, we provide pedagogical implementations of existing methods and compare them on a set of benchmark problems. Moreover, we will extend these tools to verify closed-loop systems with NN components by combining them with control-theoretic analysis.
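As an example of the reachability-style family, the snippet below propagates interval bounds through a small ReLU network: the resulting output bounds are sound but possibly loose. The two-layer network in the usage is a made-up example, not one of the project's benchmarks.

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    """Sound elementwise output bounds for a feedforward ReLU network
    via interval arithmetic (the simplest reachability-style method).

    layers: list of (W, b) pairs; lo, hi: input box bounds."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Tiny made-up network: hidden = relu(x1 - x2), output = 2*hidden + 1.
layers = [(np.array([[1.0, -1.0]]), np.array([0.0])),
          (np.array([[2.0]]), np.array([1.0]))]
lo, hi = interval_bounds(layers, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```

On the input box [0,1]^2 this certifies, e.g., that the output never exceeds 3, without enumerating any inputs; tighter methods in the optimization and search families trade more computation for less looseness.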
Automated vehicles are believed to be the key solution for future mobility. As more and more automated vehicles drive on public roads, they will interact with each other, and with other road participants such as human-driven vehicles and pedestrians. Those interactions will deeply change and redefine today’s transportation system.
The fundamental question is: how can we achieve a safe and efficient transportation system through the design of the driving strategies of individual automated vehicles? This project aims to develop an understanding of how the behavior design of single agents and their communication strategies affect the overall multi-agent system, and how to achieve the best design from the system perspective. Indeed, the macroscopic transportation system depends on the microscopic behaviors of road participants, while the microscopic behaviors of road participants are affected by others in the transportation system. Moving from micro design to macro analysis, we expect to gain a better understanding of these micro-macro relationships and to achieve a safe and efficient transportation system through the introduction of automated vehicles.
In factory automation, humans and robots comprise the two major work forces. Traditionally, humans and robots have not physically collaborated during operation, in significant part because full automation with robots was the goal. In recent years, however, it has been recognized that there are tremendous advantages to bringing robots out of their cages and allowing them to share work space and collaborate with humans, taking advantage of the best of both worlds: on one hand, the reliable, fatigue-free execution of tasks by robots handling objects of a wide range of sizes and weights; on the other, the intelligence of humans and the adaptability and flexibility afforded by their five senses. For collaboration between humans and robots to be successful, ensuring the safety of the humans is a prerequisite. At the same time, it is important that robots collaborate with humans to achieve the best performance possible.
Automated driving is widely viewed as a promising technology to revolutionize today’s transportation system: to free human drivers, ease road congestion, and lower fuel consumption, among other benefits. Substantial research efforts from research groups and companies are directed into this field. When automated vehicles drive on public roads, they are automatically given social attributes. While existing technologies can assure high-fidelity sensing, real-time computation, and robust control, the challenges lie in the interactions between the automated vehicle and its environment, which includes other manually driven vehicles. We propose a framework for designing the driving behavior of automated vehicles that prevents or minimizes collisions with other vehicles and obstacles while maintaining efficiency (e.g., maintaining high speed on the freeway).
With the development of modern robotics, robots are entering people’s lives in multiple ways. As identified by the National Robotics Initiative (NRI), future intelligent robots can be co-defenders, co-explorers, co-inhabitants and even co-workers to humans. To successfully launch those co-robots, we must make sure that they are safe for human users.
However, this is not an easy task, as the robots operate in a dynamic uncertain environment (DUE) together with other intelligent agents such as humans. In this project, we address the safety issues in the context of (1) multi-agent interactions, (2) sensing and knowledge representation, (3) learning and prediction, (4) human modeling, and (5) constrained optimal control and decision-making.