Our proposal, “Toward Lifelong Safety of Autonomous Systems in Uncertain and Interactive Environments,” has been selected for an NSF CAREER Award.

This Faculty Early Career Development Program (CAREER) grant will fund research that enables autonomous systems to operate safely in close interaction with humans, as required, for example, in next-generation manufacturing infrastructure, thereby promoting the progress of science and advancing national prosperity. Until recently, humans were physically separated from robots to prevent injuries and fatalities. Modern robotics, in contrast, focuses on humans and collaborative robotic systems working together on the same tasks. A key safety hazard in such interactive environments is human error. It is therefore critical that safety-conscious responses be programmed into collaborative robotic systems to guarantee safe behavior even when tasks or environments change. This project will develop a new algorithmic framework for safety assurance of autonomous robotic systems that aims for optimal performance when safety can be managed, anticipates and compensates for inevitable failures when it cannot, and learns from past mistakes. This framework will increase the trustworthiness of autonomous systems while minimizing human effort in deployment and maintenance, critical steps toward granting full autonomy to intelligent robots in uncertain and interactive environments, including application domains such as industrial robotics and autonomous driving. Through close integration of research and education, this project will contribute to new interdisciplinary training in robotics and autonomy, accessible dissemination of research in robot safety to the public, and opportunities for interactive learning through a remotely operated robotic platform.
Partnerships with the Advanced Robotics for Manufacturing Institute, the Girls of Steel Robotics program, and the Choate Rosemary Hall college-preparatory school will be leveraged to provide graduate student internships with small manufacturers and to broaden participation in research by individuals from underrepresented groups.

This research aims to make fundamental contributions to a theory of cross-task safe guardians that augment existing hardware platforms without manual tuning, monitor and optimally modify their nominal task-oriented control actions to satisfy constraints representing safety requirements, and accomplish these objectives under time-varying uncertainty. It pursues this aim by investigating data-efficient model learning algorithms that accurately track the dynamics of an interactive environment, and by designing adaptive controllers that safely adjust the control strategy according to newly learned dynamic models. A responsibility-based evolutionary adversarial learning approach will be developed so that the adaptive safe control algorithm achieves optimal performance given limits on available resources. The safe guardian and intelligent optimizer approaches will be evaluated both in simulation and experimentally, using autonomous vehicles interacting with human-operated vehicles in different traffic conditions, as well as in space-sharing applications involving robot arm manipulators and other human or robotic agents.

More details can be found on the NSF webpage.