Our core research focuses on long-term collaborative autonomy, multisensory perception, robot adaptation, and human-robot/swarm teaming, particularly in unstructured, unknown, and/or adversarial environments. Broadly, our interests lie in robotics, machine learning, artificial intelligence, and augmented reality. The main application domains include underground robotics, robot-assisted inspection, search and rescue, autonomous driving, and the Internet of Robotic Things (IoRT). Our research and educational activities are currently supported by DOD and Metcalf (a big thanks!).


Code and Datasets


Examples of Application Domains

Robotic Solutions for the Underground

The global community is increasingly exploring underground environments for sustainable and resilient solutions to societal problems. Communities are moving infrastructure such as roads, rail, data centers, and water storage and treatment facilities below ground. Inspection and rescue operations in underground environments are both unsafe and challenging, requiring advanced technologies such as robotics and robotic swarms. Many challenges must be addressed to design effective robotic solutions for the underground. For example, robots must be able to recognize victims and critical objects, localize themselves in visually similar underground environments, navigate autonomously over varied terrain, and collaborate under communication constraints.


Robot-Assisted Surveying, Inspection, and Reconnaissance

Surveying, inspection, and reconnaissance of an environment have many real-world applications. For example, it is essential to detect erosion defects in pipelines and track their growth rate over time across multiple inspections, as a pipeline accident can cost millions of dollars in healthcare costs, pipeline replacement, environmental response, and cleanup operations. Such applications present many computational challenges, including how robots can recognize objects of interest (e.g., erosion defects in pipes), how to localize them across multiple inspection runs, potentially over a long time span, and how to track and predict their changes over time (e.g., the growth rate of erosion defects).
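As a minimal sketch of the change-tracking step, suppose a defect has already been matched across inspection runs; its growth rate can then be estimated by a least-squares fit of measured depth against inspection date. The function name, units, and measurements below are hypothetical, used only for illustration:

```python
from datetime import date

def growth_rate_mm_per_year(observations):
    """Least-squares slope of defect depth (mm) over time (years).

    observations: list of (inspection_date, depth_mm) pairs for one
    defect matched across multiple inspection runs.
    """
    t0 = observations[0][0]
    xs = [(d - t0).days / 365.25 for d, _ in observations]  # years since first run
    ys = [depth for _, depth in observations]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den  # mm per year

# Hypothetical depth measurements from three annual inspections.
obs = [(date(2020, 6, 1), 1.2), (date(2021, 6, 1), 1.5), (date(2022, 6, 1), 1.9)]
rate = growth_rate_mm_per_year(obs)  # roughly 0.35 mm/year
```

Extrapolating this slope gives a simple prediction of when the defect will exceed a safety threshold, which is one way to prioritize re-inspection.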


Examples of Research Topics

Representation Learning for Long-Term Autonomy

In long-term autonomy, robots must operate over long periods of time, e.g., hours, days, years, and ultimately a lifetime. Recently, place recognition for loop closure detection in long-term Simultaneous Localization and Mapping (SLAM) has attracted significant attention and introduces a new, significant challenge: long-term appearance change. For example, the same outdoor place can look dramatically different at noon on a sunny summer day and in the evening during a snowy winter. We investigate representation learning methods to address long-term place recognition by learning world representations that can consistently encode long-term scenes and are robust to long-term environment changes.
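To make the matching step concrete, here is a minimal sketch of descriptor-based place recognition: each place is encoded as a learned feature vector, and a loop closure is declared when the query's cosine similarity to a stored descriptor exceeds a threshold. The descriptors, threshold, and function names are illustrative assumptions, not our actual pipeline:

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_place(query, database, threshold=0.8):
    """Return (index, score) of the best-matching stored place,
    or None if no similarity clears the loop-closure threshold."""
    best_i, best_s = max(
        ((i, cosine(query, d)) for i, d in enumerate(database)),
        key=lambda t: t[1],
    )
    return (best_i, best_s) if best_s >= threshold else None

# Toy 3-D descriptors; real learned descriptors are much higher-dimensional.
db = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.1]]
result = match_place([0.68, 0.72, 0.12], db)  # matches the third place
```

The representation-learning goal described above is precisely to make these descriptors stable, so that the same place in summer and winter still clears the threshold.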



Situational Decision Making and Planning

After reasoning about humans and the surrounding environment, collaborative robots need the crucial capability of making appropriate decisions to interact with people and facilitate ongoing tasks. Our research focuses on developing decision-making methods that consider the uncertainty of robot reasoning results and the risks of robot actions, and that are aware of unexpected events the robots have never experienced. We also investigate methods that integrate planning and perception to enable new robot capabilities such as active perception and adaptive situational planning, especially when humans are in the loop.
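One common way to fold action risk into decision making is a mean-variance criterion: score each action by its expected reward minus a penalty on reward variance. The sketch below is an illustrative instance of that idea with made-up actions and outcome distributions, not a description of our methods:

```python
def select_action(actions, risk_weight=1.0):
    """Pick the action maximizing expected reward minus a risk penalty.

    actions: dict mapping action name -> list of (probability, reward)
    outcomes. Risk is measured as the variance of the reward
    (a simple mean-variance, risk-sensitive criterion).
    """
    def score(outcomes):
        mean = sum(p * r for p, r in outcomes)
        var = sum(p * (r - mean) ** 2 for p, r in outcomes)
        return mean - risk_weight * var
    return max(actions, key=lambda a: score(actions[a]))

actions = {
    "proceed": [(0.9, 10.0), (0.1, -50.0)],  # high reward, rare large loss
    "wait":    [(1.0, 2.0)],                 # safe but low reward
}
choice = select_action(actions, risk_weight=0.05)  # risk-averse: "wait"
```

With `risk_weight=0` the same function reduces to plain expected-utility maximization and picks "proceed"; raising the weight makes the robot increasingly avoid the rare large loss.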



Robot Awareness of Human Intents and Activities

We envision robots and humans teaming up and working together side by side, with an interaction style based not on direct controls and commands from humans to robots, but on robots implicitly inferring human intents and activities through passive observation. This would allow a person to collaborate with robots naturally, as they would with human teammates, thus avoiding the cognitive overload that occurs when humans must explicitly supervise robot teammates.
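Inferring intent from passive observation is often framed as recursive Bayesian filtering: the robot maintains a belief over candidate intents and reweights it as each observation arrives. The intents, observations, and likelihood values below are hypothetical, a minimal sketch of the idea:

```python
def update_belief(belief, likelihoods):
    """One Bayes update: posterior[h] is proportional to belief[h] * P(obs | h)."""
    posterior = {h: belief[h] * likelihoods.get(h, 0.0) for h in belief}
    z = sum(posterior.values())
    if z == 0:
        return belief  # uninformative observation; keep the prior
    return {h: p / z for h, p in posterior.items()}

# Hypothetical intents with a uniform prior, and per-observation
# likelihoods P(observation | intent) from a perception module.
belief = {"fetch_tool": 0.5, "inspect_pipe": 0.5}
for obs_likelihood in [
    {"fetch_tool": 0.8, "inspect_pipe": 0.3},  # person walks toward toolbox
    {"fetch_tool": 0.9, "inspect_pipe": 0.2},  # person reaches for a wrench
]:
    belief = update_belief(belief, obs_likelihood)
# belief now strongly favors "fetch_tool"
```

The robot never asks for a command; its confidence in the person's intent grows purely from watching their actions, which is the interaction style described above.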



Human and Object Detection and Tracking

Robust, efficient detection and tracking of humans and objects in complex environments is critical to ensure safe and effective robot operation in human-centered environments, and is often the first step toward robot awareness of human behaviors. Our research focuses on developing approaches to detect and track, in real time, multiple humans and objects with deformable shapes and varied human-human and human-object interactions; to address the challenges of lighting variation, occlusion, and robot motion; and to integrate multimodal observations when a robot is equipped with multiple sensors.
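A core sub-step in multi-target tracking is data association: linking each existing track to the detection that best overlaps it in the new frame. Below is a minimal greedy intersection-over-union (IoU) matcher; the box coordinates and thresholds are illustrative, and production trackers typically add motion prediction and optimal (e.g., Hungarian) assignment on top of this:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, min_iou=0.3):
    """Greedy IoU matching of existing tracks to new detections.

    Returns a dict track_id -> detection index; unmatched tracks
    (e.g., occluded targets) are simply omitted.
    """
    pairs = sorted(
        ((iou(box, det), tid, j)
         for tid, box in tracks.items()
         for j, det in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = {}, set(), set()
    for score, tid, j in pairs:
        if score < min_iou:
            break
        if tid in used_t or j in used_d:
            continue
        matches[tid] = j
        used_t.add(tid)
        used_d.add(j)
    return matches

# Two tracks from the previous frame, two slightly shifted detections.
tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
dets = [(51, 49, 61, 59), (1, 1, 11, 11)]
m = associate(tracks, dets)  # track 1 -> detection 1, track 2 -> detection 0
```

Multimodal fusion would enter here by replacing the IoU score with a combined affinity over several sensors, e.g., image overlap plus LiDAR distance.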
