Our core research focuses on long-term collaborative autonomy, multisensory perception, robot adaptation, and human-robot/swarm teaming, particularly in unstructured, dynamic, open, and/or adversarial environments. Broadly, our interests lie in the fields of robotics, machine learning, artificial intelligence, and augmented reality. The main application domains include underground robotics, robot-assisted inspection and repair, search and rescue, autonomous driving, and the Internet of Robotic Things (IoRT).
Robotic Solutions for Underground Exploration
The global community is increasingly exploring underground environments for sustainable and
resilient solutions to societal problems.
Communities are moving infrastructure such as roads, data centers, and water treatment facilities below ground.
Exploration, inspection, and rescue operations in underground environments are both unsafe and challenging,
requiring advanced technologies such as robots and robotic swarms.
Many challenges must be addressed to design effective robotic solutions for underground environments.
For example, robots must be able to recognize victims and critical objects, localize themselves in similar underground environments, navigate autonomously over various terrain, and collaborate under communication constraints.
Robot-Assisted Reconnaissance, Inspection and Repair
Reconnaissance, inspection, and repair in dangerous environments have many real-world applications.
For example, it is essential to detect defects in power plant boilers or pipeline networks and to track their growth rate over time across multiple inspections,
as a boiler or pipeline accident can cost millions of dollars in healthcare bills, infrastructure replacement, environmental response, and cleanup operations.
Many computational challenges are present in such applications,
including how robots can recognize objects of interest (e.g., erosion in boilers or pipes),
how to localize them across multiple runs, potentially over a long time span,
and how to track and predict their changes over time (e.g., the growth rate of erosion).
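As an illustrative sketch only (not our actual method), the change-prediction challenge above can be framed as fitting a growth rate to defect measurements collected across inspections; the times, sizes, and least-squares model here are hypothetical.

```python
# Hypothetical example: estimate the growth rate of a defect (e.g., erosion
# depth in a boiler) from repeated inspections via a least-squares line fit,
# then project its size at a future inspection time.

def fit_growth_rate(times, sizes):
    """Least-squares slope and intercept of defect size vs. inspection time."""
    n = len(times)
    mean_t = sum(times) / n
    mean_s = sum(sizes) / n
    cov = sum((t - mean_t) * (s - mean_s) for t, s in zip(times, sizes))
    var = sum((t - mean_t) ** 2 for t in times)
    slope = cov / var            # growth rate (size units per time unit)
    intercept = mean_s - slope * mean_t
    return slope, intercept

def predict_size(slope, intercept, t):
    """Projected defect size at time t under the linear growth model."""
    return intercept + slope * t

# Three inspections (months, mm of erosion depth) -- made-up data.
times = [0.0, 6.0, 12.0]
sizes = [1.0, 1.6, 2.2]
slope, intercept = fit_growth_rate(times, sizes)
print(round(slope, 3))                                  # 0.1 mm/month
print(round(predict_size(slope, intercept, 24.0), 3))   # 3.4 mm at 24 months
```

A real system would also need to re-associate the same defect across runs before any such fit is possible, which is exactly the localization challenge noted above.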
Robot Modeling of People and Teamwork
We envision robots and humans teaming up and working side by side with an interaction style that is not based on direct control and commands from humans to robots, but rather on robots implicitly inferring human intents and activities through passive observation.
This would allow a person to collaborate with robots naturally, as they would
with human teammates, thus avoiding the cognitive
overload that occurs when humans must explicitly supervise robot teammates.
This direction of research focuses on estimating human states (e.g., activities, emotions, intents, and goals)
and modeling teamwork from robots' multisensory observations.
Distributed Collaborative Localization and Tracking
Robust and efficient detection and tracking of objects in complex environments
is critical to ensure safe and effective operations of autonomous agents
in real-world human-centered environments.
Our research focuses on developing collaborative perception approaches in which multiple agents detect, localize, and track multiple objects with deformable shapes and significant appearance changes over long periods of time.
This research supports our applications including human-robot teaming,
autonomous driving, and augmented reality (AR).
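A minimal sketch of the collaborative flavor of this problem (an assumption for illustration, not our algorithm): several agents report noisy position estimates of the same object, and the estimates are fused by weighting each report by the inverse of its variance, so more confident observers count more.

```python
# Hypothetical example: inverse-variance-weighted fusion of 2D position
# estimates of one tracked object, as reported by multiple agents.

def fuse_estimates(reports):
    """reports: list of ((x, y), variance) tuples, one per agent.
    Returns the inverse-variance-weighted mean position."""
    weights = [1.0 / var for _, var in reports]
    total = sum(weights)
    x = sum(w * p[0] for (p, _), w in zip(reports, weights)) / total
    y = sum(w * p[1] for (p, _), w in zip(reports, weights)) / total
    return x, y

# Two agents observe the same object; the second is more confident
# (lower variance), so the fused estimate is pulled toward its report.
reports = [((1.0, 2.0), 4.0), ((2.0, 3.0), 1.0)]
print(fuse_estimates(reports))  # (1.8, 2.8)
```

In practice this step would sit inside a full tracker that also handles data association and communication constraints between agents.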
Code and Datasets
- (Code) FABL: Feature and Body-part Learning to enable real-time robot awareness of human behaviors. Released: 09/2016.
- (Code) SRAL: Shared Representative Appearance Learning for long-term place recognition. Released: 08/2016.
- (Dataset) MUTA: Multisensory Unstructured Terrain Adaptation dataset to facilitate research on robot adaptation to unstructured terrains in field applications.
- (Dataset) MOLP: Multimodal Omni-directional Long-term Place-recognition to facilitate research on long-term multimodal place recognition. Released: 06/2018.
- (Dataset) Gymnast Detection and Activity Recognition Dataset to facilitate gymnast analysis from 3D cameras. Released: 05/2016.
The research and educational activities are currently supported by Metcalf Archaeological Consultants, Inc. (Metcalf), Army Research Office (ARO), National Science Foundation (NSF), Department of Energy (DOE), Department of Transportation (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA), Army Research Laboratory (ARL), United States Air Force Academy (USAFA), Colorado Energy Research Collaboratory (CERC), and Toyota InfoTechnology Center. Big thanks to the sponsors!