
Follow my robot: teaching autonomous team mates by example

Soldiers are highly trained before being deployed, but unexpected situations sometimes arise and individuals need to learn new skills from team mates, often while on operations. Berenice Baker finds out about new research into whether autonomous robots could learn to navigate unfamiliar terrain in the same way as humans – by demonstration.

Image courtesy of US Army

Army robots are arguably already here in the form of autonomous drones and weapon systems, with transport and logistics vehicles hot on their heels. But if a new generation of robots is to work more closely with human counterparts in a team, they will need the learning ability to deal with situations that haven’t been pre-programmed in a lab.


Researchers at the US Army Research, Development and Engineering Command (RDECOM) Research Laboratory (ARL) and the Robotics Institute at Carnegie Mellon University (CMU) have developed a new technique that allows mobile robot platforms to navigate autonomously in environments while carrying out actions a human would expect of the robot in a given situation. One of the research team's goals is to provide reliable autonomous robot team mates to soldiers, including in dangerous situations.


Global Defence Technology spoke to researchers Dr Maggie Wigness and Dr John Rogers at ARL about their work.

Dr Maggie Wigness and Dr John Rogers. Image courtesy of US Army

Berenice Baker:
What is the background to the project? 

Dr Maggie Wigness:

Dr John Rogers and I have been working together on using visual perception for robot navigation since the end of my PhD studies in 2015. We focused mostly on how to efficiently train visual perception systems that could be used to define binary traversal costs, that is, whether the robot can or cannot traverse a specific terrain.
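
To make the binary-cost idea concrete, here is a minimal sketch (not ARL's code; the terrain classes and grid are assumed purely for illustration) of how a grid of semantic terrain labels could be turned into a go/no-go costmap:

```python
import numpy as np

# Hypothetical terrain classes a perception system might label each map cell with.
NON_TRAVERSABLE = {"water", "building", "dense_vegetation"}

def binary_costmap(semantic_grid):
    """Mark each cell 1 (blocked) or 0 (drivable) based on its terrain label."""
    blocked = np.isin(semantic_grid, list(NON_TRAVERSABLE))
    return blocked.astype(np.uint8)

# Example: a 3x3 patch of labelled terrain.
labels = np.array([
    ["asphalt", "grass",  "water"],
    ["asphalt", "gravel", "water"],
    ["grass",   "grass",  "building"],
])
print(binary_costmap(labels))
# [[0 0 1]
#  [0 0 1]
#  [0 0 1]]
```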


After attending an annual Robotics Collaborative Technology Alliance (RCTA) review meeting, we realised that Dr Luis Navarro-Serment (of CMU) was using a learning technique for predicting pedestrian traversal patterns that could be adapted to our robot traversal applications. We began this collaboration on learning non-binary traversal costs and traversal behaviours via human demonstration in early 2017.

Berenice Baker:
What advantages can a robot team mate offer over humans?

Dr Maggie Wigness:

An intelligent and autonomous robot team mate provides opportunities to keep humans from having to perform tasks that are potentially dangerous or highly repetitive and that may put strain on a human. For example, there may be very little intel about a region that a team needs to explore.


It is possible to send the robot ahead of soldiers to collect information and scout the area, so soldiers have more situational awareness before entering the site themselves. Repetitive or burdensome tasks such as persistent surveillance or rucksack/supply carrying can also be assigned to robots to help reduce soldier fatigue.

Berenice Baker:
What kind of tasks would a robot with this navigation intelligence carry out when working with a team of soldiers?

Dr John Rogers: 

A robot team mate with this type of advanced navigation system could learn to perform a variety of tasks alongside its human counterparts, such as scouting ahead of the formation to warn of upcoming dangers, manoeuvring within the group to help transport equipment or recover injured soldiers, or following behind to protect against flanking attacks. Navigation based on semantic cues is one of the key enabling technologies for each of these capabilities.

Berenice Baker:
How does the learned intelligence method improve over traditional robot navigation systems?

Dr John Rogers:

Traditionally, one would have to specify the costs of traversing each type of terrain to get a navigation system to behave as desired. This turns out to be very difficult and hard to extend to new types of terrain. Our method enables a human team mate to adjust the robot's navigation behaviour quickly, with a few demonstrations.
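
A common way to frame this, sketched below with assumed feature names rather than as ARL's actual formulation, is to score each map cell as a weighted sum of terrain features: the traditional approach hand-tunes the weights, while learning from demonstration fits them from driven examples.

```python
import numpy as np

# Assumed terrain feature layers, e.g. per-class scores from semantic segmentation.
FEATURES = ["grass", "gravel", "mud", "water"]

def cell_costs(feature_maps, weights):
    """Per-cell traversal cost = weighted sum of the terrain feature layers."""
    # feature_maps: (num_features, H, W); weights: (num_features,)
    return np.tensordot(weights, feature_maps, axes=1)

# Traditional route: an engineer hand-specifies one weight per terrain type and
# must re-tune them for every new terrain or mission.
hand_tuned = np.array([1.0, 2.0, 5.0, 50.0])

# Learning from demonstration instead fits these weights so that the routes a
# human drove come out as the cheapest paths under the resulting costmap.
feature_maps = np.random.rand(len(FEATURES), 40, 40)  # stand-in perception output
costmap = cell_costs(feature_maps, hand_tuned)
print(costmap.shape)  # (40, 40)
```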

Berenice Baker:
Could you offer an example of how on-the-fly learning in the field would take place when mission requirements change?

Dr John Rogers: 

The robot may be executing a navigation task such as a delivery, reconnaissance or a patrol when an unexpected environmental change occurs, for example, terrain types altered by flooding. The robot could then be behaving 'sub-optimally': driving on the wrong terrain or avoiding safer terrain.


A soldier operator could seize the robot's controls and correct its behaviour through teleoperation, once or a few times. Each time the operator corrects the robot's behaviour, the algorithm uses that information to learn and adapt to mimic the desired behaviour.
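
A highly simplified sketch of how such a correction could nudge learned cost weights is shown below; the exponentiated update and the toy features are assumptions for illustration, not the researchers' published algorithm.

```python
import numpy as np

def path_feature_counts(path, feature_maps):
    """Sum the terrain features over the cells a path visits."""
    return sum(feature_maps[:, r, c] for r, c in path)

def update_weights(weights, demo_path, robot_path, feature_maps, lr=0.1):
    """Nudge cost weights so the demonstrated path looks cheaper than the robot's.

    If the robot's own path used more of a feature (e.g. mud) than the human's
    correction did, that feature's cost weight goes up, and vice versa. The
    exponentiated update keeps all weights positive.
    """
    diff = (path_feature_counts(robot_path, feature_maps)
            - path_feature_counts(demo_path, feature_maps))
    return weights * np.exp(lr * diff)

# Toy example: two feature layers (grass, mud) on a 5x5 map.
feature_maps = np.zeros((2, 5, 5))
feature_maps[0, :, :2] = 1.0   # left columns are grass
feature_maps[1, :, 2:] = 1.0   # right columns are mud
weights = np.array([1.0, 1.0])

robot_path = [(r, 3) for r in range(5)]  # robot drove through the mud
demo_path = [(r, 1) for r in range(5)]   # operator demonstrated the grass route

weights = update_weights(weights, demo_path, robot_path, feature_maps)
print(weights)  # the mud weight rises above the grass weight
```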

Berenice Baker:
How did ARL and CMU collaborate on the project?

Dr Maggie Wigness: 

The initial learning from demonstration algorithm came from our collaborator’s lab at CMU. Researchers at ARL focused on integrating this learning and prediction algorithm onto a mobile robot platform. This integration includes linking the learning with the robot’s onboard sensors, that is, camera and LIDAR, and also interfacing the prediction output with the planning and navigation stack that runs on the robot.
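
The sketch below shows roughly how those pieces could plug together; the function names are invented for illustration, and a fielded system would sit on the robot's actual middleware and planner.

```python
import numpy as np

# Invented function names for illustration; a real system would run on the
# robot's middleware (e.g. ROS) and its existing navigation stack.

def segment_terrain(camera_image, lidar_points):
    """Perception: fuse camera and LIDAR into per-cell terrain feature layers."""
    h, w = 40, 40                   # stand-in map resolution
    return np.random.rand(4, h, w)  # placeholder feature layers

def predict_costs(feature_maps, learned_weights):
    """Prediction: apply the learned weights to produce a costmap."""
    return np.tensordot(learned_weights, feature_maps, axes=1)

def plan_path(costmap, start, goal):
    """Planning: hand the costmap to the navigation stack's planner."""
    ...  # e.g. A* / D* search over the costmap

# One perception -> prediction -> planning cycle.
features = segment_terrain(camera_image=None, lidar_points=None)
costmap = predict_costs(features, np.array([1.0, 2.0, 5.0, 50.0]))
plan_path(costmap, start=(0, 0), goal=(39, 39))
```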


We at ARL also spent time developing an efficient strategy to collect demonstrations and train the prediction system. Collaboration efforts between ARL and CMU helped identify improvements that could be made to the learning algorithm, real-world applications for this capability, and evaluation strategies to demonstrate the utility of our approach.

Berenice Baker:
What were the main challenges you had to overcome during your research?

Dr Maggie Wigness: 

When working with a fielded robot, there are many moving parts and systems that need to be integrated to produce an autonomous agent, including the perception, mapping, prediction and learning aspects of the robot. The collaboration effort for this research worked so well because each of the researchers has an area of expertise related to one of these components. More than once we coordinated trips to work together at the same location to get the integration sorted out. This was essential to produce a robot that could be fielded to perform live navigation operations in a real-world environment.

Berenice Baker:
What’s next for your research in this field?

Dr John Rogers:  

The learning from demonstration paradigm used in this algorithm could be adapted to many types of tasks beyond mobile robot terrain navigation. Intricate or complex tasks are already trained with human demonstration. Our approach is a way for these types of tasks to be adapted to changing, unforeseen, or even novel environments with few additional training examples. We also envision that this type of algorithm could be used to generalise training across all compatible robots so that they all learn from each other's retraining experience, or to adapt a robot to the specific needs of a particular operator or a unit with a unique mission, in which case that training should not be shared widely with other robots.


It has been a privilege to work with such talented collaborators; ARL's Robotics Collaborative Technology Alliance (CTA) programme has provided us with unique opportunities to work with top researchers from academia and focus on army-relevant problems.