The program is studying how assistive systems can help service members complete complex tasks in the field, such as battlefield medicine, helicopter co-piloting, and mechanical repair.
A person wearing an augmented reality headset picks up a pour-over coffee filter.
A visitor wearing a Microsoft HoloLens 2 headset was guided through the process of making pour-over coffee at an event on the MIT campus, where multiple teams demonstrated their prototype augmented reality task-assistance systems. Photo: Glen Cooper

Military service members must perform increasingly complex tasks but may lack the specialized expertise needed to fully understand the work they are completing. Minor mistakes made during these tasks, such as aircraft maintenance, can create unsafe conditions and could lead to mission failure. The Defense Advanced Research Projects Agency (DARPA) Perceptually-enabled Task Guidance (PTG) program is investigating ways to reduce this burden on service members by using artificial intelligence (AI) and augmented reality (AR).

DARPA is working with teams across industry and academia to understand how AI and AR can be applied effectively to help complete complex tasks in battlefield medicine, co-pilot guidance, and mechanical repair. MIT Lincoln Laboratory is working with DARPA to independently evaluate these systems and to provide feedback and guidance on how relevant the technology may be to military missions. The teams developing prototypes for the program are from Kitware, New York University, Northeastern University, Palo Alto Research Center, the University of Michigan, the University of Florida, the University of Texas at Dallas, Raytheon BBN, Northrop Grumman, and RedShred.

"Our job is to evaluate each of these prototypes, construct the evaluation scenarios and the right metrics, and guide the program to ensure that the technology is both useful and relevant to the missions," says Marianne DeAngelus, a senior staff member in the Homeland Sensors and Analytics Group.

Current work on the PTG program is focused on assisting a person through a single task and on evaluating how effectively each prototyped AI/AR system guides and monitors a user through that task. The Laboratory evaluators assess each prototype system against specific criteria, such as its ability to detect human errors, its user experience, and its functionality.

A photo of two military personnel in a helicopter.
The Perceptually-enabled Task Guidance (PTG) program aims to develop virtual “task guidance” assistants that can work with different sensor platforms to help military personnel perform complex physical tasks and expand their skillsets.

The Laboratory provided the teams with a specific set of cooking scenarios to prepare for: making pinwheels, brewing pour-over coffee, and microwaving a mug cake. In November, the teams gathered on the MIT campus to demonstrate their prototype systems to the Laboratory evaluators, the DARPA program office, and guests from the military.

The teams' prototype systems used several types of AR hardware, including the Microsoft HoloLens 2 headset. Because neither DARPA nor the Laboratory evaluators mandated specific methods, the teams took a wide range of approaches. Some used voice commands for user interaction, while others used menus that appeared on a user's hand and could be "pinned" to a wall.

After the teams demonstrated their prototypes at MIT, the Laboratory evaluators spent four weeks independently evaluating each one. This process involved testing each prototype system to see how its AI handled user errors; evaluators deliberately performed the wrong steps or completed steps slightly incorrectly. More advanced testing stressed each system's ability to perceive objects or guide the task under difficult lighting conditions.
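
To make this kind of testing concrete, the sketch below shows one way a system's error detection might be scored against a script of deliberately injected errors. It is a minimal, hypothetical illustration; the step names, injected errors, and flagged outputs are invented and do not come from the Laboratory's actual evaluation harness or data.

```python
# Hypothetical sketch of scoring a prototype's error detection.
# Step names and outputs are invented for illustration only.

# Steps where the evaluator deliberately made a mistake (ground truth).
injected_errors = {"add_filter", "pour_water"}

# Steps the prototype's AI flagged as erroneous (system output).
flagged_errors = {"pour_water", "grind_beans"}

true_positives = injected_errors & flagged_errors
false_positives = flagged_errors - injected_errors
false_negatives = injected_errors - flagged_errors

precision = len(true_positives) / len(flagged_errors)
recall = len(true_positives) / len(injected_errors)

print(f"precision={precision:.2f}, recall={recall:.2f}")
# precision=0.50, recall=0.50 -- the system caught one injected error,
# missed one, and raised one false alarm.
```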

A person's hand holding a "virtual" menu as seen through an AR headset.
The Northeastern University team’s prototype system displayed a menu on the user’s hand while they used an augmented reality headset. Photo: Northeastern University

"The core of the evaluation was error recognition, and results were mixed. There was no single winner in terms of which system worked better, and there was no one error that every system caught," DeAngelus says, adding that these results reflect the difficulty of AI operating in real-world environments. "These results tell us that combining some AI approaches might overcome shortcomings. We called this out in our report so that teams can learn from each other quickly and collaborate."

The Laboratory evaluators also sought input from Laboratory staff who were not involved with the project, asking them to rate each system's usability and helpfulness and their trust in it.
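
As a rough illustration of how such ratings might be summarized, the short sketch below averages scores along those three dimensions. The 1-to-5 scale and the numbers are hypothetical, not the study's actual instrument or results.

```python
# Hypothetical summary of user-study ratings on a 1-5 scale.
# Dimensions mirror those mentioned above; the values are invented.
ratings = {
    "usability":   [4, 3, 5, 4],
    "helpfulness": [3, 3, 4, 2],
    "trust":       [2, 3, 3, 2],
}

for dimension, scores in ratings.items():
    mean = sum(scores) / len(scores)
    print(f"{dimension}: {mean:.1f} / 5")
```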

In June, Lincoln Laboratory began the next round of evaluations, focused on specific military tasks such as applying a tourniquet to a wounded soldier, fixing a small engine, and completing preflight procedures in a Black Hawk helicopter. The prototype systems for these military use cases build on the fundamental technologies in perception, attention, knowledge transfer, and user modeling that the teams developed for the cooking challenge.

In the next phase of the program, the teams will adapt their current prototypes for more complex tasks and for multitasking. The Laboratory is also helping to identify critical shortcomings in the existing systems, such as the expensive, bulky computing hardware the prototypes require and limitations in how they recognize objects. In future years, the Laboratory will evaluate the systems in collaboration with active-duty military personnel in battlefield medicine, helicopter co-piloting, and maintenance.