DARPA Selects SoarTech to Measure Human-AI Trust

29 November 2021


SoarTech's TrustMATE technologies for data collection, autonomous systems, speech recognition and intention interpretation, and modeling and simulation supported the company's Phase 1 effort on the Defense Advanced Research Projects Agency (DARPA) Air Combat Evolution (ACE) program. Under Technical Area 2 (TA2), ACE - Trust and Reliability in User-System Teaming (ACE-TRUST), the team implemented an experimental methodology for modeling and measuring pilot trust in dogfighting autonomy and tested a novel human-machine interface (HMI) for communicating artificial intelligence (AI) trustworthiness.

The Unmanned Intelligence (UI) developed by Dynetics and SoarTech will enable pilots to perform a mission commander task, but the focus of ACE-TRUST is the pilot's trust in the dogfighting autonomy.

If the autonomy is doing something it shouldn't, the human should take control. Just as important, the autonomy should be left in control when it is better at the task than the human. Trust in the AI is dynamic over the course of the human-agent interaction, so it must be measured, modeled, and calibrated for human-AI teaming to succeed in warfighting.
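
To make that idea concrete, the Python sketch below shows one simple way dynamic trust could be modeled and checked for calibration: trust is smoothed toward each observed autonomy outcome, then compared against a measured trustworthiness value to flag over- or under-trust. This is purely illustrative, not SoarTech's TrustMATE method (whose details are not public); the function names, rates, and thresholds are all assumptions.

```python
# Purely illustrative toy model, NOT SoarTech's TrustMATE implementation
# (which is not public). Names, rates, and thresholds are assumptions.

def update_trust(trust: float, autonomy_succeeded: bool, rate: float = 0.2) -> float:
    """Exponentially smooth the pilot's estimated trust toward 1.0 after a
    successful autonomy action and toward 0.0 after a failure."""
    target = 1.0 if autonomy_succeeded else 0.0
    return trust + rate * (target - trust)


def calibration_advice(trust: float, trustworthiness: float) -> str:
    """Compare estimated trust with the autonomy's measured trustworthiness.
    A positive gap means over-trust; a negative gap means under-trust."""
    gap = trust - trustworthiness
    if gap > 0.15:
        return "over-trust: prompt the pilot to monitor the autonomy"
    if gap < -0.15:
        return "under-trust: surface evidence of autonomy competence"
    return "calibrated: no intervention needed"


# Simulate a short engagement: the autonomy succeeds, fails once, then recovers.
trust = 0.5
for outcome in (True, True, False, True, True):
    trust = update_trust(trust, outcome)
    print(f"trust={trust:.2f} -> {calibration_advice(trust, trustworthiness=0.7)}")
```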

“It is easy to think that a technology will be used, but user acceptance is actually a lot harder especially when safety and critical task execution is required,” said Lauren Reinerman-Jones, Ph.D., senior scientist at SoarTech. “Calibrating human-system trust enables the appropriate levels of reliance on autonomy. Accurate measurement of trust that is fieldable is essential for using autonomy the right amount at the right time. Our methods and models are contextually relevant and operationally ready.”

Pilot trust was measured and modeled from pilot interactions with the system, combined with physiological data. The next step is to assess that trust against the capabilities of the autonomy and to deliberately calibrate the pilot's trust to the capabilities of the dogfighting algorithms.
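
A hedged sketch of what fusing such signals could look like: the hypothetical estimate_trust function below combines behavioral and physiological proxies into one score. The feature names, weights, and inversion logic are invented for illustration; the actual ACE-TRUST measurement pipeline is not described in the article.

```python
# Hypothetical feature fusion, assuming inputs already normalized to [0, 1];
# the real ACE-TRUST pipeline is not public, so this is illustrative only.

def estimate_trust(intervention_rate: float, gaze_on_autonomy: float,
                   arousal_delta: float) -> float:
    """Fuse behavioral and physiological proxies into one trust score.

    intervention_rate: fraction of engagements where the pilot took control
    gaze_on_autonomy:  fraction of time spent monitoring the autonomy display
    arousal_delta:     normalized deviation from the pilot's resting baseline
    """
    # Frequent takeovers, heavy monitoring, and elevated arousal all suggest
    # lower trust, so each proxy is inverted before weighting.
    weights = (0.5, 0.3, 0.2)  # assumed weights, chosen for illustration
    score = (weights[0] * (1.0 - intervention_rate)
             + weights[1] * (1.0 - gaze_on_autonomy)
             + weights[2] * (1.0 - arousal_delta))
    return max(0.0, min(1.0, score))


print(estimate_trust(intervention_rate=0.1,
                     gaze_on_autonomy=0.4,
                     arousal_delta=0.2))  # ~0.79: relatively high trust
```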

Trust in the autonomy will allow operators to multi-task with confidence. The goal is to capture a model of human trust while SoarTech records trustworthiness, the actual state of the artificial intelligence. The open question is how to communicate the AI's state and intent to the human through an HMI, which could use visual or audio indicators.
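
One conceivable shape for such an HMI mapping, sketched in Python: the AI's self-reported confidence drives both a visual and an audio cue. The thresholds and cue choices here are assumptions for illustration, not the ACE-TRUST design.

```python
# Illustrative only: one way an HMI could map the autonomy's self-reported
# confidence onto visual and audio cues. Thresholds and cue names are
# assumptions, not the ACE-TRUST design.

from dataclasses import dataclass


@dataclass
class HmiCue:
    visual: str  # e.g., a border color on the tactical display
    audio: str   # e.g., a tone played in the pilot's headset


def cue_for_confidence(confidence: float) -> HmiCue:
    """Translate the AI's confidence in its current maneuver into cues the
    pilot can absorb at a glance while multi-tasking."""
    if confidence >= 0.8:
        return HmiCue(visual="green border", audio="none")
    if confidence >= 0.5:
        return HmiCue(visual="amber border", audio="single chime")
    return HmiCue(visual="red border", audio="repeating alert tone")


print(cue_for_confidence(0.65))
# HmiCue(visual='amber border', audio='single chime')
```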

SoarTech led Phase 1 of the research, with support from Collins Aerospace, the University of Iowa Operator Performance Laboratory, and Raytheon.

“We will be transitioning our accomplishments on DARPA ACE to an AFRL [Air Force Research Laboratory] Phase 2 SBIR to expand our objective trust modeling capabilities by capturing the dynamic nature of trust throughout the lifecycle of Human-Autonomy Teaming,” said Reinerman-Jones. “The vision for this work is to have a robust objective model of trust that can be adapted by the trustworthiness of the autonomy. We will use system trustworthiness to develop explainable and transparent human-machine interfaces using visuals, sounds, or even touch to calibrate the operator’s trust. AI that is trusted is the future for legal, moral, and ethical autonomy.”
