Improving Trust Between Humans and AI Teammates in Mission-Critical Environments

TRUST’M models human-machine trust to establish, maintain, and repair trust for long-term teaming efficacy

Aptima, Inc., a trailblazer in leveraging artificial intelligence and advanced analytics to enhance mission readiness, announced today that it has completed a $1 million AFWERX Direct to Phase II Small Business Technology Transfer (STTR) award to develop Trust Resilience in User-System Team Modeling (TRUST’M), a system that models a human’s trust in a machine teammate, assesses the machine’s actual competence, and adjusts the machine’s behavior to calibrate the human’s trust.

When human teammates’ trust is not properly calibrated to the capabilities of their machine partner, they can exhibit all-or-nothing behavior: with too much trust, they neglect to review the machine teammate’s work; with too little, they ignore the machine’s suggestions and feedback. Machine teammates, in turn, need to measure the human teammate’s trust level in order to help the human delineate task responsibilities, maintain awareness of the machine’s capabilities, and keep an accurate sight picture of the operational space. Maintaining trust in human-machine teams is difficult when risk is high or a teammate’s perceived competence changes, and such changes lead to team misalignment over time. The ability to establish, maintain, and repair trust is therefore essential to long-term teaming efficacy. Promoting strong intra-team collaboration requires (1) setting and maintaining the human’s trust in the intent of machine partners; (2) establishing and reinforcing the machine’s trust in the human’s assessment of competence; and (3) driving interventions that repair and re-align trust after acute changes in the team’s perceived competence.

In response, Aptima developed TRUST’M. With partners at Carnegie Mellon University (led by Dr. Cleotilde Gonzalez), Aptima selected a task, an AI teammate, and a modeling approach for co-training with dynamic trust adjustment, and built a system for maintaining and repairing trust in human-machine teams:

  • The task was based on intelligence analyst use cases.
  • The teammate was ALFRED, an AI cognitive assistant developed by Aptima for Army analysts that recommends information for an analyst to review based on priority information requests (PIRs). The human teammate saves items that support their conclusions to reports associated with each PIR and rejects items and keywords that are irrelevant.
  • The modeling approach for TRUST’M included an instance-based learning theory (IBLT) model of the human’s trust in ALFRED, based on models of trust developed by Dr. Gonzalez and colleagues; a minimal sketch of this style of model appears below.
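
For readers unfamiliar with IBLT, the following is a minimal Python sketch of an instance-based trust model in that spirit, with base-level activation, Boltzmann retrieval, and blended values. The class name, parameter values, and utility coding are illustrative assumptions; this post does not describe the actual TRUST’M implementation.

```python
import math
import random

# Illustrative IBLT parameters (typical defaults in the IBLT literature);
# the actual TRUST'M model and values are not published in this post.
DECAY = 0.5                       # memory decay d
NOISE = 0.25                      # activation noise sigma
TAU = NOISE * math.sqrt(2)        # Boltzmann temperature

class IBLTrustModel:
    """Toy instance-based model of a human's trust in an AI teammate."""

    def __init__(self):
        self.t = 0                # current time step
        self.instances = {}       # outcome utility -> timestamps observed

    def record(self, utility):
        """Store one experience with the teammate, e.g. utility 1.0 for a
        useful ALFRED recommendation, 0.0 for an irrelevant one."""
        self.t += 1
        self.instances.setdefault(utility, []).append(self.t)

    def _activation(self, timestamps):
        # Base-level activation ln(sum (t - t_j + 1)^-d) plus logistic noise.
        base = math.log(sum((self.t - tj + 1) ** -DECAY for tj in timestamps))
        p = random.uniform(1e-6, 1.0 - 1e-6)
        return base + NOISE * math.log((1.0 - p) / p)

    def blended_trust(self):
        """Blend stored utilities by retrieval probability; read the result
        as the model's current estimate of the human's trust level."""
        if not self.instances:
            return 0.5            # assumed neutral prior
        acts = {u: self._activation(ts) for u, ts in self.instances.items()}
        weights = {u: math.exp(a / TAU) for u, a in acts.items()}
        total = sum(weights.values())
        return sum(u * w / total for u, w in weights.items())

model = IBLTrustModel()
for outcome in (1.0, 1.0, 0.0, 1.0):   # mostly useful recommendations
    model.record(outcome)
print(f"estimated trust: {model.blended_trust():.2f}")
```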

When fully instantiated, TRUST’M will track the user’s behavior and feedback, determine the discrepancy between the human’s apparent assessment and TRUST’M’s own assessment of ALFRED’s competence, and identify experiences (e.g., behaviors in ALFRED) that calibrate the human’s trust in ALFRED to the correct range. TRUST’M will dynamically adjust trust by changing ALFRED’s behavior to maintain optimal trust levels and to repair trust when over- or under-trust occurs.
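
To make that loop concrete, the sketch below compares the human’s apparent trust with the system’s assessed competence and selects an intervention when the gap exceeds a threshold. This is a minimal sketch under assumed signals: the thresholds, function name, and intervention labels are hypothetical, not TRUST’M’s actual policy.

```python
# Hypothetical calibration loop: thresholds, signal names, and
# interventions are illustrative assumptions, not TRUST'M's policy.

OVER_TRUST_GAP = 0.2    # human trust may exceed competence by this much
UNDER_TRUST_GAP = 0.2   # ...or fall short of it by this much

def choose_intervention(apparent_trust: float, assessed_competence: float) -> str:
    """Pick a behavior change for ALFRED from the trust/competence gap."""
    gap = apparent_trust - assessed_competence
    if gap > OVER_TRUST_GAP:
        # Over-trust: expose uncertainty so the human reviews more output.
        return "surface_confidence_and_rationale"
    if gap < -UNDER_TRUST_GAP:
        # Under-trust: demonstrate competence where the AI performs well.
        return "recommend_high_confidence_items"
    return "no_change"  # trust is already within the calibrated band

# Example: the analyst appears to accept everything ALFRED suggests,
# but ALFRED's assessed competence on this PIR is only moderate.
print(choose_intervention(apparent_trust=0.9, assessed_competence=0.6))
```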

Aptima welcomes the adoption or merging of your technology with one or more of our SBIR topics. We are eligible for SBIR Enhancement funding, as well as TACFI and STRATFI awards, all of which are sole-source.

For more information, please contact aptima_info@aptima.com.

USSPACECOM Photo by Christopher DeWitt
