Can Artificial Intelligence Agents Develop Trust With Humans? New Research Says Yes!

Key takeaways from a new study in the INFORMS journal Management Science:

  • Artificial intelligence agents can autonomously develop trust and trustworthiness strategies like humans in economic exchange scenarios.
  • This research marks a significant stride toward creating intelligent systems that can cultivate social intelligence and trust through pure self-learning interaction.
  • This is a first step to build multi-agent-based decision support systems in which interacting artificial agents can leverage social intelligence to achieve better outcomes.

  

BALTIMORE, MD, February 21, 2024 – Artificial intelligence (AI) has made great strides in the past few years, even months. New research in the INFORMS journal Management Science finds that AI agents can build trust much as humans do.

“Human-like trust and trustworthy behavior of AI can emerge from a pure trial-and-error learning process and the conditions for AI to develop trust are similar to those enabling human beings to develop trust,” says Yan (Diana) Wu of San Jose State University.

“Discovering AI’s ability to mimic human trust behavior purely through self-learning processes mirrors conditions fostering trust in humans.”

Wu, with co-authors Jason Xianghua Wu of the University of New South Wales, UNSW Business School, Kay Yut Chen of The University of Texas at Arlington and Lei Hua of The University of Texas at Tyler, say it’s not just about AI learning to play a game; it’s a significant stride toward creating intelligent systems that can cultivate social intelligence and trust through pure self-learning interaction. 

The paper, “Building Socially Intelligent AI Systems: Evidence from the Trust Game using Artificial Agents with Deep Learning,” constitutes a first step to build multi-agent-based decision support systems in which interacting artificial agents can leverage social intelligence to achieve better outcomes. 

“Our research breaks new ground by demonstrating that AI agents can autonomously develop trust and trustworthiness strategies akin to humans in economic exchange scenarios,” says Chen.

The authors explain that contrasting AI agents with human decision-makers could deepen our knowledge of AI behaviors in different social contexts.

“Since social behaviors of AI agents can be endogenously determined through interactive learning, it may also provide a new tool for us to explore learning behaviors in response to the need for cooperation under specific decision-making scenarios,” concludes Hua.

“In an era where AI is evolving quickly and hesitation and mistrust of this technology are rampant, this research showcases the positive impact of AI and the opportunities this technology offers. Not only can it be used for good, but it can also foster trust the same way humans do – a groundbreaking revelation as we look toward the future of AI,” concluded Wu.

Link to full study.

 

About INFORMS and Management Science

Management Science is a premier peer-reviewed scholarly journal focused on research using quantitative approaches to study all aspects of management in companies and organizations. It is published by INFORMS, the leading international association for data and decision science professionals. More information is available at www.informs.org or @informs.

###

  

Contact:

Ashley Smith
Public Affairs Coordinator, INFORMS
443-757-3578
[email protected]

  
