Caught my Eye by Katina: NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems

May 21, 2021


“NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems” is an article posted on the National Institute of Standards and Technology website.

“Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations? 

“This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

“The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems.

“According to NIST’s Brian Stanton, the issue is whether human trust in AI systems is measurable — and if so, how to measure it accurately and appropriately…”

Please click here to continue reading this article.
