
U.S. Standards Agency Announces New System To Measure Trust in AI


In order for humans to fully embrace artificial intelligence (AI), they need to trust it. While there hasn’t been a standard way for organizations to gauge how much people trust their AI models, that could soon change. The National Institute of Standards and Technology (NIST), an agency within the U.S. Department of Commerce, has developed a scoring system that quantifies human trust in AI systems.

In a recently released paper, NIST researchers provide details about the system, which they hope will help businesses and developers who use AI systems make informed decisions and identify areas where people lack trust in them.

The system involves two scores:

1) A user trust potential score: measures details about the person using an AI system, including their age, gender, cultural beliefs, and experience with other AI systems.

2) A perceived system trustworthiness score: covers technical aspects, such as whether an outdated user interface makes people question the trustworthiness of an AI system. The proposed score assigns weights to nine characteristics, including accuracy and explainability; factors and weights for other characteristics that play into trusting AI, such as reliability and security, are still being worked out (a rough illustration of how such a weighted score might work follows below).
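NIST’s paper does not spell out a final formula, but a minimal sketch of a weighted trustworthiness score might look like the following. The characteristics, weights, ratings, and simple weighted-sum aggregation are illustrative assumptions, not NIST’s published method.

```python
# Illustrative sketch only: NIST assigns weights to nine characteristics
# (e.g., accuracy, explainability), but the characteristics, weights, and
# weighted-sum aggregation used here are assumptions, not NIST's method.

def perceived_trustworthiness(ratings, weights):
    """Combine per-characteristic ratings (0.0-1.0) into one score
    using a normalized weighted sum."""
    total_weight = sum(weights.values())
    weighted = sum(ratings[name] * w for name, w in weights.items())
    return weighted / total_weight

# Hypothetical weights: a cancer-diagnosis tool might weight accuracy
# far more heavily than a book-recommendation engine would.
weights = {"accuracy": 0.40, "explainability": 0.20, "reliability": 0.25, "security": 0.15}
ratings = {"accuracy": 0.92, "explainability": 0.60, "reliability": 0.85, "security": 0.75}

print(f"Perceived trustworthiness: {perceived_trustworthiness(ratings, weights):.2f}")
```

How the weights are chosen matters as much as the ratings themselves, which is why expectations vary by use case, as described below.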

According to the paper, expectations around an AI system will depend on how it is used. For example, a system used by healthcare professionals to diagnose cancer should be more exact than one that recommends books.

However, some experts fear the trust scores may not reflect enough factors. For example, someone’s mood or evolving attitude toward AI could affect their trust. Explanations also influence trust, and some experts think their role should be factored into the system: without a measure of how good an explanation is, explanations can lead people to trust AI more than is warranted.

“Explanations can bring about unusually high trust even when it is not warranted, which is a recipe for problems,” Himabindu Lakkaraju, a Harvard University assistant professor who studies how trust impacts human decision making in professional settings, told Wired. “But once you start putting numbers on how good the explanation is, then people’s trust slowly calibrates.”

AI Regulations Take Shape Worldwide

The U.S. is far from the only country grappling with how to deal with trust in AI. Forty nations, including the U.S., have signed on to the Organisation for Economic Co-operation and Development (OECD) AI Principles, and around a dozen European nations have signed a document stating that trustworthiness and innovation are intertwined.

Both NIST and the OECD, a group of 38 countries dedicated to shaping policies that improve lives, are developing tools to classify AI systems as either low or high risk. Meanwhile, Canada has developed the Directive on Automated Decision-Making, which requires an algorithmic impact assessment, with the objective to “ensure that Automated Decision Systems are deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law.”

Many countries are also looking to implement regulations in order to help foster trust in AI. Currently, lawmakers in the EU are discussing AI regulations that could help shape global standards for the various risk levels of AI, as well as provide techniques for regulating these systems. Similar to the GDPR privacy law, such regulations could entice large, global organizations to rethink their practices around AI technology. 

As artificial intelligence regulations come to fruition, it is becoming increasingly clear that organizations need to be prepared for their arrival. Some recommended ways to build trust around your AI systems now include establishing internal AI standards, training employees, and implementing an institutional review board to oversee the technology.

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries such as healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to keep in mind rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that offers a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Johnson, Khari. (22 June 2021). This Agency Wants to Figure Out Exactly How Much You Trust AI. Wired.

Directive on Automated Decision-Making. Government of Canada. 
