
NIST: Recommendations for Identifying and Managing Bias in AI


Organizations that fail to prevent bias from slipping into their artificial intelligence (AI) systems may soon find themselves in regulatory trouble. As we discussed in a previous post, the U.S. Federal Trade Commission announced in April that private companies selling biased AI systems can be held liable under the FTC Act, which prohibits unfair methods of competition as well as unfair or deceptive acts or practices affecting commerce. That same month, the European Union proposed regulations aimed at curbing mass surveillance and the use of AI as a manipulation tool.

Given the growing risks of bias in AI, “some sort of legislation or regulation is inevitable,” Christian Troncoso, the senior director of legal policy for the Software Alliance, a trade group that represents some of the largest and longest-running software companies, told the New York Times.  “Every time there is one of these terrible stories about A.I., it chips away at public trust and faith.”

Organizations can prepare for impending regulations by developing their own internal standards aligned with current best practices. To help, the National Institute of Standards and Technology (NIST), a non-regulatory agency of the U.S. Department of Commerce responsible for promoting “innovation and industrial competitiveness,” recently published guidance on identifying and managing bias in AI. This is the same agency that recently released a scoring system to help developers quantify human trust in AI systems.

Guidelines from NIST

The NIST guidance, titled “A Proposal for Identifying and Managing Bias in Artificial Intelligence,” gives organizations insight into how to spot and handle biases that can erode public trust in AI systems. The paper outlines a framework for addressing bias during the three main stages of an AI system’s life cycle:

Pre-design stage:
This stage covers early decisions: the problem the AI model will solve, how that problem is framed, and how the data to be used will be identified and quantified. It also determines who should make those decisions and which individuals and teams hold the most authority over them. This matters because decision makers bring personal points of view that can shape later stages of development and ultimately produce a biased AI model.

Design and development stage:
This stage is where modeling, engineering, and validation happen. The parties involved include the software designers, engineers, and data scientists responsible for applying risk management techniques. These practitioners tend to focus on the aggregate accuracy of their models without considering context. That narrow focus can produce results that are biased against minority groups, a problem related to the “ecological fallacy,” the error of drawing conclusions about individuals from aggregate data (a simple illustration follows the next paragraph).

One way to prevent this is to implement a practice called “cultural effective challenge,” which creates an environment that allows developers to challenge and question steps in the modeling and engineering process in order to remove bias. Another possible solution is to require developers to defend their methods, which encourages new ways of thinking.
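One concrete way to support such challenges is to ask for metrics broken down by subgroup rather than reported only in aggregate. The sketch below is a hypothetical Python example, not taken from the NIST paper; the function names and toy data are ours. It shows how a model can look accurate overall while performing much worse for a smaller group:

```python
# A minimal sketch (not from the NIST paper) of disaggregated evaluation:
# overall accuracy can look acceptable while masking much worse performance
# for a smaller subgroup -- the kind of issue reviewers should be able to
# challenge during design and development.
from collections import defaultdict

def accuracy(pairs):
    """Fraction of (label, prediction) pairs that match."""
    return sum(y == p for y, p in pairs) / len(pairs)

def disaggregated_accuracy(records):
    """Compute accuracy overall and per subgroup.

    `records` is an iterable of (group, label, prediction) tuples;
    the group key is whatever subgroup attribute you choose to audit.
    """
    by_group = defaultdict(list)
    for group, label, pred in records:
        by_group[group].append((label, pred))
    overall = accuracy([pair for pairs in by_group.values() for pair in pairs])
    return overall, {g: accuracy(pairs) for g, pairs in by_group.items()}

# Hypothetical toy data: the majority group dominates the overall metric.
records = (
    [("majority", 1, 1)] * 90 + [("majority", 0, 0)] * 90 +  # 100% correct
    [("minority", 1, 0)] * 10 + [("minority", 0, 0)] * 10    # 50% correct
)
overall, per_group = disaggregated_accuracy(records)
print(f"overall accuracy: {overall:.2f}")  # 0.95 -- looks fine in aggregate
print(per_group)                           # reveals the subgroup gap
```

Reporting results this way gives reviewers something specific to question when they challenge the modeling and engineering choices.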

Deployment stage:
During this stage, users begin interacting with the developed technology. These users often come from different backgrounds and professions, and they may put the technology to unintended uses. It is in this stage that “intention gaps” often take shape: gaps between what decision makers intended during the pre-design stage and the unintended consequences of those decisions after deployment.

Deployment monitoring and auditing can help close this gap. For example, a technique called “counterfactual fairness,” which uses causal methods to test whether a model’s prediction for an individual would change if a protected attribute were different, can help developers weigh social biases against prediction accuracy.
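The NIST paper does not prescribe an implementation, but the core idea can be sketched with a toy structural causal model: hold an individual’s unobserved noise fixed, flip the protected attribute, propagate the change through the assumed causal graph, and check whether the prediction changes. Everything below, including the causal model, the predictor, and the data, is hypothetical and for illustration only:

```python
# A minimal, illustrative sketch of the idea behind counterfactual fairness
# (hypothetical causal model and predictor, not from the NIST paper):
# a prediction should not change when the protected attribute is
# counterfactually flipped and the change is propagated through the
# assumed causal graph.

def generate_feature(protected, noise):
    """Toy structural causal model: the feature depends on the protected
    attribute plus individual-level noise held fixed across both worlds."""
    return 2.0 * protected + noise

def predict(feature):
    """Hypothetical trained predictor (stands in for a real model)."""
    return 1 if feature > 1.0 else 0

def counterfactual_flip_test(protected, noise):
    """Compare the factual prediction with the prediction in the
    counterfactual world where the protected attribute is flipped."""
    factual = predict(generate_feature(protected, noise))
    counterfactual = predict(generate_feature(1 - protected, noise))
    return factual == counterfactual

# Audit a few hypothetical individuals: any False indicates the prediction
# depends on the protected attribute through the assumed causal pathway.
for a, u in [(0, 0.2), (1, 0.2), (0, 1.5), (1, -0.5)]:
    print(a, u, counterfactual_flip_test(a, u))
```

In practice the causal model and predictor would come from the deployed system and its documented assumptions; the value of the exercise is making those assumptions explicit enough to audit.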

Currently, there are no guaranteed methods that prevent bias from creeping into AI systems. However, by following NIST’s recommendations, organizational leaders can ensure they are doing their best to prevent or mitigate harmful bias in these systems.

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries such as healthcare, manufacturing, transportation, and finance. As organizations build in these new digital environments, it is vital to apply rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Metz, Cade. (30 June 2021). Using A.I. to Find Bias in A.I. The New York Times. 

Wiggers, Kyle. (25 June 2021). AI Weekly: NIST proposes ways to identify and address AI bias. VentureBeat.

(22 June 2021). NIST Proposes Approach for Reducing Risk of Bias in Artificial Intelligence. NIST.

Down, Leann, Adam Jonas, Reva Schwartz, and Elham Tabassi. (June 2021). A Proposal for Identifying and Managing Bias in Artificial Intelligence (Draft NIST Special Publication 1270). National Institute of Standards and Technology.
