
What To Expect from NIST’s Artificial Intelligence Risk Management Framework


The United States, Canada, and Europe have all begun taking steps to regulate artificial intelligence (AI). California and Washington, for example, are considering proposals aimed at mitigating bias in AI applications, a clear sign that organizations that develop and deploy these systems need to take regulation seriously.

As discussed in a previous post, the U.S. National Institute of Standards and Technology (NIST), a non-regulatory agency of the U.S. Department of Commerce responsible for promoting innovation and industrial competitiveness, recently published guidance that can help organizations identify and manage bias in AI. 

Under direction from the U.S. Congress, NIST is also gathering insight and feedback from public- and private-sector AI stakeholders—including industry, civil society groups, academic institutions, federal agencies, foreign governments, standards-developing organizations, and researchers—to develop a framework for promoting the responsible deployment of AI technologies. Known as the Artificial Intelligence Risk Management Framework (AI RMF), the guidance will serve as an important tool for building trustworthy AI and a necessary step toward public acceptance of the technology.

“While there is no objective standard for ethical values, as they are grounded in the norms and legal expectations of specific societies or cultures, it is widely agreed that AI must be designed, developed, used, and evaluated in a trustworthy and responsible manner to foster public confidence and trust,” NIST states. “Trust is established by ensuring that AI systems are cognizant of and are built to align with core values in society, and in ways which minimize harms to individuals, groups, communities, and societies at large.”

The Artificial Intelligence Risk Management Framework

Elham Tabassi, NIST’s Information Technology Laboratory chief of staff, told Nextgov that the agency’s current efforts focus on cultivating trust in AI’s “design, development, use, and governance,” which include developing data and measures to assess artificial intelligence and related technical standards. 

“The framework is intended to provide a common language that can be used by AI designers, developers, users, and evaluators as well as across and up and down organizations,” Tabassi said. “Getting agreement on key characteristics related to AI trustworthiness—while also providing flexibility for users to customize those terms—is critical to the ultimate success of the AI RMF.”

According to NIST, the framework is intended to help these AI stakeholders “better manage risks across the AI lifecycle.” Specifically, it aims to:

  • “foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses. 
  • consider and encompass principles such as transparency, fairness, and accountability during design, deployment, use, and evaluation of AI technologies and systems. 
  • consider risks from unintentional, unanticipated, or harmful outcomes that arise from intended uses, secondary uses, and misuses of the AI.” 
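To make one of these trustworthiness characteristics concrete, consider “mitigation of unintended and/or harmful bias.” Below is a minimal sketch, in Python, of one common fairness metric (demographic parity difference) that an organization might track when auditing a model. The metric choice, function, and sample data are illustrative assumptions on our part, not anything the AI RMF prescribes.

```python
# Illustrative only: one way to quantify "unintended and/or harmful bias,"
# one of the trustworthiness characteristics listed above. The metric
# (demographic parity difference) and the sample data are hypothetical
# examples, not a method prescribed by the AI RMF.

def demographic_parity_difference(outcomes, groups):
    """Difference in favorable-outcome rates between two groups.

    outcomes: list of 0/1 model decisions (1 = favorable)
    groups:   list of group labels, one per decision
    """
    labels = sorted(set(groups))
    if len(labels) != 2:
        raise ValueError("expected exactly two groups")

    rates = {}
    for label in labels:
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates[label] = sum(decisions) / len(decisions)

    return abs(rates[labels[0]] - rates[labels[1]])

# Example: an internal audit might flag the model for review
# if the gap between groups exceeds an agreed-upon threshold.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # prints 0.5
```

A single number like this is only a starting point; the framework’s emphasis on a “common language” suggests the real value lies in stakeholders agreeing on which measures to use and what thresholds are acceptable.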

Once NIST finishes gathering feedback from stakeholders, it will use this information to develop a risk management framework for AI. The agency intends for the framework to be “adaptable to many different organizations, AI technologies, lifecycle phases, sectors, and uses.”
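Because the framework will leave those adaptations to adopters, organizations can begin cataloging AI risks by lifecycle phase now. The sketch below shows one hypothetical way to structure such a risk register; the phases, field names, and entries are our own assumptions, not a format drawn from the AI RMF.

```python
# A hypothetical internal AI risk register, organized by lifecycle phase.
# The structure and entries are illustrative; the AI RMF does not
# prescribe this format.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    phase: str        # e.g., "design", "development", "deployment"
    risk: str         # description of the potential harm
    mitigation: str   # planned control or response
    owner: str        # team accountable for the mitigation

register = [
    RiskEntry("design", "training data underrepresents key groups",
              "audit data sources before model development", "data team"),
    RiskEntry("deployment", "model drift degrades accuracy over time",
              "monitor live predictions against a holdout set", "ML ops"),
]

for entry in register:
    print(f"[{entry.phase}] {entry.risk} -> {entry.mitigation} ({entry.owner})")
```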

While the framework is still being fleshed out, it will ultimately serve as an important guide for organizations that want to develop standards for trustworthy AI applications, as well as for those that want to avoid running afoul of future regulations. NIST’s full Request for Information is available in the Federal Register (see Resources below).

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries such as healthcare, manufacturing, transportation, and finance. As organizations build out these new digital environments, it’s vital to apply rigorous ethical standards designed to protect end users. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Vincent, Brandi. (9 August 2021). NIST Prioritizes External Input in Development of AI Risk Management Framework. Nextgov. 

Pasquale, Frank and Malgieri, Gianclaudio. (30 July 2021). If You Don’t Trust A.I. Yet, You’re Not Wrong. The New York Times.

National Institute of Standards and Technology. (29 July 2021). Artificial Intelligence Risk Management Framework. Federal Register.
