Six Ways To Manage Risk in Your AI Systems

As artificial intelligence (AI) becomes more common, so do its risks, such as its potential for bias and privacy infringements. As we’ve discussed in previous posts, governments around the world are beginning to develop requirements and guidance around AI. Organizations that are not yet developing AI standards in alignment with these requirements may soon find themselves struggling to keep up with regulations. However, there are steps they can start taking now to navigate these shifting requirements.

Six Steps for Managing Risk in AI

According to Michael K. Atkinson and Rukiya Mohamed, attorneys at Crowell & Moring specializing in national security practice and regulatory enforcement, you should approach AI risk management the same way you would approach onboarding new employees. Drawing on AI frameworks developed by governmental bodies, such as the Intelligence Community’s AI Ethics Framework and the European Commission’s High-Level Expert Group on Artificial Intelligence’s Ethics Guidelines for Trustworthy AI, they recommend six steps to reduce risk in your AI:

  1. Build integrity into your organization’s AI from the design stage. “Just as employees need to be aligned with an organization’s values, so too does AI,” Atkinson and Mohamed write in VentureBeat. “Organizations should set the right tone from the top on how they will responsibly develop, deploy, evaluate, and secure AI consistent with their core values and a culture of integrity.”
  2. Onboard AI as your organization would new employees and third-party vendors. “As with humans, this due diligence process should be risk-based,” the authors write. This will involve checking “the equivalent of the AI’s resume and transcript,” such as “the quality, reliability, and validity of data sources used to train the AI,” and the risks of using AI whose proprietary data is not available. It also includes checking “the equivalent of references to identify potential biases or safety concerns in the AI’s past performance,” as well as “deep background” checks, such as reviewing source code with the providers’ consent in order to “root out any security or insider threat concerns.”
  3. Ingrain AI into your organizational culture before deployment. “Like other forms of intelligence, AI needs to understand the organization’s code of conduct and applicable legal limits, and, then, it needs to adopt and retain them over time,” Atkinson and Mohamed write. “AI also needs to be taught to report alleged wrongdoing by itself and others. Through AI risk and impact assessments, organizations can assess, among other things, the privacy, civil liberties, and civil rights implications for each new AI system.”
  4. Manage, evaluate, and hold AI accountable. Similar to how it might take a risk-based, probational approach to doling out responsibilities to new employees, your organization should do the same with AI. “Like humans, AI needs to be appropriately supervised, disciplined for abuse, rewarded for success, and able and willing to cooperate meaningfully in audits and investigations,” the authors write. “Companies should routinely and regularly document an AI’s performance, including any corrective actions taken to ensure it produced desired results.”
  5. Keep AI safe from various dangers, such as physical harm and cyber threats, similar to what is done for employees. “For especially risky or valuable AI systems, safety precautions may include insurance coverage, similar to the insurance that companies maintain for key executives,” they write. 
  6. Terminate or retire AI systems that don’t meet your organization’s values and standards or that simply age out. “Organizations should define, develop, and implement transfer, termination, and retirement procedures for AI systems,” Atkinson and Mohamed write. “For especially high-consequence AI systems, there should be clear mechanisms to, in effect, escort AI out of the building by disengaging and deactivating it when things go wrong.”
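To make the six steps concrete, here is a minimal sketch of how an organization might track an AI system through this employee-like lifecycle. The `AISystemRecord` class and its fields are hypothetical illustrations, not part of any framework cited above; they simply map onboarding checks (steps 2–3), performance documentation (step 4), and deactivation (step 6) onto a simple record.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Hypothetical lifecycle record mirroring the six steps above."""
    name: str
    risk_level: str                          # e.g. "low", "medium", "high"
    data_sources_validated: bool = False     # step 2: due diligence on training data
    impact_assessment_done: bool = False     # step 3: risk and impact assessment
    active: bool = True
    performance_log: list = field(default_factory=list)  # step 4: documentation

    def log_performance(self, note: str) -> None:
        """Record a dated performance note or corrective action (step 4)."""
        self.performance_log.append((date.today().isoformat(), note))

    def ready_to_deploy(self) -> bool:
        """Steps 2-3: block deployment until diligence and assessment are complete."""
        return self.data_sources_validated and self.impact_assessment_done

    def retire(self, reason: str) -> None:
        """Step 6: 'escort AI out of the building' by deactivating it."""
        self.active = False
        self.log_performance(f"Retired: {reason}")

# Usage: a high-risk system cannot deploy until its onboarding checks pass.
model = AISystemRecord(name="loan-scoring-v2", risk_level="high")
print(model.ready_to_deploy())  # False
model.data_sources_validated = True
model.impact_assessment_done = True
print(model.ready_to_deploy())  # True
model.retire("failed quarterly bias audit")
print(model.active)  # False
```

In practice such a record would live in a governance or MLOps platform rather than a script, but the gating logic — no deployment without completed due diligence, and a documented path to deactivation — is the point of the authors’ employee analogy.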

Keeping up with evolving AI requirements and guidelines isn’t easy. However, managing risk in your AI systems isn’t much different from how you already manage it with employees. Like humans, AI systems are prone to bias and mistakes. As such, it’s fair to treat them with the same level of scrutiny.

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries such as healthcare, manufacturing, transportation, and finance. As organizations leverage these new digital environments, it’s vital to uphold rigorous ethical standards designed to protect end users. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!


Atkinson, Michael K., and Mohamed, Rukiya. (19 September 2021). “Want to develop a risk-management framework for AI? Treat it like a human.” VentureBeat.

