How Institutional Review Boards Can Reduce AI Risks 

Organizations are increasingly adopting artificial intelligence (AI) standards to mitigate risks associated with the technology, such as its propensity for bias. While developing AI standards is necessary, they also need to be upheld in order to be effective. To do so, organizations can consider establishing a body of experts charged with overseeing AI standards and ethics. 

In general, institutional review boards (IRBs) ensure organizations are upholding their basic ethical principles by authorizing, rejecting, and recommending changes to research projects and products. In the United States, these governing bodies have proven effective at reducing ethical risks in the field of medicine. IRBs can provide similar oversight for organizations involved in artificial intelligence. 

When establishing an IRB for your organization, there are three main issues to consider, according to Harvard Business Review.

Who Should Sit on the Board?

Your IRB should include a diverse group of experts capable of systematically pinpointing and reducing ethical risks in your AI applications. It should include: 

  • engineers and product designers who can explain the technology and its potential impact on users;
  • lawyers and security officers who are knowledgeable about current laws, regulations, and privacy standards;
  • experts who specialize in ethics;
  • subject matter experts from various backgrounds who specialize in the application at hand (for example, a doctor’s oversight could be helpful for AI applications used in hospitals);
  • and at least one expert who is not affiliated with your organization, to bring objectivity to the committee’s decision-making.

What Jurisdiction Should the IRB Hold?

When it comes to artificial intelligence applications, consult your institutional review board as early as possible, preferably before research or product development begins. After all, it’s far easier and cheaper to alter a project before work on it starts, and you don’t want to invest time and money in a project that turns out to pose a major ethical risk. 

You also need to determine how much authority your IRB will possess. In the medical field, IRBs are given ultimate authority—once an IRB rejects a proposal, it won’t be reconsidered, and if the IRB proposes changes, the revisions must be made. You’ll need to decide if your IRB has this much power, or if, for example, you want to put an appeals process in place. However, you should keep in mind that the more authority your IRB has, the more effective it is likely to be at reducing risk. 

What Are the Values That Will Guide Your IRB?

Developing a core set of values for your IRB will be relatively easy. The harder task is instituting mechanisms that prevent those values from being twisted or interpreted too broadly.

In the medical field, decisions are guided by more than principles alone. Medical IRBs typically compare cases to ones decided in the past, which helps them apply their principles consistently. 

Similarly, institutional review boards charged with AI oversight can look to precedent. Let’s say, for example, that your IRB declined to approve a contract with a particular country due to ethical risks in how that government operates. It could apply the reasoning behind that decision to similar cases in the future. And when a case is unprecedented, the IRB can work through fictionalized scenarios to determine how its principles should apply. 

Setting up an IRB in your organization will help you create a ground-up approach to AI oversight. Additionally, it will build trust among your employees and customers, and make your organization more competitive in an environment where concern over AI is higher than ever. 

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance. When building these new digital environments, it’s vital to uphold rigorous ethical standards designed to protect end users. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that offers a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Blackman, Reid. (1 April 2021). If Your Company Uses AI, It Needs an Institutional Review Board. Harvard Business Review. 
