Artificial intelligence (AI) regulations are coming. In March, five of the largest U.S. financial regulators sent banks an information request asking how they are using AI. Such a request signals that the financial sector will soon face guidelines around the technology. In April, the U.S. Federal Trade Commission (FTC) released definitions of unfairness in AI, and the European Commission unveiled a proposal for AI regulations that would impose fines of as much as 6% of yearly revenue on organizations that violate them.
As regulations quickly evolve, it can be difficult for organizations to know exactly how to prepare. However, understanding the following major trends can help you stay ahead.
AI Risk Assessments
Regulatory agencies are increasingly requiring organizations to conduct risk assessments of their AI systems, which regulators often refer to as “algorithmic impact assessments,” or “IA for AI.”
Your organization should document how it has reduced and, where possible, eliminated the risks of its AI applications. Be sure to clearly describe the risk posed by each AI system and precisely detail how you are mitigating it.
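To make this concrete, the per-system documentation described above can be kept as a structured risk register rather than free-form text. The following is a minimal sketch, not a prescribed format; the class and field names (`AIRiskEntry`, `severity`, `mitigation`, and so on) are illustrative assumptions, not terms from any regulation.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One documented risk for a single AI system."""
    risk_description: str   # what could go wrong, and for whom
    severity: str           # e.g. "low", "medium", "high"
    mitigation: str         # how the risk is being reduced
    resolved: bool = False  # True only if the risk is fully eliminated

@dataclass
class AIRiskAssessment:
    """Risk assessment document for one AI application."""
    system_name: str
    entries: list = field(default_factory=list)

    def add(self, entry: AIRiskEntry) -> None:
        self.entries.append(entry)

    def open_risks(self) -> list:
        """Risks that are mitigated but not yet fully resolved."""
        return [e for e in self.entries if not e.resolved]

# Hypothetical example: documenting one risk for a credit-scoring model
assessment = AIRiskAssessment(system_name="credit-scoring-v2")
assessment.add(AIRiskEntry(
    risk_description="Model may underperform for applicants with thin credit files",
    severity="high",
    mitigation="Fairness testing on the thin-file segment before each release",
))
print(len(assessment.open_risks()))  # 1 risk remains open and documented
```

Keeping each risk as a record with an explicit mitigation field makes it straightforward to show a regulator exactly which risks have been identified, how each is being addressed, and which remain open.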
Diverse Expert Oversight
Regulators are increasingly requiring organizations to build accountability and independence into the evaluation and testing of their AI systems. One way to help ensure this is to appoint a group of experts who bring objectives to your AI project different from those of the technical team building the system.
As we discussed in a previous post, this can be done by appointing an Institutional Review Board. Such a board should include lawyers, ethicists, product developers, security officers, and subject matter experts who specialize in the same field as the AI application. (For example, if the application will be used in health care, these experts can include doctors, nurses, and patient advocates.)
Continuous Review Process
Regulatory agencies often require organizations to create a process for continuous review of their AI systems. While both a thorough risk assessment and objective evaluation are vital, these alone won’t make your AI system fully reliable, because AI systems remain susceptible to failure over time. Your organization should therefore keep its AI systems under continuous review, such as through routine auditing. By allowing you to spot weaknesses in these systems and fix them quickly, such a process also ensures the systems stay dependable long after they have been approved.
Many regulators also expect organizations to detail this review process in the documentation of their AI systems. Such details may include a review timeline, who will be charged with conducting reviews, and all the groups that will be involved in the process.
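The review documentation described above (timeline, reviewers, and participating groups) can likewise be captured in a simple machine-checkable record. The sketch below is a hypothetical illustration under assumed names (`ReviewPlan`, `interval_days`, and the example system and team names), not a format any regulator mandates.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ReviewPlan:
    """Continuous-review record for one AI system."""
    system_name: str
    interval_days: int       # e.g. 90 for quarterly audits
    last_review: date
    reviewers: list          # who conducts each review
    stakeholder_groups: list # groups involved in the process

    def next_review(self) -> date:
        """Date by which the next review is due."""
        return self.last_review + timedelta(days=self.interval_days)

    def is_overdue(self, today: date) -> bool:
        """True if the review schedule has been missed."""
        return today > self.next_review()

# Hypothetical example: a quarterly audit schedule
plan = ReviewPlan(
    system_name="fraud-detection-model",
    interval_days=90,
    last_review=date(2021, 1, 15),
    reviewers=["model-risk team", "security officer"],
    stakeholder_groups=["legal", "ethics board", "product"],
)
print(plan.next_review())                  # 2021-04-15
print(plan.is_overdue(date(2021, 5, 1)))   # True: the audit is overdue
```

Recording the schedule this way lets an organization both satisfy documentation expectations and automatically flag systems whose reviews have lapsed.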
Organizations are understandably confused about how to prepare for artificial intelligence regulations, especially as regulatory agencies are still figuring out what those regulations will entail. However, you can stay ahead of them by paying close attention to these trends and embracing them early on.
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance, among others. When leveraging these new digital environments, it’s vital to uphold rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instruction on a comprehensive approach to creating ethical and responsible digital ecosystems.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Burt, Andrew. (30 April 2021). “New AI Regulations Are Coming. Is Your Organization Ready?” Harvard Business Review.