Demand for artificial intelligence (AI) is growing. According to Data Prot, a website that educates people about internet privacy, 37% of businesses and organizations employ AI technology. While AI can make organizations more efficient and profitable, these systems are not without risks, such as their propensity for bias and privacy infringements. Governments across the world are taking steps to curb these potential harms.
Recently, the UK’s Centre for Data Ethics and Innovation (CDEI) unveiled a roadmap for developing an “assurance ecosystem” for artificial intelligence. According to the market research and consumer insight industry news site Research Live, the roadmap, published in 2021 as part of the UK’s National AI Strategy, lays out six top priorities:
- Create demand for dependable and effective assurance throughout the AI supply chain, enhancing understanding of risks as well as accountability for mitigating them
- Develop a “dynamic, competitive AI assurance market” that provides a variety of successful services and tools
- Develop standards that provide a shared language for AI assurance
- Create an “accountable AI assurance profession” that guarantees effective AI assurance services
- Provide requirements to help organizations meet regulations
- Enhance connections between industry and independent researchers in a way that helps them create assurance methods and identify AI risks
Other governments are also issuing regulations and guidance for organizations that use, sell, and develop AI systems. As previously reported, the European Union recently proposed a first-ever comprehensive legislative package to regulate the technology.
In comparison, the U.S. has taken a more patchwork approach, with a number of agencies issuing their own specific guidance. As the legal news site JDSupra reports, these agencies include the Department of Commerce, the National Institute of Standards and Technology (NIST), the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), and the National Security Commission on Artificial Intelligence (NSCAI). As this list grows, legal experts believe it is increasingly likely that federal AI regulation will pass in the U.S.
“Companies should craft policies and procedures across the organization in order to create a compliance-by-design program that promotes AI innovation, but also ensures transparency and explainability of systems,” wrote experts from the business law firm Orrick in JDSupra. “Companies should also audit and review their usage regularly and document these processes to comply with regulators who may seek further information.”
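The article doesn’t prescribe specific tooling for that kind of audit trail, but the idea is straightforward to illustrate. Below is a minimal, hypothetical Python sketch of logging each model decision with enough metadata to review later; the function name, record fields, and file path are all illustrative assumptions, not any regulator’s or vendor’s schema.

```python
# Minimal, hypothetical sketch of an AI usage audit log, assuming an
# append-only JSON Lines file. All names (log_prediction, the record
# fields, AUDIT_LOG_PATH) are illustrative, not any regulator's schema.
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

AUDIT_LOG_PATH = "ai_audit_log.jsonl"  # assumed location

def log_prediction(model_name: str, model_version: str,
                   inputs: dict, output, reviewer: Optional[str] = None) -> None:
    """Record one model decision with enough context to audit it later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        # Hash inputs rather than storing raw data, to limit privacy exposure.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # None if the decision was fully automated
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: log a single (made-up) automated decision.
log_prediction("loan_risk_model", "2.3.1",
               {"income": 52000, "region": "UT"},
               output="approve", reviewer=None)
```

Even a log this simple gives an organization something concrete to produce for regulators who “may seek further information.”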
Major Hospital Group Establishes Data and AI Center of Excellence
Many organizations are already working to ensure their AI systems are ethical. One example is Intermountain Healthcare, a hospital system based in the U.S. state of Utah. As HealthITAnalytics reports, the health system developed a Data Science and Artificial Intelligence Center of Excellence to provide ethical oversight and to help Intermountain continuously improve its AI practices. The center will consist of experts from a number of disciplines, including data analytics, applied mathematics and statistics, computer science, behavioral sciences, econometrics, computational linguistics, and clinical informatics, as well as clinical specialists. This team will collaborate to develop large datasets that an AI system can “sort through accurately and efficiently.” Aside from reducing bias in Intermountain’s AI systems, these changes are also expected to help the hospital improve patient care.
“We can synthesize the insights from our populations of data and see what interventions and what care pathways are working across populations of patients,” Greg Nelson, Assistant Vice President of Analytics Services at Intermountain Healthcare, told HealthITAnalytics. “AI really helps us sift through what’s working.”
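The article doesn’t describe how Intermountain measures bias, but one common, simple check is demographic parity: comparing the rate of positive model outcomes across patient groups. The Python sketch below is a generic illustration of that check under stated assumptions; the function name, sample data, and threshold are hypothetical, not Intermountain’s actual methodology.

```python
# Illustrative sketch of one simple fairness check (demographic parity
# gap): the spread in positive-outcome rates between groups.
# Generic example only, not Intermountain's actual methodology.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates) for binary model outputs.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example with made-up data: flag for review if the gap exceeds a
# chosen (hypothetical) threshold of 0.2.
gap, rates = demographic_parity_gap([1, 0, 1, 1, 0, 0],
                                    ["A", "A", "A", "B", "B", "B"])
print(rates)      # {'A': 0.666..., 'B': 0.333...}
print(gap > 0.2)  # True -> worth investigating
```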
AI can be a great tool for improving business outcomes. However, it’s vital to ensure that these systems do not inadvertently harm people. By establishing AI standards at your organization, you can ensure your AI systems are safe while staying ahead of growing regulations.
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries such as healthcare, manufacturing, transportation, and finance. When building these new digital environments, it’s vital to apply rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides a comprehensive approach to creating ethical and responsible digital ecosystems.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
(23 November 2021). Ethical Artificial Intelligence Standards To Improve Patient Outcomes. HealthITAnalytics.
McKenney, Ryan; Sussman, Heather Egan; Wolfington, Alyssa. (19 November 2021). U.S. Artificial Intelligence Regulation Takes Shape. JDSupra.