As artificial intelligence (AI) systems are built on larger and larger quantities of data, the potential threat that they can pose to the public also grows. For example, automated systems could be equipped with biased algorithms that discriminate against women and minority groups. Additionally, AI-based software, such as facial recognition technology, can jeopardize the privacy of millions of people. To get ahead of the problem, governments are beginning to regulate rapidly advancing AI, and organizations that develop these systems will eventually be required to comply.
In Europe, a law known as the General Data Protection Regulation (GDPR) oversees data privacy for EU citizens and residents. While the United States has yet to pass specific AI regulations, the country is expected to begin rolling out state and federal regulations in the coming years. One example is the Algorithmic Accountability Act, which would mandate that organizations examine and repair potentially harmful flaws in computer algorithms. Similar to the GDPR, the Algorithmic Accountability Act would make impact assessments mandatory for high-risk automated decision and information systems.
Additionally, many businesses are taking steps to establish their own AI standards to ensure their systems are ethical and safe, as well as to protect them from potential liability.
Here are four expert-recommended principles organizations can consider for their AI standards.
What Should AI Standards Include?
Transparency: Huma Abidi, senior director of AI software products at Intel, recommends that AI developers define clear, quantifiable standards and processes that can be measured in terms of quality and robustness. She told Venture Beat that ethical AI systems should be “fair, transparent, [and] explainable.”
One example is the paper “Datasheets for Datasets,” which proposes a standardized process for documenting machine learning datasets. It states that “every dataset [should] be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on.” Another example is a machine learning documentation project called “Model Cards for Model Reporting.” That paper explains: “Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.”
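To make the idea concrete, the documentation artifacts described above can be sketched as simple structured records. This is a minimal, hypothetical illustration: the datasheet field names follow the sections quoted from “Datasheets for Datasets,” while the class names, model-card fields, and example values are assumptions for illustration only, not part of either paper.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Datasheet:
    # Sections drawn from the "Datasheets for Datasets" quote above
    motivation: str
    composition: str
    collection_process: str
    recommended_uses: str

@dataclass
class ModelCard:
    # Illustrative fields in the spirit of "Model Cards for Model Reporting"
    model_name: str
    intended_use: str          # context the model is intended for
    evaluation_procedure: str  # how performance was measured
    metrics: dict = field(default_factory=dict)
    training_datasheet: Datasheet = None

# Hypothetical example of filling out both documents for one model
card = ModelCard(
    model_name="loan-risk-classifier",
    intended_use="Ranking applications for human review, not automated denial",
    evaluation_procedure="Held-out test set; metrics reported per demographic group",
    metrics={"accuracy": 0.91},
    training_datasheet=Datasheet(
        motivation="Assess repayment risk",
        composition="1.2M anonymized loan applications, 2015-2020",
        collection_process="Exported from internal loan system; PII removed",
        recommended_uses="Credit-risk research; not for employment screening",
    ),
)

# The record serializes to a plain dict, so it can ship alongside the model
print(asdict(card)["training_datasheet"]["motivation"])
```

Keeping these records as structured data rather than free-form prose makes it easier to require them in a workflow, for example by rejecting a model release whose card is missing a field.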
According to Abidi, these “basic principles” should be built into workflows.
“My point is that like any other software product, you want to make sure it’s robust and all that, but for AI, you especially—besides having standards and processes—you need to add these additional things,” she told Venture Beat.
A cautious, iterative approach to AI development: According to Rashida Hodge, VP of North America Go-to-Market, Global Markets, at IBM, businesses should develop cautious, iterative approaches to AI development. The process should include a lifecycle that forces organizations to revisit their systems regularly as the data evolves and to tailor their AI models to any changes as necessary.
“Just like how we as humans process information and process nuance, as we read more information, as we go visit a different place, we have different perspectives. And we bring nuance to how we make decisions; we should look at AI applications in the exact same way,” Hodge told Venture Beat.
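One simple way to operationalize the lifecycle described above is to monitor incoming data for drift and flag the model for re-evaluation when the data no longer resembles what it was trained on. The sketch below is a hypothetical illustration of that idea, not IBM's actual process; the function name, threshold, and drift test are all assumptions.

```python
import statistics

def needs_retraining(training_values, incoming_values, threshold=0.25):
    """Flag a model for retraining when the mean of an incoming feature
    shifts by more than `threshold` training-set standard deviations.
    (A deliberately simple, hypothetical drift check.)"""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    drift = abs(statistics.mean(incoming_values) - mu) / sigma
    return drift > threshold

# Feature values the model was originally trained on
training = [10.0, 11.0, 9.5, 10.5, 10.2]

# A new batch that looks like the training data: no action needed
print(needs_retraining(training, [10.1, 10.4, 9.9]))   # False

# A new batch that has drifted: revisit the model
print(needs_retraining(training, [14.0, 15.2, 14.8]))  # True
```

In practice a team would run a check like this on a schedule, which gives the lifecycle the regular "return to it" cadence the section describes rather than treating deployment as the end of development.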
Oversight: Organizations should avoid siloing their analytics teams, which can inadvertently lead to “analytic city states” that make streamlining technology and ideas challenging. Scott Zoldi, Chief Analytics Officer at FICO, recommends that organizations appoint a single chief analytics officer to be responsible for creating and enforcing organizational standards.
“You can safely build more houses when you don’t have to draft a new building code for every house. Likewise, you shouldn’t have to worry about rolling the dice as to which artist will be building your model,” Zoldi wrote in Enterprise AI News.
Professionalization: It’s important that everyone in an organization is aware of how their job impacts AI development, even if their role is small. As discussed in a previous post, “tactics of professionalization” is one way organizations can standardize AI development broadly across their enterprises. According to these principles, AI developers should set up committed multidisciplinary teams, train all their employees, and clearly define who within the organization is accountable for the consequences of their AI systems.
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to uphold rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Colaner, Seth. (3 January 2020). Evolve: Operationalizing diversity, equity, and inclusion in your AI projects. Venture Beat.
Brumfield, Cynthia. (8 December 2020). New AI privacy, security regulations likely coming with pending federal, state bills. CSO.
Lucini, Fernando. (24 September 2020). Getting AI results by “going pro.” Accenture Research Report.
Zoldi, Scott. (6 November 2020). It’s Time To Set Industry Standards for AI. Enterprise AI.