Artificial intelligence (AI) is evolving rapidly. According to the multinational professional services company Accenture, businesses spent $306 billion USD on AI applications over the past three years. Despite these advances, there are currently no specific ethical regulations governing the technology, though some governments and governing bodies, including the European Union, are working to establish them. Meanwhile, many organizations are beginning to develop AI standards to ensure their applications are trustworthy and safe for customers. For example, IBM has taken major steps to build trust in its AI applications, including creating an AI ethics board and AI policies such as the company’s Principles for Trust and Transparency.
How Can Organizations Establish AI Standards?
When you receive a meal at a restaurant, you know the food is likely safe to eat. This is because a level of trust exists across the professionalized fields involved: the farmers, suppliers, ingredient manufacturers, and restaurant staff who worked to create the meal. When it comes to the various stakeholders developing AI applications, however, fewer professionalized roles exist, and those that do are not well known among the public. Much like food industry stakeholders, who all must follow specific standards, AI developers should establish standards that ensure trust.
Successful AI developers establish “tactics of professionalization” across their organizations. According to Fernando Lucini, Global Lead Data Science & ML Engineering — Applied Intelligence at Accenture, developers should set up committed multidisciplinary teams, train their employees, and clearly define who within the organization is accountable for the consequences of their AI systems. To achieve this level of professionalization, he recommends the following steps:
- Set up definable AI roles within your organization: In professionalized industries like food and agriculture, the roles of teams and individuals responsible for the final product are clearly established and understood. The same rule needs to apply to the role of your AI professionals.
- Train and educate your AI professionals: Companies need to understand the skills gaps in their AI workforce and provide the necessary supplemental training and education. To keep training consistent, companies should establish career levels and prerequisites for AI professionals, including training and coursework designed to define clear paths for moving up the ranks.
- Establish formal AI processes: Professionalized industries have standard ways of testing and evaluating products and services. Companies developing AI need to create similar processes for developing, deploying, and managing AI systems. For instance, they should give employees and teams clear guidance on how to work with one another. These guidelines should also cover which technologies to select for creating AI and how to apply them.
- Democratize AI literacy across your organization: Organizations need to ensure all departments are educated in AI, even those that do not work directly with the technology. For example, the more your marketing team knows about the AI behind an application, the better it can communicate that application’s benefits to customers.
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it is vital to apply rigorous ethical standards designed to protect the end user.
Check out AI Standards: Roadmap for Ethical and Responsible Digital Environments, a new five-course program available today on the IEEE Learning Network (ILN).
Rossi, Francesca. (5 November 2020). How IBM Is Working Toward a Fairer AI. Harvard Business Review.
Lucini, Fernando. (24 September 2020). Getting AI results by “going pro.” Accenture Research Report.