According to a 2019 Gartner survey cited by DataProt, 37% of businesses and organizations employ artificial intelligence (AI). However, few organizations are taking steps to mitigate the risks associated with AI systems, such as their propensity for bias and privacy infringements. A 2021 PwC research report found that just 20% of enterprises had instituted an AI ethics framework, and only 35% intended to enhance their AI governance and processes. With governments increasingly moving to pass AI regulations, the window for organizations to develop ethical AI standards is shrinking.
During an interview with Analytics India Magazine, Satyakam Mohanty, Chief Product Officer at Fosfor by L&T Infotech, a global technology consulting and digital solutions company, said responsible AI is the only way for organizations to reduce potential risks associated with the technology.
“The great AI debate opens various facets of ethics, but without a common agreement and agreed standard, its impact and repercussions on the way organizations operate is not quantifiable,” Mohanty told the magazine. “Fairness and explainability can be managed and scaled by introducing data bias mitigation practices and algorithmic bias mitigation processes and ensuring higher standard explainability frameworks into the implementations and decision-making process. By utilizing ethics as a key decision-making tool, AI-driven companies save time and money in the long run by building robust and innovative solutions from the start.”
How to Develop an AI Standards Framework
How can your organization begin building a successful AI standards framework? Writing in Harvard Business Review, AI ethics experts Reid Blackman and Beena Ammanath recommend that organizations start by putting together a team of senior-level experts that encompasses, at minimum, technologists, legal/compliance experts, ethicists, and business leaders who understand what the organization needs to achieve in terms of ethical AI.
Once you have a team in place, they recommend taking these steps:
- First, identify your organization’s AI ethical standard:
What is the minimum ethical standard your organization is willing to meet in terms of AI? If your AI system is discriminatory towards a certain group but is still far less discriminatory towards them than traditional human-run systems, will your organization consider that an acceptable benchmark? This is a similar dilemma to the one autonomous vehicle manufacturers must consider. For example, if autonomous vehicles occasionally kill passengers and pedestrians, but at a lower rate than traditional vehicles, should those vehicles be considered safe? Although these are difficult questions to grapple with, asking them will help your organization set the right frameworks and guidelines to ensure ethical product development.
- Determine the “gaps” between where your organization currently stands and where your standards require it to be:
While there may be plenty of technical solutions to your AI ethics dilemmas, none alone is likely to reduce risk substantially enough to safeguard your organization. As such, your AI ethics team will need to ask: What are its limitations in skills and knowledge? What risks is it trying to reduce? In what ways can software and quantitative analysis help, and where will they fall short? What qualitative assessments are needed? And how mature does the technology need to be to meet ethics expectations?
- Gain insight into what’s behind the bias in your AI and then strategize solutions:
While it’s generally true that biased AI systems reflect biased training data and/or societal bias, the real problem is more complex. You need to trace the specific sources of discriminatory outputs, as well as the potential biases behind them; knowing this will help you choose the best strategy for reducing bias.
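Tracing discriminatory outputs starts with measurement. One simple and widely used check is the "four-fifths rule" on disparate impact: comparing the rate of favorable decisions across groups. The sketch below, in Python, is illustrative only; the function names, group data, and 0.8 threshold are assumptions for demonstration, not part of any specific framework described in this article.

```python
# Minimal sketch of a disparate impact check: the selection rate of an
# unprivileged group divided by that of a privileged group. Ratios below
# roughly 0.8 (the "four-fifths rule") are often flagged for review.
# All data below is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (e.g. 'approved') decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of selection rates between the two groups."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical model decisions (1 = favorable outcome) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]  # privileged group: 70% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # unprivileged group: 30% selected

ratio = disparate_impact(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 -> 0.43
if ratio < 0.8:
    print("Potential bias flagged for review")
```

A metric like this does not explain *why* the disparity exists, which is the harder question the authors raise, but it gives a team a concrete number to track while they investigate data sources and model behavior.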
Implementing artificial intelligence standards at your organization will take time, but the risk reduction they provide will be well worth the effort. Does your organization have the right knowledge and skills necessary to build an effective AI standards roadmap?
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to uphold rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a five-course program from IEEE that provides a comprehensive approach to creating ethical and responsible digital ecosystems.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Krishna, Sri. (29 March 2022). Talking Ethical AI with Fosfor’s Satyakam Mohanty. Analytics India Magazine.
Blackman, Reid, and Ammanath, Beena. (21 March 2022). Ethics and AI: 3 Conversations Companies Need to Have. Harvard Business Review.
Jovanovic, Bojan. (8 March 2022). 55 Fascinating AI Statistics and Trends for 2022. DataProt.
Likens, Scott; Shehab, Michael; Rao, Anand. AI Predictions 2021. PwC Research.