Artificial intelligence (AI) is quickly taking over the tech space. According to Bill Gates, who recently spoke at the annual meeting of the American Association for the Advancement of Science, computational power in AI applications is doubling, on average, every three and a half months. The question is: how will advancing artificial intelligence affect the world we currently live in?
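A quick calculation shows what that doubling rate implies over a full year, assuming the trend Gates cites holds steady (the figures below are arithmetic on his stated rate, not an independent measurement):

```python
# If compute doubles every 3.5 months, the growth over one year is
# 2 raised to the number of doubling periods that fit in 12 months.
periods_per_year = 12 / 3.5        # about 3.43 doublings per year
annual_growth = 2 ** periods_per_year
print(f"~{annual_growth:.1f}x per year")  # roughly a ~10.8x increase annually
```

In other words, a 3.5-month doubling time compounds to more than a tenfold increase in computational power every year, which helps explain the urgency behind the regulatory questions that follow.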
As AI evolves, it’s expected to give a boost to some highly anticipated technologies, including gene editing, which Gates says has the potential to squelch diseases like malaria. However, it also poses complex challenges. While the public worries that AI will displace human workers, many experts are instead concerned about the dangers it poses to privacy and safety. For example, will the biases of human developers creep into AI decision making? Will AI-backed facial recognition applications create a surveillance nightmare for the public? Another looming question: As AI rapidly develops, will regulations be able to keep up?
How Will Governments Approach Regulation?
Governments are just now starting to grapple with how to best deal with the wave of technological change that AI is expected to deliver. At the annual Consumer Electronics Show in Las Vegas in January, the U.S. government revealed new regulatory principles for the technology. With a focus on the nation’s values around individual liberties, the principles aim for regulation without reaching the point of overregulation, which could repress innovation. Meanwhile, it’s anticipated that soon-to-be-released regulations from the European Union will center on oversight and transparency.
Tech organizations largely support some degree of government regulation. According to a new KPMG LLP report, nearly 70% of technology executives think the government should be involved in regulating AI, and 90% believe businesses throughout all sectors should develop ethics policies to guide their AI projects.
However, the private sector still plays an important role in AI ethics.
“The reality is that if the private sector doesn’t address these issues now, a government eventually will,” writes Carolyn Herzog in The Hill. “But with the rapid rate of innovation in machine learning, regulation will always have a hard time keeping pace. That’s why companies and non-profit enterprises must take the lead by setting high standards that promote trust and ensuring that their staff complete mandatory professional training in the field of AI ethics. It is essential that anyone working in this field has a solid foundation on these high-stakes issues.”
How Should Tech Leaders Prepare?
Leaders in tech companies who want to develop ethical standards around AI will need to ask hard philosophical questions. For instance, should a developer base a project on the belief system of their own culture, or attempt to reach a universally agreed-upon standard? What is the intended outcome of the project? Questions like these are important for leaders to consider when examining their products and services. In the autonomous vehicle landscape, for example, developers will need to reflect on whether the large-scale data collection that driverless vehicles require is a justified security risk if those vehicles deliver a cleaner planet and accident-free roads.
There are straightforward approaches as well. Tech leaders can create a system of oversight, which could include measures such as hiring a chief AI ethics officer, educating their leadership about the potential of AI, and setting a goal of keeping pace with government regulations.
“Companies should start thinking now about how they will retrain and educate employees as AI is introduced to work alongside them,” writes Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning, World Economic Forum. “They must also consider what new markets might be opened by ethical design development and use of AI, and plan for how they will check for changes in algorithms or design to ensure ethical approaches.”
For now, tools are available to help AI developers avoid potential flaws and biases in their applications. Google Cloud’s Model Cards and AI Explanations, for example, along with IBM’s AI Fairness 360 and AI Explainability 360 toolkits, are designed to give developers insight into the algorithms and decision making behind their applications, and into how they can improve them.
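To illustrate the kind of check such toolkits automate, here is a minimal sketch of one widely used fairness metric, the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group. This is plain standard-library Python, not the actual API of AI Fairness 360 or Model Cards, and the loan-decision data and group labels are hypothetical:

```python
# Sketch of the disparate impact ratio, one fairness metric that
# toolkits such as IBM's AI Fairness 360 compute (NOT their real API).

def disparate_impact(outcomes, groups, favorable=1, privileged="A"):
    """Rate of favorable outcomes for the unprivileged group divided
    by the rate for the privileged group. A ratio below ~0.8 (the
    "four-fifths rule") is often treated as a red flag for bias."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(o == favorable for o in priv) / len(priv)
    rate_unpriv = sum(o == favorable for o in unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical model decisions: 1 = loan approved, 0 = denied.
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups))  # 0.25 -> well below 0.8, flagged
```

Here group A is approved 80% of the time and group B only 20%, giving a ratio of 0.25; a real toolkit would surface this kind of disparity alongside many other metrics and mitigation options.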
The Importance of AI and Ethics
As AI continues to grow and integrate with various aspects of business, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Artificial Intelligence and Ethics in Design, a two-part course program available online, helps organizations apply the theory of ethics to the design and business of AI systems. It also serves as useful supplemental material in academic settings.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Hellmann, Melissa. (11 February 2020). AI is here to stay, but are we sacrificing safety and privacy? A free public Seattle U course will explore that. Seattle Times.
KPMG. (13 February 2020). Most Tech Sector Leaders Believe Companies Should Have An Artificial Intelligence Ethics Policy: KPMG Report. PR Newswire.
Boyle, Alan. (14 February 2020). Why Bill Gates thinks gene editing and artificial intelligence could save the world. Yahoo! Finance.
Guillen, Mauro and Reddy, Srikar. (16 February 2020). We know ethics should inform AI. But which ethics? Gigabit.
Firth-Butterfield, Kay. (23 January 2020). Five Ways Companies Can Adopt Ethical AI. Forbes.
Lopez, Maribel. (21 January 2020). Preparing For AI Ethics And Explainability In 2020. Forbes.
Herzog, Carolyn. (18 January 2020). How to Build Ethical AI. The Hill.
Egerton, John. (7 January 2020). CES 2020: White House Unveils AI Regulatory Principles. Multichannel News.