Governments, Major Tech Companies, and the Vatican All Push for Greater AI Oversight

Artificial intelligence (AI) is expected to dramatically alter human society. However, many experts worry about the potential dangers of these systems, including their propensity for bias.

Whether AI systems inherit prejudice directly from their human creators or from societal biases embedded in the data sets used to train them, these systems have no way of understanding their actions or correcting their behavior. For instance, if a bank uses an AI system tasked with maximizing profits to determine how creditworthy a customer is, that system can easily learn to prey on individuals with low credit scores by issuing them risky, high-interest loans. Furthermore, if these AI systems aren't built to be transparent (a design often described as a "black box"), humans will have no insight into their decision processes and no way to determine who to hold responsible if the systems cause harm.
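To see how a pure profit objective can drift toward predatory behavior, consider a toy calculation; the interest rates, default probabilities, and principal below are entirely hypothetical, and Python is used only for illustration:

```python
# Toy illustration of objective misalignment in lending: a pure
# expected-profit objective can favor risky, high-interest loans.
# All figures are hypothetical.
def expected_profit(rate: float, default_prob: float,
                    principal: float = 1000.0) -> float:
    """Interest earned if repaid, minus expected loss of principal on default."""
    return (1 - default_prob) * rate * principal - default_prob * principal

prime = expected_profit(rate=0.05, default_prob=0.02)     # low-risk borrower
subprime = expected_profit(rate=0.35, default_prob=0.20)  # high-risk borrower

print(prime, subprime)  # 29.0 vs. 80.0: the risky loan "wins" on profit alone
```

An optimizer that sees only this objective will steer lending toward the subprime case, which is exactly the predatory pattern described above.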

Although it's still maturing, explainable AI (XAI) may offer a solution to the "black box" conundrum. Currently being developed by DARPA, the technology would come with built-in tools that give humans insight into an AI's decision-making and other vital information. Such features give the technology the potential to build trust among users. However, there may be disadvantages; for example, some organizations fear XAI will jeopardize their intellectual property (IP) or compromise accuracy.
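DARPA's XAI tooling isn't detailed here, but one common post-hoc explanation technique is permutation importance: shuffle one input at a time and measure how much the model's accuracy drops. A minimal sketch, assuming scikit-learn and entirely synthetic credit data (the feature names are hypothetical):

```python
# A post-hoc explanation of a toy credit model via permutation importance.
# Assumes scikit-learn; the data and feature names are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(300, 850, n),    # credit score
    rng.uniform(0, 150_000, n),   # annual income
    rng.uniform(0, 1, n),         # debt-to-income ratio
])
# Toy approval label: driven up by score and income, down by debt.
y = (0.004 * X[:, 0] + 0.00001 * X[:, 1] - 2.0 * X[:, 2]
     + rng.normal(0, 0.3, n)) > 1.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature and measure the accuracy drop: a rough, model-agnostic
# view of which inputs actually drive the model's decisions.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["credit_score", "income", "debt_ratio"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```

An explanation like this doesn't open the model itself, but it gives auditors a first handle on which factors the system is weighing.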

EU Commission Announces Plans to Regulate AI

In February, the European Commission proposed a plan to stringently regulate AI and invest billions of euros into R&D over the coming decade. 

“An AI system needs to be technically robust and accurate in order to be trustworthy,” stated the commission’s digital czar, Margrethe Vestager, during a recent press conference.

The proposed regulations build on the European Union’s 2018 AI strategy. The proposal includes requirements aimed at ensuring strict human oversight of AI, among them:

  • a prohibition on “black box” AI systems;
  • governance over the big data sets used to train the systems; and
  • identification of who is responsible for a system’s actions.
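Requirements like these (data governance, accountability, no unexplainable systems) are often operationalized as machine-readable documentation attached to each model. A minimal, hypothetical sketch in Python; none of the field names come from the Commission's proposal:

```python
# Hypothetical "model record" capturing the kind of oversight metadata the EU
# proposal gestures at: a responsible party, training-data provenance, and
# whether the system is explainable. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    responsible_party: str                # who answers for the system's actions
    training_data_sources: list = field(default_factory=list)
    explainability_method: str = "none"   # "none" would flag a black box

    def is_black_box(self) -> bool:
        return self.explainability_method == "none"

record = ModelRecord(
    name="credit-scoring-v1",                             # hypothetical system
    responsible_party="Risk & Compliance, Example Bank",  # hypothetical owner
    training_data_sources=["loan_applications_2015_2019"],
    explainability_method="permutation importance",
)
assert not record.is_black_box()
```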

DoD Announces Five Principles of AI Ethics

In February, the United States Department of Defense announced plans to embrace five principles of AI ethics. The Defense Innovation Board, which spent 15 months deliberating with renowned technologists and AI experts, plans to use these principles in all areas of the military, both on and off the battlefield. These principles are as follows:

1) Responsible: DOD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.
2) Equitable: The department will take deliberate steps to minimize unintended bias in AI capabilities.
3) Traceable: The department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources and design procedures and documentation.
4) Reliable: The department’s AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.
5) Governable: The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
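The "Governable" principle implies a runtime mechanism that can detect unintended behavior and disengage a deployed system. Below is a minimal sketch of what such a guard might look like; the wrapper, its threshold, and the bounds check are all hypothetical, not a DOD specification:

```python
# Hypothetical "governable" wrapper: monitor a model's outputs and
# deactivate it once behavior falls outside its defined use too often.
class GovernableModel:
    def __init__(self, model, max_anomalies: int = 3):
        self.model = model               # any callable taking an input x
        self.max_anomalies = max_anomalies
        self.anomalies = 0
        self.active = True

    def predict(self, x):
        if not self.active:
            raise RuntimeError("model disengaged after unintended behavior")
        y = self.model(x)
        if not self._within_defined_use(y):
            self.anomalies += 1
            if self.anomalies >= self.max_anomalies:
                self.active = False      # the "ability to disengage"
        return y

    @staticmethod
    def _within_defined_use(y) -> bool:
        # Placeholder: real systems would test outputs against the explicit,
        # well-defined uses named in the "Reliable" principle.
        return 0.0 <= y <= 1.0

guard = GovernableModel(lambda x: x * 0.1)  # toy model for demonstration
print(guard.predict(5))                     # 0.5: within bounds
```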

Vatican, IBM, and Microsoft Co-Sign AI Resolutions

Even the Vatican wants to ensure future AI systems are safe. In February, the Pontifical Academy for Life, Microsoft, IBM, the Food and Agriculture Organization of the United Nations (FAO), and the Italian Government cosigned a resolution that outlines six major principles for AI’s development and deployment. According to a recent press release, these principles are:

1) Transparency: In principle, AI systems must be explainable.
2) Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop. 
3) Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency.
4) Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity. 
5) Reliability: AI systems must be able to work reliably.
6) Security and privacy: AI systems must work securely and respect the privacy of users.

These principles, the release adds, are fundamental elements of good innovation.

“AI is an incredibly promising technology that can help us make the world smarter, healthier and more prosperous, but only if it is shaped at the outset by human interests and values,” stated John Kelly III, Vice President of IBM, in a press release. “The Rome Call for AI Ethics reminds us that we have to choose carefully whom AI will benefit and we must make significant concurrent investments in people and skills. Society will have more trust in AI when people see it being built on a foundation of ethics, and that the companies behind AI are directly addressing questions of trust and responsibility.”

AI and Ethics

As AI continues to grow and integrate with various aspects of business, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Artificial Intelligence and Ethics in Design, a two-part online course program, is designed to help organizations apply the theory of ethics to the design and business of AI systems. It also serves as useful supplemental material in academic settings.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Patel, Manish. (20 March 2020). The Ethics of AI: AI in the financial services sector: grand opportunities and great challenges. The Fintech Times.

Chandler, Simon. (4 March 2020). Vatican AI Ethics Pledge Will Struggle To Be More Than PR Exercise. Forbes.

Lopez, Todd. (25 February 2020). DOD Adopts 5 Principles of Artificial Intelligence Ethics. U.S. Defense Department.

Wallace, Nicholas. (19 February 2020). Europe plans to strictly regulate high-risk AI technology. Science Magazine.

Pontifical Academy for Life. (28 February 2020). Press Release: The Call for AI Ethics was signed in Rome.
