In conversations about the future of artificial intelligence (AI), the idea usually comes up that machines will soon take over our lives and even eliminate jobs, swelling the ranks of the unemployed.
Unless you’re talking with Geoffrey Hinton.
One of the biggest names in the field, Hinton is known as the godfather of AI for his pioneering work on neural networks. He is a professor of computer science at the University of Toronto and part of the Google Brain project.
In Martin Ford's book, Architects of Intelligence: The Truth About AI from the People Building It, Ford talks with Hinton about the economic and social ramifications of AI, and Hinton argues that dramatically increasing productivity should be a good thing.
According to Hinton, “People are looking at the technology as if the technological advances are the problem. The problem is in the social systems, and whether we’re going to have a social system that shares fairly, or one that focuses all the improvement on the 1% and treats the rest of the people like dirt. That’s nothing to do with technology.”
This is another take on the ethics involved in developing AI. There are design decisions and regulations to consider, but we must also take into account who will receive the bulk of the advantages AI promises.
“What governments ought to do is put mechanisms in place so that when people act in their own self-interest, it helps everybody,” says Hinton. “High taxation is one such mechanism: When people get rich, everybody else gets helped by the taxes. I certainly agree that there’s a lot of work to be done in making sure that AI benefits everybody.”
He continues, “I hope the rewards will outweigh the downsides, but I don’t know whether they will, and that’s an issue of social systems, not with the technology.”
Like Hinton, Databricks co-founder and CEO Ali Ghodsi says the most advanced work being done with AI is not trying to replace the human brain but to augment it, helping humans accomplish challenging tasks. AI, with its programmed algorithms, is making little progress on anything that requires creativity or is not highly structured. It's no match for a human, who can bring endless reflection to the decision-making process.
“I don’t think AI at its core is bad for humanity,” says Ghodsi, pointing out that it is not decreasing the amount of resources on the planet, or the food, education, and healthcare available to people.
However, only a handful of companies are actually accomplishing their goals with AI, and they are not necessarily the companies with humanity at the heart of what they do. Companies like Google and Amazon, the 1%, have the resources to hire tens of thousands of Silicon Valley engineers, many of whom hold PhDs or are top professors recruited from leading universities. Those engineers are focused solely on a few narrow problems, like autonomous vehicles or targeted ad click-through rates.
The rest of the companies don't have the same resources, and they are finding that there is actually great complexity surrounding the problems they're trying to tackle with AI.
In healthcare, organizations are attempting to use AI in imaging to help identify cancerous tumors. They need data scientists and data engineers, as well as subject matter experts, and even then, using AI to identify tumors is not close to being fully automated. Google, with its countless PhDs, can develop technology that can tell cats from dogs, and if it's wrong, well, that can be funny. But in healthcare, saying a tumor is cancerous, or saying it isn't, and being wrong can have life-altering consequences.
The Ethics of AI
IEEE offers a practical online learning program to help your organization apply the ethics of AI to the business of design. Our 10-course Artificial Intelligence and Ethics in Design program is designed for industry professionals, with courses that are focused on integrating AI and autonomous systems within product and systems design. Connect with an IEEE content specialist today and find out how this cutting-edge training will help you increase your bottom line.
Merchant, Brian. (1 Jan 2019). The ‘Godfather of Deep Learning’ on Why We Need to Ensure AI Doesn’t Just Benefit the Rich. Gizmodo.
Wells, Joyce. (14 Aug 2017). Artificial Intelligence Has a 1% Problem. Database Trends and Applications.