Few technologies have proven as captivating, and as controversial, as the ongoing emergence of artificial intelligence (AI).

Most recently, the tech universe was rocked by the November 2023 firing, and unprecedented rehiring, of Sam Altman, CEO of OpenAI, a leading artificial intelligence company and maker of ChatGPT. That same month, roughly a year after ChatGPT’s breakthrough introduction in November 2022, OpenAI announced a powerful upgraded version that, among other capabilities, allows users to build their own customized chatbots and share those creations through the company’s new “GPT Store.” The latest version of ChatGPT also offers a new legal “shield” that reportedly protects professional users against claims of copyright infringement.

However, with the continued growth of AI, and on the heels of these developments, a host of repercussions and concerns have emerged. Among them, AIHungry.com’s research on Google Trends revealed that searches for the term “AI Taking Jobs” reached record-high levels in November 2023; the same research found a 400% increase in search activity for the term between November 2022 and November 2023.

While these statistics reflect society’s unease with what it sees as a growing reality, experts agree that those who don’t understand AI or how to use it are at the greatest risk of being replaced by it. This underscores the importance of acquiring new skills to gain a competitive advantage and future-proof yourself against workplace developments like automation.

The Pros and Cons of AI

According to a recent summary of key statistics and predictions collected from prominent industry sources, publications, and market research firms, the growth and evolution of AI will potentially drive a mixed bag of results. These results are expected to have both positive and negative ramifications on the future of society and the workplace. Among them:

  • AI could replace up to one billion jobs worldwide over the coming decade, though it may also create 97 million new jobs by 2025.
  • One out of three businesses surveyed in a recent study claimed to be replacing at least some human functions in their workflow with AI solutions.
  • Administrative/repetitive functions, as well as jobs in such fields as bookkeeping and proofreading, are most at risk of being replaced by AI solutions. Manual labor jobs, as well as those requiring creativity and/or interpersonal skills (such as writing and legal services), are reportedly at the least risk of being replaced by AI.
  • In a recent study, one in four employees surveyed in the U.S. believes that their job may be replaced by an AI solution in the next five years, with 37% expressing concern over the possibility of this displacement.
  • On the other hand, nearly 20% of workers in that study welcomed the growth of AI, believing it will relieve them of some tedious, repetitive tasks, and 85% of those surveyed support the move toward automation for “hazardous or unhealthy” jobs.
  • Three out of four employees surveyed in another study, however, believe that the widespread adoption of AI will end up driving inequality in the workplace, with women being at 10% greater risk of job loss due to automation than their male counterparts.

Government Oversight

As the debate over how AI can best help, not hurt, citizens, companies, and employees rages on between stakeholders worldwide, calls for governmental parameters around the use of artificial intelligence are growing louder.

As shared during an October 2023 hearing of the U.S. Senate Committee on Health, Education, Labor and Pensions’ Subcommittee on Employment and Workplace Safety, a joint World Economic Forum and Accenture report revealed that some 40% of the 19,000 individual tasks across 867 occupations studied could be impacted and/or replaced by large language model (LLM) tools. With generative AI expected to impact everything from the state of existing and future jobs to privacy, legal, and ethical considerations, industry leaders in the U.S. are asking Congress to establish a “rational, risk-based” regulatory framework for AI that takes the needs of employers, employees, and other constituents into consideration.

The U.S. White House Office of Management and Budget supported this request in October 2023 by asking each executive agency to designate a Chief AI Officer (CAIO) to be in charge of “advancing responsible AI innovation” and “managing risks from the use of AI.” According to the official White House memo, “Artificial intelligence (AI) is one of the most powerful technologies of our time [and] we must seize the opportunities AI presents while managing its risks…particularly those affecting the safety and rights of the public.”

Stay on the Cutting-Edge of AI

The world of AI remains a moving target, with AI systems “advancing so rapidly and unpredictably that even on the rare occasions lawmakers and regulators have tried to tackle them, their proposals quickly become obsolete,” according to New York Times journalists Karen Weise and Cade Metz.

The rapid forward motion of AI will have ramifications on the global labor pool. A summary of key statistics and predictions reports that 120 million workers worldwide will need “upskilling” in the next three years due to developments in artificial intelligence. The key to avoiding AI job automation, according to the report? “Creativity, emotional intelligence, and STEM skills.”

Are you on top of the full extent of AI’s direction, impact on society and business, and evolving design requirements? Are you shoring up your skill sets to minimize the risk of replacement by automation? AI-related course programs from IEEE are designed to keep learners abreast of the many opportunities, challenges, and considerations to be taken into account when developing, planning, using, or training for the expansion of artificial intelligence across its many applications.

Artificial intelligence-related courses from IEEE include:

Resources

(1 November 2023). US Senate Subcommittee Focuses on AI in the Workplace. IAPP.

(1 November 2023). White House OMB Issues AI Memorandum to Federal Agencies. IAPP.

Miller, Jim. (11 November 2023). AI Replacing Jobs Statistics: 40 Automation and AI Stats for 2023. AIHungry.com. 

Weise, Karen and Metz, Cade. (8 December 2023). The Morning: AI’s Big Year. The New York Times.

Wilson, Mark. (November 2023). ChatGPT Gets its Biggest Update So Far – Here are 4 Upgrades That Are Coming Soon. TechRadar. 

Perrigo, Billy. (22 November 2023). Sam Altman Returns as OpenAI CEO. Here’s How It Happened. Time.

(September 2023). Jobs of Tomorrow: Large Language Models and Jobs. World Economic Forum/Accenture.

Lufkin, Braun. (18 April 2022). What ‘Upskilling’ Means for the Future of Work. BBC.

Artificial intelligence (AI) continues to dominate headlines, thanks to its potential to revolutionize countless industries. From manufacturing and healthcare to banking and retail, AI is streamlining automatable and administrative tasks across the board.

Beyond efficiency, AI plays a critical role in high-impact applications. It helps detect cybersecurity threats, prevent retail fraud, and improve autonomous vehicle navigation by recognizing driver patterns and predicting accidents. Additionally, AI enhances customer experiences by personalizing marketing and service interactions.

In essence, machines are now replicating, and even expanding, the capabilities of the human mind. As a result, AI is reshaping the future of business.

A New Industrial Era

Because of its transformative power, the World Economic Forum has dubbed AI part of the “fourth industrial revolution.” This new era merges the physical, digital, and biological worlds, following earlier revolutions driven by steam, electricity, and computing.

Forbes contributor Bernard Marr calls it the “Intelligence Revolution,” underscoring AI’s sweeping impact on society and industry.

AI: A Double-Edged Sword

Although the terms are often used interchangeably, AI actually falls into distinct categories; a common distinction is between traditional (or “narrow”) artificial intelligence and generative artificial intelligence (generative AI).

Traditional AI refers to the ability of machines to understand, learn, and perform specific intellectual tasks by processing demonstrated patterns, such as customer behavior. Examples include:

  • personalized product recommendations provided by Amazon
  • customized workouts and health goals suggested by apps (such as the MyFitnessPal app formerly owned by sports apparel and gear provider Under Armour) that base their recommendations on collected health data for physical activity, sleep, and diet
  • smart assistants like Alexa and Siri that can control home technology, dial the telephone upon request, and more

Generative AI refers to a form of artificial intelligence that learns the patterns and structure of inputted data and responds by generating text, images, or other media with similar characteristics. A prominent example is the much-publicized ChatGPT, a chatbot introduced in November 2022 by OpenAI, which can produce output of a desired length, format, style, level of detail, and language on almost any topic.
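The pattern-learning idea behind generative AI can be illustrated with a deliberately tiny sketch. The following is not how ChatGPT works internally (it relies on large neural networks); it is a toy Markov-chain generator that learns which words follow which in its input and emits new text with similar characteristics. The corpus and all names here are invented for illustration:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=0):
    """Walk the model, emitting words that follow the learned patterns."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the model learns the patterns of the data and "
          "the model generates text that mirrors the patterns of the data")
model = train(corpus)
print(generate(model))
```

Even this toy only ever produces word sequences it has seen evidence for in its training data, which hints at why generative systems are so dependent on what they are fed.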

Experts confirm that AI can help businesses enhance their productivity by leaps and bounds. For example, research firm Gartner estimates that AI can save companies around the world over 6 billion employee-hours annually. On an economic level, a recent study by global management consulting firm McKinsey & Company predicts that the analytics enabled by AI could add US$13 trillion to global GDP by 2030.

At the same time, however, AI also raises its share of issues and ethical concerns. Among them, generative AI can lend itself to the alteration of text, images, and video in the form of inaccurate, misleading, manipulative, and/or potentially dangerous “deep fake” or fraudulent content. It also raises questions about ownership rights of created content and its eligibility for copyright protection.

Helping Industry Navigate the Complex Field of AI

Recognizing both the unprecedented importance and complexity of artificial intelligence, IEEE offers several course programs in AI and machine learning designed to help navigate these exciting, complicated, and rapidly-evolving technologies.

  • Machine Learning: Predictive Analysis for Business Decisions. Ideal for computer engineers, business and industry leaders, technical managers, data scientists, and data engineers, this five-course program provides an overview of the different types of machine learning fueling businesses today, how these forms of AI use software, algorithms, and models in their design, and how attendees can deploy scalable machine learning into their own processes to achieve their business goals.
  • Artificial Intelligence and Ethics in Design. Ideal for data, design, computer, security, electrical, software, research, robotics, machine learning, and computer vision engineers, as well as UX designers, engineering managers, technical leaders, functional consultants, and business users, this five-course program covers such topics as law, compliance, and ethics in artificial intelligence; ethical challenges in data protection and safety; and responsible design in the algorithmic era.
  • Artificial Intelligence and Ethics in Design: Responsible Innovation. This five-course program is designed to help learners understand the ethics specifications that must be met when designing AI systems for European (and other) markets. Topics include causes of bias, transparency and accountability for robots and AI systems, and legal and implementation issues of enterprise AI.

To discover more IEEE courses about artificial intelligence, browse the IEEE Learning Network catalog.


 
Resources:

Forbes Technology Council. (13 January 2022). 16 Industries and Functions That Will Benefit from AI In 2022 and Beyond. Forbes.

Fourth Industrial Revolution. World Economic Forum.

Marr, Bernard. (10 August 2020). What Is the Artificial Intelligence Revolution and Why Does It Matter To Your Business? Forbes.

Schroer, Alyssa. (19 May 2023). What Is Artificial Intelligence? Built In.

Mohan, Malethy. (22 March 2023). The Difference Between Generative AI and Traditional AI. LinkedIn.

Kanade, Vijay. What Is General Artificial Intelligence (AI)? Definition, Challenges, and Trends. Spiceworks.com.

Rajagopalan, Ramesh. 10 Examples of Artificial Intelligence in Business. Online Degrees.

Elliott, Timo. (9 March 2020). The Power of Artificial Intelligence Vs. the Power Of Human Intelligence. Forbes.

Dilmegani, Cem. (22 April 2023). Generative AI Ethics: Top 6 Concerns. AIMultiple.

Deep learning is having a moment. There was a time when we could only dream of partially autonomous vehicles and voice-activated assistants; today, these inventions are a regular part of our lives. A subfield of machine learning (ML) and artificial intelligence (AI), deep learning algorithms are designed to learn the way a human brain does. Deep learning continually analyzes data using artificial neural networks, layered structures of algorithms that can perceive complex relationships in data sets. These neural networks allow computers to see, hear, and speak; they are the reason we can talk to our phones and dictate emails to our computers.
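The layered-network idea can be made concrete with a deliberately small sketch. The code below trains a minimal two-layer neural network by backpropagation on the classic XOR problem, a relationship no single-layer model can capture. It is orders of magnitude smaller than the deep networks behind speech and vision, and its details (layer size, learning rate, iteration count) are arbitrary choices for illustration:

```python
import numpy as np

# XOR inputs and targets: output is 1 only when the inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: each layer transforms the previous layer's output.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce prediction error.
    grad_p = p - y
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)
    W2 -= 0.1 * (h.T @ grad_p)
    b2 -= 0.1 * grad_p.sum(axis=0)
    W1 -= 0.1 * (X.T @ grad_h)
    b1 -= 0.1 * grad_h.sum(axis=0)

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0]
```

Each pass nudges the weights so the output moves toward the target; stacking many such layers and training on vast data sets is, in essence, how deep networks come to perceive complex relationships.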

Algorithms have always been part of the digital world, where they are trained and developed in perfectly simulated environments. The current wave of deep learning facilitates AI’s leap from the digital to the physical world. While the applications are endless, from manufacturing to agriculture, challenges remain around accuracy, clean data, and reinforcement learning.

Deep Learning in the Real World

AI researchers are working to introduce deep learning to our physical, three-dimensional world. Experts anticipate that deep learning will advance several sectors over the next few years, including:

  • Self-driving vehicle capabilities: Handling novel situations is the main problem for autonomous vehicle engineers. With growing exposure to millions of scenarios, a deep learning algorithm’s regular cycle of testing and implementation helps ensure safe driving. Global industry growth for autonomous cars is 16% a year: the global autonomous vehicle market reached nearly US$106 billion in 2021, and one forecast projects it will grow to US$2.3 trillion by 2030.
  • Fake news detection and news aggregation: Deep learning is heavily utilized in news aggregation, which attempts to tailor news to consumers’ preferences. Reader personas are defined with greater complexity to filter content based on a reader’s interests, as well as geographical, social, and economic factors. There is also growing room for deep learning to filter out fake news and misinformation.
  • Natural language processing (NLP): One of the most challenging things for computers to learn is how to comprehend the complexity of human language, including its syntax, semantics, tonal subtleties, expressions, and even sarcasm. The global NLP market is expected to reach US$25.7 billion by 2027.
  • Healthcare: Deep learning projects gaining traction in the healthcare industry include assisting with the quick diagnosis of life-threatening diseases, addressing the shortage of qualified doctors and healthcare providers, and standardizing pathology results and treatment plans. By 2026, artificial intelligence has the potential to save the clinical healthcare industry more than US$150 billion.

Getting “Data-Centric AI” with Deep Learning

Andrew Ng is among the pioneers of deep learning and, according to Fortune, he’s also one of the most thoughtful AI experts on how real businesses are using the technology. Ng has become a champion for what he calls “data-centric AI.” Ng believes developers and businesses should be asking questions like: What data is used to train the algorithm? How is it gathered and processed? How is it governed? 

Data-centric AI is the practice of “smartsizing” data so that a successful system can be built using the least amount of data possible. If data is carefully prepared, a company may need far less of it than it thinks, saving both time and money. Ng calls the shift to data-centric AI as important as the shift to deep learning over the past decade, and believes it is the most important change businesses need to make today.

Be Prepared for the Future of Deep Learning

As deep learning facilitates AI’s leap from the digital to the physical world, it is important to stay current with the latest technology advances. The IEEE Academy on Artificial Intelligence is designed for those who work in industry and need to understand new technical information quickly so they can apply it to their work. Learn more about the program.

Interested in enrolling? Visit the IEEE Learning Network.

 

Resources:

Placek, Martin. (16 January 2023). Size of the global autonomous vehicle market in 2021 and 2022, with a forecast through 2030. Statista.

Carsurance. (20 February 2022). 24 Self-Driving Car Statistics & Facts. Carsurance.

Global Industry Analysts, Inc. (April 2021). Natural Language Processing (NLP) – Global Market Trajectory & Analytics. Research and Markets.

Gordon, Nicholas. (30 July 2021). Don’t buy the ‘big data’ hype, says cofounder of Google Brain. Fortune. 

Ingle, Prathamesh. (9 July 2022). Top Deep Learning Applications in 2022. Marktechpost.

Fine, Ken. (15 January 2022). How digital experiences are fueling the new digital economy. VentureBeat. 

Todorov, Georgi. (20 April 2022). 92 Stunning Artificial Intelligence Stats, Facts and Figures in 2022. Thrive My Way.

Woertman, Bert-Jan. (30 April 2022). Deep learning is bridging the gap between the digital and the real world. VentureBeat.

World Economic Forum. (20 July 2022). Is AI the only antidote to disinformation? The European Sting.

Technology has always presented numerous opportunities for improving and transforming healthcare. Such improvements include reducing human errors, improving clinical outcomes, facilitating care coordination, improving practice efficiencies, and tracking data over time. Machine learning (ML) has already proven effective at disease identification and prediction, recognizing patterns that are too subtle for the human eye to detect, guiding physicians towards better-targeted therapies and improved outcomes for patients. Researchers have also used ML as a tool to recognize signs of depression and suicidality by assessing patients’ voices, picking up changes in speech too subtle for a doctor to notice. Artificial intelligence (AI) and machine learning can expand our approach to mental health. 

Mapping Mental Health

Researchers at Massachusetts General Hospital have developed an artificial intelligence model that generates ‘personalized maps’ to guide individuals toward improved mental well-being. The model is based on deep learning, a type of machine learning that uses layered algorithmic architectures to analyze data. Using self-organizing maps, a type of neural network that arranges similar data points near one another, the researchers identified the most depression-prone psychological configurations and developed an algorithm to help individuals move away from potentially dangerous mental states.

Shortest Path to Human Happiness

Deep Longevity, in collaboration with Harvard Medical School, offers another deep learning approach to mental health. Researchers have created two digital models of psychology that work together to find a path to happiness. 

The first model depicts the trajectories of the human mind as it ages. The second model is a self-organizing map that serves as the foundation for a recommendation engine for mental health applications. This learning algorithm splits all respondents into clusters depending on their likelihood of developing depression and determines the shortest path to mental stability for any individual. 
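The studies’ actual models are far richer than can be shown here, but the self-organizing-map idea behind them can be sketched in a few lines. The code below trains a toy one-dimensional self-organizing map on invented “questionnaire” data drawn from two hypothetical psychological profiles; every number in it is an assumption chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "questionnaire" data: each row is one respondent's scores, drawn
# from two underlying profiles (a stand-in for real psychological data).
group_a = rng.normal(0.2, 0.05, (50, 4))
group_b = rng.normal(0.8, 0.05, (50, 4))
data = np.vstack([group_a, group_b])

# A 1-D self-organizing map: a line of 6 "nodes" that gradually
# arranges itself so that nearby nodes capture similar respondents.
nodes = rng.random((6, 4))
for _ in range(2000):
    x = data[rng.integers(len(data))]
    best = np.argmin(((nodes - x) ** 2).sum(axis=1))  # best-matching node
    for j in range(len(nodes)):
        # Nodes near the winner on the map move toward the sample too.
        influence = np.exp(-((j - best) ** 2) / 2.0)
        nodes[j] += 0.1 * influence * (x - nodes[j])

# Each respondent maps to a node; similar profiles land on nearby nodes.
assignments = np.argmin(
    ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2), axis=1)
print(np.unique(assignments))
```

Once respondents occupy positions on such a map, one can ask which neighboring positions look healthier, which is the intuition behind computing a “shortest path” toward mental stability.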

Combining Technology & Therapy is Key

Anyone with a smartphone can access conversational agent phone apps, also known as chatbots, which are meant to help users cope with the anxieties of daily life. These language processing systems can imitate human discussion by simulating conversations with a therapist via text. They can be a gateway to therapy or can reinforce lessons from in-person sessions. Research has shown that some people prefer interaction with chatbots rather than with real humans.

With the help of AI and machine learning, researchers are hoping brain scans can help identify mental health issues. By applying specially designed algorithms to these scans, labs could identify distinctive features that determine a patient’s optimal treatment. Machine learning could also assist in suicide prevention. Currently, doctors have only a slight advantage over random probability in recognizing this risk, but algorithms, using data that are easily accessible to health care providers, can predict attempts with significantly improved accuracy.

Stay Current with Technology Advances

From healthcare to security, machine learning plays a critical role in developing the technology that will determine our future. Covering machine learning models, algorithms, and platforms, Machine Learning: Predictive Analysis for Business Decisions is a five-course program from IEEE.

Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.

Interested in the program for yourself? Visit the IEEE Learning Network.

Resources

Deep Longevity LTD. (2 July 2022). Harvard Developed AI Identifies the Shortest Path to Human Happiness. SciTechDaily.

Gavrilova, Yulia. (4 July 2022). AI Chatbots & Mental Healthcare. IOT for All.

Glick, Molly. (1 July 2022). Your Next Therapist Could Be a Chatbot App. Discover.

Kennedy, Shania. (28 June 2022). AI-Generated ‘Maps’ May Help Improve Mental Well-being. Health IT Analytics. 

Kesari, Ganes. (24 May 2021). AI Can Now Detect Depression from Just Your Voice. Forbes. 

Rutherford, Lucie. (18 February 2022). Medicine Meets Big Data: Clinicians Look to AI For Disease Prediction and Prevention. UVAToday.

Savage, Neil. (25 March 2020). How AI is improving cancer diagnostics. Nature.

Artificial intelligence (AI) is more present in our lives than ever. With varied uses, AI can predict what we want to see as we scroll through social media, as well as help to solve global challenges like hunger, environmental changes, and pandemics. This technology has countless applications in the real world. A McKinsey survey illustrates that AI adoption followed an upward trajectory in the year 2021 and continues to do so. According to the survey, “56 percent of all respondents report AI adoption in at least one function.”

However, AI technology is not always beneficial—AI can violate privacy, AI-generated output cannot always be explained, and AI can be biased. When the data feeding an AI system is not representative of the diversity and plurality of our societies, it can produce biased or discriminatory outcomes.

An often-cited example is facial recognition technology. Used to access mobile phones and bank accounts, it is also increasingly employed by law enforcement authorities. Given its persistent problems accurately identifying women and darker-skinned people, however, facial recognition is far from perfected. This is not surprising when you look at who develops AI: only 1 in 10 software developers worldwide are women, and developers come overwhelmingly from western countries.

Hardcoding Ethics into AI

Humans can be biased, but people possess the ability to recognize when their conclusions may be biased, discriminatory, or unethical. While there has been some recent debate over the “sentient” qualities of AI programs, they cannot “think” or “feel”; their performance depends entirely on their coding. Because AI lacks this meta-cognitive ability, it is up to people to override unethical decisions when they arise. Unethical AI is not simply a consequence of programming deficiencies, but of failing to fully consider how ethical requirements should be incorporated into the learning algorithm during development.

Organizations using AI need to become more proactive and formulate actionable AI ethics policies by thinking about ethics from the start. This approach is already deemed essential in cyber security, where “security by design” development principles drive the need to assess risks and hardcode security from the start. The same mindset should be applied to the development of AI tools so they can be deployed responsibly and without bias. This will be critical as societies and cultures change over time, because AI products should always reflect current values.

How to Create an AI Ethics Policy 

Aligning AI with ethics is not just a moral responsibility; it is also a business imperative. It requires action to build an AI ethics-aware culture. Reid Blackman, CEO of Virtue, recommends instilling actionable ethics into AI systems by following these seven guidelines:

  1. Bring clarity to AI standards
  2. Increase awareness among everyone in the organization
  3. Thoroughly incorporate AI ethics into team culture
  4. Make sure there are AI experts as part of an AI ethics committee
  5. Introduce accountability
  6. Measure everything: set key performance indicators (KPIs) to track whether your organization is meeting its goals for AI standard adoption
  7. Gain executive sponsorship

Prepare for an AI Future

The AI market is expected to surpass US$1,597 billion in size by 2030. Organizations and technology professionals should prepare for a changing landscape when it comes to the future of AI.

Get a jumpstart on learning about ethics in artificial intelligence systems. Check out Artificial Intelligence and Ethics in Design, a five-course program from IEEE that provides the background knowledge learners need to integrate AI and autonomous systems within their companies or deliver them to customers and end users.

Contact an IEEE Account Specialist to get organizational access or check it out for yourself on the IEEE Learning Network.


Resources

Bedzow, Ira. (30 June 2022). What It Takes to Create and Implement Ethical Artificial Intelligence. Forbes.

Boston Consulting Group (BCG). (7 July 2022). 87% of Climate and AI Leaders Believe That AI Is Critical in the Fight Against Climate Change. PR Newswire. 

Chui, Michael et al. (8 December 2021). The state of AI in 2021. McKinsey.

Henderson, Emily. (10 June 2022). Using artificial intelligence to discover new antivirals against COVID-19 and future pandemics. New Medical.

McKendrick, Joe. (10 June 2022). 7 Steps to More Ethical Artificial Intelligence. Forbes. 

Mubarik, Abu. (20 June 2022). This is how former Wall Street trader Sara Menker from Ethiopia is using AI to remove world hunger. Face 2 Face Africa. 

Precedence Research. (19 April 2022). Artificial Intelligence Market Size to Surpass Around US$ 1,597.1 Bn By 2030. GlobeNewswire.

Ramos, Gabriela and Koukku-Ronde, Ritva. (22 June 2022). A new global standard for AI ethics. UNESCO.

Smith, Wesley. (26 June 2022). Five Reasons AI Programs Are Not ‘Persons’. Mind Matters News.

Yu, Eileen. (30 June 2022). AI ethics should be hardcoded like security by design. ZDNet.

Big data is creating exciting new opportunities for artificial intelligence (AI). According to Arvind Krishna, Chairman and CEO of IBM, 2.5 quintillion bytes of data are produced each day. To analyze, distribute, and make use of this data, many organizations are combining AI with hybrid cloud technology.

“The economic opportunity behind these technologies is enormous, given that business is only about 10 percent of the way to realizing A.I.’s full potential,” writes Krishna in Inc.com. “Fortunately, we are making steady progress, with the number of organizations poised to integrate A.I. into their business processes and workflows growing rapidly. A recent IBM study showed that more than a third of the companies surveyed were using some form of A.I. to save time and streamline operations.”

However, for artificial intelligence programs to work effectively, organizations need to successfully manage their data. According to Andrew P. Ayres, a Senior Specialist with HPE’s Enterprise Services practice in the United Kingdom, writing in CIO, you can achieve this by:

  • making “data-centric AI” and “AI-centric data” part of your data management strategy. Metadata and “data fabric” should be the foundational elements of this strategy.
  • establishing policy requirements that include minimum AI data quality to prevent “bias, mislabeling, or irrelevance”
  • determining the right “formats, tools, and metrics for AI-centric data” early on. This way you don’t have to develop new techniques as your AI evolves.
  • ensuring that the data, algorithms, and people within your AI supply chain are diverse. This diversity helps your AI stay in line with your ethical values.
  • appointing or hiring the right experts internally and externally to oversee data management. These experts are capable of developing effective processes and deployments for your AI.

How to Choose an AI Program That Works Best For Your Employees

As you develop your AI program, keep in mind that while AI can augment your organization in terms of speed and efficiency, it is not necessarily a substitute for human intelligence. 

While AI is good at analyzing data and recognizing patterns, it still has a tendency to miss important context that humans easily spot. This can have potentially devastating consequences if, for example, an AI makes a critical error when analyzing medical documentation. As such, you need to consider how to make your AI work with your human employees in the most effective way possible. 

According to experts from Boston Consulting Group, writing in Fortune, organizations can do this by applying the following principles:

  • Know your options for combining humans with AI: Depending on your organization’s unique needs, do you need your AI to act as an illuminator, recommender, decider, or automator? Knowing the difference, such as whether you need an AI that makes predictions or one that automates operations, can help you choose the best system for your organization.
  • Create a decision tree: A decision tree is the sequence of questions you will ask. It helps you clearly understand your objectives (goals), context (available data and resources), and outcomes (the results of deploying AI versus employees), and in turn determine what type of AI system (illuminator, recommender, decider, or automator) you need.
  • Continuously assess and revise your human-AI combinations: Your needs for an AI program may evolve over time and, with them, its relationship to your employees. For this reason, it’s important to return to the decision tree occasionally to determine whether you need to revise your model.
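The sequence-of-questions idea can be sketched directly in code. The four roles (illuminator, recommender, decider, automator) come from the framework described above, but the questions, their order, and the mapping to roles below are hypothetical, chosen only to show how a decision tree narrows down the choice:

```python
def choose_ai_role(stakes_high: bool, human_must_decide: bool,
                   task_repetitive: bool) -> str:
    """Walk a tiny decision tree of yes/no questions to pick an AI role."""
    if stakes_high:
        if human_must_decide:
            return "illuminator"   # AI surfaces insight; humans conclude
        return "recommender"       # AI proposes options; humans pick one
    if task_repetitive:
        return "automator"         # AI executes the task end to end
    return "decider"               # AI makes the routine call itself

# Example: a high-stakes task where a human must make the final call.
print(choose_ai_role(True, True, False))   # -> illuminator
```

The value of writing the tree down, even this informally, is that revisiting it later (as the third principle advises) becomes a matter of changing a question or a branch rather than rethinking the whole deployment.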

Knowing how to manage your organization’s data and determining the right AI program are important steps. However, you also need to ensure that your employees are equipped to work with this increasingly complex technology. 

Bringing Ethics to the Forefront at Your Organization

An online five-course program, AI Standards: Roadmap for Ethical and Responsible Digital Environments, provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems. 

Contact an IEEE Content Specialist to learn more about how this program can help your organization create responsible artificial intelligence systems.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Krishna, Arvind. (18 May 2022). Why Artificial Intelligence Creates an Unprecedented Era of Opportunity in the Near Future. Inc. 

Candelon, Francois, Ding, Bowen, Gombeaud, Matthieu. (6 May 2022). Getting the balance right: 3 keys to perfecting the human-A.I. combination for your business. Fortune.

Ayres, Andrew P. (29 April 2022). Don’t Fear Artificial Intelligence; Embrace it Through Data Governance. CIO.

Hewlett Packard Enterprise (HPE) recently announced the launch of two platforms designed to speed the development of machine learning models. The first, the HPE Machine Learning Development System, combines machine learning software with compute, accelerators, and networking, shortening the time it takes to build and train machine learning models from months to just days.

“Enterprises seek to incorporate AI and machine learning to differentiate their products and services. However, they are often confronted with complexity in setting up infrastructure required to build and train accurate AI models at scale,” stated Justin Hotard, executive vice president and general manager, HPC and AI, at HPE, in a press release. “The HPE Machine Learning Development System combines our proven end-to-end HPC solutions for deep learning with our innovative machine learning software platform into one system. This provides a performant out-of-the-box solution to accelerate time to value and outcomes with AI.”

The second platform, HPE Swarm Learning, combines blockchain technology with two emerging learning methods: federated learning and swarm learning.

What Are Federated Learning and Swarm Learning?

Unlike traditional artificial intelligence (AI) models trained on centralized datasets, federated learning trains models on decentralized datasets. For example, say a model is learning from data on a phone. The model trains on the phone’s data but never sends the actual data to a central server, only the model updates learned from it. This keeps the phone owner’s data private, and it makes the system faster and more efficient because large amounts of raw data are not shuttled back and forth to a central server.
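The scheme can be sketched in a few lines. In this simplified federated-averaging example, each simulated “device” trains a toy linear model on its own private data and shares only its weights, which a server then averages (the model, data, and parameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=10):
    """A few gradient steps on one device's private data (never shared)."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

# Three devices, each holding private data that never leaves the device.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    # Only the updated weights travel to the server, not the raw data.
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # converges toward [2.0, -1.0] without centralizing data
```

The key point is the final averaging step: the server never sees `X` or `y`, only each device’s weights.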

HPE Swarm Learning takes federated learning a step further by using swarm learning, a variant of federated learning. Instead of relying on a central server, swarm learning uses a blockchain: a decentralized digital ledger that records data by duplicating transactions across “nodes” throughout the network. This makes the learning process even more decentralized, secure, and resilient.
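A rough sketch of the decentralized idea (not HPE’s actual protocol, which coordinates peers through a blockchain ledger): each peer repeatedly averages model parameters with its neighbors, and all peers converge to a shared model with no central server involved:

```python
import numpy as np

# Four peers, each starting from a model trained on its own local data
# (represented here by a single illustrative parameter per peer).
peers = [np.array([1.0]), np.array([3.0]), np.array([7.0]), np.array([9.0])]

for _ in range(50):
    n = len(peers)
    updated = []
    for i in range(n):
        # Average with ring neighbors; only parameters are exchanged.
        neighbors = [peers[(i - 1) % n], peers[i], peers[(i + 1) % n]]
        updated.append(np.mean(neighbors, axis=0))
    peers = updated

print([round(float(p[0]), 6) for p in peers])  # all converge to 5.0
```

Because every peer ends up with the same averaged parameters, there is no single server whose failure or compromise would break the system.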

This technology could accelerate machine learning while advancing a large number of applications, particularly within healthcare. As Ledger Insights reported, it could allow cancer research centers across the world to pool insights from one another’s data without having to share the data itself.

“Swarm learning is an important movement in the AI market, with broad support across the public and private sectors. It serves to combine the power of expanding data sets with the innovation and insights from organizations across the globe,” Hotard told VentureBeat.

HPE Swarm Learning provides users with containers that integrate easily with AI models via the Swarm Learning API. Users can then share AI model learnings with peers both inside and outside their organization, enhancing training without having to share the original data and making the process far more secure.

Swarm learning holds great potential for businesses. It can enable them to make faster decisions with better results, protect the privacy of their customers, share learnings with other organizations, and advance their data governance.  

Is Your Company Embracing Machine Learning?

It’s important for organizations developing and deploying machine learning to understand the concepts and techniques necessary for driving machine learning-enabled business insights.

Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.

Interested in the program for yourself? Visit the IEEE Learning Network.

Resources

(29 April 2022). HPE’s new platform lets customers build machine learning models quickly and at scale. TechCentral.ie. 

(29 April 2022). HPE launches Swarm Learning using blockchain for AI, machine learning. Ledger Insights.

Plumb, Taryn. (27 April 2022). HPE looks to deliver the power of ‘swarm learning’. VentureBeat.

Press Release. (27 April 2022). Hewlett Packard Enterprise accelerates AI journey from POC to production with new solution for AI development and training at scale. hpe.com 

Press Release. (27 April 2022). Hewlett Packard Enterprise ushers in next era in AI innovation with Swarm Learning solution built for the edge and distributed sites. hpe.com 

Artificial intelligence (AI) systems are evolving fast. However, ethical standards that ensure these systems don’t harm the public, such as those that aim to prevent unintentional biases based on the data these systems are trained on, have been slower to evolve. In a global survey of more than 1,000 executives conducted by MIT Sloan Management Review, 82% of managers in organizations with at least US $100 million in annual revenues agreed or strongly agreed that responsible AI (RAI) should be on their top management’s agenda. Yet only 50% reported that it actually is.

How can organizations that develop or use artificial intelligence ensure RAI is not just an afterthought? A recent panel of global AI experts, organized by MIT Sloan Management Review and global consulting firm BCG, concluded with the following takeaways:

  • Leadership needs to understand why RAI is important to the organization’s strategy. Otherwise, RAI may never make it into the agendas of the organization’s major decision makers.
  • Determine whether RAI is part of your AI strategy or of your wider organizational goals, such as corporate responsibility. Without this clarity, leadership may not fully grasp where RAI fits into their larger agenda.
  • Look at RAI as an urgent need that must be integrated now. Otherwise, you may miss valuable opportunities to prevent risk and harm down the line.

What Are the Fundamental Principles of AI Ethics?

Understanding the core principles of AI ethics is the first step to developing an effective AI standards framework. Such a framework should align with the organization’s mission as well as with any regulations that apply to its implementation of AI systems. According to TechTarget, the basic principles of ethical AI include:

  1. Fairness: The AI system does not contain biases and functions equally well for all groups 
  2. Accountability: The AI system has ways to identify who is responsible across different stages of the AI life cycle if something goes wrong. It also provides ways for humans to supervise and control the system
  3. Transparency: When the AI system makes a decision, it allows humans to understand why it came to that conclusion. This is essential for building trust
  4. Safety: The AI system is equipped with effective security controls
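As a concrete illustration of the first principle, fairness can be tested with a simple demographic-parity check. The group labels, toy predictions, and the 80% threshold (the common “four-fifths rule”) below are illustrative assumptions:

```python
def selection_rates(groups, predictions):
    """Positive-prediction rate for each group."""
    rates = {}
    for g in sorted(set(groups)):
        picks = [p for grp, p in zip(groups, predictions) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def passes_four_fifths(rates, threshold=0.8):
    """Fail if any group's rate falls below 80% of the highest rate."""
    return min(rates.values()) >= threshold * max(rates.values())

groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]
predictions = [1, 1, 0, 1, 1, 0, 0, 0]  # the model's yes/no outputs

rates = selection_rates(groups, predictions)
print(rates)                      # {'a': 0.75, 'b': 0.25}
print(passes_four_fifths(rates))  # False: group "b" is under-selected
```

Checks like this are only one piece of a fairness audit, but they make the “functions equally well for all groups” requirement measurable.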

Incorporating These Principles into AI Systems

During an interview with Analytics India Magazine, Layak Singh, CEO of Artivatic AI, an insurance platform, said the company reduces biases in AI by defining the business problems it wants to solve while considering end users. They then configure data collection methods to be able to incorporate diverse perspectives.

“We also ensure that we clearly understand our training data, as this is where most biases are introduced and can be avoided,” Singh said. “With that aim, we also ensure an ML [machine learning] team that’s assorted as they ask dissimilar queries and thus interact with the AI models in various ways. This leads to identifying errors before the model is underway in production. It is the best manner to reduce bias both at the beginning and while retraining models.”

Additionally, Singh’s company places a major focus on feedback, keeping channels such as forum discussions open in order to run continual audits and upgrades.

Ensuring AI systems are ethical is becoming essential to building trust with clients and customers. Don’t wait until that trust is already broken; start developing an ethical AI standards framework today.

Incorporating AI Standards at Your Organization

An online five-course program, AI Standards: Roadmap for Ethical and Responsible Digital Environments, provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems. Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Krishna, Sri. (20 April 2022). Talking Ethical AI with Artivatic’s Layak Singh. Analytics India Magazine.

Kiron, David, Renieris, Elizabeth, and Mills, Steven. (19 April 2022). Why Top Management Should Focus on Responsible AI. MIT Sloan Management Review.

Kompella, Kashyap. (1 April 2022). How AI ethics is the cornerstone of governance. TechTarget.

A 2019 survey from Gartner found that 37% of businesses and organizations employ artificial intelligence (AI), DataProt reported. However, few organizations are taking steps to mitigate the risks associated with AI systems, such as their propensity for bias and privacy infringements. A 2021 PwC research report found that just 20% of enterprises had instituted an AI ethics framework, and only 35% intended to enhance their AI governance and processes. With governments increasingly moving toward passing AI regulations, the window for organizations to develop ethical AI standards is shrinking.

During an interview with Analytics India Magazine, Satyakam Mohanty, Chief Product Officer at Fosfor by L&T Infotech, a global technology consulting and digital solutions company, said responsible AI is the only way for organizations to reduce potential risks associated with the technology.

“The great AI debate opens various facets of ethics, but without a common agreement and agreed standard, its impact and repercussions on the way organizations operate is not quantifiable,” Mohanty told the magazine. “Fairness and explainability can be managed and scaled by introducing data bias mitigation practices and algorithmic bias mitigation processes. Additionally, ensuring higher standard explainability frameworks into the implementations and decision-making process helps. By utilizing ethics as a key decision-making tool, AI-driven companies save time and money in the long run. They do this by building robust and innovative solutions from the start.”

How to Develop an AI Standards Framework

How can your organization begin building a successful AI standards framework? Writing in Harvard Business Review, AI ethics experts Reid Blackman and Beena Ammanath recommend that organizations start by putting together a team of senior-level experts. This team should encompass, at minimum, technologists, legal/compliance experts, ethicists, and business leaders who understand what the organization needs to achieve in terms of ethical AI.

Once you have a team in place, they recommend taking these steps:
  1. First, identify your organization’s AI ethical standard:

    What is the minimum ethical standard your organization is willing to meet in terms of AI? If your AI system is discriminatory towards a certain group but still far less discriminatory than the traditional human-run systems it replaces, will your organization consider that an acceptable benchmark? This is similar to the dilemma autonomous vehicle manufacturers must consider: if autonomous vehicles occasionally kill passengers and pedestrians, but at a lower rate than traditional vehicles, should those vehicles be considered safe? Although these are difficult questions to grapple with, asking them will help your organization set the frameworks and guidelines that ensure ethical product development.
  2. Determine the “gaps” between where your organization is now and where your standards require it to be:

    While there may be plenty of technical solutions to your AI ethics dilemmas, none alone is likely to reduce the risks enough to safeguard your organization. Your AI ethics team will therefore need to ask: What are its skills and knowledge limitations? What risks is it trying to reduce? Where can software and quantitative analysis help, and where can they not? What needs to be done through qualitative assessment, and how mature does the technology need to be to meet ethics expectations?
  3. Gain insight into what’s behind the bias in your AI and then strategize solutions:

    While it’s generally true that biased AI systems reflect biased training data and/or societal bias, the real problem is more complex. You need to trace the sources of discriminatory outputs, as well as any potential biases behind them; knowing this will help you choose the best strategy for reducing bias.
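For instance, one common and easily measured source of bias is group imbalance in the training data. A quick audit like the sketch below (the group labels and counts are illustrative) can point toward a reweighting or resampling strategy before any model changes are made:

```python
from collections import Counter

# Illustrative training data: group "a" outnumbers group "b" nine to one.
train_groups = ["a"] * 900 + ["b"] * 100

counts = Counter(train_groups)
total = sum(counts.values())
shares = {g: n / total for g, n in counts.items()}
print(shares)  # {'a': 0.9, 'b': 0.1}

# One mitigation: weight examples so each group contributes equally
# to the training loss.
weights = {g: total / (len(counts) * n) for g, n in counts.items()}
print(weights)  # "b" examples get weight 5.0; "a" examples about 0.56
```

If the audit instead shows balanced groups, the bias likely originates elsewhere, for example in the labels, and calls for a different strategy.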

Implementing artificial intelligence standards at your organization will take time, but the risk reduction they provide will be well worth the effort. Does your organization have the right knowledge and skills necessary to build an effective AI standards roadmap? 

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance, among others. When leveraging these new digital environments, it’s vital to keep in mind rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Krishna, Sri. (29 March 2022). Talking Ethical AI with Fosfor’s Satyakam Mohanty. Analytics India Magazine. 

Blackman, Reid and Ammanath, Beena (21 March 2022). Ethics and AI: 3 Conversations Companies Need to Have. Harvard Business Review. 

Jovanovic, Bojan. (8 March 2022). 55 Fascinating AI Statistics and Trends for 2022. DataProt.

Likens, Scott, Shehab, Michael, and Rao, Anand. AI Predictions 2021. PwC Research.

Machine learning models often rely on the simplest features of a dataset to make decisions. Known as “shortcuts,” this reliance can lead to serious errors, such as causing models to make inaccurate medical diagnoses. However, a recent study from MIT poses a possible solution: by removing the simplest characteristics of a dataset, the researchers forced the model to examine its more complex features.

“It is still difficult to tell why deep networks make the decisions that they do. In particular, which parts of the data these networks choose to focus upon when making a decision,” Joshua Robinson, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper, told MIT News. “If we can understand how shortcuts work in further detail, we can go even farther. We aim to answer some of the fundamental but very practical questions that are really important to people who are trying to deploy these networks.”

How To Avoid Shortcuts in Machine Learning

As MIT News reports, the new research centers on a type of self-supervised machine learning known as contrastive learning. Self-supervised models are trained on raw data without label descriptions. In contrastive learning, an encoder algorithm is trained to distinguish pairs of similar inputs from pairs of dissimilar ones, encoding complex data, such as images, in a way the model can decipher. While this makes decision making more effective, the researchers found that these models also tend to take shortcuts, fixating on the simplest features of an image to determine which pairs of inputs are similar and which are not. To address this, the researchers made it harder for the model to differentiate similar and dissimilar pairs, which changed the features the encoder used to make a decision.
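To make the contrastive setup concrete, the sketch below implements a common contrastive objective (InfoNCE, used by models such as SimCLR; the toy embeddings are illustrative, not from the MIT paper). The loss is low when the encoder places an input near its similar “positive” pair and far from dissimilar “negatives”:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Cross-entropy of identifying the positive among all candidates."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity of the anchor to the positive (index 0) and negatives.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

anchor    = np.array([1.0, 0.0])
positive  = np.array([0.9, 0.1])                           # a similar input
negatives = [np.array([0.0, 1.0]), np.array([-1.0, 0.2])]  # dissimilar ones

loss = info_nce_loss(anchor, positive, negatives)
print(loss)  # near zero: the positive is easy to pick out
```

Making this discrimination task harder, as the researchers did, pushes the encoder to rely on richer features than whatever simple cue separates the pairs.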

“If you make the task of discriminating between similar and dissimilar items harder and harder, then your system is forced to learn more meaningful information in the data, because without learning that it cannot solve the task,” Stefanie Jegelka, one of the researchers, told MIT News.

However, making the task harder caused the encoder to neglect some features, particularly the simpler ones. To compensate, the researchers required the encoder to discriminate between the pairs both by using the simpler features and, separately, with those features removed. Having the encoder solve the problem both ways simultaneously forced it to make better decisions.

Implicit Feature Modification

Known as “implicit feature modification,” this method does not rely on any input from humans. While it has the potential to help machine learning models avoid shortcuts, the researchers told MIT News that it still needs to be refined and tested on other types of self-supervised learning.

Machine learning is still in its infancy. However, innovations such as implicit feature modification have the potential to give artificial intelligence (AI) the ability to learn on its own. Not only will this make AI smarter and more efficient, it could also lead to revolutionary technological and scientific discoveries; machine learning can solve complex problems, such as determining a protein’s 3D shape, that humans cannot.

Understand Machine Learning

By providing AI with the ability to learn from its experiences without explicit programming, machine learning plays a critical role in developing the technology. Machine Learning: Predictive Analysis for Business Decisions is a five-course program from IEEE covering machine learning models, algorithms, and platforms.

Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.

Interested in the program for yourself? Visit the IEEE Learning Network.

Resources

Zewe, Adam. (2 November 2021). Avoiding shortcut solutions in artificial intelligence. MIT News. 

Callaway, Ewen. (30 Nov 2020). ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures. Nature.