
Semiconductors are the brains behind so many devices and processes that we take for granted today, from computers, smartphones, cars, programmable coffee makers, and washing machines to high-tech robotics, augmented reality and virtual reality systems, satellites used in national defense, and more. Given their widespread use across such a broad range of technologies, semiconductors are critical to life in modern industrialized societies, a reality underscored by the supply chain issues and shipping delays experienced during the pandemic.
Government Support for the Semiconductor Industry
To help strengthen the United States’ competitiveness and resilience in the semiconductor arena, the CHIPS (“Creating Helpful Incentives to Produce Semiconductors”) and Science Act, enacted in August 2022, earmarked nearly US$53 billion for domestic research and manufacturing. It also established a 25% tax credit for capital investments in semiconductor manufacturing. Europe soon followed suit with its own version of this initiative, the European Chips Act, in September 2023.
Since then, in an effort to reinvigorate domestic semiconductor manufacturing, the U.S. government has already disbursed some US$29 billion in CHIPS Act funds to eight companies: Intel, Micron, GlobalFoundries, Polar Semiconductor, TSMC Arizona Corporation (a subsidiary of Taiwan Semiconductor Manufacturing Company), Samsung, BAE Systems, and Microchip Technology. This funding has catalyzed a range of new manufacturing facilities, including Intel’s new factories in Arizona, New Mexico, Oregon, and Ohio as well as Micron’s new US$100 billion chip plant in Syracuse, New York.
The European Chips Act has driven similar investment in Europe’s semiconductor industry in hopes of doubling the EU’s global market share from 10% to 20% by 2030. “The governments of nearly every major economy are pouring tens of billions of dollars into semiconductor industries every year,” wrote Chris Miller in Nature Reviews Electrical Engineering, all in an effort to stake a claim in a robust global semiconductor market that forecasting organization World Semiconductor Trade Statistics predicts will grow by over 13% to US$588 billion in 2024 and hit US$1 trillion in global revenue by 2030.
The problem? There aren’t currently enough semiconductor technicians and engineers to meet the demand created by the CHIPS Acts and other global initiatives. For example, the U.S. government expects there will be a need for 100,000+ semiconductor technicians and as many as 300,000+ engineering graduates by 2030 to support the growing industry.
Initiatives in Semiconductor Workforce Development Training
In response, companies and educational institutions alike are taking creative and resourceful approaches to filling the talent gap.
As reported in a June 2024 PBS NewsHour segment, Intel Vice President of Talent Planning and Acquisition Cindi Harper confirmed that Intel has recently invested hundreds of millions of dollars in workforce development and that its new Arizona semiconductor plants will create 10,000 jobs at the company.
“We have high-paying jobs that are extremely interesting, [and] the manufacturing side of it isn’t what you would have seen 30 or 40 years ago,” agreed Greg Jackson, Director of Facility Operation at Phoenix, AZ-based Taiwan Semiconductor Manufacturing Company, in the PBS NewsHour segment.
And a broad range of colleges, universities, and online educational platforms worldwide are further supporting the semiconductor workforce development movement by offering certificate programs in everything from semiconductor fabrication, devices, packaging, and microelectronics to AI in semiconductor design (a strategy that is helping manufacturers achieve greater efficiency and speed to market).
Let IEEE Unlock the Door to Opportunity
Get started with the specialized training course Artificial Intelligence and Machine Learning in Chip Design. Delve into the ways in which artificial intelligence (AI) and machine learning (ML) techniques are revolutionizing chip design methodologies. This training provides engineers with essential knowledge to leverage AI and ML effectively in chip design and electronic design automation (EDA). Learners will identify high-value applications and gain insight into optimizing design methods and preparing for the future of chip design.
Upon successful completion of the program, learners will earn an IEEE Certificate of Completion bearing professional development hours (PDHs) and continuing education units (CEUs). Get started today!
For institutional access, contact a Sales Specialist.
Resources
David, Emilia. (7 June 2024). Where the CHIPS Act Money Has Gone. The Verge.
Shakir, Umar. (25 July 2023). EU Will Spend €43 Billion to Stay Competitive on Chip Production. The Verge.
Miller, Chris. (11 January 2024). Global Chip War for Strategic Semiconductors. Nature Reviews Electrical Engineering.
Khalid, Asma. (19 December 2023). Biden Has Big Plans for Semiconductors. But There’s a Big Hole: Not Enough Workers. NPR.
(7 September 2022). How Semiconductor Makers Can Turn a Talent Challenge Into a Competitive Advantage. McKinsey & Company.
2024 KPMG Global Semiconductor Industry Outlook. KPMG.
(27 May 2022). Purdue Launches Nation’s First Comprehensive Semiconductor Degrees Program. Purdue University News.
Allan, Liz. (16 October 2023). Chip Industry Talent Shortage Drives Academic Partnerships. Semiconductor Engineering.
Hilson, Gary. (5 March 2024). STEM Education Scales to Strengthen Chip Sector Skills. EE Times.
Sy, Stephanie and Jackson, Lena. (11 June 2024). How Arizona is Building the Workforce to Manufacture Semiconductors in the U.S. PBS News.
With its ability to automate repetitive tasks and process massive amounts of data, artificial intelligence (AI) is revolutionizing many industries, from healthcare and banking to cybersecurity, transportation, marketing, customer service, manufacturing, and more.
One industry that’s undergoing a particularly significant transformation at the hands of AI technology is the field of semiconductor design.
The Landscape for Semiconductors
Semiconductors, also called chips, microchips, or integrated circuits, are tiny components that enable electronic switching and serve as the foundation for all computer processing. As a result, semiconductors are integral to everything from smartphones and laptops to wind turbines, solar technology, wearable technology (like fitness trackers), electronic control systems and driverless capabilities in modern vehicles, implantable medical technology (like pacemakers and insulin pumps), gaming hardware, and many other technologies considered essential in today’s industrialized economies.
As sales of connected technologies continue to grow, so does demand for the next-generation semiconductors needed to fuel them. According to Statista, the global market for semiconductors is expected to grow by 13% to nearly US$590 billion in 2024. At the same time, the semiconductor industry is highly competitive. Taiwan, South Korea, and Japan currently lead the world in semiconductor production. However, experts expect the landscape will get even more competitive. The United States and European Union are vigorously ramping up their activity following their enactment of The CHIPS and Science Act and The European CHIPS Act in August 2022 and September 2023, respectively.
In the semiconductor industry’s ongoing quest for tools that can enhance engineering efficiency and accelerate speed to market, thereby giving manufacturers a competitive edge, artificial intelligence and machine learning (ML) stand as game-changers in semiconductor design and manufacturing.
A New Paradigm in Design
Experts confirm that the use of AI enhances semiconductor (chip) design, or the process known as “electronic design automation” (EDA), in many ways.
Among them, AI automates complex processing tasks, thereby reducing the risk of human error. Artificial intelligence’s ability to analyze past patterns across huge quantities of data, identify efficient pathways, and optimize the space (or “real estate”) within semiconductors helps improve semiconductor performance and meet design criteria. It also reduces chip size, resources required, and cost. By being able to “learn” from past experiences, AI algorithms help semiconductor engineers predict and prevent potential design issues down the road that could otherwise result in the need for costly changes.
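As a simplified illustration of this predictive use of ML (not drawn from any particular EDA tool), the sketch below trains a classifier on invented “design snapshot” features to flag configurations likely to miss timing before expensive downstream runs. The feature names, data, and threshold are all hypothetical.

```python
# Hypothetical sketch: predicting timing trouble from early design features.
# Feature names, data, and the "ground truth" rule are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "design snapshots": placement density, average net fanout, target clock (GHz)
n = 1000
X = np.column_stack([
    rng.uniform(0.5, 0.95, n),   # placement density
    rng.uniform(2.0, 8.0, n),    # average fanout
    rng.uniform(1.0, 3.0, n),    # target clock frequency
])
# Toy stand-in for sign-off results: dense, high-fanout, high-frequency designs
# are more likely to miss timing.
y = ((X[:, 0] * X[:, 1] * X[:, 2]) > 6.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# In practice, such a predictor could flag risky configurations early,
# before expensive place-and-route or sign-off runs.
```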
Ultimately, AI helps semiconductor manufacturers optimize power, performance, and area (“PPA”), the three goals of chip design. It helps engineers design advanced new chips, and it also helps them efficiently and cheaply overhaul and shrink the many older-technology (65 nanometer process node or larger) chip designs on which much of the semiconductor industry has relied for the past decade, without requiring manufacturers to update their fabrication equipment.
The future continues to look bright for the integration of AI in semiconductor design, with Deloitte experts noting that “some chips are getting so complex that advanced AI may soon be required.”
Learn the Ins and Outs of AI in Semiconductor Design from an Industry Expert
In today’s fast-paced technological landscape, AI and ML techniques are revolutionizing chip design methodologies. Integrated-circuit (IC) chip companies and engineers have unprecedented opportunities to use these technologies to enhance product quality across crucial dimensions such as speed, energy efficiency, and cost. This, in turn, enables the achievement of goals with reduced engineering resources and accelerated time-to-market.
Stay on top of the dynamic field of AI in semiconductor design through a two-day virtual training from IEEE, Artificial Intelligence and Machine Learning in Chip Design. It is presented by Andrew B. Kahng, an IEEE Fellow, Distinguished Professor of CSE and ECE at the University of California San Diego, and co-founder of Blaze DFM, Inc., an EDA software company that delivered new cost and yield optimizations at the IC design-manufacturing interface.
This comprehensive two-day virtual training session will equip engineers with:
- The essential knowledge to leverage AI and ML effectively in chip design and EDA,
- An understanding of the rationale behind these technological shifts, helping them identify high-value applications and select relevant AI and ML technologies, and
- Insights into optimizing design methods and preparing for the future of chip design.
Attendees will also have the opportunity to interact first-hand with Professor Kahng and ask him questions during the interactive question-and-answer portion of the training.
Successful completion of this training and assessment will earn attendees an IEEE Certificate of Completion bearing professional development hours (PDHs) and continuing education units (CEUs).
Don’t miss this opportunity to get your questions answered directly by a renowned subject matter expert in the industry! Register today to secure your spot in this enlightening training session.
Interested in access for yourself? Visit the IEEE Learning Network (ILN).
Connect with an IEEE Content Specialist today to learn how to get access to this program for your organization.
Resources
Anirudh, VK. (10 February 2022). 10 Industries AI Will Disrupt the Most by 2030. Spiceworks.
(2 February 2024). How AI is Transforming the Semiconductor Industry in 2024 and Beyond. ACL Digital.
McCallum, Shiona. (3 August 2023). What Are Semiconductors and How Are They Used? BBC.
(29 March 2024). Generative AI: The Next S-Curve for the Semiconductor Industry? McKinsey & Company.
Loucks, Jeff, Stewart, Duncan, Simons, Christie, and Kulik, Brandon. (30 November 2022). AI in Chip Design: Semiconductor Companies are Using AI to Design Better Chips Faster, Cheaper, and More Efficiently. Deloitte.
Alsop, Thomas. (8 February 2024). Semiconductor Market Revenue Worldwide from 1987 to 2024. Statista.
Artificial intelligence (AI) continues to dominate headlines, thanks to its potential to revolutionize countless industries. From manufacturing and healthcare to banking and retail, AI is streamlining automatable and administrative tasks across the board.
Beyond efficiency, AI plays a critical role in high-impact applications. It helps detect cybersecurity threats, prevent retail fraud, and improve autonomous vehicle navigation by recognizing driver patterns and predicting accidents. Additionally, AI enhances customer experiences by personalizing marketing and service interactions.
In essence, machines are now replicating, and even expanding, the capabilities of the human mind. As a result, AI is reshaping the future of business.
A New Industrial Era
Because of its transformative power, the World Economic Forum has dubbed AI part of the “fourth industrial revolution.” This new era merges the physical, digital, and biological worlds, following earlier revolutions driven by steam, electricity, and computing.
Forbes contributor Bernard Marr calls it the “Intelligence Revolution,” underscoring AI’s sweeping impact on society and industry.
AI: A Double-Edged Sword
Although the terms are often used interchangeably, AI actually falls into two distinct categories: artificial general intelligence (AGI) and generative artificial intelligence (generative AI).
AGI refers to the ability of machines to understand, learn, and perform intellectual tasks as humans would based on the processing of demonstrated customer patterns. Examples of this include:
- personalized product recommendations provided by Amazon
- customized workouts and health goals suggested by apps (such as the MyFitnessPal app formerly owned by sports apparel and gear provider Under Armour) that base their recommendations on collected health data for physical activity, sleep, and diet
- smart assistants like Alexa and Siri that can control home technology, dial the telephone upon request, and more
Generative AI refers to a form of artificial intelligence that learns the patterns and structure of its input data and responds by generating text, images, or other media with similar characteristics. An example is the much-publicized ChatGPT, a chatbot introduced in November 2022 by OpenAI, which can produce output of a desired length, format, style, level of detail, and language on almost any topic.
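To make the underlying idea concrete, here is a deliberately tiny sketch, unrelated to ChatGPT or any production model, of the core pattern generative systems share: learn the statistics of input text, then generate new text with similar characteristics. The corpus is invented.

```python
# Minimal illustration of "learn patterns from input data, generate similar output":
# a word-level Markov chain. Real generative AI relies on far larger neural models.
import random
from collections import defaultdict

corpus = (
    "semiconductors power modern devices and modern devices need semiconductors "
    "ai helps design semiconductors and ai helps design modern devices"
)

# Learn transition statistics: which word tends to follow which.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start, length=10, seed=0):
    """Generate text whose word-to-word statistics resemble the corpus."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("ai"))
```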
Experts confirm that AI can help businesses enhance their productivity by leaps and bounds. For example, research firm Gartner estimates that AI can save companies around the world over 6 billion employee-hours annually. On an economic level, a recent study by global management consulting firm McKinsey & Company predicts that the analytics enabled by AI could add US$13 trillion to global GDP by 2030.
At the same time, however, AI also raises its share of issues and ethical concerns. Among them, generative AI can lend itself to the alteration of text, images, and video in the form of inaccurate, misleading, manipulative, and/or potentially dangerous “deep fake” or fraudulent content. It also raises questions about ownership rights of created content and its eligibility for copyright protection.
Helping Industry Navigate the Complex Field of AI
Recognizing both the unprecedented importance and complexity of artificial intelligence, IEEE offers several course programs in AI and machine learning designed to help navigate these exciting, complicated, and rapidly evolving technologies.
- Machine Learning: Predictive Analysis for Business Decisions— Ideal for computer engineers, business executives, industry executives, industry leaders, business leaders, technical managers, data scientists, and data engineers, this five-course program provides an overview of the different types of machine learning that are fueling businesses today, how these forms of AI use software, algorithms, and models in their design, and how attendees can deploy scalable machine learning into their own processes to achieve their business goals.
- Artificial Intelligence and Ethics in Design— Ideal for data engineers, AI/ML engineers, design engineers, computer engineers, security engineers, electrical engineers, software engineers, UX designers, engineering managers, technical leaders, functional consultants, business users, research engineers, robotics engineers, machine learning engineers, and computer vision engineers, this five-course program covers such topics as law, compliance, and ethics in artificial intelligence, ethical challenges in data protection and safety, and responsible design in the algorithmic era.
- Artificial Intelligence and Ethics in Design: Responsible Innovation— This five-course program is designed to help learners understand the ethics specifications that must be met when designing AI systems for European (and other) markets. Topics include causes of bias, transparency and accountability for robots and AI systems, and legal and implementation issues of enterprise AI.
To discover more IEEE courses about artificial intelligence, browse the IEEE Learning Network catalog.
Resources:
Forbes Technology Council. (13 January 2022). 16 Industries and Functions That Will Benefit from AI In 2022 and Beyond. Forbes.
Fourth Industrial Revolution. World Economic Forum.
Marr, Bernard. (10 August 2020). What Is the Artificial Intelligence Revolution and Why Does It Matter To Your Business? Forbes.
Schroer, Alyssa. (19 May 2023). What Is Artificial Intelligence? Built In.
Mohan, Malethy. (22 March 2023). The Difference Between Generative AI and Traditional AI. LinkedIn.
Kanade, Vijay. What Is General Artificial Intelligence (AI)? Definition, Challenges, and Trends. Spiceworks.com.
Rajagopalan, Ramesh. 10 Examples of Artificial Intelligence in Business. Online Degrees.
Elliott, Timo. (9 March 2020). The Power of Artificial Intelligence Vs. the Power Of Human Intelligence. Forbes.
Dilmegani, Cem. (22 April 2023). Generative AI Ethics: Top 6 Concerns. AIMultiple.
Deep learning is having a moment. There was a time when we could only dream of partially autonomous vehicles and voice-activated assistants. Today, however, these inventions are a regular part of our lives. A subfield of machine learning (ML) and artificial intelligence (AI), deep learning algorithms are designed to learn like a human brain. Deep learning continually analyzes data using an advanced technology known as “artificial neural networks,” which are operated by a series of algorithms that can perceive complex relationships in data sets. These neural networks allow computers to see, hear, and speak; they are the reason we can talk to our phones and dictate emails to our computers.
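For readers curious about what an artificial neural network looks like in code, below is a minimal sketch in plain NumPy: a two-layer network trained with gradient descent to learn the classic XOR relationship, a pattern no single linear rule can capture. It is purely illustrative; real deep learning systems use frameworks and far larger networks.

```python
# Minimal two-layer neural network in NumPy, trained on the XOR problem.
# Purely illustrative; real deep learning uses frameworks such as PyTorch or TensorFlow.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the mean squared error.
    d_pred = (pred - y) * pred * (1 - pred)
    d_h = (d_pred @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= 0.5 * h.T @ d_pred
    b2 -= 0.5 * d_pred.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(pred, 2))  # should approach [[0], [1], [1], [0]]
```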
Algorithms have always been part of the digital world, where they are trained and developed in perfectly simulated environments. The current wave of deep learning facilitates AI’s leap from the digital to the physical world. While the applications are endless—from manufacturing to agriculture—there are still challenges of accuracy, clean data, and reinforcement learning.
Deep Learning in the Real World
AI researchers are working to introduce deep learning to our physical, three-dimensional world. Experts anticipate that deep learning will advance several sectors over the next few years, including:
- Self-driving vehicle capabilities: The handling of novel situations is the main challenge for autonomous vehicle engineers. With growing exposure to millions of scenarios, a deep learning algorithm’s regular cycle of testing and implementation helps ensure safe driving. Global industry growth for autonomous cars is 16% a year. The global autonomous vehicle market reached nearly US$106 billion in 2021, and one forecast projects it will grow to US$2.3 trillion by 2030.
- Fraud news detection and news aggregation: Deep learning is heavily utilized in news aggregation, which attempts to tailor news to consumers’ preferences. Reader personas are defined with greater complexity to filter out content based on a reader’s interests, as well as geographical, social, and economic factors. Furthermore, there is always room for improvement in filtering out fake news and misinformation.
- Natural Language Processing (NLP): One of the most challenging things for computers to learn is how to comprehend the complexity of human language, including its syntax, semantics, tonal subtleties, expressions, and even sarcasm. The global market for Natural Language Processing (NLP) is expected to reach US$25.7 billion by 2027.
- Healthcare: Some of the deep learning projects gaining traction in the healthcare industry include assisting with the quick diagnosis of life-threatening diseases, addressing the shortage of qualified doctors and healthcare providers, and standardizing pathology results and treatment plans. By 2026, artificial intelligence has the potential to save the healthcare industry more than US$150 billion.
Getting “Data-Centric AI” with Deep Learning
Andrew Ng is among the pioneers of deep learning and, according to Fortune, he’s also one of the most thoughtful AI experts on how real businesses are using the technology. Ng has become a champion for what he calls “data-centric AI.” Ng believes developers and businesses should be asking questions like: What data is used to train the algorithm? How is it gathered and processed? How is it governed?
Data-centric AI is the practice of “smartsizing” data so that a successful system can be built using the least amount of data possible. If data is carefully prepared, a company may need far less of it than it thinks, saving both time and money. Ng considers the shift to data-centric AI as important as the shift to deep learning that occurred over the past decade, and believes it is the most important change businesses need to make today.
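Data-centric AI is a workflow rather than an algorithm, but a hypothetical snippet can capture its spirit: before touching the model, inspect the labels, flag duplicate examples and conflicting annotations, and fix those first. The column names and records below are invented.

```python
# Illustrative data-centric check: find duplicate examples and conflicting labels
# before training anything. Data and column names are made up.
import pandas as pd

df = pd.DataFrame({
    "text":  ["scratch on casing", "scratch on casing", "bent pin",
              "bent pin", "discolored board"],
    "label": ["cosmetic", "functional", "functional",
              "functional", "cosmetic"],
})

# Exact duplicate rows add no new information.
exact_duplicates = df[df.duplicated()]

# Same input with different labels: a sign the labeling guidelines are unclear.
conflicts = (
    df.groupby("text")["label"]
      .nunique()
      .loc[lambda counts: counts > 1]
)

print("duplicate rows:\n", exact_duplicates)
print("inputs with conflicting labels:\n", conflicts)
# Resolving issues like these often helps a model more than collecting more raw data.
```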
Be Prepared for the Future of Deep Learning
As deep learning facilitates AI’s leap from the digital to the physical world, it is important to stay current with the latest technology advances. The IEEE Academy on Artificial Intelligence is designed for those who work in industry and need to understand new technical information quickly so they can apply it to their work. Learn more about the program.
Interested in enrolling? Visit the IEEE Learning Network.
Resources:
Placek, Martin. (16 January 2023). Size of the global autonomous vehicle market in 2021 and 2022, with a forecast through 2030. Statista.
Carsurance. (20 February 2022). 24 Self-Driving Car Statistics & Facts. Carsurance.
Global Industry Analysts, Inc. (April 2021). Natural Language Processing (NLP) – Global Market Trajectory & Analytics. Research and Markets
Gordon, Nicholas. (30 July 2021). Don’t buy the ‘big data’ hype, says cofounder of Google Brain. Fortune.
Ingle, Prathamesh. (9 July 2022). Top Deep Learning Applications in 2022. Marktechpost.
Fine, Ken. (15 January 2022). How digital experiences are fueling the new digital economy. VentureBeat.
Todorov, Georgi. (20 April 2022). 92 Stunning Artificial Intelligence Stats, Facts and Figures in 2022. Thrive My Way.
Woertman, Bert-Jan. (30 April 2022). Deep learning is bridging the gap between the digital and the real world. VentureBeat.
World Economic Forum. (20 July 2022). Is AI the only antidote to disinformation? The European Sting.
Technology has always presented numerous opportunities for improving and transforming healthcare. Such improvements include reducing human errors, improving clinical outcomes, facilitating care coordination, improving practice efficiencies, and tracking data over time. Machine learning (ML) has already proven effective at disease identification and prediction, recognizing patterns that are too subtle for the human eye to detect, guiding physicians towards better-targeted therapies and improved outcomes for patients. Researchers have also used ML as a tool to recognize signs of depression and suicidality by assessing patients’ voices, picking up changes in speech too subtle for a doctor to notice. Artificial intelligence (AI) and machine learning can expand our approach to mental health.
Mapping Mental Health
Researchers at Massachusetts General Hospital have developed an artificial intelligence model that generates ‘personalized maps’ to guide individuals toward improved mental well-being. In this study, the researchers developed a model based on deep learning, a type of machine learning that uses layered algorithmic architectures to analyze data. The researchers also identified the most depression-prone psychological configurations on the self-organizing maps, which they used to develop an algorithm to help individuals move away from potentially dangerous mental states.
Shortest Path to Human Happiness
Deep Longevity, in collaboration with Harvard Medical School, offers another deep learning approach to mental health. Researchers have created two digital models of psychology that work together to find a path to happiness.
The first model depicts the trajectories of the human mind as it ages. The second model is a self-organizing map that serves as the foundation for a recommendation engine for mental health applications. This learning algorithm splits all respondents into clusters depending on their likelihood of developing depression and determines the shortest path to mental stability for any individual.
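The researchers’ own models are not reproduced here, but the self-organizing map itself is a standard algorithm. The sketch below, using invented questionnaire-style data, shows the basic training loop: each respondent’s scores pull the closest map node and its neighbors toward them, so similar respondents end up in nearby regions of the map that can then be treated as clusters.

```python
# Minimal self-organizing map (SOM) in NumPy on invented questionnaire data.
# A generic illustration of the algorithm, not the researchers' model.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 8))      # 200 fake respondents, 8 scores each (0-1)

grid_h, grid_w, dim = 10, 10, data.shape[1]
weights = rng.random((grid_h, grid_w, dim))        # one prototype vector per map node
coords = np.stack(np.meshgrid(np.arange(grid_h),
                              np.arange(grid_w), indexing="ij"), axis=-1)

def train(weights, data, epochs=20, lr0=0.5, radius0=3.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        radius = radius0 * (1 - epoch / epochs) + 1  # shrinking neighborhood
        for x in data:
            # Best-matching unit: the node whose prototype is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(dists.argmin(), dists.shape)
            # Pull the BMU and its map neighbors toward x.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

weights = train(weights, data)
# Each respondent can now be assigned to its best-matching node;
# nearby nodes hold similar psychological profiles, forming clusters on the map.
```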
Combining Technology & Therapy is Key
Anyone with a smartphone can access conversational agent phone apps, also known as chatbots, which are meant to help users cope with the anxieties of daily life. These language processing systems can imitate human discussion by simulating conversations with a therapist via text. They can be a gateway to therapy or can reinforce lessons from in-person sessions. Research has shown that some people prefer interaction with chatbots rather than with real humans.
With the help of AI and machine learning, researchers are hoping the brain itself can help identify mental health issues. By applying specially designed algorithms to brain scans, labs could identify distinctive features that determine a patient’s optimal treatment. Machine learning could also assist in suicide prevention. Currently, doctors only have a slight advantage over random probability in recognizing this risk. But algorithms, using data that are easily accessible to health care providers, can predict attempts with significantly improved accuracy.
Stay Current with Technology Advances
From healthcare to security, machine learning plays a critical role in developing the technology that will determine our future. Covering machine learning models, algorithms, and platforms, Machine Learning: Predictive Analysis for Business Decisions is a five-course program from IEEE.
Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.
Interested in the program for yourself? Visit the IEEE Learning Network.
Resources
Deep Longevity LTD. (2 July 2022). Harvard Developed AI Identifies the Shortest Path to Human Happiness. SciTechDaily.
Gavrilova, Yulia. (4 July 2022). AI Chatbots & Mental Healthcare. IOT for All.
Glick, Molly. (1 July 2022). Your Next Therapist Could Be a Chatbot App. Discover.
Kennedy, Shania. (28 June 2022). AI-Generated ‘Maps’ May Help Improve Mental Well-being. Health IT Analytics.
Kesari, Ganes. (24 May 2021). AI Can Now Detect Depression from Just Your Voice. Forbes.
Rutherford, Lucie. (18 February 2022). Medicine Meets Big Data: Clinicians Look to AI For Disease Prediction and Prevention. UVAToday.
Savage, Neil. (25 March 2020). How AI is improving cancer diagnostics. Nature.
For the first time, autonomous vehicles (AVs) are now being tested on the streets of Austin, Texas and Miami, Florida, without drivers at the wheel. Designed by Argo AI, the vehicles are also being tested in Washington, D.C., Pittsburgh, Pennsylvania, Detroit, Michigan, and Palo Alto, California. The tests even extend to the German cities of Hamburg and Munich.
These tests are only the beginning. The company, whose autonomy platform uses lidar, sensors, and mapping software, is partnering with both ride-sharing service Lyft and Walmart’s delivery service. Together they aim to provide driverless taxi rides and autonomous grocery delivery.
Despite many advancements in AV technology, the road ahead remains uncertain. To replace human drivers, these vehicles need to be able to intuitively navigate roads. They must make split decisions the same way humans do. Current systems are still far from reaching this level of autonomy. However, some recent research breakthroughs may help engineers understand how to overcome this challenge.
Overly Conservative Decision Making Can Make AVs Easy to Fool
To make autonomous vehicles safer, engineers have traditionally designed them to be overly cautious. However, recent research from the University of California suggests this is part of the problem.
Since AVs cannot tell the difference between an object that makes its way onto a roadway by accident and an object placed intentionally, they can be tricked into making a wrong decision. For example, coming to a sudden stop in the middle of the road could potentially cause an accident.
Ironically, this problem is a result of engineers designing AV planning modules to operate with “an abundance of caution,” Ziwen Wan, a Ph.D. student in computer science at UC Irvine, explained to the UC newsroom.
“But our testing has found that the software can err on the side of being overly conservative,” Wan said. “This can lead to a car becoming a traffic obstruction, or worse.”
New Machine Learning Technique Helps AVs Maintain Steady Flow at Intersections
Another obstacle for autonomous systems is knowing how to move together in busy intersections. A team of researchers from MIT recently discovered a machine learning technique that can help AVs navigate signalized intersections. This allows traffic to continue flowing uninterrupted, while making traveling faster and more fuel efficient.
Rather than relying on typical mathematical models to navigate complex intersections, the researchers turned to deep reinforcement learning. This model-free method uses trial and error: the control algorithm learns to make a sequence of decisions and is rewarded when it makes the right ones. They refined this training with another technique known as reward shaping, in which they give the system some domain knowledge it would not be able to learn by itself. Using this method, the vehicle is penalized if it stops when it does not need to brake, which helps it learn to balance the competing goals of improving travel time and reducing emissions.
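The MIT system is not reproduced here, but reward shaping itself is straightforward to illustrate. In the toy Q-learning sketch below, a vehicle moves along an invented one-dimensional approach to an intersection, and the shaping term penalizes stopping when there is no need to brake, nudging the learned policy toward keeping traffic moving. The environment, rewards, and parameters are all hypothetical.

```python
# Toy Q-learning with reward shaping: penalize stopping when braking isn't needed.
# The environment is an invented 1-D approach to an intersection, not the MIT setup.
import numpy as np

rng = np.random.default_rng(1)
n_positions, GO, STOP = 11, 0, 1
Q = np.zeros((n_positions, 2))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(pos, action):
    """Advance one time step; return (next_pos, reward, done)."""
    reward = -0.1                      # small time penalty every step
    if action == STOP:
        reward -= 0.5                  # shaping term: a needless stop is penalized
        next_pos = pos
    else:
        next_pos = pos + 1
    done = next_pos == n_positions - 1
    if done:
        reward += 1.0                  # cleared the intersection
    return next_pos, reward, done

for episode in range(2000):
    pos, done = 0, False
    while not done:
        # Epsilon-greedy exploration over the two actions.
        action = int(rng.integers(2)) if rng.random() < epsilon else int(Q[pos].argmax())
        next_pos, reward, done = step(pos, action)
        # Standard Q-learning update.
        Q[pos, action] += alpha * (reward + gamma * Q[next_pos].max() - Q[pos, action])
        pos = next_pos

print("learned action per position (0 = go, 1 = stop):", Q.argmax(axis=1))
```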
Using simulations, the researchers found that if every vehicle on the road is autonomous, their control system could reduce fuel consumption by 18 percent and carbon dioxide emissions by 25 percent. Additionally, travel speeds could increase by 20 percent.
These research findings are just the start. With every advancement in AV technology, engineers are one step closer to creating a world where traveling is easier, faster, and safer.
Preparing for Roadways of the Future
Learn about the latest developments in AV technology with training in foundational and practical applications through the IEEE Guide to Autonomous Vehicle Technology. Created by leading experts in the field, this online seven-course training program explores the latest strategies and business-critical research on autonomous, connected, and intelligent vehicle technologies.
Connect with an IEEE Content Specialist today to learn more about purchasing the program for your organization.
Interested in purchasing the program for yourself? Access it now through the IEEE Learning Network (ILN)!
Resources
Bradbury, Rosie. (31 May 2022). There are now fully driverless cars with no human behind the wheel for safety on the roads of Miami and Austin. Business Insider.
Bell, Brian. (26 May 2022). Autonomous vehicles can be tricked into dangerous driving behavior. University of California.
Zewe, Adam. (17 May 2022). On the road to cleaner, greener, and faster driving. MIT News.
Hewlett Packard Enterprise (HPE) recently announced the launch of two innovative platforms that are expected to speed the development of machine learning models. The first, the HPE Machine Learning Development System, combines machine learning software with compute, accelerators, and networking. This combination shortens the time it takes to get results from building and training machine learning models from months to just days.
“Enterprises seek to incorporate AI and machine learning to differentiate their products and services. However, they are often confronted with complexity in setting up infrastructure required to build and train accurate AI models at scale,” stated Justin Hotard, executive vice president and general manager, HPC and AI, at HPE, in a press release. “The HPE Machine Learning Development System combines our proven end-to-end HPC solutions for deep learning with our innovative machine learning software platform into one system. This provides a performant out-of-the-box solution to accelerate time to value and outcomes with AI.”
The second platform, HPE Swarm Learning, combines blockchain technology with the revolutionary learning methods “federated learning” and “swarm learning.”
What Are Federated Learning and Swarm Learning?
Unlike traditional artificial intelligence (AI) models trained on centralized datasets, federated learning trains models on decentralized datasets. For example, let’s say a model is learning from data on a phone. Here, the model runs on the phone’s data but does not send the actual data to a central server — only insights gleaned from it. This method is much more secure for the owner of the phone. It also makes the system faster and more efficient, because it does not have to send large amounts of data back and forth to a central server.
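The pattern generalizes beyond phones. The sketch below shows a minimal "federated averaging" loop on invented data: each client fits a model locally on data that never leaves it, and only the model parameters travel to the server, where they are averaged.

```python
# Minimal federated averaging sketch on invented data.
# Each "client" trains a linear model locally; only the weights are shared and averaged.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def make_client_data(n=100):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data() for _ in range(5)]    # raw data stays "on device"
global_w = np.zeros(3)

for round_num in range(20):
    local_weights = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(10):                          # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_weights.append(w)                      # only parameters leave the client
    global_w = np.mean(local_weights, axis=0)        # the server averages the updates

print("recovered weights:", np.round(global_w, 2))   # close to [2.0, -1.0, 0.5]
```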
However, HPE Swarm Learning takes federated learning to a new level by using swarm learning, a subset of federated learning. Instead of relying on a central server, swarm learning uses blockchain. Blockchain is a decentralized digital ledger of transactions that records data by duplicating transactions and dispersing them to “nodes” across the network. As such, swarm learning makes the learning process even more decentralized, secure, and resilient.
This technology could accelerate machine learning while advancing a large number of applications, particularly within healthcare. As Ledger Insights reported, it could allow cancer research centers across the world to learn from one another’s valuable data without having to share the actual data.
“Swarm learning is an important movement in the AI market, with broad support across the public and private sectors. It serves to combine the power of expanding data sets with the innovation and insights from organizations across the globe,” Hotard told VentureBeat.
HPE Swarm Learning provides users with containers that are easily integrated with AI models via the HPE swarm API. Users can then instantly share AI model learnings with peers both inside and outside their organization. This enhances training without requiring them to share the original data, making it far more secure.
Swarm learning holds great potential for businesses. It can enable them to make faster decisions with better results, protect the privacy of their customers, share learnings with other organizations, and advance their data governance.
Is Your Company Embracing Machine Learning?
It’s important for organizations developing and deploying machine learning to understand the concepts and techniques necessary for driving machine learning-enabled business insights. Covering machine learning models, algorithms, and platforms, Machine Learning: Predictive Analysis for Business Decisions is a five-course program from IEEE.
Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.
Interested in the program for yourself? Visit the IEEE Learning Network.
Resources
(29 April 2022). HPE’s new platform lets customers build machine learning models quickly and at scale. TechCentral.ie.
(29 April 2022). HPE launches Swarm Learning using blockchain for AI, machine learning. Ledger Insights.
Plumb, Taryn. (27 April 2022). HPE looks to deliver the power of ‘swarm learning’. VentureBeat.
Press Release. (27 April 2022). Hewlett Packard Enterprise accelerates AI journey from POC to production with new solution for AI development and training at scale. hpe.com
Press Release. (27 April 2022). Hewlett Packard Enterprise ushers in next era in AI innovation with Swarm Learning solution built for the edge and distributed sites. hpe.com
Machine learning is quickly becoming one of the most popular technologies that companies are investing in. Experts are growing increasingly worried that these models have a dangerous propensity for making mistakes when it comes to applications such as image recognition software used to diagnose illnesses, or surveillance software used to recognize human faces. However, advancements in machine learning may soon help reduce bias in these systems.
Data Diversity Key to Overcoming Bias in Neural Networks
A team of researchers from MIT and Harvard has found that training machine learning models on diverse sets of data can help reduce bias, MIT News reports. Models trained on limited data are much more likely to discriminate when they make decisions. For example, facial recognition systems trained on data sets containing images of mostly white men are much more likely to give incorrect results when given images featuring women and people of color.
Relying on a method that used controlled data sets, the researchers sought to learn how training data impacts whether an artificial neural network (a machine learning model that uses brain-like nodes to process data) can figure out how to recognize new objects.
The researchers created data sets that contained an equal number of images of various objects in different positions (for example, photos of a car from multiple angles). They made some of these data sets more diverse by displaying the images from more points of view. Machine learning models trained on the more diverse data sets were better at generalizing to new viewpoints, supporting the idea that data diversity is necessary for overcoming bias. However, the researchers also found that the better a model gets at recognizing new objects, the worse it gets at recognizing objects it has already seen.
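The MIT and Harvard datasets are not shown here, but the general effect is easy to reproduce on synthetic data: a model trained on a narrow slice of viewing conditions generalizes worse to unseen conditions than one trained on a diverse slice. The sketch below uses an invented "viewpoint angle" feature and made-up numbers purely for illustration.

```python
# Synthetic illustration of dataset diversity vs. generalization.
# Not the MIT/Harvard data; "viewpoint" is just an invented rotation-angle feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_samples(n, angles):
    """Two object classes observed at the given viewpoint angles (degrees)."""
    angle = rng.choice(angles, size=n)
    label = rng.integers(0, 2, size=n)
    # Observed features depend on both the object class and the viewpoint.
    f1 = label + 0.8 * np.cos(np.radians(angle)) + rng.normal(0, 0.15, n)
    f2 = label - 0.8 * np.sin(np.radians(angle)) + rng.normal(0, 0.15, n)
    return np.column_stack([f1, f2]), label

all_angles = np.arange(0, 360, 30)
narrow_train = make_samples(500, angles=[0, 30])        # limited viewpoints
diverse_train = make_samples(500, angles=all_angles)    # many viewpoints
test = make_samples(2000, angles=all_angles)            # unseen mix of viewpoints

for name, (X, y) in [("narrow", narrow_train), ("diverse", diverse_train)]:
    model = LogisticRegression().fit(X, y)
    print(f"{name} training -> test accuracy: {model.score(*test):.3f}")
```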
“A neural network can overcome dataset bias, which is encouraging,” Xavier Boix, a research scientist and senior author of the paper, told MIT News. “But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design data sets in the first place.”
The team also found that training a separate model for each individual task, rather than training one model on all the tasks at the same time, helped models become less biased. This largely has to do with neuron specialization. During separate training, neural networks produce two different kinds of neurons, which Boix finds fascinating: one becomes good at recognizing object categories, and the other learns to recognize viewpoints. Conversely, if the network is trained on both tasks simultaneously, these neurons can become diluted and confused.
Machine learning has come a long way, but there is still much to learn in order to develop the field. While the technology is promising, organizations should take steps to ensure they are doing their best to prevent bias in the systems they use or create.
What Uses Do You Predict Machine Learning Will Have in Your Company?
By providing AI with the ability to learn from its experiences without needing explicit programming, machine learning plays a critical role in developing the technology. Covering machine learning models, algorithms, and platforms, Machine Learning: Predictive Analysis for Business Decisions is a five-course program from IEEE.
Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.
Interested in the program for yourself? Visit the IEEE Learning Network.
Resources
Zewe, Adam. (21 February 2022). Can machine-learning models overcome biased datasets? MIT News.
Mysteries in math and science have puzzled human researchers for centuries. Now, a pair of recent studies suggest that machine learning could help them make breakthroughs much faster.
A paper published in Nature suggests new connections between the fields of knot theory (the study of mathematical knots: closed, non-self-intersecting curves embedded in three dimensions) and representation theory (the study of how algebraic systems can act on vector spaces). DeepMind, an artificial intelligence company owned by Google parent Alphabet Inc., used machine learning to surface patterns and connections between the fields. The researchers, who were from the University of Oxford and the University of Sydney, used these connections to make new discoveries in mathematics. The Oxford researchers determined a relationship between algebraic and geometric invariants of knots, yielding a brand-new theorem, and the Sydney researchers nearly proved a conjecture about Kazhdan-Lusztig polynomials.
The study reveals how useful machine learning can be in helping humans solve complex problems in short periods of time. Human mathematicians have traditionally relied on their intuition to solve mathematical patterns, a process that can take many years. However, with machine learning, large amounts of mathematical data can be produced and studied far more easily and quickly.
“Pure mathematicians work by formulating conjectures and proving these, resulting in theorems. But where do the conjectures come from?” University of Oxford Professor Andras Juhasz, a co-author of the paper, told Science Daily. “We have demonstrated that, when guided by mathematical intuition, machine learning provides a powerful framework. It can uncover interesting and provable conjectures in areas where a large amount of data is available, or where the objects are too large to study with classical methods.”
How Machine Learning Could Enhance the Field of Chemistry
Another recent study shows how machine learning is helping researchers make big discoveries in chemistry. As Phys.org reports, a group of researchers from RWTH Aachen University in Germany and the University of Jyväskylä in Finland developed a system based on machine learning and computationally derived descriptors that can be used to discover special types of catalysts. In a paper published in the journal Science, the team described using machine learning algorithms to discover patterns in known types of ligands, the molecules or ions that bind to a central metal atom to form a catalytic complex. They were able to find new catalysts using these results, which could be used to make new products.
The discovery is exciting for chemists, who have traditionally relied on trial and error to discover new catalysts; help from machine learning speeds up the process. According to Phys.org, the researchers first trained their algorithm with examples of the general properties of known ligands. Next, they used the algorithm to filter 348 ligands and group them into clusters using computationally derived descriptors, so that each smaller subset of the original large dataset could be screened for a different purpose. Then the researchers verified that their processing was successful by predicting ligands that had been synthesized before. Lastly, they used the results to uncover special classes of catalysts used to develop new palladium(I) dimers. The researchers note that their system requires just five data points, compared to competing systems that typically depend on many more.
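The study's chemical descriptors are domain-specific, but the clustering step follows a standard pattern that can be sketched with invented descriptor vectors: standardize the numeric descriptors for each candidate ligand, cluster them, and screen each cluster separately. Nothing below reflects the researchers' actual data or code.

```python
# Generic illustration of clustering candidates by computed descriptors.
# Descriptor values are random stand-ins, not real chemical data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 348 hypothetical ligands, each summarized by a few numeric descriptors
# (size, electronic, and steric parameters in the real study).
n_ligands, n_descriptors = 348, 5
descriptors = rng.normal(size=(n_ligands, n_descriptors))

# Standardize so no single descriptor dominates, then cluster.
X = StandardScaler().fit_transform(descriptors)
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)

# Group ligand indices by cluster; each cluster can then be screened separately.
for cluster_id in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    print(f"cluster {cluster_id}: {len(members)} ligands")
```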
These discoveries are only the beginning. As machine learning continues to evolve, it has the potential to help researchers make revolutionary advancements in a number of fields.
Understand Machine Learning
By providing AI with the ability to learn from its experiences without needing explicit programming, machine learning plays a critical role in developing the technology. Covering machine learning models, algorithms, and platforms, Machine Learning: Predictive Analysis for Business Decisions is a five-course program from IEEE.
Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.
Interested in the program for yourself? Visit the IEEE Learning Network.
Resources
Yirka, Bob. (3 December 2021). Using machine learning and computationally derived descriptors to find special classes of catalysts. Phys.org.
University of Oxford. (1 December 2021). Machine learning helps mathematicians make new connections. Science Daily.
Machine learning models often rely on the simple features of a dataset to make decisions. Known as “shortcuts,” these types of decisions can lead to serious errors. For example, these shortcuts can cause models to make inaccurate medical diagnoses. However, a recent study from MIT poses a possible solution: by removing the simple characteristics of a dataset, the researchers forced the model to examine its more complex features.
“It is still difficult to tell why deep networks make the decisions that they do. In particular, which parts of the data these networks choose to focus upon when making a decision,” Joshua Robinson, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper, told MIT News. “If we can understand how shortcuts work in further detail, we can go even farther. We aim to answer some of the fundamental but very practical questions that are really important to people who are trying to deploy these networks.”
How To Avoid Shortcuts in Machine Learning
As MIT News reports, the new research centers on a type of self-supervised machine learning known as contrastive learning. Self-supervised models are trained on raw data that don’t have any label descriptions. In contrastive learning models, an encoder algorithm is trained to distinguish between pairs of similar inputs and pairs of dissimilar inputs, which encodes complex data, such as images, in a way the model can decipher. While this makes decision making more effective, the researchers found that these models also tend to take shortcuts: they fixate on the simplest features of an image to determine which pairs of inputs are similar and which are not. To solve this, the researchers made it more difficult for the model to differentiate between similar and dissimilar pairs, which altered the features the encoder used to make a decision.
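Contrastive learning is easiest to see through its loss function. The NumPy sketch below computes a standard InfoNCE-style contrastive loss on invented embeddings: each sample should score high against its own augmented view and low against every other sample. This is a generic illustration, not the MIT authors' method.

```python
# Generic InfoNCE-style contrastive loss on invented embeddings (NumPy only).
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Pretend encoder outputs for 8 inputs and their augmented views (16-dimensional).
z_anchor = normalize(rng.normal(size=(8, 16)))
z_positive = normalize(z_anchor + 0.1 * rng.normal(size=(8, 16)))  # similar pair

temperature = 0.1
# Similarity of every anchor to every candidate view.
logits = (z_anchor @ z_positive.T) / temperature

# For anchor i, view i is the "correct" match; all other views act as negatives.
log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_softmax))
print("contrastive (InfoNCE) loss:", round(float(loss), 3))

# Training an encoder to minimize this loss pulls matching pairs together
# and pushes non-matching pairs apart in embedding space.
```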
“If you make the task of discriminating between similar and dissimilar items harder and harder, then your system is forced to learn more meaningful information in the data, because without learning that it cannot solve the task,” Stefanie Jegelka, one of the researchers, told MIT News.
However, this caused the encoder to focus on some features at the expense of others, particularly the simpler ones. To address this, the researchers required the encoder to discriminate between pairs in two ways at once: using the simpler features, and using the data with the features it had already learned removed. Having the encoder solve the problem both ways simultaneously forced it to make better decisions.
Implicit Feature Modification
Known as “implicit feature modification,” this groundbreaking method does not rely on any input from humans. While it has the potential to help machine learning models avoid shortcuts, the researchers told MIT that it still needs to be refined. It should be tested on other types of self-supervised learning.
Machine learning is still in its infancy. However, innovations such as implicit feature modification have the potential to give artificial intelligence (AI) the ability to learn on its own. Not only will this make AI smarter and more efficient, it can also lead to revolutionary technological and scientific discoveries. Machine learning has the ability to solve complex problems, such as determining a protein’s 3D shape, that humans cannot.
Understand Machine Learning
By providing AI with the ability to learn from its experiences without needing explicit programming, machine learning plays a critical role in developing the technology. Covering machine learning models, algorithms, and platforms, Machine Learning: Predictive Analysis for Business Decisions is a five-course program from IEEE.
Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.
Interested in the program for yourself? Visit the IEEE Learning Network.
Resources
Zewe, Adam. (2 November 2021). Avoiding shortcut solutions in artificial intelligence. MIT News.
Callaway, Ewen. (30 Nov 2020). ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures. Nature.