How Standards Shape Our World

In everyday life, standards help ensure the safety of everything from the food we eat to the appliances, devices, and medical equipment we operate. Standards also guide energy management for improved efficiency and govern IT security practices to protect sensitive information.

The concept of standardization dates back to ancient civilizations, many of which created universal systems of weights, measures, and guidelines to support their trading activities. The world’s first national standards body, the Engineering Standards Committee (forerunner of today’s British Standards Institution), was established in London in 1901. Following the launch of the World Bank in 1944 and the founding of the United Nations in 1945, the International Organization for Standardization (ISO) was officially created in 1947 to “establish international standards for goods and services, promote global cooperation, and enhance quality, safety, and efficiency” in the post-WWII era.

Streamlining Society and Business

Since then, standards have had an indelible impact on our lives – enhancing safety, promoting technological innovation, and streamlining global trade. Below are some interesting facts about global standards:

  • The short name “ISO” is not an acronym; it derives from the Greek word “isos,” meaning “equal,” which is why it is the same in every language.
  • More than 100,000 standards are recognized in the U.S. alone, and over 30,000 international standards are acknowledged globally.
  • Standards are foundational for a wide range of industries. Examples include:
    • Generally Accepted Accounting Principles (GAAP) used in financial reporting
    • Common Core Standards in education
    • The National Electrical Code (NEC) governing safe electrical installations in the U.S.
    • The International Energy Conservation Code (IECC) setting minimum energy-efficiency requirements for buildings
    • Bluetooth standards defining how wireless devices connect and communicate
    • HTML and CSS standards defining the structure, look, and feel of web content
    • Even credit card sizes are standardized to ensure their compatibility worldwide!
  • 14 October marks World Standards Day (first observed in 1970), celebrating the importance of standards and those who develop them.

The Role of IEEE in the Standards Process

For over a century, the IEEE Standards Association (IEEE SA) has helped shape global technology. As one of the most respected standards organizations, IEEE collaborates with thought leaders in more than 160 countries to advance innovation, safety, and interoperability. Its portfolio includes more than 1,200 active standards, with another 1,000+ currently in development.

IEEE standards span a wide range of disciplines—telecommunications, IT, electric vehicles, smart grids, blockchain, electromagnetic compatibility, and more. By providing a framework for compliance and innovation, these standards empower professionals to develop reliable, forward-thinking technologies. 

IEEE: Your Expert Source on Standards

IEEE offers many informative standards-related courses across a diverse range of fields.

  • IEEE 802.11ax: An Overview of High-Efficiency Wi-Fi (Wi-Fi 6)
    This 6-hour course program examines the underlying technology behind the latest Wi-Fi 6 products and the 802.11ax standard, which is focused on achieving higher efficiency and improving the user experience.
  • Introduction to IEEE Std 1547-2018: Connecting Distributed Energy Resources 
    This 6-hour course program reviews the interconnection testing and verification requirements included in the IEEE 1547 standard, requirements for interoperability and open access at the DER, and power quality issues associated with DER systems.
  • AI Standards: Roadmap for Ethical and Responsible Digital Environments 
    This 5-hour course program offers a comprehensive approach to creating ethical and responsible digital ecosystems based on the principles of Honesty & Impartiality, Protection & Security, and Safe Disclosure & Privacy.
  • IEEE Software and Systems Engineering Standards Used in Aerospace and Defense
    This 5-hour course program explores systems and software engineering concepts applicable to the Aerospace and Defense industries and covers such topics as the selection and application of appropriate IEEE standards for life cycle processes, solving complex issues through interrelated life cycle processes, and techniques for rapid but high-quality delivery.
  • NESC® 2023: National Electrical Safety Code
    This 7-hour course program educates power utility professionals on the rules, regulations, and changes in the 2023 edition of the National Electrical Safety Code (NESC) and reviews such specific topics as supply station safety, grounding, and overhead and underground requirements.
  • Software & Hardware Configuration Management in Systems Engineering
    This 5-hour course program reviews essential configuration management core concepts for both hardware and software, from the requirements specified in the IEEE 828 standard to best CM practices, modern CM approaches such as “Agile SCM,” and methods to assess and improve existing organizational CM practices.

Explore and enroll in IEEE standards courses today on the IEEE Learning Network. For institutional access, contact a specialist today!

Artificial intelligence (AI) is more present in our lives than ever. AI can predict what we want to see as we scroll through social media, and it can help solve global challenges like hunger, climate change, and pandemics. The technology has countless real-world applications. A McKinsey survey shows that AI adoption followed an upward trajectory in 2021 and continues to climb: “56 percent of all respondents report AI adoption in at least one function.”

However, AI technology is not always beneficial—AI can violate privacy, AI-generated output cannot always be explained, and AI can be biased. When the data feeding an AI system is not representative of the diversity and plurality of our societies, it can produce biased or discriminatory outcomes.
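
To make that failure mode concrete, here is a minimal sketch in Python (all data, labels, and groups below are hypothetical, invented purely for illustration): a trivial model trained on a sample that underrepresents one group ends up misclassifying every case from that group.

  # Minimal sketch (all data hypothetical): how unrepresentative training
  # data skews outcomes. A trivial model that always predicts the most
  # common label seen in training is fit on a sample where group B is
  # badly underrepresented.

  from collections import Counter

  # Training sample: 90 group-A records (mostly label 1) and only
  # 10 group-B records (mostly label 0).
  train = [("A", 1)] * 80 + [("A", 0)] * 10 + [("B", 0)] * 9 + [("B", 1)] * 1

  # "Fit" the trivial model: remember the single most common label.
  majority_label = Counter(label for _, label in train).most_common(1)[0][0]

  # Evaluate on a balanced population where group B's typical label is 0.
  test = [("A", 1)] * 50 + [("B", 0)] * 50
  errors = Counter(group for group, label in test if label != majority_label)

  print(majority_label)  # 1 -- the model simply echoes the majority group
  print(errors)          # Counter({'B': 50}) -- every group-B case is wrong

The model is deliberately crude, but the pattern generalizes: when one group dominates the training data, errors concentrate on the groups the data leaves out.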

An often-cited example is facial recognition technology. Used to access mobile phones and bank accounts, it is also increasingly employed by law enforcement authorities. With persistent problems accurately identifying women and darker-skinned people, facial recognition is far from perfected. This is not surprising given how AI is developed: only about 1 in 10 software developers worldwide is a woman, and developers come overwhelmingly from Western countries.

Hardcoding Ethics into AI

Humans can be biased, but people possess the ability to recognize when their conclusions may be biased, discriminatory, or unethical. Whatever the recent debate over the “sentient” qualities of AI programs, they cannot “think” or “feel”; their performance depends entirely on their coding. Because AI lacks this metacognitive ability, it is up to people to override unethical decisions when they arise. Unethical AI is not simply a consequence of programming deficiencies, but of failing to fully consider how ethical requirements should be incorporated into the learning algorithm during development.

Organizations using AI need to become more proactive and formulate actionable AI ethics policies by thinking about ethics from the start. This approach is already deemed essential for cyber security products, where “security by design” development principles drive the need to assess risks and hardcode security from the outset. The same mindset should be applied to the development of AI tools so they can be deployed responsibly and without bias. This will be critical as societies and cultures change over time; AI products should always reflect current values.

How to Create an AI Ethics Policy 

Aligning AI with ethical principles is not just a moral responsibility; it is also a business imperative, and it requires deliberate action to build an AI ethics-aware culture. Reid Blackman, CEO of Virtue, recommends instilling actionable ethics into AI systems by following these seven guidelines:

  1. Bring clarity to AI standards
  2. Increase awareness among everyone in the organization
  3. Thoroughly incorporate AI ethics into team culture
  4. Make sure there are AI experts as part of an AI ethics committee
  5. Introduce accountability
  6. Measure everything: set key performance indicators (KPIs) to track whether your organization is meeting its goals for AI standard adoption (a minimal KPI sketch follows this list)
  7. Gain executive sponsorship
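
As a small illustration of guideline 6, the Python sketch below computes one possible KPI: the share of deployed models that have passed an internal ethics review. The model names and review flags are hypothetical, and the metric itself is just one example of what an organization might track.

  # Minimal KPI sketch (hypothetical records): the share of deployed
  # models that have completed an internal ethics review.

  models = [
      {"name": "churn-predictor", "ethics_review_passed": True},
      {"name": "resume-screener", "ethics_review_passed": False},
      {"name": "fraud-detector", "ethics_review_passed": True},
  ]

  coverage = sum(m["ethics_review_passed"] for m in models) / len(models)
  print(f"ethics-review coverage: {coverage:.0%}")  # 67%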

Prepare for an AI Future

The AI market is projected to surpass US$1,597 billion by 2030. Organizations and technology professionals should prepare for a changing landscape when it comes to the future of AI.

Get a jumpstart on learning about ethics in artificial intelligence systems. Check out Artificial Intelligence and Ethics in Design, a five-course program from IEEE that provides the background knowledge professionals need to integrate AI and autonomous systems within their companies or deliver them to customers and end users.

Contact an IEEE Account Specialist to get organizational access or check it out for yourself on the IEEE Learning Network.


Resources

Bedzow, Ira. (30 June 2022). What It Takes to Create and Implement Ethical Artificial Intelligence. Forbes.

Boston Consulting Group (BCG). (7 July 2022). 87% of Climate and AI Leaders Believe That AI Is Critical in the Fight Against Climate Change. PR Newswire. 

Chui, Michael et al. (8 December 2021). The state of AI in 2021. McKinsey.

Henderson, Emily. (10 June 2022). Using artificial intelligence to discover new antivirals against COVID-19 and future pandemics. News Medical.

McKendrick, Joe. (10 June 2022). 7 Steps to More Ethical Artificial Intelligence. Forbes. 

Mubarik, Abu. (20 June 2022). This is how former Wall Street trader Sara Menker from Ethiopia is using AI to remove world hunger. Face2Face Africa.

Precedence Research. (19 April 2022). Artificial Intelligence Market Size to Surpass Around US$ 1,597.1 Bn By 2030. GlobeNewswire.

Ramos, Gabriela and Koukku-Ronde, Ritva. (22 June 2022). A new global standard for AI ethics. UNESCO.

Smith, Wesley. (26 June 2022). Five Reasons AI Programs Are Not ‘Persons’. Mind Matters News.

Yu, Eileen. (30 June 2022). AI ethics should be hardcoded like security by design. ZDNet.

Artificial intelligence (AI) systems are evolving fast. However, ethical standards that ensure these systems don’t harm the public, such as those that aim to prevent unintentional biases rooted in the data the systems are trained on, have been slower to evolve. According to a global survey conducted by MIT Sloan Management Review, which polled over 1,000 executives, 82% of managers in organizations with at least US$100 million in annual revenues agreed or strongly agreed that responsible AI (RAI) should be included in their top management agenda. At the same time, only 50% reported that RAI is actually part of their top management’s agenda.

How can organizations that develop or use artificial intelligence ensure RAI is not just an afterthought? A recent panel of global AI experts, organized by MIT Sloan Management Review and global consulting firm BCG, concluded with the following takeaways:

  • Leadership needs to understand why RAI is important to the organization’s strategy. Otherwise, RAI may never make it into the agendas of the organization’s major decision makers.
  • Determine whether RAI is part of your AI strategy or a part of your wider organizational goals, such as corporate responsibility. Without an understanding of this, leadership may not fully grasp that it should be integrated into their larger agenda.
  • Look at RAI as an urgent need that must be integrated now. Otherwise, you may miss valuable opportunities to prevent risk and harm down the line.

What Are the Fundamental Principles of AI Ethics?

Understanding the core principles of AI ethics is the first step to developing an effective AI standards framework. Such a framework should align with the organization’s mission, as well as with any regulations the organization is subject to through its implementation of AI systems. According to TechTarget, the basic principles of ethical AI include:

  1. Fairness: The AI system does not contain biases and functions equally well for all groups (a minimal per-group audit is sketched after this list)
  2. Accountability: The AI system has ways to identify who is responsible across different stages of the AI life cycle if something goes wrong. It also provides ways for humans to supervise and control the system
  3. Transparency: When the AI system makes a decision, it allows humans to understand why it came to that conclusion. This is essential for building trust
  4. Safety: The AI system is equipped with effective security controls
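
To make the fairness principle concrete, here is a minimal per-group audit sketch in Python (the labels, predictions, and group memberships are hypothetical): it compares a model’s accuracy across groups and reports the largest gap, which an organization could check against a tolerance of its own choosing.

  # Minimal fairness-audit sketch (hypothetical data): compare a model's
  # accuracy across demographic groups and report the largest gap.

  from collections import defaultdict

  def per_group_accuracy(y_true, y_pred, groups):
      """Return the model's accuracy separately for each group."""
      hits, totals = defaultdict(int), defaultdict(int)
      for truth, pred, group in zip(y_true, y_pred, groups):
          totals[group] += 1
          hits[group] += int(truth == pred)
      return {g: hits[g] / totals[g] for g in totals}

  # Hypothetical audit data: true labels, predictions, group membership.
  y_true = [1, 0, 1, 1, 0, 1, 0, 0]
  y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
  groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

  accuracy = per_group_accuracy(y_true, y_pred, groups)
  gap = max(accuracy.values()) - min(accuracy.values())
  print(accuracy)                # {'A': 0.75, 'B': 0.5}
  print(f"accuracy gap: {gap}")  # 0.25 -- flag for review if too large

The same structure extends to other metrics, such as false-positive rates, and the resulting audit record also supports the accountability and transparency principles by documenting what was checked and why a result was flagged.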

Incorporating These Principles into AI Systems

During an interview with Analytics India Magazine, Layak Singh, CEO of insurance platform Artivatic AI, said the company reduces bias in its AI by defining the business problems it wants to solve with end users in mind, then configuring its data collection methods to incorporate diverse perspectives.

“We also ensure that we clearly understand our training data, as this is where most biases are introduced and can be avoided,” Singh said. “With that aim, we also ensure an ML [machine learning] team that’s assorted as they ask dissimilar queries and thus interact with the AI models in various ways. This leads to identifying errors before the model is underway in production. It is the best manner to reduce bias both at the beginning and while retraining models.”

Additionally, the company places a major focus on feedback, keeping channels such as forum discussions open in order to run continual audits and upgrades.

Ensuring AI systems are ethical is becoming essential to building trust with clients and customers. Don’t wait until that trust is broken; start developing an ethical AI standards framework today.

Incorporating AI Standards at Your Organization

An online five-course program, AI Standards: Roadmap for Ethical and Responsible Digital Environments, provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems. Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Krishna, Sri. (20 April 2022). Talking Ethical AI with Artivatic’s Layak Singh. Analytics India Magazine.

Kiron, David, Renieris, Elizabeth, and Mills, Steven. (19 April 2022). Why Top Management Should Focus on Responsible AI. MIT Sloan Management Review.

Kompella, Kashyap. (1 April 2022). How AI ethics is the cornerstone of governance. TechTarget.

As artificial intelligence (AI) becomes more common, so do its risks, such as its potential for bias and privacy infringements. As discussed in previous posts, governments around the world are beginning to develop requirements and guidance around AI. Organizations not yet developing AI standards in alignment with these requirements may soon struggle to keep up with regulations. Nevertheless, there are steps they can take now to navigate these shifting requirements.

Six Steps for Managing Risk in AI

According to Michael K. Atkinson and Rukiya Mohamed, attorneys at Crowell & Moring specializing in national security and regulatory enforcement, AI risk management should be approached like onboarding new employees. Drawing on AI frameworks designed by governmental agencies, such as the Intelligence Community’s AI Ethics Framework and the Ethics Guidelines for Trustworthy AI from the European Commission’s High-Level Expert Group on Artificial Intelligence, they recommend six steps to reduce risk in your AI.

  1. Build integrity into your organization’s AI from the design stage. “Just as employees need to be aligned with an organization’s values, so too does AI,” Atkinson and Mohamed write in VentureBeat. “Organizations should set the right tone from the top on how they will responsibly develop, deploy, evaluate, and secure AI consistent with their core values and a culture of integrity.”
  2. Onboard AI as your organization would new employees and third-party vendors. “As with humans, this due diligence process should be risk-based,” the authors write. This will involve checking “the equivalent of the AI’s resume and transcript,” such as “the quality, reliability, and validity of data sources used to train the AI.” Additionally, it involves reviewing the risks of using AI whose proprietary data is not available. It also includes checking “the equivalent of references to identify potential biases or safety concerns in the AI’s past performance.” As a further step, organizations should perform “deep background” checks, including reviewing source code with the providers’ consent to “root out any security or insider threat concerns.”
  3. Ingrain AI into your organizational culture before deployment. “Like other forms of intelligence, AI needs to understand the organization’s code of conduct and applicable legal limits. It then needs to adopt and retain them over time,” Atkinson and Mohamed write. “AI also needs to be taught to report alleged wrongdoing by itself and others. Through AI risk and impact assessments, organizations can assess privacy, civil liberties, and civil rights implications for each new AI system.”
  4. Manage, evaluate, and hold AI accountable. Just as an organization might take a risk-based, probational approach to a new employee’s responsibilities, it should do the same with AI. “Like humans, AI needs to be appropriately supervised, disciplined for abuse, rewarded for success, and able and willing to cooperate meaningfully in audits and investigations,” the authors write. They suggest companies routinely document the AI’s performance, including any corrective actions taken to ensure it produces desired results (a minimal audit-log sketch follows this list).
  5. Keep AI safe from various dangers, such as physical harm and cyber threats. This is similar to how companies protect employees. “For especially risky or valuable AI systems, safety precautions may include insurance coverage. This is similar to the insurance that companies maintain for key executives,” they write. 
  6. Terminate or retire AI systems that don’t meet your organization’s values and standards or that simply age out. “Organizations should define, develop, and implement transfer, termination, and retirement procedures for AI systems,” Atkinson and Mohamed write. “For especially high-consequence AI systems, there should be clear mechanisms to, in effect, escort AI out of the building by disengaging and deactivating it when things go wrong.”
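
To ground step 4, the sketch below shows one way to give an AI system an auditable paper trail in Python. The field names, file format, and example values are illustrative assumptions, not something the authors prescribe: each model decision, along with any human override, is appended to a log that later audits and investigations can replay.

  # Minimal audit-trail sketch (schema is an illustrative assumption):
  # append each model decision, plus any human override, to a JSON-lines
  # log so performance and corrective actions can be reviewed later.

  import json
  import time

  AUDIT_LOG = "ai_decisions.jsonl"

  def log_decision(model_version, inputs, output, reviewer=None):
      """Append one model decision (and any human override) to the log."""
      record = {
          "timestamp": time.time(),
          "model_version": model_version,
          "inputs": inputs,
          "output": output,
          "human_reviewer": reviewer,  # set when a person overrides the AI
      }
      with open(AUDIT_LOG, "a") as f:
          f.write(json.dumps(record) + "\n")

  # Hypothetical usage: an automated decision, then a human override.
  log_decision("credit-model-1.3", {"income": 52000}, {"approved": False})
  log_decision("credit-model-1.3", {"income": 52000}, {"approved": True},
               reviewer="compliance-officer-7")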

Keeping up with evolving AI requirements and guidelines isn’t easy. However, managing risk in your AI systems isn’t much different from how you already manage it with employees. Like humans, AI systems are prone to bias and mistakes, so it’s fair to treat them with the same level of scrutiny.

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries such as healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to apply rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Atkinson, Michael K. and Mohamed, Rukiya. (19 September 2021). Want to develop a risk-management framework for AI? Treat it like a human. VentureBeat.