U.S. Federal Trade Commission: Seven Ways to Help Ensure Fair AI Systems


In April, the European Union (EU) and the U.S. Federal Trade Commission (FTC) each released regulations aimed at curbing the potential risks of artificial intelligence (AI). While the FTC regulations apply nationally in the U.S., enforcement of the EU regulations is left to EU member states. Both regulations apply to private companies only, though they could affect vendors that work with governments.

The EU regulations are broad guidelines focused on curtailing mass surveillance and the use of AI as a tool of manipulation. The FTC regulations take a hard line against private companies that use or sell biased algorithms. Ultimately, the regulations impose legal repercussions on companies that violate the FTC Act, which outlaws unfair methods of competition as well as unfair or deceptive acts or practices affecting commerce. For example, Section 5 of the FTC Act “prohibits unfair or deceptive practices,” which would include the sale or use of racially biased algorithms, the agency explained in a blog post.

The post, titled “Aiming for truth, fairness, and equity in your company’s use of AI,” offers recommendations for organizations on how they can prevent bias in their AI systems while still benefiting from the technology.

Recommendations from the FTC

According to the FTC, there are seven ways organizations can develop fair AI systems that can help them avoid violating regulations.

1) Have a solid foundation
The dataset used to train your AI is the foundation of the system. If the data overwhelmingly represents majority groups, your system will likely be biased. To help ensure your system is fair, make sure the datasets are representative of everyone, especially women and minority groups.
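As an illustration of this kind of check (this code is not part of the FTC's guidance), a minimal Python sketch might compare each group's share of a training set against a population benchmark. The function name, the record layout, and the 80% flagging threshold are all illustrative assumptions:

```python
from collections import Counter

def representation_report(records, group_key, benchmarks):
    """Compare each group's share of the dataset to a population benchmark.

    records    -- list of dicts, each with a demographic label under group_key
    benchmarks -- expected population share per group, e.g. {"female": 0.51}
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in benchmarks.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            # Illustrative rule of thumb: flag groups whose observed share
            # falls below 80% of their expected population share.
            "underrepresented": observed < 0.8 * expected,
        }
    return report

# Toy training set skewed toward one group
records = [{"gender": "male"}] * 70 + [{"gender": "female"}] * 30
print(representation_report(records, "gender", {"male": 0.49, "female": 0.51}))
```

A report like this only surfaces skew; deciding what counts as "representative" for a given application still requires domain judgment.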

2) Pay attention to discriminatory results
Through its PrivacyCon conference, the FTC has found that AI applications designed with good intentions still often contain racial bias. To prevent this, be sure to test your algorithms both before and after you deploy your AI models.
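One common way to test deployed outcomes (again, a hedged sketch rather than anything the FTC prescribes) is to compare selection rates between groups. The "four-fifths rule" threshold used below comes from U.S. employment-discrimination guidance and is applied here only as a widely used heuristic; the function names and data shape are assumptions for illustration:

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) pairs; returns rate per group."""
    totals, selected = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.

    Ratios below ~0.8 (the 'four-fifths rule' heuristic) are a common
    red flag worth investigating before and after deployment.
    """
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Toy model outcomes: group A is selected at 60%, group B at 30%
outcomes = ([("A", True)] * 60 + [("A", False)] * 40 +
            [("B", True)] * 30 + [("B", False)] * 70)
ratio = disparate_impact(outcomes, protected="B", reference="A")
print(round(ratio, 2))  # 0.5 -- well below 0.8, so flag for review
```

Running a check like this on held-out data before launch, and again on live predictions after launch, matches the FTC's advice to test both before and after deployment.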

3) Understand the importance of independence and transparency
To ensure your algorithms aren’t biased, rely on “independent standards” and “transparent frameworks.” Examples include conducting independent audits and publishing the results, and letting outside experts inspect your data and source code.

4) Don’t overpromise on your AI algorithm’s capabilities
The FTC Act requires that your communications to customers be “truthful, non-deceptive, and backed up by evidence.” If you tell clients that your AI system provides fully unbiased hiring decisions while its training data underrepresents women and minorities, you could face enforcement action.

5) Be honest about how your organization uses data
Be transparent with clients and customers about how you obtain their data and how the data will be used.

6) Make sure your AI model does more good than harm
Under the FTC Act, a practice that does more harm than good is considered “unfair.” If your AI model harms a group of people and the damage outweighs the benefits, particularly when the damage could have been avoided, the FTC can take action.

7) Hold your organization accountable
Whether it is an employee or an algorithm that violates FTC rules, the FTC will hold your organization accountable. To avoid this, make sure your organization follows the regulations and enforces them internally.

Designing fair and unbiased AI systems is a challenge. However, following these guidelines will make it easier to build systems people can trust.

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries such as healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to uphold rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!


Heaven, Will Douglas. (21 April 2021). This has just become a big week for AI regulation. MIT Technology Review. 

Jillson, Elisa. (19 April 2021). Aiming for truth, fairness, and equity in your company’s use of AI. U.S. Federal Trade Commission Business Blog.
