Biased AI Is Another Sign We Need to Solve the Cybersecurity Diversity Problem
Article

Editorial Rating

7

Qualities

  • Analytical
  • Applicable
  • Concrete Examples

Recommendation

Organizations use artificial intelligence (AI) to enhance their cybersecurity. AI can recognize patterns, detect unusual behavior, and help identify potential risks; however, AI carries risks of its own. Jasmine Henry, a journalist specializing in analytics and information security, explains that AI often reflects the human prejudices of the teams that produce it. Henry examines the causes of AI bias and offers ideas for building more diverse teams.

Summary

Organizations increasingly use artificial intelligence (AI) to augment their cybersecurity, but AI applications reflect the inherent human prejudices of those who develop them.

Artificial intelligence enhances cybersecurity through its ability to detect unusual patterns of behavior. Close to two-thirds of businesses and organizations were expected to use AI in some capacity in 2020 to strengthen their security efforts.

This increased use of AI concerns many in the security industry because AI algorithms are often biased. Bias creeps into AI in three ways:

  1. “Biased business rules” – ...

About the Author

Jasmine Henry is a journalist specializing in analytics and information security. Her work has appeared in Forbes, Time and dozens of other publications. She specializes in writing about emerging technology trends.

