

Organizations use artificial intelligence (AI) to enhance their cybersecurity. AI can recognize patterns, detect unusual behavior, and help identify potential risks; however, AI comes with risks of its own. Jasmine Henry, a journalist specializing in analytics and information security, explains that AI often reflects the human prejudices of the teams that produce it. Henry shares her insights into the causes of AI bias and offers ideas for fostering greater diversity.


Organizations increasingly use artificial intelligence (AI) to augment their cybersecurity, but AI applications carry the inherent human prejudices of those who develop them.

Artificial intelligence enhances cybersecurity with its ability to detect unusual patterns of behavior. Close to two-thirds of businesses and organizations are expected to use AI in some capacity in 2020 to strengthen their security efforts.
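The kind of unusual-behavior detection described above can be illustrated with a minimal sketch. The data, the z-score method, and the threshold here are all illustrative assumptions, not from the article; production security tools use far more sophisticated models.

```python
# Minimal z-score anomaly detector over daily login counts.
# All values and the threshold are hypothetical, chosen for illustration.
import statistics

def find_anomalies(counts, threshold=2.0):
    """Return the indices of values that deviate from the mean by more
    than `threshold` sample standard deviations."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [i for i, c in enumerate(counts)
            if stdev and abs(c - mean) / stdev > threshold]

# Day 6 shows a sudden spike in logins, the kind of pattern an
# AI-driven monitor would flag for review.
daily_logins = [102, 98, 105, 99, 101, 97, 480, 103]
print(find_anomalies(daily_logins))  # → [6]
```

Real systems replace the fixed threshold with learned baselines, which is precisely where biased training data can skew what counts as "unusual."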

This increased use of AI concerns many in the security industry because AI algorithms are often biased. Bias creeps into AI in three ways:

  1. “Biased business rules” – ...

About the Author

Jasmine Henry is a journalist specializing in analytics and information security. Her work has appeared in Forbes, Time, and dozens of other publications, with a focus on emerging technology trends.
