Biased AI Is Another Sign We Need to Solve the Cybersecurity Diversity Problem

Security Intelligence

5 min read
2 take-aways

What's inside?

Inherent biases in artificial intelligence jeopardize its effectiveness in cybersecurity.


Editorial Rating

7

Qualities

  • Analytical
  • Applicable
  • Concrete Examples

Recommendation

Organizations use artificial intelligence (AI) to enhance their cybersecurity. AI can recognize patterns, detect unusual behavior, and help identify potential risks; however, AI comes with risks of its own. Jasmine Henry, a journalist specializing in analytics and information security, explains that AI often reflects the natural human prejudices of the teams that produce it. Henry shares her insights into the causes of AI bias and offers ideas for building more diverse development teams.

Summary

Organizations increasingly use artificial intelligence (AI) to augment their cybersecurity, but AI applications carry the inherent human prejudices of those who develop them.

Artificial intelligence enhances cybersecurity with its ability to detect unusual patterns of behavior. Close to two-thirds of businesses and organizations are expected to use AI in some capacity in 2020 to strengthen their security efforts.
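The anomaly detection described above can be sketched in a few lines: learn a baseline of normal activity, then flag observations that deviate sharply from it. The sketch below is purely illustrative and is not from the article; the function name, data, and threshold are assumptions. It also hints at the summary's central concern: if the baseline itself is skewed by biased data, what the system labels "unusual" will be skewed too.

```python
# Illustrative sketch of baseline-driven anomaly detection, the kind of
# pattern-spotting the summary attributes to security AI. All names,
# data, and the z-score threshold here are hypothetical examples.
from statistics import mean, stdev

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Return indices of observations more than z_threshold
    standard deviations away from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [i for i, x in enumerate(observed)
            if sigma > 0 and abs(x - mu) / sigma > z_threshold]

# Example: daily login counts; the final observation spikes far above
# the baseline, so it is flagged as anomalous.
baseline = [100, 104, 98, 101, 97, 103, 99]
observed = [102, 99, 100, 400]
print(flag_anomalies(baseline, observed))  # prints [3]
```

Note that the model's notion of "normal" is entirely determined by the baseline data it was given; a baseline collected from a narrow or unrepresentative slice of users would misclassify legitimate behavior, which is one concrete way bias enters such systems.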

This increased use of AI concerns many in the security industry because AI algorithms are often biased. Bias creeps into AI in three ways:

  1. “Biased business rules” – ...

About the Author

Jasmine Henry is a journalist specializing in analytics and information security. Her work has appeared in Forbes, Time and dozens of other publications, and she frequently writes about emerging technology trends.

