Organizations use artificial intelligence (AI) to enhance their cybersecurity. AI can recognize patterns, detect unusual behavior, and help identify potential risks; however, AI comes with risks of its own. Jasmine Henry, a journalist specializing in analytics and information security, explains that AI often reflects the natural human prejudices of the teams that produce it. Henry shares her insights into the causes of AI bias and offers ideas for building more diverse teams.
About the Author
Jasmine Henry is a journalist specializing in analytics and information security. Her work has appeared in Forbes, Time and dozens of other publications. She specializes in writing about emerging technology trends.