Infocalypse Now

He Predicted The 2016 Fake News Crisis. Now He's Worried About An Information Apocalypse.

BuzzFeed News


What's inside?

The man who warned about fake news now sounds the alarm about an impending “Infocalypse.”


Editorial Rating

7

Qualities

  • Eye Opening
  • Concrete Examples
  • Hot Topic

Recommendation

By predicting the fake news scandals surrounding the 2016 US presidential election campaign, tech engineer Aviv Ovadya became an early Cassandra of Silicon Valley. Yet today, Ovadya believes that your children will look back on the post-2016 election fallout with nostalgia. He warns that AI-enhanced technology is steering the world toward a disinformation apocalypse. To learn more about Ovadya’s background and thinking, getAbstract recommends Charlie Warzel’s article on BuzzFeed News.

Take-Aways

  • Former Silicon Valley tech engineer Aviv Ovadya has been an early voice warning about malicious actors using online platforms to spread misinformation. 
  • AI, combined with audio and video editing tools, will soon allow bad actors to influence world events by issuing authentic-looking footage of fake events or political speeches. 

Summary

Aviv Ovadya abruptly left his career as a Silicon Valley engineer in 2016 when it became clear to him that the world was on the verge of an information crisis. Even before the 2016 US presidential election and the subsequent revelations about Russia’s stealth influence campaign, Ovadya warned that malicious actors could easily manipulate today’s most widely used online platforms, such as Facebook, Google, and Twitter, to spread misinformation and selectively target people with it.

“Our platformed and algorithmically optimized world is vulnerable…to propaganda, to misinformation, to dark targeted advertising from foreign governments…so much so that it threatens to undermine a cornerstone of human discourse: the credibility of fact.”

According to Ovadya, new technological tools will enable malicious actors to do even more harm in the future. Artificial intelligence and machine learning, combined with ever more sophisticated audio and video editing techniques, will enable the creation of nearly authentic-looking footage of fake events or political speeches that could easily precipitate an international crisis or worse. Furthermore, malicious actors can unleash AI-powered bots to flood legislators’ inboxes with messages that are nearly indistinguishable from those crafted by real humans. Similarly, machine-learning programs can use people’s online profiles to create simulations of them and interact with their friends, who believe they are talking with the real person.

“Already available tools for audio and video manipulation have begun to look like a potential fake news Manhattan Project.”

In a world where it becomes impossible to distinguish between real and fake content, people may simply turn away from information platforms altogether and enter a state of “reality apathy.” Such a scenario would be devastating for democracies, which depend on a well-informed citizenry and fact-based public discourse.

“Ovadya’s premonitions are particularly terrifying given the ease with which our democracy has already been manipulated by the most rudimentary, blunt-force misinformation techniques.”

Ovadya, who now works for the University of Michigan’s Center for Social Media Responsibility, is on a mission to educate the public, legislators, and technology professionals about the real prospect of fake news proliferation leading to an “Infocalypse.” He takes heart from the recent increase in public debate about the dangers of misinformation and online propaganda. He is also encouraged by recent advances in cryptographic image and audio authentication, which would allow platforms and users to weed out fake content. Yet Ovadya believes that things in the information space will get worse before they get better, especially because online platforms continue to rely on algorithms that favor clickbait and sensationalist content.

About the Author

Charlie Warzel is a senior writer for BuzzFeed News. 
