The Supply of Disinformation Soon Will Be Infinite

Disinformation campaigns used to require a lot of human effort, but artificial intelligence will take them to a whole new level.

The Atlantic

What's inside?

Humans once crafted propaganda by hand; now machines can generate it just as convincingly.


Editorial Rating

9

Qualities

  • Analytical
  • Applicable
  • Bold

Recommendation

In the digital age, social media accounts and plagiarized articles can launch media careers or disinformation campaigns. The next frontier is content generated by artificial intelligence (AI) – text that people find difficult to distinguish from the work of old-fashioned propagandists. The digital world was already prone to disinformation and, as Renée DiResta reports in The Atlantic, this new development will deepen suspicion and distrust. She raises a giant red flag: If you thought influencers and propagandists were slippery, they're about to get a lot more so.

Take-Aways

  • AI-generated text is propaganda’s new frontier.
  • In the future, less sophisticated actors will be able to create fake media.
  • AI-generated text will further erode people’s trust in the media.

Summary

AI-generated text is propaganda’s new frontier.

Just a few years ago, traditional investigative techniques could expose organizations disseminating disinformation or propaganda under fake bylines. For example, in 2016, a purported author pitched articles to progressive media sites, which accepted some of them for publication. Two years later, The Washington Post, drawing on an FBI report, revealed that the author did not exist: The byline was a fabrication of a Russian intelligence agency. Further investigation found that the fictitious author had plagiarized other fictitious authors.

“We found a sprawling web of nonexistent authors turning Russian-government talking points into thousands of opinion pieces and placing them in sympathetic Western publications, with crowds of fake people discussing the same themes on Twitter.”

Because the internet makes searching for articles and images easy, users can expose fictitious writers and their equally fictitious social media collaborators. Russian disinformation operators adapted accordingly: They created more credible social media profiles, decked out with unique AI-generated images, and thus duped actual journalists into writing original content. Even that strategy has weaknesses, however.

An artificial intelligence research group recently created a tool that gives propagandists an even more powerful alternative: The AI-based “generative pre-trained transformer,” or GPT-3, can manufacture articles and social media posts so fluent that detecting the absence of a human author becomes far more difficult.

In the future, less sophisticated actors will be able to create fake media.

The availability of GPT-3 will transform international propaganda campaigns. It can fabricate everything from credible news stories to long philosophical essays to day-to-day tweets, all in a consistent style. With relatively modest effort, programmers can teach GPT-3 to write credibly in a wide variety of voices, such as those of adherents of various extremist groups. The big question is: How will people deal with a technology that allows anybody to disseminate hard-to-trace information quickly and efficiently? The people who created GPT-3 grasp its dangers and have limited access to it, but less restrained parties are likely to create similar technologies.

AI-generated text will further erode people’s trust in the media.

The digital “ecosystem” has long been moving in the direction of fiction and illusion. As early as 1990, Adobe Photoshop made it possible to manipulate photographs. With computer-generated imagery (CGI), people can fabricate images of everything from novel species to entire planets. To some extent, modern media consumers have grown accustomed to manipulated images, and people understand that falsified photographs or videos can deceive and manipulate. But people still take it for granted that human beings write the articles they read in newspapers and magazines and the feeds they follow on social media. That won’t necessarily be true in the future – a fact that will change how people relate to the information they consume and to the online forums that disseminate that information.

“The impact that pervasive unreality within the information space will have on liberal democracies is unclear. If, or when, the flooding of the discourse comes to pass, our trust in what we read and who we are speaking with online is likely to decrease.”

“Generative text” will deepen people’s distrust of media sources. Editors will have to be more careful, and media platforms will have to find effective new ways to detect AI-generated text as quickly as possible. People will have to assess the value and reliability of what they read one article and one social media post at a time. Even so, AI’s ability to create images and texts will inevitably grow more sophisticated and harder to detect.

About the Author

Renée DiResta is the technical research manager at the Stanford Internet Observatory.

This document is restricted to personal use only.


This summary has been shared with you by getAbstract.
