A.I. Is Mastering Language. Should We Trust What It Says?

OpenAI’s GPT-3 and other neural nets can now write original prose with mind-boggling fluency – a development that could have profound implications for the future.



Editorial Rating

10

Qualities

  • Eye Opening
  • Bold
  • Hot Topic

Recommendation

What would the world look like with real artificial general intelligence (AGI)? With OpenAI’s remarkable new large language model (LLM) GPT-3, you can see glimmers of the future to come. GPT-3 uses sophisticated neural nets to predict the next word in a sequence, generating original compositions from roughly 700 gigabytes of curated training data. The technology has potentially broad applications across sectors. The troubling question is: Should humanity create such an entity, and what are the possible unintended consequences? OpenAI’s founders want AGI to “benefit” all humanity – but it’s not clear if that is possible, or even desirable.
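To make the word-prediction principle concrete, the minimal sketch below uses the openly available GPT-2 model – a much smaller predecessor of GPT-3 – via the Hugging Face transformers library. GPT-3 itself is reachable only through OpenAI’s API, so this is an illustration of the underlying idea, not the article’s own example.

```python
# Minimal sketch of next-word prediction with a language model.
# GPT-2 stands in here for GPT-3, which is accessible only via OpenAI's API.
from transformers import pipeline

# Build a text-generation pipeline around the openly available GPT-2 model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change the way we"

# The model repeatedly predicts a likely next token, extending the prompt
# into original prose.
outputs = generator(prompt, max_length=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

Scaled up to billions of parameters and hundreds of gigabytes of training text, this same prediction loop is what lets GPT-3 produce fluent, original-sounding prose.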

Take-Aways

  • OpenAI created technology that harnesses AI to write original prose.
  • Real-world applications of large language models (LLMs) could be vast, threatening even “elite” professions.
  • Silicon Valley entrepreneurs envision OpenAI’s technology as an “extension of human wills,” not a profit-making tool.

About the Author

Steven Johnson is a contributing writer for The New York Times Magazine and the author of Future Perfect: The Case for Progress in a Networked Age. He also writes the newsletter Adjacent Possible on Substack.

