A.I. Is Mastering Language. Should We Trust What It Says?

OpenAI’s GPT-3 and other neural nets can now write original prose with mind-boggling fluency – a development that could have profound implications for the future.

New York Times Magazine

5 min. read
5 key takeaways
Audio & Text

What's it about?

Machines are getting smarter, but is that a good thing?

Editorial Rating

  • Eye Opening
  • Bold
  • Hot Topic


What would the world look like with real artificial general intelligence (AGI)? With OpenAI’s remarkable new large language model (LLM) GPT-3, you can see glimmers of the future to come. GPT-3 uses sophisticated neural nets to facilitate word prediction, generating unique compositions based on 700 gigabytes of curated data. This technology would have broad applications across sectors. The troubling question is: Should humanity create such an entity, and what are the possible unintended consequences? OpenAI’s founders want AGI to “benefit” all humanity – but it’s not clear if that is possible, or even desirable.


OpenAI created technology that harnesses AI to write original prose.

In a complex in Iowa, one of the most powerful supercomputers on Earth, with 285,000 CPU cores that process “innumerable calculations,” runs a program called GPT-3 (Generative Pre-Trained Transformer 3), developed by OpenAI. It is playing a game called “Guess the missing word.” This supercomputer is learning to compose original texts with an uncanny “illusion of cognition.” OpenAI was founded in 2015 by several Silicon Valley heavyweights, including Elon Musk and Greg Brockman. GPT-3 came out in 2020 and amazed test users with its resemblance to HAL 9000 from the movie 2001: A Space Odyssey. But is GPT-3 actually thinking, or is it a “stochastic parrot” – a large language model (LLM) that merely appropriates and recombines human-constructed text?

GPT-3 is an LLM, a “complex neural net” that resembles a brain, with many layers, each a higher level of abstraction. Its “intelligence” resembles a child’s because it learns from “the bottom up,” testing nonsense words first to complete a sentence, then adjusting and re-adjusting until it gets...
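The word-prediction idea at the heart of an LLM can be illustrated with a deliberately simple sketch. The toy bigram model below is an assumption for illustration only – it is vastly simpler than GPT-3's neural net – but it shows the same core task: given the words seen so far, guess the most likely next word from patterns in training text.

```python
from collections import Counter, defaultdict

# Toy illustration (NOT GPT-3's actual architecture): a bigram model that
# plays "guess the missing word" by predicting the most likely next word
# from word-pair counts observed in its training text.
def train_bigrams(text):
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny made-up corpus for demonstration.
model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat" (seen twice after "the")
```

A real LLM replaces these raw counts with billions of learned neural-net parameters and conditions on long stretches of context rather than a single preceding word, but the training signal – adjust until the predicted word matches the actual one – is the same "bottom up" loop the paragraph above describes.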

About the Author

Steven Johnson is a contributing writer for The New York Times Magazine and the author of Future Perfect: The Case for Progress in a Networked Age. He also writes the newsletter Adjacent Possible on Substack.
