A.I. Is Mastering Language. Should We Trust What It Says?

OpenAI’s GPT-3 and other neural nets can now write original prose with mind-boggling fluency – a development that could have profound implications for the future.

New York Times Magazine


What's inside?

Machines are getting smarter, but is that a good thing?


Editorial Rating

10

Qualities

  • Eye Opening
  • Bold
  • Hot Topic

Recommendation

What would the world look like with real artificial general intelligence (AGI)? With OpenAI’s remarkable new large language model (LLM), GPT-3, you can see glimmers of that future. GPT-3 uses a sophisticated neural net to predict the next word in a text, generating original compositions from 700 gigabytes of curated training data. The technology could have broad applications across sectors. The troubling question is: Should humanity create such an entity, and what are the possible unintended consequences? OpenAI’s founders want AGI to “benefit” all humanity – but it’s not clear whether that is possible, or even desirable.

Take-Aways

  • OpenAI created technology that harnesses AI to write original prose.
  • The application of large language models (LLMs) in the real world could be massive, threatening even “elite” professions.
  • Silicon Valley entrepreneurs envision OpenAI’s technology as an “extension of human wills,” not a profit-driven venture.
  • Deep-learning systems sometimes operate in uncanny, unpredictable ways, which may indicate emergence.
  • If artificial general intelligence (AGI) resembles human society, it will have deep flaws and contradictions.

Summary

OpenAI created technology that harnesses AI to write original prose.

In a complex in Iowa, one of the most powerful supercomputers on Earth, with 285,000 CPU cores performing “innumerable calculations,” runs a program called GPT-3 (Generative Pre-Trained Transformer 3), developed by OpenAI. The program plays a game called “Guess the missing word,” and in doing so it is learning to compose original texts with an uncanny “illusion of cognition.” OpenAI was founded in 2015 by several Silicon Valley heavyweights, including Elon Musk and Greg Brockman. GPT-3 came out in 2020 and amazed test users with its resemblance to HAL 9000 from the movie 2001: A Space Odyssey. But is GPT-3 actually thinking, or is it a “stochastic parrot” – a large language model (LLM) that merely appropriates and recombines human-constructed text?

“The underlying idea of GPT-3 is a way of linking an intuitive notion of understanding to something that can be measured and understood mechanistically, and that is the task of predicting the next word in text.” (Ilya Sutskever, co-founder and chief scientist, OpenAI)

GPT-3 is an LLM, a “complex neural net” that resembles a brain, built from many layers, each representing a higher level of abstraction. Its “intelligence” resembles a child’s because it learns from “the bottom up”: at first it guesses nonsense words to complete a sentence, then adjusts and re-adjusts until it gets a green light. When a guess succeeds, the software strengthens the connections that produced it, and this iterative process, repeated billions of times, teaches the LLM, in essence, to “think.” After all, prediction is a key ingredient of human intelligence. GPT-3 was developed with the latest in computational power and mathematical techniques and, trained on 700 gigabytes of data drawn from the web, it ranks among the best-performing LLMs.
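
The summary stays at this conceptual level, but the next-word game it describes is easy to demonstrate with publicly available tools. The sketch below uses GPT-2 (a smaller, open predecessor of GPT-3) via the Hugging Face transformers library; the model choice, prompt and settings are illustrative assumptions, not OpenAI’s actual setup.

```python
# A minimal sketch of the "predict the next word" task described above.
# It uses GPT-2, a smaller public predecessor of GPT-3, because GPT-3 itself
# is proprietary; the prompt and settings here are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A supercomputer in Iowa is learning to"

# The model extends the prompt by repeatedly predicting the most likely
# next token, the same basic game GPT-3 plays during training.
completion = generator(prompt, max_length=30, num_return_sequences=1)
print(completion[0]["generated_text"])
```

Even this small model illustrates the “stochastic parrot” question: The continuation reads fluently, but nothing guarantees it makes sense.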

The application of LLMs in the real world could be massive, threatening even “elite” professions.

Currently, GPT-3 and other LLMs are in the experimental phase, but GPT-3’s commercial potential is enormous. Instead of typing questions into Google and clicking through links, you could simply ask a question and receive a refined answer. LLMs could replace customer service reps, and even higher-level professions could see LLMs writing code or producing legal documents. What would restrict LLMs from taking over professions that require intelligence and reasoning?

“Any company with a product that currently requires a human tech-support team might be able to train an LLM to replace them.”
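
The article includes no code, but a rough sketch shows how simple such a replacement could be to prototype. The example below calls OpenAI’s GPT-3 completion endpoint through the legacy (pre-1.0) openai Python package of the GPT-3 era; the model name, prompt and parameters are illustrative assumptions, not a recommended production setup.

```python
import openai

openai.api_key = "sk-..."  # placeholder; supply your own API key

# Hypothetical support question; the prompt wording is an assumption.
prompt = (
    "You are a customer-support assistant for a router maker.\n"
    "Customer: My Wi-Fi drops every few minutes. What should I try first?\n"
    "Assistant:"
)

# Legacy Completion endpoint, as used with GPT-3-era models.
response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3-era model name
    prompt=prompt,
    max_tokens=100,
    temperature=0.2,  # low temperature keeps support answers consistent
)

print(response["choices"][0]["text"].strip())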

Critics counter that large language models are only “mimicking” human “syntactic patterns” and cannot come up with ideas or make complex decisions. Some also warn that LLMs would be dogged by corruption from their human masters, who could program bias, propaganda and misinformation into them, wittingly or not. That raises the question: Should LLMs even exist, and, just as important, who should build them?

Silicon Valley entrepreneurs envision OpenAI’s technology as an “extension of human wills,” not a profit-driven venture.

In recent years, digital assistants such as Siri and Alexa have become popular with users, even though they are merely “scripted agents.” But consumers also have misgivings about Big Tech’s overreach and about algorithms dictating online behavior. True artificial intelligence could even “spell the end of the human race,” according to the late Stephen Hawking. Therefore, who develops it, and why, matters enormously.

“If AI was going to be unleashed on the world in a safe and beneficial way, it was going to require innovation on the level of governance and incentives and stakeholder involvement.”

That is why the brain trust behind OpenAI issued a kind of manifesto in 2015, stating their goal to develop AI responsibly, driven not by profit but by a desire to “benefit humanity as a whole.” They later published a public charter, enshrining promises to cap investors’ returns (at 100 times their stake) and to prevent the technology from being used to discriminate or to interfere in democratic processes. Although OpenAI promised to open-source its work, GPT-3 remains proprietary for safety reasons. Sam Altman, chief executive officer of OpenAI, insists that GPT-3 roll out slowly, because “gradual change in the world is better than sudden change.”

Deep-learning systems sometimes operate in uncanny, unpredictable ways, which may indicate emergence.

Artificial general intelligence (AGI) is a holy grail of the AI community. But what constitutes general intelligence? Critics argue GPT-3 doesn’t demonstrate intelligence because it can’t produce insight from direct experience; it can only play the “guess the missing word” game. Yet GPT-3 produces completely novel text and even taught itself English grammar. While most agree that GPT-3 lacks sentience, the machine nonetheless seems to think, “manipulating higher-order concepts” and recombining them in novel ways. DALL-E, a visual neural net that can produce images from natural-language commands, has even demonstrated creativity.

“The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding.” (Melanie Mitchell, scientist, Santa Fe Institute)

One troubling thing about deep-learning systems is that no one quite knows how they work. Neural nets sometimes make calculations that are difficult to parse or explain yet resemble human intelligence; OpenAI, for instance, reports discovering “multimodal neurons” in its machine-learning software. But LLMs also have a troubling propensity to “hallucinate” – inventing stories and facts “out of nowhere.” Sometimes LLMs are racist; sometimes they give bad health advice. Such failures remind users that these systems are human-made.

If artificial general intelligence (AGI) resembles human society, it will have deep flaws and contradictions.

There is heated debate over whether LLMs can be trusted. Training such a model means scouring much of the web, which exposes the system to human “toxicity.” OpenAI addressed this with PALMS (“process for adapting language models to society”), which trains GPT-3 on “values-targeted data sets.” This strategy raises a problem: Whose values? Which society? Sutskever insists that OpenAI wants to “build an AGI that loves humanity.” But how can humans build a machine that matches human intelligence yet lacks all-too-human flaws?

“Whose values do we put through the AGI? Who decides what it will do and not do? These will be some of the highest-stakes decisions that we’ve had to make collectively as a society.” (Sam Altman, CEO, OpenAI)

Currently, decisions about how AI will “benefit humanity” are made by a tiny group of people in Silicon Valley. Author and New York University professor Gary Marcus argues that AI should be a global, coordinated effort, with many stakeholders weighing in on its costs and benefits. After all, GPT-3 has shown that machines can do something that, heretofore, only humans could do: communicate coherent, illuminating thoughts. Even engaging in a discussion about its values indicates that “we have crossed an important threshold” toward AGI, and all that entails.

About the Author

Steven Johnson is a contributing writer for The New York Times Magazine and the author of Future Perfect: The Case for Progress in a Networked Age. He also writes the newsletter Adjacent Possible on Substack.
