Consciousness Creep

Our machines could become self-aware without our knowing it. We need a better way to define and test for consciousness.

Aeon

5 min read
5 take-aways

What's inside?

Do humans have the tools and understanding to recognize consciousness in machines? What if it’s already happened?

Editorial Rating

8

Qualities

  • Analytical
  • Eye Opening
  • Engaging

Recommendation

Have you ever suspected your electronic device might have a mind of its own? Science writer George Musser suggests that, far from being paranoid, you might be onto something. He argues that machine consciousness won’t look like human consciousness, so people may not recognize it when it emerges. In fact, it may already have happened. He adeptly describes the technology behind two options for measuring consciousness and explains the pros and cons of each. Musser goes on to outline the problem of detecting “zombies” – that is, “dumb” machines that merely mimic conscious, adaptive behavior. getAbstract recommends this stimulating read to technophobes and technophiles alike.

Take-Aways

  • If a machine becomes conscious, humans won’t necessarily be able to tell.
  • It’s important, both ethically and for humanity’s safety, that humans be able to detect whether machines are self-aware.
  • Some tools for detecting consciousness in machines already exist, including ConsScale and Integrated Information Theory (IIT).
  • All current consciousness-detection tools have limitations.
  • Consciousness tests must also be able to detect unconscious “zombie” machines, which can mimic conscious behavior.

Summary

Most people assume that if consciousness arises in a human invention, it will announce itself. A more sinister possibility is that some machines are already self-aware, but no one thinks to test for it, or even knows how. Machine consciousness may not look like human consciousness; in particular, it may not choose to reveal itself, so society must develop ways of checking. Artificial intelligence experts are building machines with neural networks modeled on the human brain and with deep-learning systems that develop abilities through experience. Ethically, it is important to avoid exploiting sentient beings, even those that humans create.

“Could our machines have become self-aware without our even knowing it? The huge obstacle to addressing such questions is that no one is really sure what consciousness is, let alone whether we’d know it if we saw it.”

One testing option is ConsScale, which uses a checklist to capture physical features, mental and emotional attributes such as self-recognition, and the ability to lie. It then rates consciousness on a scale from one to twelve, covering biological and consciousness states from “dead to superhuman.” However, ConsScale makes the unpopular assumption that everything exists on a continuum of consciousness. Integrated Information Theory (IIT) may be a better alternative. According to IIT, any system could be conscious as long as it has some kind of neural network. Its algorithm quantifies “network interconnectedness” as Φ, a measure of how much information the machine shares across its network; any system scoring above zero could be conscious. However, Φ is currently difficult and time-consuming to calculate, so researchers generally assess the quality of a system’s interconnectedness rather than quantifying Φ directly. Even so, IIT’s inventors have used the approach to distinguish the brainwaves of a conscious person from those of someone deeply asleep or under a general anesthetic.
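
As a toy illustration only – not the actual IIT algorithm, whose full calculation is far more involved – the sketch below treats Φ as the minimum mutual information across any way of splitting a network in two. A score above zero means no split can sever the system into informationally independent halves. All function names, and the simplification of Φ to mutual information over bipartitions, are assumptions made for this example.

```python
# Illustrative, simplified stand-in for an IIT-style integration measure.
# Here "phi" is the minimum mutual information between the two halves of
# any bipartition of the network, computed from a joint distribution over
# binary node states. This is NOT the real IIT calculation.
from itertools import combinations, product
from math import log2

def mutual_information(joint, part_a, part_b, nodes):
    """Mutual information I(A;B) between two groups of nodes."""
    def marginal(group):
        probs = {}
        for state, p in joint.items():
            key = tuple(state[nodes.index(n)] for n in group)
            probs[key] = probs.get(key, 0.0) + p
        return probs
    pa, pb = marginal(part_a), marginal(part_b)
    mi = 0.0
    for state, p in joint.items():
        if p == 0:
            continue
        ka = tuple(state[nodes.index(n)] for n in part_a)
        kb = tuple(state[nodes.index(n)] for n in part_b)
        mi += p * log2(p / (pa[ka] * pb[kb]))
    return mi

def toy_phi(joint, nodes):
    """Information across the 'weakest' bipartition of the network."""
    best = float("inf")
    for k in range(1, len(nodes) // 2 + 1):
        for part_a in combinations(nodes, k):
            part_b = tuple(n for n in nodes if n not in part_a)
            best = min(best, mutual_information(joint, part_a, part_b, nodes))
    return best

# Two perfectly correlated nodes share one bit: phi > 0, so the system
# is integrated. Two independent nodes share nothing: phi = 0.
nodes = ["a", "b"]
correlated = {(0, 0): 0.5, (1, 1): 0.5}      # always in the same state
print(toy_phi(correlated, nodes))             # 1.0 -> integrated
independent = {s: 0.25 for s in product((0, 1), repeat=2)}
print(toy_phi(independent, nodes))            # 0.0 -> not integrated
```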

“For want of recognizing what we have brought into the world, we could be guilty of…the creation of sentient beings for virtual enslavement.”

Tests also need to be able to detect a “zombie” – that is, an unconscious system that can mimic conscious behavior. Self-aware systems display more complex behaviors because they adapt to environmental feedback; to keep pace, programmers would have to equip an unconscious system in advance for every possible contingency, which is impossible when resources are limited. One can therefore argue that a robot that demonstrates human abilities while subject to the same physical constraints as a human is conscious.

About the Author

George Musser is an author and a contributing editor for Scientific American.
