
Consciousness Creep

Our machines could become self-aware without our knowing it. We need a better way to define and test for consciousness

Aeon, 2016


Editorial Rating

8

Qualities

  • Analytical
  • Eye Opening
  • Engaging

Recommendation

Have you ever suspected that your electronic device has a mind of its own? Science writer George Musser suggests that, far from being paranoid, you might be onto something. He argues that machine consciousness won’t look like a human’s, so people may not recognize it when it emerges. In fact, it may already have emerged. He adeptly describes the technology behind two approaches to measuring consciousness and explains the pros and cons of each. Musser goes on to outline the problem of detecting “zombies” – that is, “dumb” machines that are good at appearing conscious and adaptive without being so. getAbstract recommends this stimulating read to technophobes and technophiles alike.

Summary

Most people assume that if consciousness arises in a human invention, it will announce itself. A more sinister possibility is that some machines are already self-aware, but no one thinks to test for it, or even knows how. Machine consciousness may not look like human consciousness. In particular, it may not choose to reveal itself, so society must develop ways of checking. Artificial intelligence researchers are building machines with neural networks modeled on the human brain and with deep-learning systems that develop new abilities through experience. It is important, ethically...

About the Author

George Musser is an author and a contributing editor for Scientific American.

