A New Model and Dataset for Long-Range Memory

DeepMind


What's inside?

A good and useful memory is all about knowing what to forget.


Editorial Rating

9

Qualities

  • Analytical
  • Scientific
  • Hot Topic

Recommendation

One of the challenges for AI systems is that, as they become more sophisticated and ingest ever larger datasets, they need far more processing power to achieve only marginally better results. Even the biggest server farms have their limits. Yet people don't want to wait more than a fraction of a second for an answer, so a major task in AI research is determining how to reduce mountains of crude data to hills of important data. The latest proposal from Google's DeepMind lab takes sleep as an inspiration for data processing. Your eureka moment might have come one afternoon during a power nap.

Summary

A major problem in AI is reducing extremely large datasets into more manageable dimensions for processing and analysis.

A simple model of human memory is the neural network, in which each piece of information can interact with every other. Unfortunately, this all-to-all interaction is dauntingly expensive: the number of pairwise comparisons grows with the square of the input size, so kilobytes of data can balloon into gigabytes of values to process. This limits the practical utility of neural networks in AI. Researchers need better models that combine the flexibility of neural networking with more practical information-processing demands.
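As a rough sanity check on those numbers, the short sketch below (an editorial illustration; the byte counts per token and per score are assumptions, not figures from the researchers) computes how large the matrix of pairwise comparisons becomes as the input grows:

    # Illustrative only: how all-to-all comparison scales with input size.
    # Assumptions (not from the source): ~5 bytes of text per token and
    # 4 bytes per pairwise comparison score.

    def pairwise_score_bytes(n_tokens: int, bytes_per_score: int = 4) -> int:
        """Memory for one n x n matrix of pairwise comparison scores."""
        return n_tokens * n_tokens * bytes_per_score

    for n in (10_000, 50_000, 100_000):
        text_kb = n * 5 / 1024
        score_gb = pairwise_score_bytes(n) / 1024 ** 3
        print(f"{n:>7} tokens (~{text_kb:,.0f} KB of text) -> "
              f"{score_gb:.2f} GB of pairwise scores")

At 100,000 tokens, still well under a megabyte of text, the pairwise scores alone approach 40 GB. That quadratic blow-up is the problem this take-away describes.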

Human memory exists at different timescales and different levels of detail.

Human memory is not simply a collection...
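The theme of this take-away, keeping recent memories in full detail while storing older ones more coarsely, can be made concrete with a small sketch. The design below is an editorial illustration, not the researchers' actual architecture: it keeps the newest items verbatim and averages items that age out into a compressed long-term store rather than discarding them.

    # Editorial illustration only; not the researchers' model. Recent
    # items are kept in full detail; older items are averaged into a
    # coarse, compressed long-term store instead of being thrown away.

    from collections import deque

    class TwoTimescaleMemory:
        def __init__(self, recent_size: int = 4, chunk: int = 2):
            self.recent = deque()      # short-term store, full detail
            self.compressed = []       # long-term store, coarse detail
            self.recent_size = recent_size
            self.chunk = chunk

        def add(self, value: float) -> None:
            self.recent.append(value)
            # On overflow, compress the oldest items (here: average them).
            while len(self.recent) > self.recent_size:
                old = [self.recent.popleft() for _ in range(self.chunk)]
                self.compressed.append(sum(old) / len(old))

    mem = TwoTimescaleMemory()
    for v in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
        mem.add(v)
    print(list(mem.recent))    # [3.0, 4.0, 5.0, 6.0]  (recent, exact)
    print(mem.compressed)      # [1.5]  (older items, averaged)

The point of such a design is that the long-term store grows far more slowly than the raw input: it trades detail for capacity, which is the "knowing what to forget" idea in the summary's tagline.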

About the Authors

Jack Rae and Timothy Lillicrap are researchers at DeepMind, Google's AI research lab in London.

