
Open-Source Language AI Challenges Big Tech’s Models


BLOOM aims to address the biases that machine-learning systems inherit from the texts they train on.


5 min read
4 take-aways
Audio & text

What's inside?

The powerful BLOOM open-source natural language processing system rivals “big tech” models.

Editorial Rating



  • Analytical
  • Visionary
  • Engaging


Currently in its final weeks of training, the BLOOM model for natural language processing is almost ready for full launch. With parameter sets rivaling those used by Google and OpenAI, the system’s originators seek to correct biases inherent in many systems that make them seem all too human – in the worst ways. Anyone who designs or uses AI should read this eye-opening report, and perhaps consider signing up for a test drive.


Scientists designed the BLOOM natural language processing model to correct AI text biases.

Because machine learning systems tend to inherit errors from training material, researchers warn of possible harm caused by AI models that process and generate text.

A multinational team of about a thousand mostly academic volunteers set out to reduce such problems by breaking “big tech’s” grip on natural language processing models. Fueled by $7 million worth of allocated computing time, the BLOOM (BigScience Large Open-science Open-access Multilingual Language Model) system rivals those conceived by OpenAI and Google – but offers multilingual and open-source access. BigScience collaborators introduced a preliminary BLOOM model in June 2022.

Such systems can display humanlike qualities, including societal and ethical flaws inherent in people.

Until now, researchers have struggled to gain access to privately held models.

BLOOM can tackle a variety of AI-based research projects.

Biological classifications...

About the Author

Elizabeth Gibney is a senior physics reporter at Nature. She has written for Scientific American, the BBC and CERN.
