The Malicious Use of Artificial Intelligence
Forecasting, Prevention, and Mitigation

Malicious AI Report

15-minute read
10 take-aways
Audio & text

Overview

How dangerous can artificial intelligence become?


Editorial Rating

8

Qualities

  • Overview
  • Concrete Examples
  • For Experts

Recommendation

In 2018, artificial intelligence (AI) technology is advancing at a dizzying speed. The hope that self-driving cars will soon make roads safer and daily commutes more pleasant comes with the possibility that terrorists could, for example, program those same vehicles to run people over. A group of 25 leading AI researchers and scholars in the new field of existential risk research has put together a comprehensive overview of the threat landscape in the age of AI. The threat is real, but if researchers and policy makers work together to anticipate possible misuses, they can keep AI from becoming a new Frankenstein. For policy makers, political activists, managers and employees of tech companies, and AI researchers, getAbstract believes this timely and important overview is a must-read.

Summary

AI Characteristics

The field of artificial intelligence (AI) has advanced significantly in recent years. People already use AI-powered applications such as voice recognition, machine translation and search tools, and AI technologies will soon support medical staff and speed up disaster response. Yet people can put AI to both benevolent and malicious ends. Most AI applications are dual-use: The technology behind a drone that drops off packages works equally well for a drone that carries a bomb. Once set up and trained, an AI system performs its task reliably and more efficiently than humans can. Moreover, people can replicate the technology and roll it out on a broad scale, and the high degree of transparency in AI research makes newly developed AI software easy to obtain and reproduce. Machine learning helps AI systems improve continually at the tasks they perform. But AI systems execute flawed or malicious instructions just as efficiently as sound ones. Hence, AI can multiply the harm that a programming error or malicious intent can cause.

As AI is efficient, scalable and easy to obtain, the number of actors who could use AI for malicious purposes ...

About the Authors

The contributing authors are researchers from several US- and UK-based universities and private institutes, including OpenAI, the Electronic Frontier Foundation and the Center for a New American Security.
