The AI revolution has elicited a range of predictions: Doomsayers expect the machines to take over; optimists foresee a better world, where AI models enhance human skills and well-being. In this guidebook, tech executive Hamilton Mann argues that a beneficial outcome for humanity hinges on ensuring “artificial integrity”: creating frameworks for AI development whose outcomes support human values and goals. He urges readers to embrace AI's potential for individual and societal improvement while also building appropriate guardrails.
“Artificial integrity” helps ensure AI outputs benefit humanity.
Legendary investor Warren Buffett said he looked for three things when hiring: integrity, intelligence, and energy. “And if they don’t have the first, the other two will kill you,” Buffett quipped. So it goes with artificial intelligence. Just because an AI tool is intelligent — that is, powerful — does not mean it can or will produce outputs with integrity: ones free of bias, contextually appropriate, safe, and beneficial to humanity. That’s why any discussion of the future trajectory of artificial intelligence development needs to center on how to ensure “artificial integrity” — that is, how to make sure AI models’ outputs support and are guided by human values. Consider, for example, autonomous vehicles, which are programmed to decide on the fly how to weigh human safety in an emergency. That programming must be rooted in the rules and values determined by the people in the society in which the car operates.
People have compared AI to the advent of electricity, cars, or the internet. AI will be even more impactful than these innovations. It’s the first technology to take over cognitive functions that...
Hamilton Mann is a tech executive, Digital for Good pioneer, keynote speaker, and the originator of the concept of artificial integrity. Mann serves as group vice president responsible for AI and digital transformation initiatives at Thales.