Machine learning may seem like a perfectly objective solution to subjective problems, but it’s not infallible. Techno-sociologist Zeynep Tufekci warns that auditing these highly complex systems is crucial to understanding how they arrive at decisions. Advanced algorithms calculate probabilities by analyzing huge amounts of data, but the process lacks the moral reasoning only humans can provide. getAbstract appreciates Tufekci’s heads-up as computers continue to assume much of society’s decision-making.
About the Speaker
Zeynep Tufekci, an associate professor at the University of North Carolina, is the author of Twitter and Tear Gas.