Takeaways
- Well-constructed algorithm audits remain among the most effective means of holding AI systems accountable, a practice with regulatory precedent in the 1984 mandate requiring the Sabre reservation system to disclose its ranking criteria.
- Mimicking traditional fair-housing tests with “sock puppet” browser profiles revealed that online home-listing platforms surface more expensive options to female-presenting profiles and predatory rent-to-own ads to Black-presenting ones.
- A narrative visualization audit showed that roughly 62% of users were unaware their news feeds are curated, yet 80% became more satisfied after “manipulating the manipulation.”
- Research on platform control settings demonstrated a strong placebo effect: users reported greater satisfaction even when the controls were nonfunctional, underscoring the need for clearer messaging and genuine agency.
- Interdisciplinary collaboration is essential for crafting audits and control mechanisms that support autonomy, fairness, and public trust.
Summary
Professor Karrie Karahalios reviews the origins of algorithmic transparency in the 1960 Sabre airline reservation system, where deliberate biasing of flight rankings prompted antitrust investigations and led to 1984 regulations mandating public disclosure of sorting criteria. She then introduces the concept of algorithm audits, systematic examinations of online platforms inspired by civil rights housing tests. By creating “sock puppet” browser agents that mimic prospective home buyers of different races and genders, her team uncovered biased rankings and predatory rent-to-own advertisements.
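To make the method concrete, here is a minimal sketch of such a sock-puppet audit in Python, assuming Selenium for browser automation; the site URL, persona names, and CSS selector are illustrative stand-ins, not details from the actual study:

```python
# Minimal "sock puppet" audit sketch. Each puppet gets its own Chrome
# profile directory, so cookies and browsing history accumulate
# independently and the platform personalizes each puppet separately.
from selenium import webdriver
from selenium.webdriver.common.by import By

PERSONAS = ["persona_a", "persona_b"]  # hypothetical conditioned profiles
LISTINGS_URL = "https://example-homes.test/search?q=3br"  # hypothetical site

def collect_ranked_listings(profile_dir: str, url: str) -> list[str]:
    """Open the site under one puppet's profile and record listing order."""
    options = webdriver.ChromeOptions()
    options.add_argument(f"--user-data-dir={profile_dir}")  # isolate state
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        # ".listing" is an assumed selector; a real audit would inspect
        # the target page's markup.
        return [el.text for el in driver.find_elements(By.CSS_SELECTOR, ".listing")]
    finally:
        driver.quit()

if __name__ == "__main__":
    results = {p: collect_ranked_listings(f"./profiles/{p}", LISTINGS_URL)
               for p in PERSONAS}
    # Systematically diverging rankings for identical queries across
    # personas are the signal an audit of this kind looks for.
    for persona, listings in results.items():
        print(persona, listings[:5])
```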
Moving to social media, Karahalios describes a narrative visualization audit revealing that users saw only about 30% of their network’s posts and were largely unaware of algorithmic curation. The study empowered participants to “manipulate the manipulation” and surfaced folk theories and strategies for influencing their feeds. Subsequent research into control settings on platforms such as Facebook and Twitter exposed widespread confusion, placebo effects, and misaligned expectations when users tried to adjust algorithms through built-in controls.
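The core mechanic of that audit, contrasting everything a user’s network posted with what the curated feed actually displayed, can be sketched in a few lines; the post IDs and counts below are illustrative, not the study’s data:

```python
# Toy shown-vs-hidden comparison: contrast the full set of posts a
# user's network published with the subset the curated feed surfaced.
def feed_coverage(all_posts: set[str], shown_posts: set[str]) -> float:
    """Return the fraction of the network's posts the feed displayed."""
    hidden = all_posts - shown_posts
    print(f"hidden from this user: {len(hidden)} of {len(all_posts)} posts")
    return len(shown_posts & all_posts) / len(all_posts)

# Illustrative data: 100 network posts, roughly every third one shown,
# loosely echoing the ~30% figure participants encountered in the audit.
network = {f"post_{i}" for i in range(100)}
feed = {f"post_{i}" for i in range(0, 100, 3)}
print(f"feed coverage: {feed_coverage(network, feed):.0%}")
```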
Karahalios emphasizes the need for multidisciplinary collaboration, combining human-computer interaction, statistics, design, sociology, and law, to craft robust, community-centered audits. She recounts legal challenges around web scraping and pseudonymous accounts and notes recent protections under the AI Bill of Rights. Concluding, she calls for ongoing monitoring of “boring” infrastructure, early legal guidance, and integrated audit and user-control mechanisms to achieve autonomy, justice, and accountability in sociotechnical systems.