Video
65 minutes
Apr 22, 2025


Stanford Seminar - AI Audits and Autonomy

In this video, computer science professor Karrie Karahalios outlines the evolution of algorithm audits, from airline reservation disclosures to online housing and news‐feed studies, highlighting methods that ensure fairness, transparency, and user autonomy in AI systems.

Algorithm Audits · Housing Discrimination · Narrative Visualization · User Autonomy · Control Settings

Takeaways

  • Well-constructed algorithm audits remain the most effective means to hold AI systems accountable, as evidenced by the 1984 mandate requiring the Sabre reservation system to disclose its ranking criteria.
  • Mimicking traditional fair-housing tests with “sock puppet” browser profiles revealed that online home-listing platforms surface more expensive options to profiles presenting as women and predatory rent-to-own ads to profiles presenting as Black users.
  • A narrative visualization audit showed that roughly 62% of users are unaware their news feeds are curated, yet 80% became more satisfied after “manipulating the manipulation.”
  • Research on platform control settings demonstrated a strong placebo effect: users report greater satisfaction even when controls are nonfunctional, underscoring the need for clearer messaging and genuine agency.
  • Interdisciplinary collaboration is essential for crafting audits and control mechanisms that support autonomy, fairness, and public trust.

Summary

Professor Karrie Karahalios traces the origins of algorithmic transparency to the Sabre airline reservation system, launched in 1960, whose intentionally biased flight rankings prompted antitrust investigations and led to 1984 regulations mandating public disclosure of sorting criteria. She then introduces algorithm audits, systematic examinations of online platforms inspired by civil rights housing tests. By creating “sock puppet” browser agents that mimic prospective home buyers of different races and genders, her team uncovered biased rankings and predatory rent-to-own advertisements.

Moving to social media, Karahalios describes a narrative visualization audit that revealed users saw only about 30% of their network’s posts and were largely unaware of algorithmic curation. The study empowered participants to “manipulate the manipulation” and fostered folk theories and strategies for influencing their feeds. Subsequent research into control settings on platforms like Facebook and Twitter exposed widespread confusion, placebo effects, and misaligned expectations when users attempted to adjust algorithms via built-in controls.

Karahalios emphasizes the need for multidisciplinary collaboration, combining human‐computer interaction, statistics, design, sociology, and law, to craft robust, community‐centered audits. She recounts legal challenges around web scraping and pseudonymous accounts and notes recent protections under the AI Bill of Rights. Concluding, she calls for ongoing monitoring of “boring” infrastructure, early legal guidance, and integrated audit and user‐control mechanisms to achieve autonomy, justice, and accountability in sociotechnical systems.

Job Profiles

Product Manager · Business Consultant · UX/UI Designer · Academic/Researcher · Algorithm Engineer

Contributors

ABA
Content rating = A
  • Relies on reputable sources
  • Adequate structure
  • In-depth
Author rating = B
  • Has professional experience in the subject matter area
  • Experienced subject-matter writer
Source rating = A
  • Established, respected publisher
  • Features expert contributions