Title: Stanford Seminar - AI Audits and Autonomy
Resource URL: https://www.youtube.com/watch?v=CDMuUsSkvao
Publication Date: 2025-04-22
Format Type: Video
Reading Time: 65 minutes
Contributors: Karrie Karahalios
Source: Stanford Online (YouTube)
Keywords: Algorithm Audits, Housing Discrimination, Narrative Visualization, User Autonomy, Control Settings
Job Profiles: Algorithm Engineer; Academic/Researcher; UX/UI Designer; Business Consultant; Product Manager
Synopsis: In this video, computer science professor Karrie Karahalios outlines the evolution of algorithm audits, from airline reservation disclosures to online housing and news-feed studies, highlighting methods that support fairness, transparency, and user autonomy in AI systems.
Takeaways:
- Well-constructed algorithm audits remain the most effective means of holding AI systems accountable, as evidenced by the 1984 mandate requiring the Sabre reservation system to disclose its ranking criteria.
- Mimicking traditional fair-housing tests with “sock puppet” browser profiles revealed that online home-listing platforms surface more expensive options to women and more predatory rent-to-own ads to Black profiles.
- A narrative visualization audit showed that roughly 62% of users are unaware their news feeds are curated, yet 80% became more satisfied after “manipulating the manipulation.”
- Research on platform control settings demonstrated a strong placebo effect: users report greater satisfaction even when controls are nonfunctional, underscoring the need for clearer messaging and genuine agency.
- Interdisciplinary collaboration is essential for crafting audits and control mechanisms that support autonomy, fairness, and public trust.

Summary: Professor Karrie Karahalios reviews the origins of algorithmic transparency in the 1960 Sabre airline reservation system, where intentional ranking of flights prompted antitrust investigations and led to 1984 regulations mandating public disclosure of sorting criteria. She then introduces algorithm audits, systematic examinations of online platforms inspired by civil rights housing tests. By creating “sock puppet” browser agents that mimic prospective home buyers of different races and genders, her team uncovered biased rankings and predatory rent-to-own advertisements. Moving to social media, Karahalios describes a narrative visualization audit that revealed users saw only about 30% of their network’s posts and were largely unaware of algorithmic curation. The study empowered participants to “manipulate the manipulation” and fostered folk theories and strategies for influencing their feeds. Subsequent research into control settings on platforms like Facebook and Twitter exposed widespread confusion, placebo effects, and misaligned expectations when users attempted to adjust algorithms via built-in controls. Karahalios emphasizes the need for multidisciplinary collaboration, combining human-computer interaction, statistics, design, sociology, and law, to craft robust, community-centered audits. She recounts legal challenges around web scraping and pseudonymous accounts and notes recent protections under the AI Bill of Rights. Concluding, she calls for ongoing monitoring of “boring” infrastructure, early legal guidance, and integrated audit and user-control mechanisms to achieve autonomy, justice, and accountability in sociotechnical systems.
Content:

## Introduction

In this informal presentation, Professor Karrie Karahalios reflects on a decade of developments in algorithmic accountability and previews forthcoming research on AI audits and user autonomy. She emphasizes learning from historical precedents, presenting work to be discussed at CHI ’25, and describing ongoing community-based initiatives.

## Early Algorithm Transparency: The Sabre Case

In 1960, American Airlines and IBM launched the Sabre system to automate flight reservations, handling a workload equivalent to 83,000 daily phone calls and completing bookings in seconds. By 1982, travel agents selected Sabre’s top-ranked flights over 92 percent of the time, evidence of intentional “screen science” manipulation that steered buyers toward more expensive carrier options. Following antitrust complaints, regulatory authorities compelled reservation systems to publish their sorting algorithms, establishing a landmark requirement for algorithmic transparency.

## Conceptualizing Algorithm Audits

Inspired by federal and state civil rights statutes, Karahalios and colleagues introduced the notion of algorithm audits in 2014, arguing that online platforms dispensing core social goods, such as housing, must be tested for discrimination. Drawing on the Fair Housing Act, traditional housing audits match paired families on socioeconomic characteristics and vary only a protected attribute to isolate bias.

## Online Housing Audits with Sock Puppets

Translating this methodology online, the team created thousands of demographic-specific browser profiles, “sock puppets,” trained to exhibit realistic browsing behavior. The first audit exposed that women saw pricier listings and that non-white profiles encountered fewer housing advertisements, while Black profiles encountered more predatory rent-to-own offers. A subsequent ad audit identified racial disparities in the volume and nature of real-estate advertisements.

## The Audit Spectrum and Societal Impact

Karahalios maps a spectrum of audit methods, from people-centered qualitative studies to large-scale automated analyses. High-impact cases, such as the revelations about Facebook’s discriminatory ad targeting, underscore the urgent need for ongoing audits. Under the Obama administration, algorithmic auditing was designated a national priority, and academic interest has since surged in venues including NeurIPS, CSCW, and CHI.

## Narrative Visualization and Feed Curation

A narrative-visualization audit co-led by Motahhare Eslami asked users to juxtapose all of their network’s posts with those actually delivered to their feeds. Many participants, including experts, were unaware of curation and reported feelings of helplessness upon learning they saw only about 30 percent of the content. However, 80 percent later reported increased satisfaction and described strategies to “manipulate the manipulation.” Folk theories, such as the belief that liking one’s own posts would boost their visibility, demonstrated user attempts to regain agency.

## Collective and Community Audits

Building on this user engagement, the research group initiated collective audits in which community members shared insights, such as reappropriating hotel-rating platforms to correct inflated scores, alongside third-party investigations, exemplified by Cathy O’Neil’s audit of a hiring system that led to the removal of its facial recognition features.

## Multidisciplinary Challenges and Legal Protections

Effective audits demand rigorous data scrubbing, statistical validation (a minimal sketch follows below), design of intelligible interfaces, and legal guidance.
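The talk surveys these requirements without prescribing a particular procedure, so the following is only an illustrative sketch of the statistical-validation step: a paired permutation test on matched sock-puppet results. The prices and pair structure are invented for illustration; they are not data from the audits described above.

```python
import random
import statistics

# Hypothetical paired observations from a sock-puppet housing audit: each
# tuple holds the median listing price shown to the two profiles in a
# matched pair that differ only in one protected attribute.
# (Invented numbers -- not data from the studies described in this talk.)
pairs = [
    (1850, 1620), (2100, 1900), (1740, 1755), (2450, 2210),
    (1980, 1850), (2300, 2120), (1660, 1600), (2050, 1880),
]

diffs = [a - b for a, b in pairs]
observed = statistics.mean(diffs)

# Paired permutation test: under the null hypothesis that the protected
# attribute has no effect, the sign of each within-pair difference is
# arbitrary. Randomly flipping signs shows how often a mean gap at least
# as extreme as the observed one would arise by chance.
random.seed(0)
n_iter = 100_000
extreme = 0
for _ in range(n_iter):
    flipped = [d if random.random() < 0.5 else -d for d in diffs]
    if abs(statistics.mean(flipped)) >= abs(observed):
        extreme += 1

p_value = extreme / n_iter
print(f"mean within-pair price gap: {observed:.1f}")
print(f"two-sided permutation p-value: {p_value:.4f}")
```

Because a paired design holds everything about the two profiles identical except the audited attribute, permuting difference signs is an assumption-light check to run before drawing conclusions about bias.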
Early audits navigated ambiguous interpretations of anti-hacking statutes when using web scraping and pseudonymous accounts; recent judicial rulings and the AI Bill of Rights guidelines now offer greater legal certainty to audit researchers.

## Control Settings and the Illusion of Agency

Subsequent studies evaluated built-in user controls on social platforms. In-lab tasks and large-scale surveys revealed that 50 percent of participants could not identify the real settings and that 94 percent wanted to change something once guided to it, yet most controls produced only minor satisfaction gains, driven primarily by expectation bias rather than actual system improvements (a minimal sketch separating placebo from genuine effects appears after the concluding section below).

## Contestability and Future Directions

The research group has organized workshops on contestability, enabling users to challenge algorithmic decisions, and is developing participatory design methods with marginalized communities affected by content moderation policies.

## Toward Just Sociotechnical Systems

Karahalios concludes by stressing the need for continuous, “boring” monitoring of digital infrastructures and early collaboration with legal experts. Combining algorithm audits with usable control mechanisms and contestability promises a form of autonomy aligned with user values and well-being. The Center for Just Infrastructures and the Community Data Clinic exemplify community-driven audit efforts, empowering local stakeholders to evaluate and shape the technologies that govern daily life.
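To make the placebo finding concrete, here is a minimal sketch of how such an effect can be separated from a control’s genuine contribution: compare satisfaction gains between an experimental arm whose control actually changes the feed and an arm whose control is inert. All ratings are invented placeholders, not data from the studies above.

```python
import statistics

# Hypothetical satisfaction ratings (1-7 Likert) before and after using a
# feed-control setting, for two arms: one whose control really changes the
# feed and one whose control is a non-functional placebo.
# (Invented numbers -- not data from the studies described in this talk.)
functional = {"before": [3, 4, 3, 5, 4, 3, 4, 4], "after": [5, 5, 4, 6, 5, 4, 5, 5]}
placebo = {"before": [4, 3, 4, 4, 3, 5, 4, 3], "after": [5, 4, 5, 5, 4, 5, 5, 4]}

def mean_gain(arm):
    """Average per-participant change in satisfaction within one arm."""
    return statistics.mean(a - b for a, b in zip(arm["after"], arm["before"]))

gain_functional = mean_gain(functional)
gain_placebo = mean_gain(placebo)

print(f"functional-control gain: {gain_functional:.2f}")
print(f"placebo-control gain:    {gain_placebo:.2f}")
# A placebo effect appears as a sizable gain in the inert arm; the
# between-arm difference, not the raw gain, estimates what the control
# actually contributes.
print(f"estimated genuine effect: {gain_functional - gain_placebo:.2f}")
```

This mirrors standard A/B practice: the inert arm’s gain estimates pure expectation effects, so only the between-arm difference should be read as evidence that a control setting delivers real agency.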