By Zoe Kahn, CTSP Fellow 2019
Images of people soaring through space wearing astronaut helmets adorn the walls of a narrow hallway leading from the street-level entrance to the hotel lobby elevator bank. A life-sized model of an astronaut stands off to the side of the hotel registration desk. It’s safe to say… the conference hotel has an outer space theme.
The words “What is fairness?” repeat on a large circular digital display at the center of the hotel lobby. I can’t decide if the display belongs in a newsroom, a stock trading floor, or a train station. Perhaps art? Perhaps commentary?
The word in question changes each day. What is Fairness? What is Accountability? What is Transparency? What questions are we missing?
It is here, in Barcelona, Spain, that roughly 600 people gather for the ACM Conference on Fairness, Accountability, and Transparency (FAccT) to discuss efforts to design, develop, deploy, and evaluate machine learning models in ways that are more fair, accountable, and transparent.
Despite the conference’s roots in computer science, there were several papers, tutorials, and CRAFT sessions that challenged the community to think beyond technical fixes to broader socio-technical contexts rife with power and complex social dynamics.
Let me begin with a discussion of power. The ACLU of Washington and the Media Mobilizing Project put on a terrific CRAFT session titled, Creating Community-Based Tech Policy: Case Studies, Lessons Learned, and What Technologists and Communities Can Do Together. The two-part session first documented the ACLU of Washington’s efforts, in partnership with academics primarily from the University of Washington (Katell et al., 2020), to include marginalized and vulnerable communities in the development of tech policy surrounding the acquisition and use of surveillance technologies in the City of Seattle. The Media Mobilizing Project then presented their work investigating the use of risk assessment tools in criminal justice systems across the country, and, in particular, their efforts with community groups to combat the use of criminal justice risk assessment tools in Philadelphia. Taken together, the session offered models for meaningfully engaging marginalized and vulnerable populations in algorithmic accountability, and laid the foundation for workshop participants to reflect on their own power.