Projects 2020

CTSP Projects

A Restorative Justice Approach to Online Moderation

Fellows: Shagun Jhaver, Sijia Xiao

Our project explores how restorative justice can inform online moderation practices. Online harassment and abuse happen on social media platforms every day. Platforms generally address these harms through content moderation, in which human and AI moderators remove offending content and sometimes block offenders from the community. However, current practice neither addresses victims’ needs for support and healing nor gives offenders a chance to understand and repair the harm. Restorative justice is a philosophy and practice of justice that offers an alternative way to design online moderation: it centers the healing of victims and emphasizes offenders’ accountability and responsibility to repair the harm. Through an interview study with victims, offenders, and community members, we want to understand how people conceptualize online harm, what their underlying theories of justice are, and how we might design restorative justice practices for online spaces. Niloufar Salehi is also collaborating on this project.

Conversational Physical Activity Coaches for Diverse Individuals with Low Literacy

Fellows: David Chan, Caroline Figueroa, Shubhra Ganguly, Joanne Jia

Insufficient physical activity is one of the leading risk factors for death worldwide. Smartphone apps that track exercise patterns can be powerful tools for helping people become more active. However, most health apps are available only in English and assume a high level of health literacy (i.e., knowledge about maintaining good physical and mental health). Further, they often do not provide personalized advice or adapt to what users need, so users quickly lose interest in these applications. To bridge this gap, we aim to develop a conversational, text-messaging-based smartphone application tailored to individuals with low health literacy and available in English and Spanish. The application uses Artificial Intelligence (AI) to act as a ‘physical activity coach,’ providing personalized physical activity recommendations and support. This project aims to promote good health, increase digital health equity, and improve personalization through advanced AI methods.
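
The coach’s personalization logic is still being designed; as a purely illustrative sketch, a first version might map recent activity and language preference to a tailored text message, with the hand-written rules later replaced by a learned model. All thresholds and message text below are hypothetical, not the project’s actual design.

```python
# Hypothetical sketch of a rule-based message picker for a bilingual
# text-message coach; a deployed system would replace these hand-written
# rules with a learned, adaptive policy.
MESSAGES = {
    "en": {
        "low":  "A 10-minute walk after lunch is a great start. Want a reminder?",
        "mid":  "Nice work yesterday! Can you add 5 more minutes of walking today?",
        "high": "You hit your goal. Keep the streak going with a short walk today!",
    },
    "es": {
        "low":  "Una caminata de 10 minutos después del almuerzo es un buen comienzo. ¿Quieres un recordatorio?",
        "mid":  "¡Buen trabajo ayer! ¿Puedes caminar 5 minutos más hoy?",
        "high": "Alcanzaste tu meta. ¡Sigue así con una caminata corta hoy!",
    },
}

def coach_message(steps_yesterday: int, language: str = "en") -> str:
    """Return a plain-language nudge tailored to recent activity."""
    if steps_yesterday < 3000:
        tier = "low"
    elif steps_yesterday < 7000:
        tier = "mid"
    else:
        tier = "high"
    return MESSAGES[language][tier]

print(coach_message(2500, "es"))  # prints the low-activity Spanish nudge
```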

Cybersecurity Graphic Materials

Fellows: Franchesca Spektor

This project aims to create a set of graphic materials that investigate the daily cybersecurity practices of demographics sidelined by traditional security practices, including undocumented or disabled individuals, sex workers, and victims of domestic abuse. Through this lens, the materials will track a few technological case studies within a day-in-the-life framing, such as a morning routine with a wifi-enabled toothbrush or a work commute using a Clipper Card. By asking how various identities complicate one’s access to technology and introduce unique cybersecurity risks, the visual materials aim to highlight pervasive security risks. The project will focus on improving community methods and lowering social barriers, rather than considering individual security alone.

Disability Disclosures in Online Communities

Fellows: Jennifer Momoh, Alicia Sidik

Disability, and invisible disability in particular, remains socially stigmatized and can be challenging to disclose in both offline and online settings. On the one hand, social media enables people with disabilities to seek support anonymously, to reach those more likely to understand their condition, and to carefully control their message; it also gives those with visible disabilities far more control over their disclosure. On the other hand, information flows are hard to control online, so disclosure messages can reach unintended recipients long after the original disclosure, and posting online can expose those who choose to disclose to new sources of discrimination. Drawing on social psychology literature, this project aims to develop a more nuanced understanding of the disclosure process and the factors users weigh before making this decision.

Interrogating Visions of Bay Area Passenger Rail Futures: The Mutual Shaping of Technology Companies and Public Transit Vision Plans

Fellows: Richmond Wong

Public passenger rail transit in the San Francisco Bay Area and the broader region is undergoing a moment of investment and expansion—including a BART extension to San Jose, the modernization of Caltrain, and improvements to the Capitol Corridor. While these projects aim to serve the public, private technology companies are involved in them in multiple ways, such as Facebook’s role as co-developer of and investor in the reconstruction of a rail bridge across the Bay, or Google’s planned redevelopment and relocation of its headquarters adjacent to the San Jose train station by 2030. By investigating design artifacts that represent future visions of passenger rail in the Bay Area—such as vision planning documents—this project seeks to study how processes of technology company development and public transit planning intersect and mutually shape each other, and to investigate how “the public” is conceptualized in these visions of the future.

Privacy & Consumer Power

Fellows: Nicole Chi, Ji Su Yoo

When users are informed of data breaches or unethical company behavior, how much of that knowledge translates into active protest or privacy-preserving action, such as an exercise of consumer power (e.g., deactivating or deleting an app temporarily or permanently) or public shaming (e.g., protesting against a company or organization on social media, or amplifying another person’s public disapproval)? Privacy breaches and unaccountable mishandling of personal data have multiplied over the past decade, especially among the health and tech companies that store the most sensitive data from users around the world. How, then, do users communicate and build powerful movements to exercise their consumer power, should they choose to do so? This project seeks to better understand consumer power and effective ways of building movements that call for corporate transparency and accountability.

Regulating Medical Data Sharing

Fellows: Reid Whitaker

Health information collection has become ubiquitous, and medical data increasingly provides the basis for new healthcare technologies. Improved data sharing and aggregation will be required to maximize the benefits of these technological changes. At the same time, data sharing must be sensitive to the social context of healthcare practice and to the privacy concerns of individuals. This project aims to develop policy frameworks and proposals for protecting privacy while still incentivizing the data sharing necessary to turn medical data into improved health outcomes. In particular, the project interrogates the patchwork of laws and regulations that currently governs medical data sharing, with the goal of both examining the possibilities available under existing law and suggesting further legislation and rulemaking. The project places particular focus on how to regulate medical data sharing so as to foster the development of fair and equitable algorithms in healthcare.

Surveillance Technology as Infrastructure: A Case Study of Oakland’s Algorithmic Policing Practices

Fellows: Julia Irwin, Brie McLemore

While the press has discussed the adoption and pervasive use of algorithms in modern-day policing at length, the specific sensing technologies and data processes that make up this technological infrastructure and inform everyday decision-making remain shrouded in mystery. Using the city of Oakland, California as a case study, this project seeks to rectify this knowledge gap. We will produce a website that maps surveillance technologies across geographic locations and tracks changes in the surveillance infrastructure over the past five years, as Oakland shifted from implementing an expansive, privacy-encroaching Domain Awareness Center to recently banning facial recognition. The website will serve as a tool for city residents and raise awareness about how surveillance technology influences police action. Informed by interdisciplinary methods, including qualitative interviews and archival research, this project seeks to provide a comprehensive analysis of an infrastructure that operates largely out of public view.
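
As a purely illustrative sketch of how such a map might be prototyped, the snippet below uses the folium mapping library; the technology entries, coordinates, and deployment years are placeholders, not actual Oakland data.

```python
# Hypothetical prototype of the mapping website's core view: plot
# surveillance installations on an interactive map of Oakland.
import folium

# (technology, latitude, longitude, year deployed): placeholder entries
SURVEILLANCE_SITES = [
    ("Automated license plate reader", 37.8044, -122.2712, 2016),
    ("Acoustic gunshot sensor", 37.7799, -122.2822, 2017),
]

oakland_map = folium.Map(location=[37.8044, -122.2712], zoom_start=13)
for name, lat, lon, year in SURVEILLANCE_SITES:
    folium.Marker(
        [lat, lon],
        popup=f"{name} (deployed {year})",
        tooltip=name,
    ).add_to(oakland_map)

oakland_map.save("oakland_surveillance_map.html")  # static HTML, easy to host
```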

Tackling Misinformation Through Media Literacy

Fellows: Aneesa Chishti, Srividya Ramamoorthy, Nithya Ramgopal, Vivant Sakore

We are concerned about the impact of misinformation on democracy and civic integrity, and about the role that social networks play in disseminating news of questionable credibility. The role of fake news in influencing opinions, civil discourse, and, ultimately, election outcomes is undeniable.

Limiting misinformation on social media platforms is complicated. Enforcement that suppresses virality is antithetical to the engagement goals of most commercial platforms, and there is uncertainty about whether and how platforms should censor information based on their guidelines. This leads us to consider alternative approaches, unconstrained by platform interests. How might we identify the patterns in human information processing that lead people to classify something they read as “credible”? Through this research, we hope to explore the effectiveness of media literacy in curbing the spread of misinformation.

CTSP-AFOG Joint Projects

Algorithmic Fairness in Mediated Markets

Fellows: Andrew Chong, Emma Lurie

Online marketplaces, where firms like Uber and Amazon control the terms of economic interaction, exert an increasing influence on economic life. Algorithms on these platforms are drawing greater scrutiny: how price and quality characteristics are determined for different users, which end outcomes the algorithms optimize for, and, ultimately, how the surplus created by these networks is allocated among buyers, sellers, and the platform. This project undertakes a systematic survey of perceptions of fairness among riders and drivers in ride-sharing marketplaces. We seek to carefully catalogue the notions of fairness held by different groups, examining where they cohere and where they are in tension. We also explore the obligations platform firms might have as custodians of market information and arbiters of market choice and structure, contributing to a developing public debate on what a “just” algorithmic regime for online marketplaces might look like.

An Alternate Lexicon for AI

Fellows: Noura Howell, Noopur Raval

This project joins the “second wave” of AI scholars in examining structural questions about what constitutes the field of social concerns within current AI and Social Impact research. In this project, we will map the ethical and social landscape of current AI research and its limits by conducting a critical and comparative content analysis of how social and ethical concerns have been represented over time at leading AI/ML conferences. Based on our findings, we will also develop a draft syllabus on ‘Global and Critical AI’ and convene a one-day workshop to build vocabulary for such AI thinking and writing. With this project we aim to join the growing community at UC Berkeley and beyond in 1) identifying the dominant techno-imaginaries of AI and Social Impact research, and 2) critically and tactically expanding that field to bring diverse experiential, social, cultural, and political realities beyond Silicon Valley to bear upon AI thinking. Morgan Ames is also collaborating on this project.

Environmental Conservation in the Age of Algorithms: From Data to Decisions

Fellows: Millie Chapman, Caleb Scoville

As human impacts on the rest of nature accelerate, our techniques for observing those impacts are rapidly outstripping our ability to react to them. Artificial Intelligence (AI) techniques are quickly being adopted in the environmental sphere, not only to inform decisions by providing more useful datasets but also to facilitate more robust decisions about complex natural resource and conservation problems. The rise of decision-making algorithms urgently raises the question: whose values are shaping AI decision-making systems in natural resource management? In the shadow of this problem, our project seeks to understand the expansion of privately developed but publicly available environmental data and algorithms through a critical study of algorithmic governance. It aims to analyze how governments and nongovernmental entities deploy techniques of algorithmic conservation to aid collective judgments about our complex and troubled relation to our natural environments. Carl Boettiger is also a collaborator on the project.

State-Firm Coproduction of China’s Social Credit System

Fellows: Shazeda Ahmed

This qualitative dissertation project investigates how the Chinese government and domestic technology companies are collaboratively constructing the country’s social credit system. Through interviews with government officials, tech industry representatives, lawyers, and academics, I argue that China’s government and tech firms rely on and influence one another in their efforts to engineer social trust through incentives of punishment and reward.

CTSP-CLTC Joint Projects

Data for Defenders

Fellows: Sneha Chowdhary, Tiffany Pham, Rachel Warren, Jyen Yiee Wong

The Data for Defenders team is partnering with the local non-profit Secure Justice to build technical tools that support public defenders. Public defenders are overworked and under-resourced, and the sheer amount of data in modern criminal cases has compounded this problem: historical cell site information, GPS location history, and social media data are now commonly used to build a case. While prosecutors may receive internal data resources from federal organizations and insights about those resources from technology providers, public defenders often have neither the data analysis skills nor the external support to use such resources for their clients. Our goal is to work with public defenders to assess the most critical gaps in their knowledge of data and emerging technologies. We will then pilot one technical solution that an attorney with no technical background can use to challenge data-based evidence or surface exculpatory evidence, as sketched below.
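
One concrete example of the kind of check such a tool could make accessible: historical cell-site records place a phone somewhere within a tower’s coverage area, not at an exact point. The sketch below, with hypothetical data fields and an assumed coverage radius, tests whether a record is even consistent with an alleged location.

```python
# Hypothetical sketch: does a cell-site record place a phone near an alleged
# location? Uses the standard haversine great-circle distance formula.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def consistent_with_tower(tower, alleged, coverage_radius_km=3.0):
    """True if the alleged location falls inside the tower's assumed coverage
    radius, meaning the record cannot exclude it but cannot pinpoint it either."""
    return haversine_km(*tower, *alleged) <= coverage_radius_km

# Placeholder coordinates: a tower in downtown Oakland vs. an alleged
# location roughly 5 km away, outside the assumed coverage radius.
print(consistent_with_tower((37.8044, -122.2712), (37.7599, -122.2461)))  # False
```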

Developing a Common Vocabulary Around Privacy and Security Concepts with Elderly Users

Fellows: Julia Bernd, Samy Cherfaoui, Alisa Frik

The professional jargon currently used to describe online privacy and security matters (including policies, regulations, user settings, and support documentation) was largely developed by technologists and lawyers. People with limited technological literacy and experience, and older adults in particular, often find this jargon hard to comprehend. In this project, we will estimate the gap in comprehension of and familiarity with privacy and security vocabulary among older adults, and compare it with that of younger populations. We will then develop guidance on how to bridge that gap and improve communication about privacy and security between the technologists and lawyers who design policies and interfaces and different socio-demographic groups of users. By supporting seniors’ learning of common terminology, we can also engage them in conversations about their concerns and prepare them to deal with tech support agents, chatbots, and systems’ privacy policies and settings.

Digital Tools for Decentralized Networks

Fellows: Nicole Chi, Ji Su Yoo

The world is facing multiple global crises – including climate change, forced migration, the erosion of liberal democracy, and rising inequality – and the only way out of these crises is by working together, at scale. Meanwhile, for the first time in human history, we have the technology to link national and global trends and to learn from each other in real time and at global scale.

In this project, alongside our partner organization Build Belonging, we will explore how to leverage technology to connect the disparate networks that are working to solve these global challenges. We will study technologies applied in a broad range of use cases that might help create a decentralized, self-organizing network, and explore the relationship between online and offline organizing. Examples include technologies used in participatory democracy, HR tech, open source models, and more.

Privacy-Preserving Machine Learning for Autonomous Vehicles

Fellows: Mugdha Bhusari, Amrit Daswaney, Akshay Punhani, Alicia Tsai

A self-driving car operates on the road equipped with 360-degree cameras. Ordinarily, when people are recorded in private spaces, a disclosure is required. While such a car has the permission of its owner, it does not seek consent from the pedestrians it records or from the occupants of other vehicles whose license plates it captures. These pedestrians have not opted in to surveillance by a private firm and have no means of opting out. The resulting data can be exploited by the company collecting it or by other actors who gain access to it.

This project investigates techniques that enable secure, privacy-preserving machine learning for autonomous vehicles, and discusses the trade-offs among privacy, security, and data collection policy: protecting individuals while keeping the data useful to the public and to the autonomous vehicle industry.
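
As one illustrative example of the family of techniques under consideration, data could be anonymized on the vehicle before it is ever uploaded. The sketch below blurs detected license plates in a camera frame using OpenCV’s bundled plate detector; the detector choice, blur strength, and placement in the pipeline are assumptions for illustration, not this project’s design.

```python
# Hypothetical sketch of on-device anonymization: detect license plates in a
# camera frame and blur them before the frame leaves the vehicle.
import cv2

plate_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml"
)

def anonymize_frame(frame):
    """Blur every detected license-plate region in a BGR camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    plates = plate_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in plates:
        roi = frame[y:y + h, x:x + w]
        # A heavy Gaussian blur makes the plate unreadable while keeping the
        # vehicle's overall shape intact for downstream perception models.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

frame = cv2.imread("street_scene.jpg")  # hypothetical test image
if frame is not None:
    cv2.imwrite("street_scene_anonymized.jpg", anonymize_frame(frame))
```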

Understanding Online Reputation Damage and Repair for Student Activists

Fellows: Emma Lurie

By 2019, almost everyone had typed their own name into Google. People search their names because the search results have online and offline implications; indeed, the consequences of negative search results extend to physical, financial, and emotional security offline. Little is understood about how vulnerable communities understand and respond to online reputational harm and repair, yet there is substantial evidence of the damage caused by political doxing, one form of reputational harm. This study will conduct qualitative interviews with student activists whose Google search results surface damaging articles written by Canary Mission. The research question guiding these semi-structured interviews is: how do student activists conceptualize reputational harm and the opportunities for reputational repair on “their” Google search pages?

Banner Photo Credit: “UC Berkeley South Hall” by I School IMSA is licensed under CC BY 2.0