A User-Centered Perspective on Algorithmic Personalization

By Rena Coen, Emily Paul, Pavel Vanegas, and G.S. Hans, CTSP Fellows

We conducted a survey using experimentally controlled vignettes to measure user attitudes about online personalization and to understand the factors that lead personalization to be seen as unfair or discriminatory. Come learn more about these findings and hear from the Center for Democracy & Technology on the policy implications of this work at our event tonight!

What is online personalization?

Some of you may be familiar with a recent story in which Universal Pictures presented Facebook users with different movie trailers for the film Straight Outta Compton based on their race, or “ethnic affinity group,” which Facebook inferred from users’ activity on the site.

This is just one example of online personalization, where content is tailored to users based on some user attribute. Such personalization can be beneficial to consumers, but it can also have negative and discriminatory effects, as in the targeted trailers for Straight Outta Compton or Staples’ differential retail pricing based on zip code. Of course, not all personalization is discriminatory; there are forms of online personalization that many of us see as useful and have even come to expect, such as location-based results for generic search terms like “coffee” or “movie showtimes.”

The role of algorithms

A big part of this story is the role of algorithms in personalization. The data used to drive personalization may itself be inferred, as in the Straight Outta Compton example, where Facebook algorithmically inferred people’s ethnic affinity group from the things they liked and clicked on. In that case the decision about whom to target was made deliberately: Facebook offers companies the opportunity to target their ads to ethnic affinity groups, and Universal Pictures chose to show different movie trailers to people based on their race. In other cases, there may be no explicit logic behind the targeting. Companies can use algorithms to identify patterns in customer data and target content on the assumption that people who like one thing will like another.
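
To make the inference step concrete, here is a minimal sketch in Python of like-based inference and ad selection. The page names, group labels, and ad variants are invented for illustration; this is a toy scoring approach, not a description of how Facebook’s ethnic affinity targeting actually works.

    # Toy illustration only: infer a coarse "affinity" label from page likes,
    # then pick which ad variant to show. All names here are invented.
    from collections import Counter

    # Hypothetical mapping from liked pages to an inferred affinity label
    PAGE_SIGNALS = {
        "classic_hiphop": "affinity_a",
        "west_coast_rap": "affinity_a",
        "arthouse_cinema": "affinity_b",
        "indie_film_fans": "affinity_b",
    }

    # Hypothetical ad variants an advertiser might map to each inferred label
    AD_VARIANTS = {
        "affinity_a": "trailer_emphasizing_music_history",
        "affinity_b": "trailer_emphasizing_biopic_drama",
    }

    def infer_affinity(liked_pages):
        """Return the most common inferred label among a user's likes, if any."""
        signals = [PAGE_SIGNALS[p] for p in liked_pages if p in PAGE_SIGNALS]
        return Counter(signals).most_common(1)[0][0] if signals else None

    def choose_ad(liked_pages):
        """Pick the ad variant for the inferred label, falling back to a generic ad."""
        return AD_VARIANTS.get(infer_affinity(liked_pages), "generic_trailer")

    print(choose_ad(["classic_hiphop", "west_coast_rap"]))  # trailer_emphasizing_music_history
    print(choose_ad(["cooking_tips"]))                      # generic_trailer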

When does personalization discriminate?

We have a range of responses to personalization practices; we may see some as useful while others violate our expectations. But how can we think about these responses more systematically, in a way that helps us articulate what those expectations are?

This is something that policymakers and privacy scholars have been examining and debating. From the policy side, there is a need for practices and procedures that reflect and protect users’ expectations, and personalization practices, especially the use of inference, create challenges for existing policy frameworks. Several reports from the Federal Trade Commission and the White House look at how frameworks like the Fair Information Practice Principles (FIPPs) can address the use of algorithms to infer user data and target content. Proposals from authors including Kate Crawford, Jason Schultz, Danielle Citron, and Frank Pasquale look to expand due process so that users can correct data that has been inaccurately inferred about them.

Theoretical work from privacy scholars attempts to understand users’ expectations around inference and personalization and how those expectations might be protected in the face of new technology. Many of these scholars have emphasized the importance of context. Helen Nissenbaum and Solon Barocas discuss Nissenbaum’s conception of privacy as contextual integrity, which asks whether an inference conflicts with the information flow norms and expectations of a given context. So, in the Straight Outta Compton example, does Facebook inferring people’s ethnic affinity from their activity on the site violate norms and expectations of what users think Facebook is doing with their data?

This policy and privacy work highlights some of the important factors that seem to affect user attitudes about personalization: there is the use of inferred data and all of the privacy concerns it raises, there are questions around accuracy when inference is used, and there is the notion of contextual integrity.

One way to find more clarity around these factors and how they affect user attitudes is to ask users directly. There is empirical work looking at how users feel about targeted content. In particular, several studies on user attitudes about targeted advertising, including work by Chris Hoofnagle, Joseph Turow, Jen King, and others, found that most users (66%) did not want targeted advertising at all, and that once users were informed of the tracking mechanisms that support targeted ads, even more (over 70%) did not want them. Researchers at Northeastern University have examined where and how often personalization takes place online in search results and pricing. In addition, a recent Pew study looked at when people are willing to share personal information in return for something of value.

Experimental approach to understanding user attitudes

Given the current prevalence of personalization online and the fact that some of it does seem to be useful to people, we chose to take personalization as a given and dig into the particular factors that push it from something that is beneficial or acceptable to something that is unfair.

Using an experimental vignette design, we measure users’ perceptions of fairness in response to content that is personalized to them. We situate these vignettes in three domains: targeted advertising, filtered search results, and differential retail pricing, using a range of data types including race, gender, city or town of residence, and household income level.
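
As a rough sketch of what such a design can look like in code, the snippet below crosses each domain with each data type and randomly assigns a participant to one condition. The condition labels, vignette wording, and assignment procedure are assumptions for illustration, not our actual survey materials.

    # Minimal sketch of a factorial vignette design: cross each domain with each
    # data type, then randomly assign a participant to one condition.
    import itertools
    import random

    DOMAINS = ["targeted advertising", "filtered search results", "differential retail pricing"]
    DATA_TYPES = ["race", "gender", "city or town of residence", "household income level"]

    # Full crossing: 3 domains x 4 data types = 12 conditions
    CONDITIONS = list(itertools.product(DOMAINS, DATA_TYPES))

    def vignette(domain, data_type):
        # Placeholder template; real vignettes would be carefully worded scenarios.
        return (f"A company engages in {domain}, tailoring what each person "
                f"sees based on their {data_type}.")

    def assign_participant():
        """Randomly assign a participant to one domain x data-type condition."""
        domain, data_type = random.choice(CONDITIONS)
        return {"domain": domain, "data_type": data_type,
                "vignette": vignette(domain, data_type)}

    print(len(CONDITIONS))        # 12
    print(assign_participant())   # e.g. a pricing-by-income vignette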

We find that users’ perceptions of fairness are highly context-dependent. Looking at fairness ratings across the contextual factors of domain and data type, we observe that both the sensitivity of the data used to personalize and its relevance to the domain of the personalization shape which forms of personalization might violate user norms and expectations.
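
For readers curious how such an analysis might be organized, here is a small illustrative sketch that summarizes fairness ratings by domain and data type. The ratings below are made-up placeholders, not our data or results.

    # Illustrative only: summarize fairness ratings by domain x data type.
    import pandas as pd

    ratings = pd.DataFrame([
        {"domain": "search",      "data_type": "city",   "fairness": 5},
        {"domain": "search",      "data_type": "race",   "fairness": 2},
        {"domain": "pricing",     "data_type": "income", "fairness": 3},
        {"domain": "pricing",     "data_type": "race",   "fairness": 1},
        {"domain": "advertising", "data_type": "gender", "fairness": 3},
        # ... one row per participant response
    ])

    # Mean fairness rating for each domain x data-type cell of the design
    summary = ratings.groupby(["domain", "data_type"])["fairness"].mean().unstack()
    print(summary)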

Join us tonight from 6-9 pm with the Startup Policy Lab to hear Rena Coen, Emily Paul, and Pavel Vanegas present the research findings, followed by a conversation about the policy implications of the findings with Alethea Lange, policy analyst at the Center for Democracy & Technology, and Jen King, privacy expert and Ph.D. candidate at the UC Berkeley School of Information, moderated by Gautam Hans.

Event details and RSVP

This project is funded by the Center for Technology, Society & Policy and the Center for Long-Term Cybersecurity.
