Engineering Ethics

Data Science and Expanding Our Sources of Ethical Inspiration

By Luke Stark & Anna Lauren Hoffmann, CTSP Fellows | Permalink

[Image: Steam rising from nuclear reactors. Photo by Mark Goebel]

Recent public controversies regarding the collection, analysis, and publication of data sets about sensitive topics—from identity and sexuality to suicide and emotion—have helped push conversations around data ethics to the fore. In popular articles and emerging scholarly work (some of it supported by our backers at CTSP), scholars, practitioners, and policymakers have begun to flesh out the longstanding conceptual and practical tensions expressed not only in the notion of “data ethics,” but in related categories such as “data science,” “big data,” and even plain old “data” itself.

Against this uncertain and controversial backdrop, what kind of ethical commitments might bind those who work with data—for example, researchers, analysts, and (of course) data scientists? One impulse might be to claim that the unprecedented size, scope, and attendant possibilities of so-called “big data” sets require a wholly new kind of ethics, one built with digital data’s particular affordances in mind from the start. Another impulse might be to suggest that even though “big data” seems new or even revolutionary, its ethical problems are not—after all, we’ve been dealing with issues like digital privacy for quite some time.

READ MORE

A User-Centered Perspective on Algorithmic Personalization

By Rena Coen, Emily Paul, Pavel Vanegas, and G.S. Hans, CTSP Fellows | Permalink

We conducted a survey using experimentally controlled vignettes to measure user attitudes about online personalization and develop an understanding of the factors that contribute to personalization being seen as unfair or discriminatory. Come learn more about these findings and hear from the Center for Democracy & Technology on the policy implications of this work at our event tonight!
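
For readers curious about the mechanics, here is a minimal sketch of how an experimentally controlled vignette design can be assembled and randomly assigned in Python. The factors and levels shown are hypothetical stand-ins, not the study's actual instrument.

```python
# A minimal sketch of a controlled vignette design. The factors and
# levels below are hypothetical illustrations, not the real survey.
import itertools
import random

# Each vignette varies a few factors; every combination is a condition.
FACTORS = {
    "data_used": ["zip code", "race", "browsing history"],
    "context": ["retail pricing", "search results", "movie ads"],
    "outcome": ["beneficial", "costly"],
}

# Full factorial design: one vignette per combination of levels.
conditions = [
    dict(zip(FACTORS, combo))
    for combo in itertools.product(*FACTORS.values())
]

def assign_vignette(respondent_id: int) -> dict:
    """Randomly assign a respondent to one experimental condition."""
    rng = random.Random(respondent_id)  # seeded, so assignment is reproducible
    return rng.choice(conditions)

print(len(conditions))       # 3 * 3 * 2 = 18 conditions
print(assign_vignette(42))   # one respondent's randomly assigned vignette
```

Holding everything constant except the manipulated factors is what lets differences in fairness ratings be attributed to those factors rather than to incidental wording.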

What is online personalization?

Some of you may be familiar with a recent story in which United Artists presented Facebook users with different movie trailers for the film Straight Outta Compton based on their race, or “ethnic affinity group,” which was determined from users’ activity on the site.

This is just one example of online personalization, where content is tailored to users based on some user attribute. Such personalization can be beneficial to consumers, but it can also have negative and discriminatory effects, as in the targeted trailers for Straight Outta Compton or Staples’ differential retail pricing based on zip code. Of course, not all personalization is discriminatory; there are examples of online personalization that many of us see as useful and have even come to expect, such as location-based results for generic search terms like “coffee” or “movie showtimes.”
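
Mechanically, personalization of this kind can be as simple as a lookup keyed on a user attribute. The sketch below is a hypothetical illustration; the function, the variants table, and the attribute names are all invented for this example.

```python
# A hypothetical sketch of attribute-keyed personalization; none of
# these names come from a real system.

def personalize(query: str, user: dict, variants: dict) -> str:
    """Return the content variant matching a user attribute, if any."""
    key = user.get("city")  # the attribute used to tailor content
    options = variants.get(query, {})
    return options.get(key, options.get("default", "generic results"))

VARIANTS = {
    "coffee": {
        "Berkeley": "cafes near the UC Berkeley campus",
        "default": "popular coffee chains",
    },
}

print(personalize("coffee", {"city": "Berkeley"}, VARIANTS))  # local results
print(personalize("coffee", {"city": "Omaha"}, VARIANTS))     # default results
```

The design point is that nothing in the lookup itself distinguishes benign keys from sensitive ones; keying the same table on race or zip code is exactly what produces the discriminatory cases above.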

READ MORE

Design Wars: The FBI, Apple and hundreds of millions of phones

By Deirdre K. Mulligan and Nick Doty, UC Berkeley, School of Information | Permalink | Also posted to the Berkeley Blog

After forum- and fact-shopping, and charting a course through the closed processes of district courts, the FBI has homed in on the case of the San Bernardino terrorist who killed 14 people, injured 22, and left an encrypted iPhone behind. The agency hopes the highly emotional and political nature of the case will provide a winning formula for establishing a legal precedent to compel electronic device manufacturers to help police by breaking into devices they’ve sold to the public.

The phone’s owner (the San Bernardino County Health Department) has given the government permission to break into the phone; the communications and information at issue belong to a deceased mass murderer; the assistance required, while substantial by Apple’s estimate, is not oppressive; the hack being requested is a software downgrade that enables a brute-force attack on the crypto, an attack on the implementation rather than a direct disabling of encryption altogether; and the act under investigation is heinous.
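
To see why that downgrade matters, consider a toy sketch: once retry limits and escalating delays are stripped away, a four-digit numeric passcode falls to exhaustive search in at most 10,000 attempts. The check_passcode function and the secret code below are hypothetical placeholders, not Apple's actual verification routine.

```python
# A hypothetical sketch, not Apple's code: with retry limits and
# escalating delays removed, a four-digit numeric passcode yields
# to exhaustive search almost instantly.
from typing import Optional

def check_passcode(guess: str) -> bool:
    """Placeholder for the device's real passcode verification."""
    return guess == "7391"  # invented secret, for illustration only

def brute_force() -> Optional[str]:
    """Try every four-digit code in order; at most 10,000 attempts."""
    for n in range(10_000):
        guess = f"{n:04d}"  # "0000" through "9999"
        if check_passcode(guess):
            return guess
    return None

print(brute_force())  # recovers the placeholder code within 10,000 tries
```

The safeguards the FBI wants disabled, the escalating delays and the optional erase-after-ten-failures setting, are precisely what make this loop infeasible on a real device; the underlying encryption is never broken, only worked around.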

But let’s not lose sight of the extraordinary nature of the power the government is asking the court to confer.

READ MORE

Rough cuts on the incredibly interesting implications of Facebook’s Reactions

By Galen Panger, CTSP | Permalink

How do we express ourselves in social media, and how does that make other people feel? These are two questions at the very heart of social media research, including, of course, the ill-fated Facebook emotional contagion experiment. Facebook Reactions are fascinating because they are, even more explicitly than that experiment, an intervention into our emotional lives.

Let me be clear that I support Facebook’s desire to overcome the emotional stuntedness of the Like button (don’t even get me started on the emotional stuntedness of the Poke button). I support the steps the company has taken to expand the Like button’s emotional repertoire, particularly in light of the company’s obvious desire to maintain its original simplicity. But Reactions are a choice about which emotional expressions and reactions to officially reward and sanction on Facebook, and that choice is consequential. It presents the company with the knotty challenge of determining the shape of Facebook’s emotional environment, and it has wide implications for the 1.04 billion of us who visit Facebook each day. Here are a few rough reactions to Facebook Reactions.

READ MORE

The need for interdisciplinary tech policy training

By Nick Doty, CTSP, with Richmond Wong, Anna Lauren Hoffman and Deirdre K. Mulligan | Permalink

Conversations about substantive tech policy issues — privacy-by-design, net neutrality, encryption policy, online consumer protection — frequently evoke questions of education and people. “How can we encourage privacy earlier in the design process?” becomes “How can we train and hire engineers and lawyers who understand both technical and legal aspects of privacy?” Or: “What can the Federal Trade Commission do to protect consumers from online fraud scams?” becomes “Who could we hire into an FTC bureau of technologists?” Over the past month, members of the I School community have participated in several events where these tech policy conversations have occurred:

  • Catalyzing Privacy by Design: fourth in a series of NSF-sponsored workshops, organized with the Computing Community Consortium, to develop a privacy by design research agenda
  • Workshop on Problems in the Public Interest: hosted by the Technology Science Research Collaboration Network at Harvard to generate new research questions
  • PrivacyCon: an event to bridge academic research and policymaking at the Federal Trade Commission

READ MORE

Ethical Pledges for Individuals and Collectives

By Andrew McConachie | Permalink

[Ed. note: As a follow-up to Robyn’s explanation of the proposed I School Pledge, Andrew McConachie provides some challenges regarding the effectiveness of pledges, and individual vs. collective action for ethical behavior in software development. We’re pleased to see this conversation continue and welcome further input; it will also be a topic of discussion in this week’s Catalyzing Privacy-by-Design workshop in Washington, DC. —npd]

I am conflicted about how effective individualized ethics are at creating ethical outcomes, and about the extent to which individuals can be held accountable for the actions of a group. The I School Pledge is for individuals to take; it asks individuals to hold themselves accountable. However, most technology and software is produced as part of a team effort, usually in large organizations. Or, in the case of most open source software, it is produced through a collaborative effort, with contributors acting both as individuals and as members of contributing organizations. The structures of these organizations and communities play a fundamental role in what kind of software gets produced (cf. Conway’s Law, which focuses on internal communication structures) and in what kinds of ethical outcomes result.

READ MORE

Should Facebook watch out for our well-being?

By Galen Panger, CTSP | Permalink

Last year, when Facebook published the results of its emotional contagion experiment, it triggered a firestorm of criticism in the press and launched a minor cottage industry within academia around the ethical gray areas of Big Data research. What should count as ‘informed consent’ in massive experiments like Facebook’s? What are the obligations of Internet services to seek informed consent when experimentally intervening in the lives, emotions and behaviors of their users? Is there only an obligation when they want to publish in academic journals? These are not easy questions.

Perhaps more importantly, what are the obligations of these Internet services to users and their well-being more broadly?
[Image: “Facebook’s Infection” by ksayer1]

READ MORE

A Pledge of Ethics for I School Graduates

By Robyn Perry, I School MIMS ’15 | Permalink

When you hear about Volkswagen engineers cheating emissions tests, or face recognition software that can’t “see” Black people, you start to wonder who is in charge here. Or, more to the point, who is to blame?

Well, I just graduated from UC Berkeley’s School of Information Master of Information Management and Systems program (MIMS for short). My colleagues and I are the kind of people who will be making decisions about this stuff in all sorts of industries.

This post is about one way we might hold ourselves accountable to an ethical standard that we agree to by means of a pledge.

As you might imagine, we spend a significant part of our coursework thinking about how people think about technology, how people use technology, and how what we design can do a better job of not destroying the world.

READ MORE