A Pledge of Ethics for I School Graduates

By Robyn Perry, I School MIMS ’15

When you hear about Volkswagen engineers cheating emissions tests, or face recognition software that can’t “see” Black people, you start to wonder who is in charge here. Or more to the point, who is to blame?

Well, I just graduated from UC Berkeley’s School of Information Master of Information Management and Systems program (MIMS for short). My colleagues and I are the kind of people who will be making decisions about this stuff in all sorts of industries.

This post is about one way we might hold ourselves accountable to an ethical standard that we agree to by means of a pledge.

As you might imagine, we spend a significant part of our coursework thinking about how people think about technology, how people use technology, and how what we design can do a better job of not destroying the world.

What is a MIMS degree, you ask? At the I School, we go into many different careers, and many of them have names that aren’t yet familiar to the average person in the way that “lawyer” or “biologist” are. Our job titles edge towards the obscure, the new, the bizarre. Take, for example, UX Designer (less cryptically known as User Experience Designer), Product Manager, or Growth Hacker. (Join me in snickering at the last one.)

As commencement speaker at graduation in May, I wanted to crystallize what my fellow graduates and I spent all this time thinking about into an actual commitment. So I delivered this speech and ended it by asking my fellow graduates to take a pledge. This pledge is inspired by similar oaths in related disciplines. You may have heard of the Hippocratic Oath for doctors. Members of the Order of the Engineer wear a ring to remind themselves of an oath they take, known as the Obligation. As I learned in Applied Behavioral Economics, it’s hard to beat wearing a physical object as a commitment device. However, making a pledge together, to each other, and in public, seemed like a good place to start.

Here’s the text of the pledge. You may recognize the model I used as its basis. It’s short and pithy.


On my honor, I will try:

To serve people and society,

to create technology for important human needs,

and to abide by the I School pledge.


I am

responsible for what I design and code.


I will do my best to be

inquisitive and not condescending [1],

ever mindful of privacy and security [2],

and

to combat technological determinism [3],

to remember that artifacts have politics [4],

to beware the power of defaults [5].


I will

use and abide by technical standards [6],

design with the tussle in mind [7],

and be accountable for the things I create in the world [8].


Hopefully, the first several lines speak for themselves. For clarity about each of the remaining lines, the following notes ought to help:

  1. We pledged to be inquisitive and not condescending. Anyone who remembers the skits featuring “Nick Burns, your company’s computer guy” on Saturday Night Live can recall the caricature of the condescending tech geek. The School of Information student body is diverse, but we all have a level of technical expertise that may endow us with — deserved or undeserved — cultural clout in our jobs and even in our lives. Beyond the popular stereotype of the Tech Geek as a condescending jerk we have to put up with in exchange for tech support, anyone with technical expertise has some degree of power in choosing how they use language and translate concepts for their audience. Here, I intended for us to pledge to make curiosity our first approach to a problem, rather than leading by establishing how much we know.
  2. Being mindful of privacy and security is part of our coursework in the MIMS program. Two of our core courses, Information Law and Policy and Distributed Computing Applications and Infrastructure, acquaint us with the complexities of ensuring privacy and security when designing and developing technical systems. These are incredibly difficult challenges, and in some cases, students enroll in the School of Information specifically to learn to effect policy change in these areas.
  3. Technological determinism is fancy terminology for the belief that technology follows an inevitable course, that every new development is “progress”, and that the course technology takes is out of our hands. We pledged to remember that technology doesn’t have a mind of its own — we have the power to make choices about it. In the words of Tarleton Gillespie in The Relevance of Algorithms: “A sociological analysis…must unpack the warm human and institutional choices that lie behind these cold mechanisms.” Many sophisticated thinkers have written brilliantly about technological determinism. See Winner, Heilbroner, Cockburn, Gillespie, and others for more.
  4. “Artifacts have politics” is a classic line, swiped directly from the title of Langdon Winner’s piece. We learn at the I School that the choice to introduce particular technologies may be politically or economically motivated in ways that disenfranchise certain groups. Winner describes the introduction of the tomato harvester that wiped out small California tomato farms, and the pneumatic molding machine at Chicago’s McCormick factory that dealt a blow to the skilled laborers who were organizing the union. The choice of technological systems is a product of contextual cultural, social, and political pressures, combined with technological developments. The same can be said of every other piece of technology you might encounter. Winner says it better.
  5. A default is any predetermined setting on a device or technical system that may be changed. We learn about the incredible power of defaults in Applied Behavioral Economics — most people never change the out-of-the-box settings that technologies ship with! So whatever settings the designers and developers choose as defaults will shape the experience of most people who use their technology; the sketch after these notes makes this concrete. FTC settlements have turned on defaults (as in FTC v. Frostwire), as we learn in Information Law and Policy.
  6. We pledged to design in accordance with technical standards — seriously, this is not the time to nod off. Your life would be a lot more of a sweaty struggle without widely used technical standards. They are the reason you can charge an appliance in any socket trusting that it won’t explode, or type coherently on any US computer, or make sense of information on the Internet. Standards help you and your date arrive at the same place at a pre-established time. Information Organization and Retrieval and Technology and Delegation do their part to drum this into our little skulls.
  7. Perhaps the most jargony part of the pledge, the “tussle” is a frequent buzzword in conversations around South Hall. Nothing beats reading the paper it comes from, “Tussle in Cyberspace,” by Clark, Wroclawski, Sollins, and Braden. But in essence, the tussle is the “ongoing contention between parties with conflicting interests.” This plays out in the design and development of technology and information systems. There is a tussle wherever a design process must negotiate competing goals. Not only must we understand the competition, but we should design to preserve the tension, rather than to resolve it in favor of one party or another.
  8. And indeed, to seal the deal, it was fitting that we remembered that we are responsible for the things we create in the world. Technology and information systems are not neutral and never have been. The person who says it best is Mike Monteiro in his talk “How Designers Destroyed the World.” Sometimes, the more invisible a system or technology is, the more influence it can have in our lives. We pledged to remember that whatever we create may have unintended consequences, and we need to tread carefully and be prepared to take responsibility for our work, just as healthcare providers, engineers, and other professionals do.
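
To make the power of defaults concrete, here’s a minimal sketch in Python. It isn’t from any course or real product; the setting names and the five-percent figure are made up for illustration, but the arithmetic shows why the designer’s choice dominates:

```python
# Illustrative sketch: the designer's defaults become most users' reality.
from dataclasses import dataclass

@dataclass
class SharingSettings:
    # Whoever writes these default values decides the privacy posture of
    # everyone who never opens the settings screen.
    share_location: bool = False
    share_contacts: bool = False
    public_profile: bool = False

def outcomes(users: int, change_rate: float = 0.05) -> dict:
    """Assume only ~5% of users ever touch their settings (a made-up figure)."""
    changed = round(users * change_rate)
    return {"live_with_defaults": users - changed, "chose_for_themselves": changed}

print(outcomes(1_000_000))
# {'live_with_defaults': 950000, 'chose_for_themselves': 50000}
# Flip the three defaults above to True and you flip the actual privacy
# posture of ~95% of your users, without any of them weighing in.
```

Whatever the real percentage turns out to be, the point of the sketch holds: a default is a decision, and it’s the designer, not the user, who makes it.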

The pledge is becoming a living document. I’m going to go out on a limb and suggest that my first rendition isn’t perfect. Do you notice something important missing from it? What do you think the value is of pledging to each other? Should we take our commitments further? If so, how?

We welcome your comments and discussion towards a pledge that represents the highest ethics in the information profession.


See Robyn and the I School 2015 class take this pledge.

8 Comments
  • Jonathan

    October 22, 2015 at 3:21 pm

    I think line 2 is really important (“to create technology for important human needs”). Too many technologists and information professionals have spent too much energy making the lives of the rich and comfortable slightly more comfortable. It would be great to see more awareness of the ways information technology could benefit the lives of the less privileged. There is a lot of unrealized opportunity.

    • Robyn Perry

      October 22, 2015 at 6:19 pm

      Exactly. This is one of the main motivators of this pledge, and of my own interest in the field. I want to see more of the creativity and talent that currently goes into making already comfortable people feel more comfortable (or think less) go instead into grappling with more interesting problems. Sarah Jeong has some great suggestions in The Internet of Garbage, as an example of where to start (why not get serious about tackling the online harassment problem that’s starting to get attention?).

  • Jonathan

    October 22, 2015 at 3:35 pm

    The line I’m least comfortable with is “design with the tussle in mind.” First off, it’s super-jargony. But even so, I’m not sure how it leads us to ethical decisions.

    You say, “Not only must we understand the competition, but we should design to preserve the tension, rather than to resolve it in favor of one party or another.” But is that always the ethical choice?

    For instance: if one group feels gaming culture is misogynist, and another group thinks the first group needs to be assaulted and/or raped in order to learn respect, would you say “we should design to preserve the tension, rather than to resolve it in favor of one party or another”? And if not, what is the ethical decision? And what guidelines help you make that decision in the future?

    • Robyn Perry

      October 22, 2015 at 6:36 pm

      Yes, I appreciate this comment. When I wrote it, it was definitely aimed at a very tiny audience of my fellow graduates (and one of my favorite professors) and I agree that it comes off as jargony.

      I would love some other voices in this discussion on this one, but this is about ensuring that we don’t use technology to foreclose on healthy debates between parties that both have important goals. Some digital rights management technologies, for example, foreclose on possibilities that were previously available, like making fair use of copyrighted works. (Another example: it’s not possible to share an e-book that you bought with a friend in the way that you can share a paper book with whomever you want.)

      When technical systems are designed rigidly and in favor of an industry over consumers, for example, that is a departure from how things worked before. So it’s about recognizing the regulatory and legal frameworks that preceded the internet, so that we can strive to maintain some balance between big, difficult goals: privacy and security, for example, or the rights of artists and the rights of those who enjoy and maybe make mashups of their work.

      Any thoughts for a better way to communicate this, that’s less opaque?

  • Dan Turner

    October 26, 2015 at 7:33 pm

    Very cool pledge and I admire the drive and thought behind it (and the recursive nature of pledging to abide by the pledge one just pledged).

    Maybe Morozov would also be a good citation against determinism? And I’d like to see something along the lines of an awareness that everything we create acts within and affects systems — my motivation for this is to counter the technolibertarianism that is so problematic (“if I rent out a public parking space, that’s tiny,” while ignoring the systemic effects of building an app so that everyone can do that), etc.

    • Robyn Perry

      October 26, 2015 at 7:46 pm

      Good suggestions. Morozov has tended to be somewhat divisive, though I have agreed with some of his points. Do you have a particular Morozov piece you’re thinking of? Most of my references here are academic, which is perhaps problematic in itself.

      I like the second suggestion. Thoughts about how we would word that? Something shorter and more poetic than “I will remember that creating technical systems to enable individual actions turns small influences into large ones.”

      • Dan Turner

        October 29, 2015 at 11:37 pm

        Of Morozov, I’ve only read “To Save Everything, Click Here” — and yes, his rhetoric is sarcastic and intentionally provocative, so I’d have to find some of his articles and see if any encapsulate his argument. Basically, I’d like to see an awareness of and commitment to avoiding solutionism, in part because a) “solutions” are incredibly situational and b) see Kolko’s “Wicked Problems”.

        Addendum: I literally just opened Kentaro Toyama’s “Geek Heresy”, and the first graf of the Introduction mentions a Megan Smith speech in spring 2011… at SOUTH HALL. Toyama is correct in identifying that “just add tech” isn’t a panacea for many problems, though it’s the easiest thing to say and walk away from, consequence-free; I think Toyama would be a good person to try running this code/pledge by and getting his feedback.

      • Dan Turner

        October 30, 2015 at 9:04 pm

        And maybe “I will not assume what I want is a solution, but will observe to discover real problems first.”

