Citizen Technologist

The CTSP Blog

Developing Strategies to Counter Online Abuse

By Nick Doty, CTSP | Permalink

We are excited to host a panel of experts this Wednesday to talk about strategies for making the Internet more gender-inclusive and countering online harassment and abuse.

Toward a Gender-Inclusive Internet: Strategies to Counter Harassment, Revenge Porn, Threats, and Online Abuse
Wednesday, April 27; 4:10-5:30 pm
202 South Hall, Berkeley, CA
Open to the public; Livestream available

The panelists are experts and practitioners in law, journalism, and technology, all with an interest in the problem of online harassment. More importantly, they’re all involved in ongoing, concrete efforts to push back against it (see, for example, Activate Your Squad and Block Together). While raising awareness about online harassment and understanding its causes and implications remains important, we have reached the point where we can work on direct countermeasures.

READ MORE

Please Can We Not Try to Rationalize Emoji

By Galen Panger, CTSP Director | Permalink

Emoji are open to interpretation, and that’s a good thing. Credit: Samuel Barnes

This week a study appeared on the scene suggesting an earth-shattering, truly groundbreaking notion: Emoji “may be open to interpretation.”

And then the headlines. “We Really Don’t Know What We’re Saying When We Use Emoji,” a normally level-headed Quartz proclaimed. “That Emoji Does Not Mean What You Think It Means,” Gizmodo declared. “If Emoji Are the Future of Communication Then We’re Screwed,” New York Magazine cried, obviously not trying to get anyone to click on its headline.

READ MORE

Start Research Project. Fix. Then Actually Start.

By Robyn Perry, CTSP Fellow | Permalink

If you were living in a new country, would you know how to enroll your child in school, get access to health insurance, or find affordable legal assistance? And if you didn’t, how would you deal?

As Maggie, Kristen, and I are starting to interview immigrant women living in the US and the organizations that provide support to them, we are trying to understand how they deal – particularly, how they seek social support when they face stress.

This post gives a bit of an orientation to our project and helps us document our research process.

We’ve developed two semi-structured questionnaires to guide our interviews: one for immigrants and one for staff at service providers (organizations that provide immigrants with legal aid, job training, and resources for navigating life in the US, and that otherwise support their entry and integration). We are seeking to learn about women immigrants who have been in the US for one to seven years. All interviews are conducted in English by one of the team members. Because we will each be conducting one-on-one interviews separately, we are striving for a high degree of consistency in our interview process.

READ MORE

Moderating Harassment in Twitter with Blockbots

By Stuart Geiger, ethnographer and post-doctoral scholar at the Berkeley Institute for Data Science | Permalink

I’ve been working on a research project about counter-harassment projects on Twitter, focusing on blockbots (bot-based collective blocklists). Blockbots are a different way of responding to online harassment: a more decentralized alternative to the standard practice of moderation, in which a site’s staff goes through its own process to definitively decide which accounts should be suspended from the entire site. I’m excited to announce that my first paper on this topic will soon be published in Information, Communication & Society (the PDF on my website and the publisher’s version).

This post is a summary of that article and some thoughts about future work in this area. The paper is based on my empirical research, but it takes a more theoretical and conceptual approach given how novel these projects are. I give an overview of what blockbots are, the context in which they have emerged, and the issues they raise about how social networking sites are to be governed and moderated with computational tools. There is room for much future research here, and I hope to see work from a variety of disciplines and methods.

What are blockbots?

Blockbots are automated software agents developed and used by independent, volunteer Twitter users: social-computational tools that help people moderate their own experiences on the site.

The blocktogether.org interface, which lets people subscribe to other people’s blocklists, publish their own blocklists, and automatically block certain kinds of accounts.
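
To make the subscribe-and-block pattern concrete, here is a minimal sketch in Python. This is not Block Together’s actual code: the blocklist URL and the apply_block helper are hypothetical stand-ins, and a real blockbot would authenticate to the Twitter API and call its block-creation endpoint on the subscriber’s behalf. The point is the fetch–diff–block loop.

```python
# A minimal sketch of the blockbot pattern, under the assumptions stated
# above; not Block Together's implementation.
import json
from urllib.request import urlopen

BLOCKLIST_URL = "https://example.org/shared-blocklist.json"  # hypothetical feed


def fetch_shared_blocklist(url: str) -> set[str]:
    """Download a community-curated list of account IDs to block."""
    with urlopen(url) as response:
        return set(json.load(response))


def new_blocks(already_blocked: set[str], shared: set[str]) -> set[str]:
    """Accounts on the shared list that the subscriber hasn't blocked yet."""
    return shared - already_blocked


def apply_block(account_id: str) -> None:
    # Placeholder: a real blockbot would call the Twitter API here,
    # authenticated as the subscribing user.
    print(f"blocking account {account_id}")


if __name__ == "__main__":
    # In-memory stand-ins so the sketch runs as-is; a deployment would use
    # fetch_shared_blocklist(BLOCKLIST_URL) and the subscriber's real blocks.
    shared = {"1001", "1002", "1003"}
    already_blocked = {"1002"}
    for account_id in sorted(new_blocks(already_blocked, shared)):
        apply_block(account_id)
```

The diff step is what makes subscription cheap: each run applies only the accounts newly added to the shared list, so one curator’s judgment propagates to many subscribers automatically.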

READ MORE

Privacy for Citizen Drones: Use Cases for Municipal Drone Applications

By Timothy Yim, CTSP Fellow and Director of Data & Privacy at Startup Policy Lab | Permalink

Previous Citizen Drone Articles:

  1. Citizen Drones: delivering burritos and changing public policy
  2. Privacy for Citizen Drones: Privacy Policy-By-Design
  3. Privacy for Citizen Drones: Use Cases for Municipal Drone Applications

Startup Policy Lab is leading a multi-disciplinary initiative to create a model policy and framework for municipal drone use.

A Day in the Park

We previously conceptualized a privacy policy-by-design framework for municipal drone applications—one that begins with gathering broad stakeholder input from academia, industry, civil society organizations, and municipal departments themselves. To demonstrate the benefits of such an approach, we play out a basic scenario.

A city’s Recreation and Parks Department (“Parks Dept.”) wants to use a drone to monitor the state of its public parks for maintenance purposes, such as proactive tree trimming prior to heavy seasonal winds, vegetation pruning around walking paths, and tracking directional or turbidity changes in water flows. For most parks, this would amount to twice-daily flights of approximately 15–30 minutes each. The flight video would then be reviewed, processed, and stored by the Parks Dept.
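
One way a scenario like this could feed a policy-by-design review is as a structured, machine-readable use-case record. The sketch below is illustrative only: the schema and field names are my own, not Startup Policy Lab’s framework, and the retention period is an assumption the post does not specify.

```python
# A hypothetical use-case record for the Parks Dept. scenario. A privacy
# review could check fields like retention_days against policy before
# flights are approved.
from dataclasses import dataclass, field


@dataclass
class DroneUseCase:
    department: str
    purpose: str
    flights_per_day: int
    min_flight_minutes: int
    max_flight_minutes: int
    data_collected: list[str]
    retention_days: int  # assumption: the post doesn't specify retention
    reviewers: list[str] = field(default_factory=list)


parks_monitoring = DroneUseCase(
    department="Recreation and Parks Department",
    purpose="Park maintenance: tree trimming, path pruning, water-flow checks",
    flights_per_day=2,
    min_flight_minutes=15,
    max_flight_minutes=30,
    data_collected=["aerial video"],
    retention_days=30,  # hypothetical figure for illustration
    reviewers=["Parks Dept. maintenance staff"],
)

print(f"{parks_monitoring.department}: {parks_monitoring.flights_per_day} "
      f"flights/day, video retained {parks_monitoring.retention_days} days")
```

Writing the use case down this explicitly is itself part of the policy-by-design idea: purpose, collection, and retention become reviewable commitments rather than afterthoughts.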

READ MORE

Privacy for Citizen Drones: Privacy Policy-By-Design

By Timothy Yim, CTSP Fellow and Director of Data & Privacy at Startup Policy Lab | Permalink

Startup Policy Lab is leading a multi-disciplinary initiative to create a model policy and framework for municipal drone use.

Towards A More Reasoned Approach

Significant policy questions have arisen from the nascent but rapidly increasing adoption of drones in society today. The developing drone ecosystem is a prime example of how law and policy must evolve with and respond to emerging technology, in order for society to thrive while still preserving its normative values.

Privacy has quickly become a vital issue in the debate over acceptable drone use by municipal governments. In some instances, privacy concerns over the increased potential for government surveillance have even led to wholesale bans on municipal drone use.

Let me be clear: this is a misguided approach.

Without a doubt, emerging drone technology is rapidly increasing government’s ability to engage in surveillance, both intentionally and unintentionally, and therefore to intrude on the privacy of its citizenry. It’s also true that applying traditional privacy principles—such as notice, consent, and choice—has proven incredibly challenging in the drone space. For the record, these are legitimate and serious concerns.

Yet even under exceptionally strong constructions of modern privacy rights, including those enhanced protections afforded under state constitutions such as California’s, an indiscriminate municipal drone ban makes little long-term sense. A wholesale ban cuts off municipal modernization and the many potential benefits of municipal drone use—for instance, decreased costs and increased frequency of monitoring for the maintenance of public parks, docks, and bridges.

READ MORE

The Neighbors are Watching: From Offline to Online Community Policing in Oakland, California

By Fan Mai & Rebecca Jablonsky, CTSP Fellows | Permalink

As one of the oldest and most popular community crime prevention programs in the United States, Neighborhood Watch is supposed to promote and facilitate community involvement by bringing citizens together with law enforcement to resolve local crime and policing issues. However, a review of Neighborhood Watch programs finds that nearly half of all properly evaluated programs have been unsuccessful. The fatal shooting of Trayvon Martin by George Zimmerman, who was a neighborhood watch coordinator at the time, brought the conduct of Neighborhood Watch under further scrutiny.

Founded in 2010, Nextdoor is an online social networking site that connects residents of a specific neighborhood. Unlike other social media, Nextdoor maintains a one-to-one mapping of real-world communities to virtual communities, nationwide. Positioning itself as the platform for “virtual neighborhood watch,” Nextdoor not only encourages users to post and share “suspicious activities,” but also invites local police departments to post and to monitor “share with police” posts. Since its establishment, more than 1,000 law enforcement agencies have partnered with the app, including the Oakland Police Department. Although Nextdoor has helped local police solve crimes, it has also been criticized for giving voice to racial biases, especially in Oakland, California.

READ MORE

Design Wars: The FBI, Apple and hundreds of millions of phones

By Deirdre K. Mulligan and Nick Doty, UC Berkeley, School of Information | Permalink | Also posted to the Berkeley Blog

After forum- and fact-shopping and charting a course via the closed processes of district courts, the FBI has homed in on the case of the San Bernardino terrorist who killed 14 people, injured 22, and left an encrypted iPhone behind. The agency hopes the highly emotional and political nature of the case will provide a winning formula for establishing a legal precedent to compel electronic device manufacturers to help police by breaking into devices they’ve sold to the public.

The phone’s owner (the San Bernardino County Health Department) has given the government permission to break into the phone; the communications and information at issue belong to a deceased mass murderer; the assistance required, while substantial by Apple’s estimate, is not oppressive; the hack being requested is a software downgrade that enables a brute-force attack on the passcode, an attack on the implementation rather than a direct disabling of encryption; and the act under investigation is heinous.
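
A back-of-the-envelope calculation shows why that downgrade matters. Apple’s passcode key derivation is calibrated to take roughly 80 milliseconds per attempt; that figure is an assumption drawn from Apple’s published security documentation, not from the court filings. Once the retry limit and escalating delays are stripped away, the passcode space can be exhausted quickly:

```python
# Back-of-the-envelope: brute-force time once retry limits and delays are
# removed. The ~80 ms per attempt is an assumed figure from Apple's
# key-derivation calibration, not from the court record.
SECONDS_PER_ATTEMPT = 0.08

for digits in (4, 6):
    keyspace = 10 ** digits
    worst_case_s = keyspace * SECONDS_PER_ATTEMPT
    print(f"{digits}-digit passcode: {keyspace:,} codes, worst case "
          f"{worst_case_s / 60:.0f} min ({worst_case_s / 3600:.1f} h)")
```

Under these assumptions a four-digit passcode falls in about thirteen minutes and even a six-digit one in under a day, which is why the retry limits, not the cipher itself, are the real barrier the FBI wants removed.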

But let’s not lose sight of the extraordinary nature of the power the government is asking the court to confer.

READ MORE

Rough cuts on the incredibly interesting implications of Facebook’s Reactions

By Galen Panger, CTSP | Permalink

How do we express ourselves in social media, and how does that make other people feel? These two questions are at the very heart of social media research, including, of course, the ill-fated Facebook experiment. Facebook Reactions are fascinating because they are, even more explicitly than the Facebook experiment, an intervention into our emotional lives.

Let me be clear that I support Facebook’s desire to overcome the emotional stuntedness of the Like button (don’t even get me started on the emotional stuntedness of the Poke button). I support the steps the company has taken to expand the Like button’s emotional repertoire, particularly in light of the company’s obvious desire to maintain its original simplicity. But as a choice about which emotional expressions and reactions to officially reward and sanction on Facebook, Reactions are consequential. They explicitly present the company with the knotty challenge of determining the shape of Facebook’s emotional environment, and they have wide implications for the 1.04 billion of us who visit Facebook each day. Here are a few rough reactions to Facebook Reactions.

READ MORE

The need for interdisciplinary tech policy training

By Nick Doty, CTSP, with Richmond Wong, Anna Lauren Hoffman and Deirdre K. Mulligan | Permalink

Conversations about substantive tech policy issues — privacy-by-design, net neutrality, encryption policy, online consumer protection — frequently evoke questions of education and people. “How can we encourage privacy earlier in the design process?” becomes “How can we train and hire engineers and lawyers who understand both technical and legal aspects of privacy?” Or: “What can the Federal Trade Commission do to protect consumers from online fraud scams?” becomes “Who could we hire into an FTC bureau of technologists?” Over the past month, members of the I School community have participated in several events where these tech policy conversations have occurred:

  • Catalyzing Privacy by Design: fourth in a series of NSF-sponsored workshops, organized with the Computing Community Consortium, to develop a privacy by design research agenda
  • Workshop on Problems in the Public Interest: hosted by the Technology Science Research Collaboration Network at Harvard to generate new research questions
  • PrivacyCon: an event to bridge academic research and policymaking at the Federal Trade Commission

READ MORE