
2021 Fellowship FAQs

LAST UPDATED Nov 24, 2020

We added more frequently asked questions for the 2021 Call for Applications after our information session on Thursday, Nov. 19th.

Frequently Asked Questions

Q: Do I need a letter of recommendation from a faculty member?

A: No. The application does not ask for a letter of recommendation from a faculty member. However, if you are proposing a project with a community partner, you should have a letter of support from the community partner.

Q: Do I need to submit a budget for my project?

A: The application does not require a budget, although you may choose to include one if you feel it would be useful.

Q: Will I need to submit receipts or other proof of expenses?

A: Not for fellowship stipends. Fellowship stipends are disbursed to UC Berkeley students as unrestricted honoraria at the beginning of the fellowship year. Fellows must acknowledge CTSP support in publications and other outputs resulting from their fellowship-related work.

Q: Should my proposed project have a tangible deliverable that is described in the project proposal?

A: Proposals should include one or more clearly defined outcomes. The scope of what you or your team may propose to do is open-ended and may include a publishable paper, a detailed design or engineering prototype, a whitepaper or regulatory proposal, a public event, or a documentary film. However, we acknowledge that in the course of the year your research goals may shift such that your plans for a deliverable change. That’s OK, and is in fact a common part of doing research.

Q: Do I have to produce a deliverable by the end of the fellowship year?

A: The only final deliverable we require is a statement (approximately 1 page long) describing what you have done during your fellowship year.

Q: Can I submit more than one project proposal?

A: Yes, you can submit (or be part of a group submitting) more than one proposal. Since proposals are chosen through blind review, each of your proposals will be evaluated separately. If a fellow is accepted to work on more than one project, they will still receive only one stipend.

Q: Does my project have to fit in one of the focus areas?

A: Yes, but the focus areas are purposely broad (and can overlap), so your project should not be a difficult fit. If you think your project does not fit, choose the focus area that is closest to your project. The focus area(s) you choose will help us see how you interpret the area(s) you see your project fitting into.

Q: What kind of final product do I need to have? How does this relate to showing my impact?

A: The final product for your project can depend on the audience you intend it for. We want to see that you have thought through who that audience is and how you hope your final product will make an impact on the discussion. For example, if you hope to reach a general audience, you might aim for a blog post or op-ed. Also, remember that the final product you plan on in your application is not a fixed requirement. We know that over the course of the year plans may shift – that is OK!

Q: If I also applied to CLTC for funding, can I apply to CTSP?

A: Yes. We coordinate with CLTC when making our decisions, so they are aware of the projects that we accept as CTSP-CLTC projects. You can indicate that you have applied to CLTC by checking that box on your application.

Q: How does it work if I have non-UCB students on a project?

A: Logistically, it is VERY hard for us to disburse funds to a non-UCB student. That is why we require that every project have a UCB student on the team. In the case of teams with some non-UCB students, we will disburse all of the funds to the UCB student(s) and let them divide the money among their teammates. Just note that for tax purposes, the person to whom we disburse the money will be responsible for paying taxes on all of it.

Q: Can I apply with a new project if I was a fellow in the past?

A: Yes! We have had many fellows come back with new projects (or continuations of past projects). Remember that applications are de-identified, so we will judge your application like all the others.

Q: If I have a project with someone who could be considered a client, do they need to be included in the reflexivity or team members questions?

A: You could include that person as a team member, in which case yes, include them in the team members and reflexivity sections. Or you could treat them as a client, in which case they play less of a role in your application but are indicated in a letter of support (so we know that you have actually discussed this idea with them). It’s really up to you and the kind of relationship you want to have with this person/organization.

Q: What does the mentorship section mean? 

A: This is another part of the application that is trying to get you to think through what you want to do and get out of your CTSP project. There may be ways you anticipate getting mentorship within your team – for example, a grad student and an undergrad could have a mentoring relationship while both being fellows on a project together. Or maybe you hope to leverage your connection to CTSP to gain access to other kinds of mentorship – to professors in other departments, people in industry or policy, or even our wonderful CTSP alums. This is the space to indicate that!

Q: It’s really hard to find a partner for a new project given that we are in a pandemic and have had fewer chances to get to know people, their interests, working style, etc. What if I want to apply for a project that needs other team members, but I don’t know who those people are yet? Can I apply for more money with the expectation that it will support me and a team member yet to be determined?

A: This is a new issue, but one that makes a lot of sense. You can apply and make the case for why you need more team members, what their roles will be, and how you will identify them. We can’t promise that we will give you more money, but we will consider it.

Q: What if we need more space than the one page provided for the positionality statement? 

A: Since this is part of the identified application, you can email us (ctsp@berkeley.edu) an additional page – just include the title of the project and explain that it’s for the positionality statement.

Q: Is the section on racial and/or indigenous justice required? 

A: Yes. We explicitly want to see at least one citation that references either the researchers we highlight here (link) or another source of your choosing. Beyond that, we want to see you think through how your idea intersects with this issue. You do not need to summarize all of the points of the work you reference, but tell us how it relates to your proposed project.

If you still have questions about the Call for Applications, send us an email!

Using Crowdsourcing to Address Disparities in Police-Reported Data: Addressing Challenges in Technology and Community Engagement

This is a project update from a CTSP project from 2017: Assessing Race and Income Disparities in Crowdsourced Safety Data Collection (with Kate Beck, Aditya Medury, and Jesus M. Barajas)

Project Update

Through UC Berkeley SafeTREC, this work has led to the development of Street Story, a community engagement tool that collects street safety information from the public.

The tool collects qualitative and quantitative information, and then creates maps and tables that can be publicly viewed and downloaded. The Street Story program aims to collect information that can create a fuller picture of transportation safety issues, and make community-provided information publicly accessible.


The Problem

Low-income groups, people with disabilities, seniors, and racial minorities are at higher risk of being injured while walking and biking, but experts have limited information on what these groups need to reduce these disparities. Transportation agencies typically rely on statistics about transportation crashes aggregated from police reports to decide where to make safety improvements. However, police-reported data is limited in a number of ways. First, crashes involving pedestrians or cyclists are significantly under-reported to police, with reports finding that up to 60% of pedestrian and bicycle crashes go unreported. Second, some demographic groups, including low-income groups, people of color, and undocumented immigrants, have histories of contentious relationships with police, and may therefore be less likely to report crashes when they do occur. Third, crash data doesn’t include locations where near misses have happened, or locations where individuals feel unsafe even though no incident has yet occurred. In other words, the data allows professionals to react to safety issues, but doesn’t necessarily allow them to be proactive.

One solution to improve and augment the data agencies use to make decisions and allocate resources is to provide a way for people to report transportation safety issues themselves. Some public agencies and private firms are developing apps and websites where people can report issues for this purpose. But one concern is that the people most likely to use these crowdsourcing platforms are those who have access to smartphones or the internet and who trust that government agencies will use the data to make changes, biasing the data toward the needs of these privileged groups.

Our Initial Research Plan

We chose to examine whether crowdsourced traffic safety data reflected similar patterns of underreporting and potential bias as police-reported safety data. To do this, we created an online mapping tool that people could use to report traffic crashes, near misses, and general safety issues. We planned to work with a city to release this tool to, and collect data from, the general public, then work directly with a historically marginalized community, under-represented in police-reported data, to target data collection in a high-need neighborhood. We planned to reduce barriers to entry for this community by meeting the participants in person to explain the tool, providing them with in-person and online training, providing participants with cell phones, and covering their data plans for the month. By crowdsourcing data from the general public and from this specific community, we planned to analyze whether there were any differences in the types of information reported by different demographics.

This plan seemed to work well with the research question and with community engagement best practices. However, we came up against a number of challenges. Although many municipal agencies and community organizations found our work interesting and were themselves trying to address the transportation safety issues we were focusing on, many seemed daunted by the prospect of using technology to address underlying issues of under-reporting. We also found that a year was not enough time to build trusting relationships with the organizations and agencies we had hoped to work with. Nevertheless, we were able to release a web-based mapping tool to collect some crowdsourced safety data from the public.

Changing our Research Plan

To better understand how more well-integrated digital crowdsourcing platforms perform, we pivoted our research project to explore how different neighborhoods engage with government platforms to report non-emergency service needs. We assumed some of these non-emergency services would mirror the negative perceptions of bicycle and pedestrian safety we had been interested in collecting via our crowdsourcing safety platform. The City of Oakland relies on SeeClickFix, a smartphone app, to allow residents to request service for several types of issues: infrastructure issues, such as potholes, damaged sidewalks, or malfunctioning traffic signals; and non-infrastructure issues, such as illegal dumping or graffiti. The city also provides phone, web, and email-based platforms for reporting the same types of service requests; these alternative platforms are collectively known as 311 services. We looked at 45,744 SeeClickFix reports and 35,271 311 reports made between January 2013 and May 2016. We classified Oakland neighborhoods by their status as communities of concern (defined below): 69 Oakland neighborhoods meet the definition, while 43 do not. Because we did not have data on the characteristics of each person reporting a service request, we made the assumption that people reporting requests also lived in the neighborhood where the request was needed.
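As a rough illustration of this classification step, the sketch below joins service requests to neighborhoods and tallies them by platform and community-of-concern status. This is a minimal sketch in Python/pandas: the file names, column names, and platform labels (requests.csv, neighborhoods.csv, is_coc, platform) are hypothetical stand-ins, not the actual datasets or schema used in the study.

```python
import pandas as pd

# Hypothetical inputs: one row per service request, plus a lookup table
# mapping each Oakland neighborhood to its community-of-concern status.
requests = pd.read_csv("requests.csv")            # columns: neighborhood, platform, category
neighborhoods = pd.read_csv("neighborhoods.csv")  # columns: neighborhood, is_coc

# As in the study, assume each request was made by a resident of the
# neighborhood it refers to, since reporter demographics are unavailable.
merged = requests.merge(neighborhoods, on="neighborhood", how="left")

# Tally requests by COC status and reporting platform (SeeClickFix vs. 311).
counts = merged.groupby(["is_coc", "platform"]).size().unstack(fill_value=0)
print(counts)

# Share of all service issues reported from communities of concern
# (the post reports roughly 70 percent).
print(f"Share of requests from COCs: {merged['is_coc'].mean():.0%}")
```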

How did communities of concern interact with the SeeClickFix and 311 platforms to report service needs? Our analysis highlighted two main takeaways. First, we found that communities of concern were more engaged in reporting than other communities, but had different reporting dynamics based on the type of issue they were reporting. About 70 percent of service issues came from communities of concern, even though they represent only about 60 percent of the communities in Oakland. They were nearly twice as likely to use SeeClickFix as the 311 platforms, but only for non-infrastructure issues. Second, we found that even though communities of concern were more engaged, the level of engagement was not equal for everyone in those communities. For example, neighborhoods with higher proportions of limited-English-proficient households were less likely to report any type of incident via 311 or SeeClickFix.

Preliminary Findings from Crowdsourcing Transportation Safety Data

We deployed the online tool in August 2017. The crowdsourcing platform was aimed at collecting transportation safety-related concerns pertaining to pedestrian and bicycle crashes, near misses, perceptions of safety, and incidents of crime while walking and bicycling in the Bay Area. We disseminated the link to the crowdsourcing platform primarily through Twitter and some email lists. Examples of organizations that were contacted through Twitter-based outreach and subsequently interacted with the tweet (through likes and retweets) include Transform Oakland, Silicon Valley Bike Coalition, Walk Bike Livermore, California Walks, Streetsblog CA, and Oakland Built. By December 2017, we had received 290 responses from 105 respondents. Half of the responses corresponded to perceptions of traffic safety concerns (“I feel unsafe walking/cycling here”), while 34% corresponded to near misses (“I almost got into a crash but avoided it”). In comparison, 12% of responses reported an actual pedestrian or bicycle crash, and 4% reported a crime while walking or bicycling. The sample size is too small to report any statistical differences.

Figure 1 shows the spatial patterns of the responses in the Bay Area aggregated to census tracts. Most of the responses were concentrated in Oakland and Berkeley. Oakland was specifically targeted as part of the outreach efforts since it has significant income and racial/ethnic diversity.

Figure 1. Spatial Distribution of the Crowdsourcing Survey Responses


In order to assess the disparities in the crowdsourced data collection, we compared responses between census tracts that are classified as communities of concern or not. A community of concern (COC), as defined by the Metropolitan Transportation Commission, a regional planning agency, is a census tract that ranks highly on several markers of marginalization, including proportion of racial minorities, low-income households, limited-English speakers, and households without vehicles, among others.
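To make the COC designation concrete, here is a minimal sketch of how a tract-level flag might be computed. The 70th-percentile cutoff, the rule of requiring two or more markers, and the column names are illustrative assumptions for this sketch, not MTC’s actual criteria.

```python
import pandas as pd

# Hypothetical tract-level data with marginalization markers as shares.
tracts = pd.read_csv("tracts.csv")
markers = ["pct_minority", "pct_low_income",
           "pct_limited_english", "pct_no_vehicle"]

# A tract "ranks highly" on a marker if it falls in the top 30 percent
# of tracts for that marker (an assumed threshold, not MTC's).
ranks_high = tracts[markers].rank(pct=True) >= 0.70

# Flag a tract as a COC if it ranks highly on at least two markers
# (again, an assumed rule for illustration only).
tracts["is_coc"] = ranks_high.sum(axis=1) >= 2
print(tracts["is_coc"].value_counts())
```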

Table 1 shows the comparison across census tracts that received at least one crowdsourcing survey response. The average number of responses received in COCs versus non-COCs across the entire Bay Area was similar and statistically indistinguishable. However, when focusing on Oakland-based tracts, the average number of crowdsourced responses in non-COCs was statistically significantly higher. In contrast, an assessment of police-reported pedestrian and bicycle crashes (from 2013-2016) shows that such crashes were on average higher in COCs, both across the Bay Area and in Oakland. The difference between the trends in crowdsourced concerns and police-reported crashes suggests either that walking/cycling concerns are greater in non-COCs (and thus underrepresented in police crash data), or that participation from COCs is relatively underrepresented in our crowdsourced data.
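The post does not name the statistical test used; as one plausible way to check whether per-tract response counts differ between COCs and non-COCs, a Welch’s t-test would look like the sketch below. The file and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per census tract that received at least
# one crowdsourced response.
tracts = pd.read_csv("tract_responses.csv")  # columns: tract_id, is_coc, n_responses

coc = tracts.loc[tracts["is_coc"], "n_responses"]
non_coc = tracts.loc[~tracts["is_coc"], "n_responses"]

# Welch's t-test (does not assume equal variances between the groups).
t_stat, p_value = stats.ttest_ind(coc, non_coc, equal_var=False)
print(f"COC mean: {coc.mean():.2f}, non-COC mean: {non_coc.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```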

Table 1. Comparison of crowdsourced concerns and police-reported pedestrian/bicycle crashes in census tracts that received at least one response

Table 2 compares the self-reported income and race/ethnicity characteristics of the respondents with the locations where the responses were reported. For reference, the Bay Area’s median household income in 2015 was estimated to be $85,000 (Source: http://www.vitalsigns.mtc.ca.gov/income), and the Bay Area’s population was estimated to be 58% White per the 2010 Census (Source: http://www.bayareacensus.ca.gov/bayarea.htm).

Table 2. Distribution of all Bay Area responses based on the location of response and the self-reported income and race/ethnicity of respondents

The results reveal that White, medium-to-high-income respondents reported more walking/cycling-related safety issues in our survey, and more so in non-COCs. This trend is consistent with the definition of COCs, which tend to have a higher representation of low-income people and people of color. However, if digital crowdsourcing without widespread community outreach is more likely to attract responses from medium-to-high-income groups, and, more importantly, if those groups only live, work, or play in a small portion of the region being investigated, the aggregated results will paint a biased picture of the region’s transportation safety concerns. Thus, while the scalability of digital crowdsourcing provides an opportunity for capturing underrepresented transportation concerns, it may require greater collaboration with low-income, diverse neighborhoods to ensure uniform adoption of the platform.

Lessons Learned

From our attempts to work directly with community groups and agencies and our subsequent decision to change our research focus, we learned a number of lessons:

  1. Develop a research plan in partnership with communities and agencies. Developing the plan together would have ensured that community groups and agencies were better able to partner with us, and that partners were on board with both the topic of interest and the methods we hoped to use.
  2. Recognize the time it takes to build relationships. We found that building relationships with agencies and communities was more time-intensive and took longer than we had hoped. These groups often have limits on the time they can dedicate to unfunded projects. Next time, we should account for this in our initial research plan.
  3. Use existing data sources to supplement research. We found that using SeeClickFix and 311 data was a way to collect and analyze information that added context to our research question. Although the data did not include all the demographic information we had hoped to analyze, this source added useful context to the data we collected.
  4. Speak in a language that the general public understands. We found that when we used the term “self-reporting,” rather than “crowdsourcing,” when talking to potential partners and members of the public, these individuals were more willing to consider the use of technology to collect safety information from the public as legitimate. Using vocabulary and phrasing that people are familiar with is crucial when attempting to use technology to benefit the social good.

CTSP Alumni Updates

We’re thrilled to highlight some recent updates from our fellows:

Gracen Brilmyer, now a PhD student at UCLA, has published a single-authored work in one of the leading journals in archival studies, Archival Science: “Archival Assemblages: Applying Disability Studies’ Political/Relational Model to Archival Description.” They have presented their work on archives, disability, and justice at a number of events over the past two years, including the Archival Education and Research Initiative (AERI), the Allied Media Conference, the International Communication Association (ICA) Preconference, and Disability as Spectacle, and their research will be presented at the upcoming Community Informatics Research Network (CIRN) conference.

CTSP Funded Project 2016: Vision Archive


In October 2018, the Safe Transportation Research and Education Center (SafeTREC) will launch Street Story, a new project originating in the 2017 project “Assessing Race and Income Disparities in Crowdsourced Safety Data Collection” by Fellows Kate Beck, Aditya Medury, and Jesus Barajas. Street Story is an online platform that allows community groups and agencies to collect community input about transportation collisions, near misses, general hazards, and safe locations to travel. The platform will be available throughout California and is funded through the California Office of Traffic Safety.

CTSP Funded Project 2017: Assessing Race and Income Disparities in Crowdsourced Safety Data Collection


Fellow Roel Dobbe has begun a postdoctoral scholar position at the new AI Now Institute. Inspired by his 2018 CTSP project, he has co-authored a position paper with Sarah Dean, Tom Gilbert, and Nitin Kohli titled “A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics.”

CTSP Funded Project 2018: Unpacking the Black Box of Machine Learning Processes


We are also looking forward to a CTSP-Fellow-filled Computer Supported Cooperative Work (CSCW) conference in November this year! CTSP-affiliated papers include:

We also look forward to seeing CTSP affiliates presenting other work, including 2018 Fellows Richmond Wong, Noura Howell, Sarah Fox, and more!


Standing up for truth in the age of disinformation

Professor Deirdre K. Mulligan and PhD student (and CTSP Co-Director) Daniel Griffin have an op-ed in The Guardian examining how Google might consider its human rights obligations in the face of state censorship demands: If Google goes to China, will it tell the truth about Tiananmen Square?

The op-ed advances a line of argument developed in their recent article in the Georgetown Law Technology Review: “Rescripting Search to Respect the Right to Truth.”