AFOG

A WORKING GROUP ON ALGORITHMIC FAIRNESS AND OPACITY (CTSP PARTNER)


AFOG is partnering with CTSP to provide increased support and mentorship for CTSP fellows working on the Fairness & Opacity focus area. This page was prepared by and on behalf of AFOG. Please contact Nitin Kohli, the working group coordinator, with any questions.

About AFOG

Members of this working group are interested in the following areas pertaining to algorithmic transparency and fairness:

What is at stake? We raise questions that include but also go beyond concerns about legal compliance and user acceptance of or trust in technology. How do trends in data collection and algorithmic classification relate to the restructuring of the life chances, opportunities, and ultimately the social mobility of individuals and groups in society? How do the evolving, algorithmically informed mass media, as well as information circulation on social media, shape the stability of our democracy?

How should we design for users of machine learning systems? How can we make transparency part of the design of user interfaces for machine learning systems that will support user understanding, empowered decision-making, and human autonomy?

Related to the question above, how should classification tasks be delegated between humans and machine learning systems?

Is it the data or the algorithms? What transparency or opacity issues pertain to the data itself, to the algorithms that operate on this data, or to the combination of data + algorithm? What is the emerging political economy of personal data and the opacity surrounding its transport, storage, circulation, and processing, and what forms of discrimination or unfairness may arise from that?

How can we produce credible knowledge using machine learning and other algorithmic tools/techniques that are opaque? For disciplines that have begun to use new computational tools to do scholarship, concerns about transparency have to do with making defensible knowledge claims. Can the tools of algorithmic interpretability devised for other purposes prove useful in new areas of scholarship such as the digital humanities?

What emerging tools, techniques, or approaches could mitigate problems of opacity or unfairness/bias and help ensure transparency and/or fairness? What methods are best suited to which domains of application?

How can we better communicate and collaborate across disciplines? Disciplines provide shared tools, priorities and language, but along with this come constraints in ways of thinking about a topic or problem space. How can we identify and transcend those differences to make progress on issues of algorithmic opacity and fairness? How can we make better use of the insights from other disciplines rather than reinventing the wheel?

What motivates the tech industry to take up these issues? How do tech industry firms and the different professional roles within tech approach questions of algorithmic opacity and fairness? How does this diverge from or converge with scholarly interest?

How do different professions interact with machine learning tools? Machine learning is finding application in many domains ranging from health/medicine to criminal justice to education. How do professionals perceive and interact with these tools and with the technologists who build them? To what extent do they recognize and find ways to manage problems of opacity/inscrutability?

People

Faculty Organizers

Jenna Burrell

Associate Professor

School of Information, UC Berkeley

Jenna Burrell is an Associate Professor in the School of Information at UC Berkeley. Her first book, Invisible Users: Youth in the Internet Cafes of Urban Ghana (MIT Press), came out in May 2012. She has a PhD in Sociology from the London School of Economics. Before pursuing her PhD, she was an Application Concept Developer in the People and Practices Research Group at Intel Corporation. For over 10 years she studied the appropriation of Information and Communication Technologies (ICTs) by individuals and groups on the African continent. Her most recent research considers populations that are excluded from or opt out of Internet connectivity in urban and rural California.

Deirdre K. Mulligan

Associate Professor

School of Information, UC Berkeley

Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, and an affiliated faculty on the new Hewlett-funded Berkeley Center for Long-Term Cybersecurity. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries conducted with UC Berkeley Law Prof. Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. Mulligan recently chaired a series of interdisciplinary visioning workshops on Privacy by Design with the Computing Community Consortium to develop a research agenda. She is a member of the National Academy of Sciences Forum on Cyber Resilience. She is Chair of the Board of Directors of the Center for Democracy and Technology, a leading advocacy organization protecting global online civil liberties and human rights; a founding member of the standing committee for the AI 100 project, a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play; and a founding member of the Global Network Initiative, a multi-stakeholder initiative to protect and advance freedom of expression and privacy in the ICT sector, and in particular to resist government efforts to use the ICT sector to engage in censorship and surveillance in violation of international human rights standards. She is a Commissioner on the Oakland Privacy Advisory Commission. Prior to joining the School of Information, she was a Clinical Professor of Law, founding Director of the Samuelson Law, Technology & Public Policy Clinic, and Director of Clinical Programs at the UC Berkeley School of Law.

Mulligan was the Policy lead for the NSF-funded TRUST Science and Technology Center, which brought together researchers at UC Berkeley, Carnegie Mellon University, Cornell University, Stanford University, and Vanderbilt University, and a PI on the multi-institution NSF-funded ACCURATE center. In 2007 she was a member of an expert team charged by the California Secretary of State to conduct a top-to-bottom review of the voting systems certified for use in California elections. This review investigated the security, accuracy, reliability, and accessibility of electronic voting systems used in California. She was a member of the National Academy of Sciences Committee on Authentication Technology and Its Privacy Implications; the Federal Trade Commission’s Federal Advisory Committee on Online Access and Security; and the National Task Force on Privacy, Technology, and Criminal Justice Information. She was a vice-chair of the California Bipartisan Commission on Internet Political Practices and chaired the Computers, Freedom, and Privacy (CFP) Conference in 2004. She co-chaired Microsoft’s Trustworthy Computing Academic Advisory Board with Fred B. Schneider from 2003 to 2014. Prior to Berkeley, she served as staff counsel at the Center for Democracy & Technology in Washington, D.C.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? Values in design; governance of technology and governance through technology to support human rights/civil liberties; administrative law.

Domain of Application: Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), discrimination, privacy, cybersecurity, regulation generally.

Faculty

Marion Fourcade

Professor

Sociology, UC Berkeley

I am a Professor of Sociology at UC Berkeley and an associate fellow of the Max Planck-Sciences Po Center on Coping with Instability in Market Societies (MaxPo). A comparative sociologist by training and taste, I am interested in national variations in knowledge and practice. My first book, Economists and Societies (Princeton University Press, 2009), explored the distinctive character of the discipline and profession of economics in three countries. A second book, The Ordinal Society (with Kieran Healy), is under contract. This book investigates new forms of social stratification and morality in the digital economy. Other recent research focuses on the valuation of nature in comparative perspective; the moral regulation of states; the comparative study of political organization (with Evan Schofer and Brian Lande); the microsociology of courtroom exchanges (with Roi Livne); the sociology of economics (with Etienne Ollion and Yann Algan, and with Rakesh Khurana); and the politics of wine classifications in France and the United States (with Rebecca Elliott and Olivier Jacquet).

Domain of Application: Credit, General Machine Learning (not domain specific), Health, Employment/Hiring.

Moritz Hardt

Assistant Professor

Electrical Engineering and Computer Science, UC Berkeley

Domain of Application: General Machine Learning (not domain specific).

Sonia Katyal

Professor

School of Law, UC Berkeley

Professor Katyal joined the Berkeley Law faculty in fall 2015 from Fordham Law School, where she served as the associate dean for research and the Joseph M. McLaughlin Professor of Law.

Her scholarly work focuses on intellectual property, civil rights (including gender, race and sexuality) and technology. Her past projects have studied the relationship between copyright enforcement and informational privacy; the impact of artistic activism on brands and advertising; and the intersection between copyright law and gender with respect to fan-generated works. Katyal also works on issues relating to cultural property and art, with a special focus on new media and the role of museums in the United States and abroad. Her current projects focus on the intersection between internet access and civil/human rights, with a special focus on the right to information; algorithmic transparency and discrimination; and a variety of projects on the intersection between gender and the commons. As a member of the university-wide Haas LGBT Cluster, Professor Katyal also works on matters regarding law and sexuality. Current projects involve an article on technology, surveillance and gender, and another on family law’s governance of transgender parents. Professor Katyal’s recent publications include The Numerus Clausus of Sex, in the University of Chicago Law Review; Technoheritage, in the California Law Review; and Algorithmic Civil Rights, forthcoming in the Iowa Law Review.

Professor Katyal is the co-author of Property Outlaws (Yale University Press, 2010) (with Eduardo M. Peñalver), which studies the intersection between civil disobedience and innovation in property and intellectual property frameworks. Professor Katyal has won several awards for her work, including an honorable mention in the American Association of Law Schools Scholarly Papers Competition, a Yale Cybercrime Award, and a Dukeminier Award from the Williams Project at UCLA. She has published in a variety of law reviews, including the Yale Law Journal, the University of Pennsylvania Law Review, Washington Law Review, Texas Law Review, and the UCLA Law Review, in addition to a variety of other publications, including the New York Times, the Brooklyn Rail, Washington Post, CNN, Boston Globe’s Ideas section, Los Angeles Times, Slate, Findlaw, and the National Law Journal. Katyal is also the first law professor to receive a grant through the Creative Capital/Warhol Foundation for her forthcoming book, Contrabrand, which studies the relationship between art, advertising, and trademark and copyright law.

In March 2016, Katyal was selected by U.S. Commerce Secretary Penny Pritzker to serve on the U.S. Commerce Department’s inaugural Digital Economy Board of Advisors. Katyal also serves as an Affiliate Scholar at Stanford Law’s Center for Internet and Society, and is a founding advisor to the Women in Technology Law organization. She also serves on the Executive Committee for the Berkeley Center for New Media (BCNM) and on the Advisory Board for Media Studies at UC Berkeley.

Before entering academia, Professor Katyal was an associate specializing in intellectual property litigation in the San Francisco office of Covington & Burling. Professor Katyal also clerked for the Honorable Carlos Moreno (later a California Supreme Court Justice) in the Central District of California and the Honorable Dorothy Nelson in the U.S. Court of Appeals for the Ninth Circuit.

Courses taught include Property Law; Law and Sexuality; Advertising, Branding and the First Amendment; Law and Technology Writing Workshop; and Law, Innovation and Entrepreneurship (in 2019).

Domain of Application: Scholarship (digital humanities, computational social sci).

Shreeharsh Kelkar

Lecturer

Interdisciplinary Studies Field, UC Berkeley

I study computing infrastructures and their relationship to work, labor, and expertise.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? My new project tries to understand the tensions in data science between domain expertise and machine learning; this is an issue that is salient to the question of opacity and interpretability.

Domain of Application: General Machine Learning (not domain specific), Health, Employment/Hiring, Education.

Postdoctoral Scholars

Morgan G. Ames

Postdoctoral Scholar

Center for Science, Technology, Medicine & Society, UC Berkeley

Morgan G. Ames is a research fellow at the Center for Science, Technology, Medicine & Society at the University of California, Berkeley. Morgan’s research explores the role of utopianism in the technology world, and the imaginary of the “technical child” as fertile ground for this utopianism. Based on eight years of archival and ethnographic research, she is finishing a book manuscript on One Laptop per Child, which explores the motivations behind the project and the cultural politics of a model site in Paraguay.

Morgan was previously a postdoctoral researcher at the Intel Science and Technology Center for Social Computing at the University of California, Irvine, working with Paul Dourish. Morgan’s PhD is in communication (with a minor in anthropology) from Stanford, where her dissertation won the Nathan Maccoby Outstanding Dissertation Award in 2013. She also has a B.A. in computer science and M.S. in information science, both from the University of California, Berkeley. See http://bio.morganya.org for more.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? Machine learning techniques, particularly deep neural networks, have become subjects of intense utopianism and dystopianism in the popular press. Alongside this rhetoric, scholars have been finding that these new machine learning techniques are not and likely will never be bias-free. I am interested in exploring both of these topics and how they interconnect.

Domain of Application: General Machine Learning (not domain specific).

Sarah Brown

Postdoctoral Scholar

Electrical Engineering and Computer Science, UC Berkeley

Sarah Brown is a Chancellor’s Postdoctoral Fellow in the Department of Electrical Engineering and Computer Science. Sarah’s research to date has focused on the design and analysis of machine learning methods in experimental psychology settings. This includes the development of machine learning models and algorithms that are reflective of scientific thinking about the data, analyzing their limits in context, and developing context-appropriate performance measures. She is curious about how these data-provenance issues and analysis techniques translate to the study of fair machine learning.

Sarah received her BS, MS, and PhD from the Electrical and Computer Engineering Department at Northeastern University. Her graduate studies were supported by a Draper Laboratory Fellowship and a National Science Foundation Graduate Research Fellowship. Her dissertation, Machine Learning Methods for Computational Psychology, develops application-tailored learning solutions and a better understanding of how to interpret machine learning results in the context of studying how the brain creates affective experiences and mental pathologies.

Outside of the lab, Sarah is a passionate advocate for engaging underrepresented groups in STEM at all levels. She currently serves as treasurer for Women in Machine Learning (WiML) and previously served as finance and sponsorship chair and as a co-organizer of the WiML Workshop. She has also held a variety of leadership positions in the National Society of Black Engineers at both the local and national levels, including National Academic Excellence Chair.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in how the adaptations to and analyses of machine learning algorithms used to satisfy scientists’ requirements about their data relate and translate to issues of fairness. I see parallels between the two, rooted in a greater dependence on data provenance than is often present in conversations within the machine learning community.

Domain of Application: General Machine Learning (not domain specific).

Stuart Geiger

Postdoctoral Scholar

Berkeley Institute for Data Science, UC Berkeley

Stuart Geiger is an ethnographer and postdoctoral scholar at the Berkeley Institute for Data Science at UC Berkeley, where he studies various topics about the infrastructures and institutions that support the production of knowledge. His Ph.D. research at the UC Berkeley School of Information investigated the role of automation in the governance and operation of Wikipedia and Twitter. He has studied topics including moderation and quality control processes, human-in-the-loop decision making, newcomer socialization, cooperation and conflict around automation, the roles of support staff and technicians, and bias, diversity, and inclusion. He uses ethnographic, historical, qualitative, quantitative, and computational methods in his research, which is grounded in the fields of Computer-Supported Cooperative Work, Science and Technology Studies, and communication and new media studies.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I study how people design, develop, deploy, understand, negotiate, contest, maintain, and repair algorithmic systems within communities of knowledge production. Most of the communities I study — including Wikipedia and the scientific reproducibility / open science movement — have strong normative commitments to openness and transparency. I study how these communities are using (and not using) various technologies and practices around automation, including various forms of machine learning, collaboratively-curated training data sets, data-driven decision-making processes, human-in-the-loop mechanisms, documentation tools and practices, code and data repositories, auditing frameworks, containerization, and interactive notebooks.

Domain of Application: Information Search & Filtering, General Machine Learning (not domain specific), Scholarship (digital humanities, computational social sci), Education.

Joshua Kroll

Postdoctoral Scholar

School of Information, UC Berkeley

Joshua A. Kroll is a computer scientist studying the relationship between governance, public policy, and computer systems. Currently, Joshua is a Postdoctoral Research Scholar at the School of Information at the University of California at Berkeley. His research focuses on how technology fits within a human-driven, normative context and how it satisfies goals driven by ideals such as fairness, accountability, transparency, and ethics. He is most interested in the governance of automated decision-making systems, especially those using machine learning. His paper “Accountable Algorithms” in the University of Pennsylvania Law Review received the Future of Privacy Forum’s Privacy Papers for Policymakers Award in 2017.

Joshua’s previous work spans accountable algorithms, cryptography, software security, formal methods, Bitcoin, and the technical aspects of cybersecurity policy. He also spent two years working on cryptography and internet security at the web performance and security company Cloudflare. Joshua holds a PhD in computer science from Princeton University, where he received the National Science Foundation Graduate Research Fellowship in 2011.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I’m basically here to study these precise topics.

Domain of Application: Credit, Criminal Justice, Fraud/Spam, Network Security, Information Search & Filtering, General Machine Learning (not domain specific), Health, Employment/Hiring, Housing, Political/redistricting.

Brandie Nonnecke

Postdoctoral Scholar, Research & Development Manager

CITRIS & the Banatao Institute, UC Berkeley

Dr. Brandie Nonnecke is the Research & Development Manager for CITRIS at UC Berkeley and Program Director for CITRIS at UC Davis. Brandie researches the dynamic interconnections between law, policy, and emerging technologies. She studies the influence of non-binding, multi-stakeholder policy networks on stakeholder participation in internet governance and information and communication technology (ICT) policymaking. Her current research and publications can be found at nonnecke.com.

She investigates how ICTs can be used as tools to support civic participation, to improve governance and accountability, and to foster economic and social development. In this capacity, she designs and deploys participatory evaluation platforms that utilize statistical models and collaborative filtering to tap into collective intelligence and reveal novel insights, including the California Report Card, launched in collaboration with the Office of California Lt. Gov. Gavin Newsom, and the DevCAFE system, launched in Mexico, Uganda, and the Philippines to enable participatory evaluation of the effectiveness of development interventions.

Brandie received her Ph.D. in Mass Communications from The Pennsylvania State University. She is a Fellow at the World Economic Forum, where she serves on the Council on the Future of the Digital Economy and Society, and is chair of the Internet Society SF Chapter Working Group on Internet Governance.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I conduct research on the benefits and risks of algorithmic-based decision-making, including recommendations on how to better ensure fairness, accountability, and positive socioeconomic inclusion. This research is available at http://citris-uc.org/connected-communities/project/inclusive-ai-technology-policy-diverse-urban-future/ and through the World Economic Forum at https://www.weforum.org/agenda/2017/09/applying-ai-to-enable-an-equitable-digital-economy-and-society

Domain of Application: General Machine Learning (not domain specific), Policy and governance of AI.

Dan Sholler

Postdoctoral Scholar

rOpenSci at the Berkeley Institute for Data Science, UC Berkeley

I study the occupational, organizational, and institutional implications of technological change using qualitative, ethnographic techniques. For example, I studied the implementation of federally mandated electronic medical records in the United States healthcare industry and found that unwanted changes in the day-to-day activities of doctors fueled a national resistance movement, ultimately leading to the revision of federal technology policy. Currently, I am conducting a comparative study of the ongoing shifts toward open science in the ecology and astronomy disciplines to identify and explain the factors that may influence engagement with, and resistance to, open science tools and communities.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in discussing and studying how the implementation of AI and other algorithmic applications might impact the day-to-day activities of workers and alter the structures of organizations. In particular, I would like to interrogate how AI-led changes might influence workers’ perceptions of what it means to be a member of an occupational or professional community and how the designers and implementers of algorithmic technologies consider these potential implications.

Domain of Application: Health, Scholarship (digital humanities, computational social sci).

Graduate Students

Shazeda Ahmed

PhD Candidate

School of Information, UC Berkeley

Shazeda is a third-year Ph.D. student at the I School. She has worked as a researcher for the Council on Foreign Relations, Asia Society, the U.S. Naval War College, Citizen Lab, Ranking Digital Rights, and the Mercator Institute for China Studies. Her research focuses on China’s social credit system, its information technology policy, and its role in setting norms of global Internet governance.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I study China’s social credit system, which uses troves of Chinese citizens’ personal and behavioral data to assign them scores meant to reflect how “trustworthy,” law-abiding, and financially responsible they are. The algorithms used to calculate these scores are classified as either trade or state secrets, and to date it seems that score issuers cannot fully explain score breakdowns to users. There are plans to identify low scorers on public blacklists, which could discriminate against people who are unaware of how the system operates. Through my research I hope to discover how average users perceive and are navigating the system as it develops.

Domain of Application: Credit.

Michelle Carney

MIMS Candidate

School of Information, UC Berkeley

Currently a graduate student in the UC Berkeley Master of Information Management and Systems program, Michelle studies the intersection of data science and user experience. Michelle also facilitates the Machine Learning and User Experience meetup in San Francisco, where she organizes professional tech talks and panels on the topic of human-centered machine learning.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in how to design for machine learning: how do we design systems that collect the right data, empower users to understand what the machine learning models are doing, and give users transparency into how the algorithms decide their recommendations and personalization? While I am interested in the academic research (and there’s a lot of really great stuff!), I am particularly interested in the application of transparent model design in the professional space and in engaging tech companies to create best practices.

Domain of Application: Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific).

Roel Dobbe

PhD Candidate

EECS/Berkeley Artificial Intelligence Research Lab, UC Berkeley

Since fall 2013, I have been pursuing a PhD in Electrical Engineering & Computer Sciences at UC Berkeley under the guidance of Professor Claire Tomlin in the Hybrid Systems Group. My main interests are in modernizing energy systems, other societal infrastructures, and decision making through the integration of control theory, machine learning, and optimization. With this comes an interest in understanding the social implications of automation technologies and in integrating critical thinking and social science perspectives into the research, design, and integration of new technologies. More recently, I have been teaching about issues of social justice and how these relate to our work as research and engineering professionals.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I build algorithms that are used by operators to balance electric grids and that require a certain level of interpretability and performance. In industry, I have built tools to help explain predictions made by algorithms in decision-making tools, aimed at improving transparency, understanding, and trust for end users. Lastly, I am studying principles from value-sensitive design and responsible innovation, and I aim to translate these into concrete principles for the development of systems that rely on machine learning and artificial intelligence.

Domain of Application: General Machine Learning (not domain specific), Health, Energy.

Thomas Krendl Gilbert

PhD Candidate

Machine Ethics and Epistemology, UC Berkeley

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am specifically interested in the potential forms of social autonomy and moral agency that are available to different classes of algorithms. Alongside this, I focus on the work that goes into training machine learning tools and how this compares to historical waves of automation and inequality.

Domain of Application: Information Search & Filtering, General Machine Learning (not domain specific).

Daniel Griffin

PhD Candidate

School of Information, UC Berkeley

Daniel Griffin is a doctoral student at the School of Information at UC Berkeley. His research interests center on the intersections between information, values, and power, looking at freedom and control in information systems. He is a co-director of UC Berkeley’s Center for Technology, Society & Policy and a commissioner on the City of Berkeley’s Disaster and Fire Safety Commission. Prior to entering the doctoral program, he completed the Master of Information Management and Systems program, also at the School of Information. Before graduate school, he served as an intelligence analyst in the US Army. As an undergraduate, he studied philosophy at Whitworth University.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? In what some have called an age of disinformation, how, and with what effects, do people using search engines imagine and interact with search engine algorithms? How do the teams of people at search engines seek to understand and satisfy the goals and behavior of the people using their services? What sort of normative claims does society make, and might it make, of the design of search engine algorithms and services?

Domain of Application: Information Search & Filtering.

Anne Jonas

PhD Candidate

School of Information, UC Berkeley

After previously working in program management at the Participatory Culture Foundation and the Barnard Center for Research on Women, I now study education, information systems, culture, and inequality here at the I School. I am a Fellow with the Center for Technology, Society, and Policy and a Research Grantee of the Center for Long-Term Cybersecurity on several collaborative projects.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? The use of algorithms in educational curriculum provision, assessment, evaluation, surveillance, and discipline. I am also working on a project related to “regional discrimination” that looks at how geographic markers are used to block people from certain websites and web-based services.

Domain of Application: Criminal Justice, Information Search & Filtering, Employment/Hiring, Education.

Daniel Kluttz

PhD Candidate

Sociology, UC Berkeley

I am currently a PhD candidate in sociology at the University of California, Berkeley. Prior to coming to Berkeley to pursue the PhD, I practiced law in Raleigh, NC. In my research, I study the relations between law, society, and social and technological change across such settings as today’s digital economy, contemporary US oil and gas development, legal education, and the early American magazine industry. My research draws from intellectual traditions in law and society, organizational theory, cultural sociology, economic sociology, and technology studies. I employ both quantitative and qualitative methods in my work, including longitudinal and multi-level modeling, geospatial analysis, in-depth interviews, direct observation, and historical/archival methods.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? With my training in both law and sociology, my research interests pertain to the formal and informal governance of economic and technological innovations and the organizations involved with such innovations. For example, with Professor Marion Fourcade (UC Berkeley sociology), one current research project uses in-depth interviews and direct observation to study the political economy of personal data markets, particularly the social, legal, and ethical issues involved in the collection, storage, circulation and processing of individual data by commercial entities. By investigating how companies, regulators and consumer-rights organizations engage in, govern, and contest the collection and commodification of digitally sourced personal data, we aim to shed new light on the often opaque infrastructures on which these markets depend and the social (re-)construction of notions of privacy, consent, and fairness within this domain.

In future work, I plan to build on this research. For example, I am interested in the legal conditions and social processes by which firms and industry professionals value the data that they hold or acquire, especially the increasingly prized data used to train machine-learning-based systems. If data is the “new oil,” as many claim, then studying how data are imbued with value, whether more formally (by law) or informally (by industry conventions and norms), has direct implications for notions of fairness and transparency within the competitive marketplace. Finally, by comparing industry techniques and perceptions of valuation for personal data with the sociological forces influencing how individuals value such data, I hope to reveal structural and cultural gaps within and among these groups while adding a more holistic view of the data-based economy.

Domain of Application: Credit, Information Search & Filtering, General Machine Learning (not domain specific), Employment/Hiring, Law/Policy.

Nitin Kohli

PhD Candidate

School of Information, UC Berkeley

Nitin Kohli is a PhD student at UC Berkeley’s School of Information, working under Deirdre Mulligan. His research examines privacy, security, and fairness in algorithmic systems from technical and legal perspectives. On the technical side, Nitin employs theoretical and computational techniques to construct algorithmic mechanisms with such properties. On the legal side, Nitin explores institutional and organizational mechanisms to protect these values by examining the incentive structures and power dynamics that govern these environments. His work draws upon mathematics, statistics, computer science, economics, and law.

Prior to his PhD work, Nitin worked both as a data scientist in industry and as an academic. Within industry, Nitin developed machine learning and natural language processing algorithms to identify occurrences and locations of future risk in healthcare settings. Within academia, Nitin worked as an adjunct instructor and as a summer lecturer at UC Berkeley, teaching introductory and advanced courses in probability, statistics, and game theory. Nitin holds a Master’s degree in Information and Data Science from Berkeley’s School of Information and a Bachelor’s degree in Mathematics and Statistics, with departmental honors in statistics for his work in stochastic modeling and game theory.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? My research interests are explicitly in the construction of algorithms that preserve certain human values, such as fairness and privacy. I’m also interested in legal and policy solutions that promote and incentivize transparency and fairness within algorithmic decision making.

Domain of Application: Criminal Justice, Information Search & Filtering, General Machine Learning (not domain specific), Employment/Hiring, Scholarship (digital humanities, computational social sci), Education.

Sam Meyer

MIMS Candidate

School of Information, UC Berkeley

Sam Meyer is a master’s student at the School of Information studying data visualization and product management. In the past, Sam worked as a software engineer in biotech.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? My MIMS final project is focused on algorithmic transparency to a non-technical audience.

Domain of Application: General Machine Learning (not domain specific).

Benjamin Shestakofsky

PhD Candidate

Sociology, UC Berkeley

I am a PhD Candidate in the Department of Sociology at the University of California, Berkeley. My research centers on how digital technologies are affecting work and employment, organizations, and economic exchange.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? I am interested in the human labor that supports machine-learning systems.

Domain of Application: Work and labor.

Amit Elazari Bar On

PhD Candidate

School of Law, UC Berkeley

Amit is a doctoral law candidate at Berkeley Law and a Research Fellow at CTSP, Berkeley School of Information. She is the first Israeli LL.M. graduate to be admitted to the doctoral program at Berkeley, or to any other top U.S. doctoral program in law, on a direct-track basis. She graduated summa cum laude from her LL.M. at IDC, Israel, following the submission of a research thesis in the field of intellectual property law and standard-form contracts. She holds an LL.B. and a B.A. in Business Administration (summa cum laude) from IDC, is admitted to practice law in Israel, and has worked at one of Israel’s leading law firms, GKH Law. Amit has been engaged in extensive academic work, including research, teaching, and editorial positions. Her research interests include patents, privacy, cyber law, copyright, and private ordering in technology law. Her work on intellectual property and cyber law has been published in the Canadian Intellectual Property Journal and presented at leading security and intellectual property conferences such as IPSC, DEF CON, and BSidesLV.

How do your research interests relate to the topics of algorithmic opacity/transparency and/or fairness? The law, and specifically IP and anti-hacking laws, can entrench algorithmic opacity by stifling vital research and tinkering efforts, or can enable transparency by constructing safe harbors that limit the enclosure and monopolization of algorithms. Similarly, private ordering mechanisms, whether tech-based or contract-based (EULAs and ToUs), operating in the shadow of the law, affecting millions but dictated by few, serve a key function in regulating the algorithmic landscape, including by limiting users’ and researchers’ access to the design and backbone of algorithms. In this respect, information security and the study of algorithms have much in common, and I hope to explore what lessons can be learned from cyber law.

More generally, the law could construct incentives that will foster algorithmic transparency and even the use of AI for the greater good, promoting social justice — yet the challenge will be to create incentives that internalize the benefit from compliance with the law (either by introducing market based incentives or safe harbors) without relying on stringent, costly, enforcement. I hope to explore these questions and others in the course of the workshop.

Domain of Application: General Machine Learning (not domain specific).

Schedule


Subscribe to our mailing list to get the latest updates.

Funding Note: AFOG is funded by a research gift from Google to support cross-disciplinary academic research and conversations between industry and academia to explore and address issues related to fairness and opacity in algorithms.

Banner Photo Credit: “UC Berkeley South Hall” by Anand Rajagopal for I School IMSA is licensed under CC BY 2.0