Finn Brunton, New York University and Helen Nissenbaum, Cornell Tech and New York University
Since we began planning the “International Workshop on Obfuscation: Science, Technology, and Theory” a year ago, there have been numerous shifts in the world’s technological, political, and economic landscape, from a US election influenced by email leaks and algorithmically promoted fake news stories, to the merger of some of the world’s largest telecom and media companies into data-driven advertising behemoths, to data breaches of major entertainment companies, healthcare providers, and voting systems (to name but a few). We define obfuscation as the production of noise modeled on an existing signal in order to make data or information more ambiguous, uncertain, and difficult to exploit—an idea that is particularly salient in the era of big data technologies. In concert with other practices and tools, obfuscation offers a novel and unique means of evading data surveillance, building privacy-respecting platforms without sacrificing utility, and improving security (including through obfuscating code or hardware itself). However, while obfuscation has long been a methodology engaged by researchers and developers in certain subfields of computer science, engineering, and applied technologies, it has only recently been taken up and studied as a broader strategy or set of tactics by humanists, social scientists, policymakers, and artists.
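The core move—producing plausible noise alongside a genuine signal—can be sketched in a few lines. The example below is a toy decoy-query generator in the spirit of tools like TrackMeNot; the decoy pool and function names are hypothetical, not drawn from any system discussed at the workshop:

```python
import random

# Hypothetical decoy pool; a real tool would draw from news feeds or
# query logs so that decoys remain statistically plausible over time.
DECOY_POOL = ["weather tomorrow", "movie times", "pasta recipe",
              "stock prices", "bus schedule", "news headlines"]

def obfuscate(real_query, n_decoys=3, rng=random):
    """Mix one genuine query into n_decoys plausible fakes, shuffled so
    an observer cannot tell which entry carries the true signal."""
    batch = rng.sample(DECOY_POOL, n_decoys) + [real_query]
    rng.shuffle(batch)
    return batch

queries = obfuscate("flights to berlin")
print(len(queries))  # 4 entries: one genuine query hidden among three decoys
```

The genuine query is still transmitted—utility is preserved—but it no longer stands out against the noise, which is the sense in which obfuscation differs from concealment or encryption.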
Building on the 2014 Symposium on Obfuscation, as well as the myriad case studies we researched for our 2015 book Obfuscation: A User’s Guide for Privacy and Protest, our intention for this workshop was to bring together a group of interdisciplinary scholars, industry researchers and practitioners, independent software producers, and privacy artists and activists to help shape this nascent field and seed the beginnings of a more holistic research community. Of course, obfuscation is not a singular solution, but instead operates across diverse scenarios, fields, and sociotechnical contexts, and can be wielded by and against many different actors—including both those with and without power. In most of the applications we consider, it serves as a means for individuals to evade scrutiny and create spheres of freedom and privacy, including freedom from being locked into an increasingly consolidated set of technologies and technology owners. Still, it has become clear that governments, corporations, and other institutional actors may also engage techniques of obfuscation, often for more nefarious ends.
Our goal, then, was not to attempt to nail down obfuscation, but rather to open up its myriad forms and applications to critical consideration. Following the shifting and provisional structure of obfuscation as a strategy—and in true workshop format—we intended not merely to present typical academic papers, but to spark conversations across disciplines, methodologies, and applications, through a variety of formats that included prototypes of products, artistic interventions, and speculative proposals, as well as theoretical and empirical research from a range of fields. Moreover, as an interdisciplinary and multi-sector endeavor, our hope was to simultaneously engage technical issues, ethical and political concerns, and evaluations of the strengths and weaknesses of various use cases, and to consider the potential benefits and limitations of obfuscation overall.
To that end, we identified four key themes that not only helped structure the presentations themselves, but also emerged out of discussions throughout the weekend:
Threat Models: Put simply, understanding threat models is key to determining in which cases obfuscation is the solution and in which cases it is not. Throughout the workshop, it became clear that effective threat modeling includes understanding adversarial capacities and dependencies, levels of coordination, and questions of time and scale. For example, obfuscation may be an effective defense against lower-level adversaries (such as a jealous spouse or boss), while still being breakable by those with greater (or networked) computational, financial, or legal resources; similarly, what seems hidden today may be revealed tomorrow. Several speakers raised concerns about the dangers of under- or overestimating the threat level, as well as of perpetuating an “adversarial arms race.”
Benchmarks and Metrics: Anyone who has created obfuscation tools has been asked, “But does it work?” Throughout the course of the workshop, several different models for assessing success were offered for different contexts, including measurements based on financial costs, computing power, time to de-obfuscate, political efficacy, and so on. There are many cases where identifying benchmarks can be socially or technically challenging, such as in determining training data for “real” or “fake” news. Moreover, it became clear that those creating obfuscating systems often have only a partial view of their adversaries (or of the effects of their own actions), and are often forced to make quick assessments based on limited information—which can be quite dangerous. In addition to identifying quantitative metrics for measuring particular tactics’ efficacy, some also suggested considering equally rigorous qualitative standards for evaluating their performative or aesthetic impacts.
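One simple quantitative benchmark—how much a batch of decoys dilutes an adversary’s confidence—can be sketched under the (strong, and often unrealistic) assumption that decoys are indistinguishable from the genuine item. The functions below are illustrative, not a metric proposed at the workshop:

```python
import math

def identification_probability(n_decoys):
    """With n indistinguishable decoys per genuine action, a guessing
    adversary picks the real one with probability 1/(n+1)."""
    return 1.0 / (n_decoys + 1)

def decoys_needed(max_probability):
    """Smallest decoy count that pushes the adversary's success rate
    down to max_probability or below."""
    return max(0, math.ceil(1.0 / max_probability) - 1)

print(identification_probability(3))  # 0.25
print(decoys_needed(0.05))            # 19
```

The threat-modeling discussion above qualifies exactly this assumption: a stronger adversary who can statistically distinguish decoys from real activity faces far better odds than 1/(n+1), which is why such back-of-the-envelope metrics are a floor for analysis rather than a guarantee.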
Ethical Justifications: Many have questioned the ethical justifications for obfuscation techniques, highlighting cases in which tactics may seem to enable free-loading, waste resources, or fail to challenge larger structures of power. These objections must be carefully considered in context—as well as with regard to asymmetries of information or other forms of power—while also offering legitimacy to obfuscation as a concept and confidence to those creating such systems. Several participants also raised ethical questions about who participates in obfuscation tactics and in what ways, noting that people with diverse and intersecting identities may be unevenly impacted by different forms of surveillance and resistance.
Safeguarding Obfuscation: Widely available technologies and platforms may not support or allow for obfuscation, as a matter of function or policy. In anticipation of those who would prefer that obfuscation not take place, many asked how best to create space for the development of an obfuscation toolkit. Several speakers also raised questions about who is best positioned to support obfuscating tactics: technologists, researchers, activists, or everyday users?
Needless to say, the workshop provoked more questions than answers, and we anticipate these themes and queries will continue to shape obfuscation research moving forward.
In keeping with the workshop format, rather than publishing a formal set of proceedings, we have asked our panelists to each provide a brief essay summarizing their project, concept, or application—with an emphasis on the questions, challenges, and discussions raised during the weekend, as well as those they anticipate will guide future research in this area. In this way, the pieces in this collection constitute a small taste of the wide range of research in the field of obfuscation, some of which may be found by consulting the authors’ recent publications, but much of which continues to be a work in progress. As with the workshop itself, this report is a starting point rather than an end point.
To conclude, we would like to express our gratitude to all of our presenters for taking on the difficult task of defining this emerging field, particularly as it engages diverse theoretical backgrounds, methodologies, and applications. We would also like to thank our planning committee (see sidebar), as well as our sponsors: the NYU Department of Media, Culture, and Communication, NYU Law School’s Information Law Institute, and the National Science Foundation. We look forward to continuing to build this research community and to improving our means of putting obfuscation into practice.
International Program and Organizing Committee:
Paul Ashley, Anonyome Labs
Benoît Baudry, INRIA, France
Finn Brunton, New York University
Saumya Debray, University of Arizona
Cynthia Dwork, Harvard University
Rachel Greenstadt, Drexel University
Seda Gürses, Princeton University
Anna Lysyanskaya, Brown University
Helen Nissenbaum, Cornell Tech & New York University
Alexander Pretschner, Technische Universität München
Reza Shokri, Cornell Tech