1.2 Privacy and Accountability Need Not Be Opposing Goals

Bryan Ford


Prof. Bryan Ford leads the Decentralized and Distributed Systems Research Laboratory at the Swiss Federal Institute of Technology in Lausanne (EPFL). Since earning his Ph.D. at MIT, Ford has held faculty positions at Yale University and EPFL. He holds the AXA Chair on Information Security and Privacy at EPFL.

Giving up our privacy may be insufficient to ensure our cyber security in an AI-led cyber war. Thankfully, it is also unnecessary.

In our digital world, privacy often seems at odds with security and accountability.¹ For example, must we know who is behind the screen offering a service in order to ensure that the service is secure and that its provider can be held accountable, that is, both responsible for complying with the rules and able to demonstrate compliance? The early Internet promised a global platform for free expression, open to all without censorship or discrimination. The massive arrival of anonymous spammers and trolls, however, elicited widespread calls for stronger identification of users, to hold abusers accountable or at least to prevent them from creating a new false identity the instant their previous one is blocked. Now, AI-driven deepfakes can generate millions of false identities and interactions online, amplifying misinformation and chaos by orders of magnitude. These democracy-threatening abuses have led to calls for social media platforms to ‘do something’. Their responses, however, often erode privacy: anonymous employees and opaque AI-driven algorithms decide whether each user ‘seems’ human, and their judgements end up unevenly enforced. These algorithms demand massive amounts of privacy-invasive data about users. Moreover, the resulting arms race between AI-driven fakery and AI-driven detection is a war that real humans are doomed to lose.

Meanwhile, digital financial platforms such as cryptocurrency exchanges increasingly forbid anonymity outright, abandoning bitcoin’s early aspirations toward privacy and ‘financial inclusion’, that is, open and democratic financial systems in which anyone in the world can participate.

Amidst these tensions, it is natural to view privacy and accountability as opposing goals that we must balance. This dichotomy is wrong for two reasons. First, giving up our privacy, even all of our privacy, will be insufficient to make user identification truly secure or accountable in the long term if we cannot escape the AI-versus-AI arms race, since cyber wars are in practice increasingly waged with AI tools. Second, giving up our privacy may be not only insufficient but also unnecessary. Typical data-driven approaches conflate identity with personhood: they confuse the pool of information about a user with the basic fact of existing as a unique person, and with the ability to prove that fact securely online.

Approaches driven by Big Data assume that what is important about ‘us’ is the identifying information stored in databases: our names, addresses, ID numbers, social media profiles, and so on. Digital information, however, is increasingly forgeable. Relying on information analysis for user identification is both what compromises our privacy and what draws us into the artificial-intelligence arms race. India’s Aadhaar program² represents a grand experiment in the data-driven approach, aspiring to assign every citizen a unique ID number via biometric identification. Numerous problems of reliability, exclusion, and corruption in Aadhaar, however, have made it a terrifying case study of the risks of assuming that digital information can reliably represent a real person.

¹ Privacy, Security and Accountability: Ethics, Law and Policy, edited by Adam D. Moore, Rowman & Littlefield International, 2021

² Aadhaar Failures: A Tragedy of Errors, Reetika Khera, Economic & Political Weekly, April 2019

New approaches promise strong security and accountability while preserving full digital and physical anonymity.

Thankfully, collecting and analysing identifying information is not the only way to achieve accountability online.

As an alternative to privacy-invasive identification, that is, knowing who is doing what online, experimental proof of personhood methods attempt to create anonymous-but-accountable digital tokens that securely and uniquely represent real people without having to identify them.³ Researchers are exploring multiple approaches⁴ to proof of personhood, with a variety of security and privacy properties.⁵ Some of these approaches promise strong security and accountability despite full digital and physical anonymity. For example, digital ‘presence’ tokens can attest that conference attendees are unique, real people without embedding any identifying information.
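To make this concrete, the following minimal sketch shows how such a presence token could be issued, using textbook RSA blind signatures in Python. All parameters and names here are illustrative assumptions, not the protocol of any deployed proof-of-personhood system: the toy key is far too small for real use, and a production system would rely on a vetted blind-signature or anonymous-credential library.

```python
# Sketch: anonymous 'presence tokens' via blind signatures (illustration only).
import hashlib
import math
import secrets

# Event organizer's RSA key pair (toy-sized primes; real keys use 2048+ bits).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))          # private signing exponent

# 1. Attendee: hash a locally generated secret into a token, then blind it,
#    so the organizer can sign the token without ever seeing it.
token_secret = secrets.token_bytes(32)
m = int.from_bytes(hashlib.sha256(token_secret).digest(), "big") % n
while True:
    r = secrets.randbelow(n - 2) + 2       # random blinding factor
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n           # all the organizer ever sees

# 2. Organizer: after checking that the attendee is physically present (and
#    has not already collected a token), sign the blinded value.
blind_sig = pow(blinded, d, n)

# 3. Attendee: unblind to obtain a valid signature on the hidden token.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Anyone can verify the token was issued to a real attendee, yet the
#    organizer cannot link it back to any signing session.
assert pow(sig, e, n) == m
print("presence token verified; holder remains anonymous")
```

The token proves that one real person was present while carrying no identity information: the organizer signs only a blinded value, so even the organizer cannot later connect a verified token to the attendee who requested it.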

Cryptocurrencies and central bank digital currencies represent another area of tension between security and privacy.⁶ Financial compliance regulations require user identification, but this threatens the anonymity, autonomy, and ‘borderlessness’ prized by many cryptocurrency users. Similarly, the perception of central bank digital currencies as tools of digital surveillance by governments and corporations may threaten their adoption. But with technologies for decentralized management of private data, neither cryptocurrencies nor central bank digital currencies need accept an ‘either-or’ choice between privacy and accountability.⁷ Future digital currencies might be anonymous, even cash-like, by default,⁸ yet still enable investigators to follow dirty-money trails through warrant-based tracing processes, even without knowing the name or account information of the target.⁹
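As a minimal sketch of how warrant-based tracing might work, consider the toy Python model below. Each payment publishes an opaque ‘trace tag’: a back-pointer to the previous transaction, encrypted under an escrow key. The names, data structures, and single escrow authority are illustrative assumptions only; real proposals split the escrow key among independent trustees using threshold cryptography, so that no single authority can trace anyone unilaterally.

```python
# Sketch: warrant-gated tracing of otherwise anonymous payments (toy model).
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet
import uuid

escrow_key = Fernet.generate_key()  # in practice: secret-shared among trustees
escrow = Fernet(escrow_key)

def make_payment(prev_txid: str) -> dict:
    """Publish a payment whose link to its predecessor is sealed."""
    return {
        "txid": uuid.uuid4().hex,                        # public, unlinkable
        "trace_tag": escrow.encrypt(prev_txid.encode()), # sealed back-pointer
    }

# A coin moves through three anonymous hops.
t0 = make_payment("GENESIS")
t1 = make_payment(t0["txid"])
t2 = make_payment(t1["txid"])
ledger = {t["txid"]: t for t in (t0, t1, t2)}

def trace_back(txid: str, warrant_ok: bool) -> list:
    """With a valid warrant, unseal the money trail one hop at a time."""
    if not warrant_ok:
        raise PermissionError("tracing requires an authorized warrant")
    trail = [txid]
    while txid in ledger:
        txid = escrow.decrypt(ledger[txid]["trace_tag"]).decode()
        trail.append(txid)
    return trail

print(trace_back(t2["txid"], warrant_ok=True))
# e.g. ['<t2>', '<t1>', '<t0>', 'GENESIS']
```

Without the escrow key, observers see only unlinkable transaction IDs; with a warrant and the trustees’ cooperation, investigators can unseal exactly one trail, hop by hop, never learning names or account details that were never recorded in the first place.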

We must be wary both of the security-purist viewpoint that privacy must be sacrificed on the altar of law and order, and of the privacy-purist viewpoint that we must live with arbitrarily amplified online abuses as the price of free speech.

The solutions that achieve both security and privacy will lie in the middle ground. We need better communication and knowledge transfer between regulators and the technologists who understand and develop these tools.

³ Using “Proof of Personhood” To Tackle Social Media Risks, Aengus Collins and Bryan Ford, EPFL, March 2021

⁶ Design Choices for Central Bank Digital Currency, Sarah Allen et al., Global Economy & Development Working Paper 140, Brookings Institution, July 2020

⁷ CALYPSO: Private Data Management for Decentralized Ledgers, Eleftherios Kokoris-Kogias et al., August 2021

⁸ How to Issue a Central Bank Digital Currency, David Chaum, Christian Grothoff and Thomas Moser, Swiss National Bank, March 2021

⁹ Open, Privacy-Preserving Protocols for Lawful Surveillance, Aaron Segal, Joan Feigenbaum and Bryan Ford, July 2016

The central challenge in addressing the societal impact of cyber security measures is the dual-use character of cyber technologies: they both provide benefits to society and present the greatest threats to it. The infrastructure, the expertise, the knowledge and the methods all originate in the same ecosystem. The only defences we have against cyber risks are cyber technologies themselves.

Since no security guard can fend off a lightning-fast algorithm, cyber surveillance, tracking, profiling, and automated analysis and decision-making seem to be the only options. Malevolent activity in cyberspace can only be reduced by flooding the entire cyber ‘body’ with cyber poison, yet these invasive measures can compromise the very societal values that cyber technologies are meant to serve: privacy, dignity, trust, solidarity, the rule of law, civil and human rights, and health and safety, among others. A societal approach to cyber security design would first determine which of these societal values cyber technologies generate, and which values are threatened when those technologies come under attack.

Societies can be distinguished from one another by the degree to which they regard security as a collective or an individual problem. Whereas Scandinavian countries organize the security of their societies by seeking collective good and avoiding collective bad, highly liberal and individualistic societies like the United States trust that allowing citizens maximum freedom to seek the good and avoid the bad will in the end be best for all. Central European countries lie somewhere in between.

The challenge lies in the reality of privatised technology development, as true today as it was for President Eisenhower in 1961. Security in general, and cyber security in particular, are at greatest risk in the conundrum created when financial values are prioritised over societal values: when decisions about which cyber technologies to build, and how to build them, are made on the basis of corporate balance sheets rather than of values and the public good.