Facial recognition, drones, heat maps, massive data processing and cellphone monitoring: Decades ago, they appeared only in science fiction movies, but in recent years, they have been increasingly applied for security purposes. With the world combating COVID-19, the integrated use of such technologies has become a global phenomenon.
Although such measures are aimed at a public health emergency, how can we guarantee that the databases and applications developed to protect the population's health will not later be directed toward less noble purposes, and that privacy will remain a priority? It is imperative to deepen the debate about the use of these tools, expanding our understanding of where, how and when to apply them in a way that is ethical and fair to communities worldwide.
In Israel, the security service Shin Bet began employing a powerful surveillance program, originally used to fight terrorism, to monitor COVID-19 patients and possible carriers in order to retrace their movements. Although the program identified 500 infected individuals, Israel's highest court ruled that without special legislation authorizing it and drawing specific limits, the Shin Bet has no authority to track civilians in its efforts to contain the crisis. The primary concern was the risk of a dangerous invasion of privacy that would undermine democracy, and it is a valid one.
In Brazil, COVID-19 has concentrated efforts around heat maps, which allow states to consult aggregated, nonindividualized information on population movement in real time and across locations to assess the efficiency of social isolation measures and support decision-making. There are also companies monitoring 60 million cellphones through GPS data and Wi-Fi and Bluetooth signals, under the claim that the data were collected with the permission of the applications' users, as if that settled any privacy concerns.
In Russia, even phone and credit card data have been used to map those who made contact with infected people and must be quarantined, on top of a robust system of 170,000 cameras equipped with facial recognition. At the end of March, Moscow police reportedly identified and fined more than 200 people who allegedly violated quarantine and self-isolation orders. A Russian media report pointed out that some of the alleged offenders had been away from home for less than half a minute before they were caught by a camera, displaying the chilling effectiveness of a technology that disregards the privacy of those it monitors.
But on top of these privacy worries, there are doubts about the effectiveness of such methods. Smartphone use is far from ubiquitous, even in affluent countries. The device must also be constantly on, with the app installed and with GPS and Bluetooth activated. Moreover, users who become infected must register their status on the app for the program to function.
Other countries have implemented similar contact tracing applications, generally relying on smartphones' GPS or Bluetooth, which seek to catalog all personal interactions that their carriers have had over the past 14 days. The moment one of these individuals tests positive for COVID-19, messages are sent to the entire network of people who have been in contact with the infected person, warning them to quarantine. But GPS data raise special concern regarding privacy violations. Analyzed over time, this information can easily reveal not only the identity of each "anonymized point" on the map, but also tastes, routines and sexual preferences, and it has even exposed marital infidelities.
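The privacy stakes of these two designs differ. As a rough illustration, a minimal Python sketch of a decentralized, Bluetooth-style scheme is shown below; the names and structure here are purely hypothetical and do not reflect any particular country's app. Phones exchange short-lived random tokens when they meet, and only an infected user's own tokens are ever published, so no central map of movements is needed:

```python
import secrets

class Phone:
    """Hypothetical phone in a decentralized contact-tracing sketch."""
    def __init__(self, owner):
        self.owner = owner
        self.broadcast_tokens = []   # random tokens this phone has emitted
        self.heard_tokens = set()    # tokens overheard from nearby phones

    def new_token(self):
        # Each broadcast uses a fresh random token, so observers cannot
        # link separate sightings back to one stable identity.
        token = secrets.token_hex(8)
        self.broadcast_tokens.append(token)
        return token

def encounter(a, b):
    # Two phones in Bluetooth range exchange their current tokens.
    a.heard_tokens.add(b.new_token())
    b.heard_tokens.add(a.new_token())

def exposed(phone, infected_tokens):
    # When a user tests positive, their broadcast tokens are published;
    # each phone checks locally whether it overheard any of them.
    return bool(phone.heard_tokens & set(infected_tokens))

alice, bob, carol = Phone("alice"), Phone("bob"), Phone("carol")
encounter(alice, bob)   # Alice and Bob met; Carol met no one.

# Bob tests positive and uploads his tokens.
print(exposed(alice, bob.broadcast_tokens))  # True: Alice is warned
print(exposed(carol, bob.broadcast_tokens))  # False
```

The contrast with GPS-based tracing is the point: here, location never leaves the device, whereas a centralized GPS log must be trusted not to be re-identified or repurposed.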
In some situations, the public's concerns regarding privacy have come to fruition. A security breach in one of these applications in Qatar exposed sensitive data from more than 1 million people. In North Dakota, a similar application was found to be passing data to Google and Foursquare, confirming that the widespread privacy concerns are well founded.
In the United States, San Francisco banned the police from using facial recognition technologies on the grounds that their use would jeopardize civil rights and freedoms. Data capture has increasingly sought to reach sensitive human information. Images taken in public places can, for example, yield biometric data drawn from faces and support inferences about an individual's religion and racial or ethnic origin.
Do you remember that free, seemingly harmless app you downloaded that asked for persistent access to your GPS and Bluetooth? That information was collected and, duly monetized, passed on to "partner applications" to be processed for other purposes. The next time you encounter such permission requests, it would be wise not to click "accept" without further reflection.
Technological mechanisms can be essential for human development and, in the short term, for overcoming this unfortunate pandemic. Far from defending the abolition of these technological tools, we should foster public debate in order to promote their improvement and adequacy. We must keep in mind that such mechanisms should aim to maximize human well-being and dignity, the true foundations of democracies worldwide, and must not be used as means to violate the values enshrined in the U.S. Constitution or to undermine its foundations and objectives.
Anderson de Paiva Gabriel is a Brazilian visiting scholar at the UC Berkeley School of Law and a doctoral candidate at the State University of Rio de Janeiro.