The 21st century has been marked by the remarkable advancement of the Artificial Intelligence (AI) industry, which has gradually become part of our day-to-day lives, transforming many aspects of our society. AI is defined as “a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (Haenlein and Kaplan 2019, 5). It is important to understand that AI systems become more efficient and more accurate the more data they can store and process.
Big Data, in turn, refers to the collection and aggregation of information into large datasets that are analyzed to reveal patterns and associations. Thus, Artificial Intelligence technology and Big Data are complementary. The excitement around AI and Big Data stems from the fact that, together, they can identify patterns that cannot be detected by human cognition.
After the attacks of 9/11, there was an unprecedented gathering of citizens’ private data for security reasons in the United States, which gave rise to what experts have been calling the age of ‘surveillance capitalism.’ The incentives linked to national security have pushed forward the development of technologies that use human biometric information to identify citizens, one of which is known as Facial Recognition Technology (FRT) (ibid.). In short, this technology uses an algorithm that matches the image of a face against images already stored in its database. Today, FRT is still used primarily for security purposes, such as law enforcement and border control.
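The matching step described above can be reduced to a simple idea: each enrolled face is stored as a numeric “embedding,” and a new face is identified by finding the closest stored embedding within some distance threshold. The following is a minimal illustrative sketch of that nearest-neighbor logic only; the embeddings, names, and threshold value are invented placeholders, not part of any real FRT system.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(probe, database, threshold=0.6):
    """Return the closest enrolled identity, or None when no stored
    embedding falls within the distance threshold (an 'unknown' face)."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        d = euclidean(probe, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Hypothetical database of previously enrolled face embeddings
db = {"alice": [0.1, 0.9, 0.3], "bob": [0.8, 0.2, 0.5]}

print(match_face([0.12, 0.88, 0.31], db))  # close to "alice"
print(match_face([5.0, 5.0, 5.0], db))     # far from everyone: None
```

The threshold is where the accuracy and bias concerns raised later in this section enter: set it loosely and the system misidentifies strangers; set it strictly and it fails to recognize enrolled people, with error rates that real-world studies have found to vary across demographic groups.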
In the past three decades, the way our society perceives and responds to risk has changed. One of the recent approaches to assessing the risks of the modern era was advanced by the sociologist Ulrich Beck in 1992. Given the incredible pace of technological change, he studied the fears and transformations of social attitudes towards risk, and argued that risks in modern society are central to political and social conflicts, as they have become the prime characteristic of today’s world order (Neri & Cozmani 2019, 663). Beck argues that “modernity is a project of social and technological control by a nation-state.”
Thus, the risk society is an unavoidable condition of modernity, and it concerns everyone. Today, Facial Recognition Technology is an example of the ‘reflexive modernity’ in which we live. Indeed, this technology was primarily developed to manage certain risks related to national security and public safety. However, over time and due to its advancement, the technology has become a source of risk itself. The main risk identified by experts and by most of the actors involved in the development of Facial Recognition Technology concerns privacy rights, a risk that raises major ethical concerns. At the core of this concern is the biometric data that is obtained, stored, shared, and further used for different purposes by different agencies without citizens being aware that this is happening.
Today, the traditional regulations that protect individuals’ privacy rights do not account for this kind of data collection. It is important to highlight that the right to privacy is recognized as a fundamental human right by Article 12 of the Universal Declaration of Human Rights, and it is an essential means of balancing the power relationship between governments and citizens, and between citizens and large private businesses (Nakar 2010, 100). In fact, most of the leading actors involved in the development of Facial Recognition Technology, such as Google, Amazon, IBM, and Microsoft, have recognized that the risk of this technology lies in its potential for abuse, which may lead to the violation of fundamental rights (Justin 2015). These private actors have voiced their concerns regarding FRT. For instance, Microsoft reported that the risks are associated with the potential for governments to surveil citizens in ways that violate people’s freedoms. The company also stressed that the algorithms underlying FRT accuracy are not yet fully reliable, so it is dangerous to deploy the technology on a large scale, as there are problems with identification bias (Smith 2018). Another example is found in a letter to Congress written by IBM, which pointed out its concerns about how the technology will be used by law enforcement agencies. The main risk highlighted by the company concerned the abuse of FRT for surveillance purposes, leading to the violation of freedoms and basic human rights (IBM Letter to Congress, 2020).
Moreover, civil rights groups have also voiced their concerns regarding surveillance and privacy rights. For instance, ‘Fight for the Future,’ a US civil rights group, has expressed its distrust of the motivations behind the development of FRT by these big companies, and of its sale to and agreements with the public sphere (e.g., police departments). Another example is the non-partisan independent watchdog POGO, which recently testified before Congress about its concerns regarding misidentification, algorithmic bias, and the potential for surveillance. It compared the unregulated use of this technology to its use in authoritarian countries such as China and Russia.
Technology clearly advances faster than legal privacy protections. According to Brookman, the US legal framework that should protect personal data privacy is still very weak, especially when compared to the European one (2020, 358). Indeed, in the United States, federal and state laws protect only specific categories of data considered sensitive, such as children’s information, rather than providing affirmative privacy protection over most personal information (Brookman 2020, 259).
The creation of a regulatory regime for FRT is still an ongoing process in the United States. Private and public actors are still assessing the risks and defining the standards for a viable regulatory framework that would encompass them all. To date, it is possible to find some regulations in specific states and cities introduced by the public sector, as well as some self-regulatory measures taken by companies that deal with FRT to slow its sale and development.
In the case of Facial Recognition Technology, there is significant reliance on alternative types of regulation; in particular, there is growing support for self-regulation. However, none of these measures have precisely targeted the risk to privacy rights, although many private actors have warned about it. Self-regulation occurs when the target, to which the regulations apply, and the regulator, who enforces the rules, are the same firm or industry. Indeed, companies such as Microsoft, Amazon, Google, and Identix Incorporated have already adopted some self-regulations in the past year.