Facial Recognition as a Security Tool*

Mª Josefa Ridaura Martínez

I. Among the many fields in which AI-based facial recognition is or could be applied, this brief section focuses on its potential as a security tool. It highlights not only the advantages of this technology, but also the significant and complex challenges it poses—particularly in relation to fundamental rights that underpin democratic states.  

However, before delving further into the subject, several preliminary considerations must be made:

a. On the one hand, although not universally accepted, European regulations regard a person’s face as personal data. For instance, the Spanish Data Protection Authority has long considered images captured through video surveillance to constitute personal data, insofar as both access control and responses to criminal acts aim to identify individuals. When video surveillance systems incorporate biometric identification capabilities, such processing is also deemed to involve a special category of data. Therefore, determining the nature of the data is essential in order to establish the applicable legal framework.

b. On the other hand, it is important to distinguish between two types of system: verification systems, which compare two templates to produce a 1-to-1 match—for instance, in private uses such as unlocking mobile phones, accessing bank accounts, or even workplace monitoring. These uses are, in most cases, voluntarily accepted by the user, and such consent determines the lawfulness of the practice. By contrast, biometric identification systems capture an image and compare it—even in real time—with images previously stored in databases (1-to-many).

This latter system is the primary one employed in public places for security purposes. Although first implemented in the 1990s, its capabilities have since been significantly enhanced through artificial intelligence.
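To make the distinction more tangible, the following minimal sketch (written in Python purely for illustration; the cosine-similarity measure, the 0.8 threshold, and the function names are assumptions of this note, not features of any actual deployed system) contrasts a 1-to-1 verification with a 1-to-many identification once faces have been reduced to numeric templates.

```python
# Illustrative sketch only: assumes faces have already been converted into
# numeric feature vectors ("templates") by some face-embedding model.
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face templates (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """1-to-1 verification: does the probe match the single enrolled template?
    The 0.8 threshold is an arbitrary illustrative value."""
    return similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, database: dict[str, np.ndarray], threshold: float = 0.8):
    """1-to-many identification: compare the probe against every stored template
    and return the best candidate, or None if nobody clears the threshold."""
    scores = {name: similarity(probe, tmpl) for name, tmpl in database.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else None
```

The legally relevant difference is visible in the shape of the operation: verification checks one person against one consented record, whereas identification scans everyone in a database to find a candidate, which is why the latter underpins the surveillance scenarios discussed below.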

II. Facial recognition offers numerous advantages, particularly in the field of security, which is the focus of this discussion. It can assist in the resolution of criminal investigations and has proven useful in identifying stalkers, combating terrorism, and recognising disaster victims, among other applications. For example, since the launch of INTERPOL's Facial Recognition System in late 2016, the technology has contributed to the identification of nearly 1,500 terrorists, criminals, fugitives, and missing persons. More recently, in Ukraine, it has been employed to detect saboteurs, identify corpses, and help reunite families.

III. Alongside the strengths of facial recognition, its weaknesses are equally significant. In the absence of a robust legal framework with appropriate safeguards, its use may lead to scenarios of mass surveillance, with serious implications for fundamental rights that lie at the core of democratic states. These concerns go beyond privacy and data protection, as facial recognition can also undermine freedom of movement, freedom of thought, freedom of assembly, the presumption of innocence, the right to effective judicial protection, and the prohibition of discrimination, among other rights.  

The impact on fundamental rights is further exacerbated by the fact that the use of facial recognition cameras for security purposes does not rely on consent, unlike the other voluntary—or at least consented-to—uses mentioned earlier. An individual walking through a public space, whose face is captured and converted into data by a facial recognition system, has given no consent whatsoever. I would emphasise the following points:

i. Indiscriminate tracking gives rise to mass surveillance scenarios, which sit uneasily within the framework of democratic societies. It also prompts the enduring question of whether it is proportionate to monitor everyone in order to control only a few—particularly given that surveillance on public roads is generalised and indiscriminate, targeting not only those individuals being sought.

It is true that no right is absolute—not even the right to privacy. However, in a democratic state, any intrusion into privacy in the name of security must be justified and comply with the principles of suitability, necessity, and proportionality. The European Court of Human Rights (ECtHR) has repeatedly outlined the requirements that a surveillance programme must meet: (a) it must be clearly and precisely grounded in law; (b) the legal framework must provide strong safeguards for the rights of those concerned; and (c) the measure must be necessary in a democratic society (see, for instance, S. and Marper v. the United Kingdom).

ii. Facial recognition also affects personal liberty and the presumption of innocence, as one of the key risks associated with its use is the potential for mistaken arrests. This occurs particularly when law enforcement officers fail to undertake additional verification measures beyond the biometric results provided. It must be stressed that the output of a biometric comparison system is never binary (yes or no); it provides a probability of a match, which must therefore be corroborated (a brief illustrative sketch follows at the end of this point). In this regard, software developers themselves acknowledge that facial recognition algorithms are less accurate than those based on other biometric data, such as fingerprints or iris scans, which increases the risk of false positives or false negatives. As a result, the significant margin of error inherent in facial recognition systems calls for enhanced verification safeguards, particularly given the often inadequate guarantees surrounding the conditions under which facial images are obtained.

Furthermore, this flawed identification process frequently coincides with discriminatory biases, as most individuals wrongfully arrested have been Black or women. The seriousness of such biases means that erroneous arrests resulting from facial recognition may constitute violations of personal liberty, leading to unlawful detention and undermining the presumption of innocence. Given that we are dealing with a fallible technology carrying significant risks, its use for policing purposes demands rigorous verification beyond the sole output generated by the system.   
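The sketch announced above follows. It is purely illustrative (the names, similarity scores, and threshold are invented and do not reflect any real police system), and shows why the output of a comparison is a graded score rather than a yes/no answer, and why the choice of threshold directly trades false positives against false negatives.

```python
# Minimal sketch of why a biometric "match" is a probability, not a verdict.
# All scores and the threshold below are invented for illustration only.
candidate_scores = {
    "record_A": 0.91,   # similarity between the probe image and stored image A
    "record_B": 0.87,
    "record_C": 0.62,
}
THRESHOLD = 0.85  # chosen by the operator; there is no single "correct" value

for name, score in candidate_scores.items():
    flagged = score >= THRESHOLD
    print(f"{name}: score={score:.2f} -> {'possible match' if flagged else 'no match'}")

# Lowering THRESHOLD catches more genuine matches but flags more innocent
# people (false positives); raising it does the opposite (false negatives).
# Either way, a flagged result is only a lead that still requires independent
# verification before any arrest.
```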

iii. Facial recognition also contributes to a chilling effect, as the awareness of constant surveillance can deter individuals from exercising fundamental rights such as freedom of assembly and protest. This was recently acknowledged by the ECtHR, which held that the use of «highly intrusive» facial recognition technology to identify and arrest participants in a peaceful demonstration could produce a chilling effect on the rights to freedom of expression and association. As such, its use «against a person exercising their right to freedom of expression is incompatible with the ideals and values of a democratic society» (Glukhin v. Russia, ECtHR, 4 July 2023).

In short, given these risks, the Council of Europe has cautioned that the use of facial recognition systems by law enforcement agencies should be permitted only when strictly necessary to prevent an imminent and serious threat to public safety.

IV. The use of facial recognition in Spain for security purposes gives rise to two main scenarios:

a. Its use for criminal investigation purposes, governed by Organic Law 7/2021 of 26 May, on the protection, detection, investigation, and prosecution of criminal offences and the execution of penalties. In this context, the Security Forces and Corps use ABIS (Automatic Biometric Identification System): a tool that employs artificial intelligence algorithms to determine, within seconds, whether an image contains the face of a person already registered in police records. This system—at least in principle—does not result in mass surveillance, and its associated risks are more limited, as it is used in a post-offence context.

However, its use in Spain gives rise to a number of concerns stemming from the lack of comprehensive regulation on the matter—since the aforementioned law does not provide full coverage. This regulatory gap raises important questions regarding the methods used to obtain images, the subsequent management of the databases in which they are stored, and the conditions under which such data are retained.

b. For surveillance purposes aimed at preserving public security.

Within this framework, Organic Law 7/2021 likewise fails to provide adequate regulation regarding the circumstances under which the use of facial recognition may be permitted. As such, it constitutes a deficient legal framework that must be adapted to the requirements established by the recent Artificial Intelligence Regulation (AIR), which was formally adopted on 24 May 2024. This Regulation, in general terms, prohibits indiscriminate surveillance and classifies facial recognition as a high-risk system.

In brief, it exempts from its requirements systems intended for biometric verification (1-to-1). With regard to identification-based facial recognition (1-to-many), the Regulation does not apply to activities related to military, defence, or national security purposes, but it does apply in the context of public security.

i. Specifically in the area of public security, the Regulation prohibits real-time biometric identification for law enforcement purposes in public spaces, except in a limited set of circumstances exhaustively listed, where such use is deemed necessary to achieve an essential public interest of sufficient importance to outweigh the associated risks.  

These exceptions are set out in Article 5(1)(h) and are subject to strict safeguards: beyond clearly defined cases in which such use is permissible, an impact assessment must be conducted, and prior express authorisation must be obtained either from a judicial authority or from an independent administrative body of a Member State, whose decision is binding.

ii. When facial recognition is not used for law enforcement purposes, but rather in other contexts where it is considered a high-risk system, its use is not prohibited—as it is in the previous case—but must comply with the obligations established by the Regulation (Article 6). This includes conducting risk assessments and ensuring ongoing transparency and human oversight throughout its deployment.

The AIR does not apply to all areas in which facial recognition may be used. Other uses—by different actors and in different settings—are subject to specific data protection regulations. This includes, for example, its deployment by private companies or in publicly accessible spaces such as stadiums and large event venues. However, such applications fall outside the scope of these brief reflections.

In any case, the Spanish legislature must urgently undertake the reform of national legislation to align it with the requirements of the Regulation, as well as those stemming from the Council of Europe’s Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, adopted earlier this year.  

*This translation has been revised by María Amparo González Rúa from the original Spanish version, which can be consulted here.
