Lessons from the past: From the right to data protection to a right to artificial intelligence?*

Daniel Jove Villares

The General Data Protection Regulation (GDPR), in addition to being the cornerstone of the set of regulations that shape the European data ecosystem, is in a way a beta version of the regulatory model with which the European Union intends to define the digital space. Therefore, understanding the rationale underlying the regulation of the right to data protection proves useful not only for what it may contribute to the understanding of this specific fundamental right, but also for a better comprehension of the European Union’s Artificial Intelligence Act (AI Act), since many of its features —such as proactivity, prevention or a focus on risk as a determining criterion— were first tested within data protection regulation. Taking this into consideration, along with certain lessons derived from the regulatory developments that have brought us to the present moment, I would like to share some reflections that, although open to debate, aim to contribute to the broader discussion surrounding AI, both in terms of its regulation and the potential articulation of «a genuine ‘right to artificial intelligence’» (Presno Linera).

The evolutionary process that the right to data protection has undergone shows that, when the context and the reality upon which the right is projected are subject to change, definitive conclusions should not be drawn. The fact that at a given moment a right does not exist —for instance, because its protection is deemed to be ensured through the facets or manifestations of other pre-existing rights— does not preclude it from eventually becoming consolidated as an autonomous right. The process of differentiation between privacy and data protection is a good example.

If that is the case, how can we know when we are facing a new right that requires specific recognition? Although the answer to this question is complex and far broader than can be addressed here, one of the key elements lies in the identification and delimitation of the legal interest or interests at stake. Having a clear understanding of the purpose of the right, as well as its defining content, makes it possible to determine the most appropriate way to approach its regulation, prevents unnecessary duplication and helps establish the level of legal protection best suited to the reality that is meant to be safeguarded. In this way, it becomes possible to discern whether one is truly facing a genuine right or, on the contrary, whether adequate protective instruments already exist, in which case it would only be necessary to incorporate new dimensions or facets into pre-existing rights.

When considering the possible recognition of a right to artificial intelligence, the experience of data protection law invites us to look for the distinctive elements that may reveal the existence of purposes that cannot be protected through pre-existing rights. From a regulatory perspective, the various legal frameworks that develop the right to data protection —particularly the GDPR— highlight the need to address certain debates that will help shape the regulatory model governing each innovation or technological phenomenon. In this regard, and without delving into (necessary) debates concerning how and by whom the protection of rights should be carried out —whether through more, less or no self-regulation at all— it seems reasonable to assume that, in the virtual space, personalization and flexibility in actions are essential. Otherwise, the law runs the risk of falling too far behind, an additional threat to rights and freedoms that should be avoided as far as possible.

In the debate over whether to opt for greater or lesser self-regulation, the EU appears to have generally adopted a position in which, at most, it accepts a form of ‘regulated self-regulation’. However, there is another unresolved question that, as a society, we need to address: the personalization of legal responses. On a technical level, it seems increasingly feasible that, just as personalized advertising is offered, specific protective measures could be tailored to each situation, and even to individuals. If such a level of precision in legal responses were possible, if it were feasible to design customized protection models for every person, should they be adopted? This is an issue that, at least in relation to certain rights, will require serious consideration.

In this regard, one factor that must be taken into account is that personalization will likely go hand in hand with the commoditization of rights. In the case of personal information, it increasingly appears to be more of a product and less of a good belonging to the individual. In other words, the functionalities, utilities and conveniences generated by the technological ecosystem have gradually eroded individuals’ ability to preserve a private sphere, leading to a devaluation of rights such as data protection and privacy in the name of progress. As if this shift were not already concerning enough, a new transformation is taking place —one that, moreover, is being driven by the very same entities that enabled the initial devaluation— and it consists in offering spaces of inviolability in exchange for a price or as a means of fostering consumer-user loyalty. The process is paradoxical: the value of the right is first reduced to zero, and then it is sold back as something worth preserving, at a price. In the meantime, it has lost its strength as a personal right. This should be a variable to consider when discussing the adoption of models that involve personalized guarantees. It is certainly not the only one, but it must be taken into account, especially if the aim is to carry out that transition properly.

Finally, turning to the regulatory model —though with clear implications for the configuration and consolidation of rights— attention should be drawn to the growing importance of risk as a priority approach in the regulation of rights. It is true that risk has always been present as a factor shaping the scope of rights: for instance, the possibility —the risk— that another person might kill us is one of the underlying reasons for the existence of the right to life, though not the only one.

What has changed? Risk has taken the lead. Traditionally, when rights were configured reactively, it was assumed that their very proclamation would have sufficient deterrent force to prevent harm from occurring and that, only when it did occur, would the reparative mechanism be activated. Now risk is no longer just one criterion among others; it has become the determining element that defines the regulation, to the point of deciding whether something can be done or not. The scope of what is possible is no longer determined by the content or purpose of the right, but by the level of risk deemed legally acceptable. Risk has replaced rights as the central element. And this entails certain problems, because it is not the same for the mere possibility of something occurring to exist —without legal consequences arising unless it actually happens— as it is to establish levels of risk and let those levels determine the scope of what is legally permissible. The conception of freedom and autonomy changes substantially.

In practice, this has led the legislator to no longer be content with establishing remedies for problems and harm, but rather to seek to anticipate them. The final outcome may not differ significantly: when the level of risk is deemed unacceptable, for example, a risk-based prohibition ends up being equivalent to a reactive model. In other cases, risk levels are equivalent to what would traditionally be considered exceptions allowing interference with a right. However, insofar as harm prevention operates as a superior criterion, the approach is substantially different.

Rights are no longer defined by what they are or what they enable, but rather through a negative approach in which the probability of certain events occurring becomes the basis for decision-making. It is not the same to say, ‘I do this because the content of this right allows me to’, as ‘I do this in this specific way because there is a more or less likely possibility that harm may occur’. Moreover, by placing risk at the centre, the context and the circumstances become the conditions of possibility for the exercise of the right, rather than variables to be analysed ex post in the event that harm eventually occurs.

This is not to say that this option is a bad one; indeed, it may well be the appropriate one. The question, however, is whether this model has been chosen out of conviction and because it is indeed the most suitable for, say, achieving the ‘Brussels effect’, or whether, on the contrary, the precautionary approach has been embraced out of sheer inertia, necessity, or perhaps fear and uncertainty about what technological evolution may bring. Whatever the reason may be, the fact remains that this decision has significant effects on the nature of rights, as it is leading to their objectification. Yes, the subjective elements are still present: in the case of the right to data protection, for instance, individuals can still exercise the various powers it confers (such as the rights of access, rectification, erasure, objection and portability, among others). However, the general guarantee of the right, its everyday protection, no longer depends on the defensive action of its holders, but rather on the measures adopted by data controllers, who have become the true guarantors of the right.

But why, in technological contexts, does the objectification of rights carry so much weight?

The adoption of this approach may be a response to the complexity of the interrelationships and actions that take place through digital means. Their mass nature, their dynamism and the impossibility of constantly monitoring for potential interferences with rights compel the adoption of more proactive models for the protection of individuals’ legal interests. There is also a reason rooted in financial pragmatism: if the operation of the digital market depended on the constant exercise of individual rights, its functioning would be significantly slowed down.

Whether due to factual impossibility or to commercial convenience, it seems quite logical to shift a substantial part of the responsibility to the operators, designers and those in charge, as they are the ones in a position to ensure compliance with the regulations. In this way, the primary safeguard for individuals would no longer lie in the actions they may exercise, but in the protective obligations that others are required to fulfil. Therefore, the real battle in the defence of rights must be fought primarily in the design of duties and obligations. However, it is evident that the more protection is objectified —shifted toward anticipation and third-party responsibility— the less room remains for personal self-determination. In this regard, the AI Act stands as the paradigm of objectification. In the AI Act, operators’ obligations are everything. The subjective element has virtually disappeared. There are no action-based rights, only obligations for those who wish to operate with AI systems. This heavily one-sided legislative approach may suggest that the EU does not conceive of AI as a right, but rather as a product; something that undoubtedly poses an obstacle to the eventual articulation of a right to artificial intelligence. For the right to artificial intelligence to become a real and effective right, the legislator must first be truly aware of what is being regulated: that AI is, or can be, more than just a product.

*This translation has been revised by María Amparo González Rúa from the original Spanish version, which can be consulted here.
