1. Introduction: the risk-based approach
Across Western countries, the 21st century has seen growing resort to a risk-based approach to regulation as a response to the development of what has been defined since the 1980s as a “risk society”. This regulatory model was soon adopted, amongst others, by the European Union, first in fields such as environmental and health law and later in the field of digital technologies. Since the publication of the Commission’s Communication on a digital single market strategy for Europe, EU institutions have increasingly used the tool of risk to encourage greater accountability of both public and private actors for the potential collateral effects connected to the use of digital technologies or the processing of data.
The risk-based approach consists of the adoption of a regulatory framework where duties and obligations are scaled and adapted to the concrete risks deriving from a specific activity. The binary logic of compliance/non-compliance is thus overcome by a form of “compliance 2.0”, where legal requirements are tailored to the targets of regulation themselves. The most typical structure of the risk-based approach, such as that characterising the GDPR, features a mechanism by which risk evaluation and risk mitigation are put in place directly by the targets of regulation. The recently proposed AI Act, however, seems to turn such a perspective upside down by implementing a more clearly top-down form of risk-based regulation.
2. From the GDPR to the AI Act: the shift from the bottom-up to the top-down model
As is well known, the principle of accountability, expressed in Article 5(2), runs across the entire GDPR. In practice, that principle is also implemented through a regulatory model based on risk, whereby the data controller and the data processor must be able to prove that they have put in place all the technical and organisational measures necessary to ensure that the principles enshrined in the regulation are respected (Articles 24 and 25). If they are not capable of proving the adoption of such measures, they will be held liable for damages. Consequently, data controllers and data processors must assess how risky their activities are for the protection of privacy and personal data and act accordingly by developing the best strategy to minimise the risks they have identified. The risk-based approach of the GDPR is, in other words, inherently grounded upon the “responsibilisation of the regulatee”. The traditional top-down legislative dialectic shifts towards a more collaborative architecture, where the governed must implement appropriate risk management strategies to avoid liability.
The more recent proposal for a Digital Services Act (DSA) also features a risk-based approach to content moderation. On the one hand, the DSA aims to introduce new duties and obligations for providers of intermediary services. On the other hand, it tries to encourage moderation practices that are more transparent and protective of individuals’ fundamental rights, notably freedom of expression and information. The DSA directly identifies four risk categories of providers (intermediary service providers in general, hosting providers, online platforms, and very large online platforms) and imposes progressively stricter obligations based on that classification. Through this asymmetric approach, the DSA embraces the principle of proportionality, which represents a key feature of Union risk-based digital regulation.
However, whereas the GDPR, following a bottom-up perspective, completely delegated the duties to assess and mitigate risks to the targets of regulation, the DSA diverges slightly from this model, since it itself identifies the objective criteria on which the classification into risk categories is based. Nonetheless, the shift from a bottom-up to a top-down logic is not yet complete: in the case of very large online platforms (VLOPs), for instance, ample leeway is left to them to develop appropriate strategies for mitigating the systemic risks connected to their activities (Articles 26-27). Ultimately, the approach followed by the DSA can be defined as a hybrid: its overall structure can be represented as a spectrum ranging from a predominantly top-down, compliance-based discipline for smaller providers to an increasingly bottom-up approach for VLOPs. This is because VLOPs not only carry the most risks but, due to their dimensions and revenues, are also best placed to put in place the appropriate measures for risk assessment, management, and mitigation.
Within the AI Act, the shift from a bottom-up to a top-down model is even more evident. Here too, the proposal identifies four risk categories applicable to the various AI systems: unacceptable risk, high risk, limited risk, and minimal risk. The systems pertaining to the first group (e.g., those for real-time remote biometric identification in publicly accessible spaces) are prohibited by Article 5. The second group, which includes a list of AI systems identified by Annex III, amendable by the Commission (Articles 6-7), faces a long list of quality and transparency requirements, while providers and users of those systems must comply with control obligations and duties (Articles 8 ff.). Finally, whereas limited-risk systems (such as chatbots) must simply comply with the transparency requirements provided by Article 52, all other systems, considered to pose minimal risk, are not regulated at all (though the adoption of codes of conduct is encouraged).
In the case of the AI Act, therefore, risk assessment and risk mitigation are no longer tasks entirely delegated to the discretion of the targets of regulation. On the contrary, the classification of a specific AI system within a specific risk category, and consequently the decision concerning the risk mitigation measures to be put in place, is an automatic top-down process. Although the Commission, in its explanatory memorandum to the proposal, states that the AI Act “puts in place a proportionate regulatory system centred on a well-defined risk-based regulatory approach”, some have argued that the proposal does not truly feature a real risk-based approach.
The perspective adopted by the AI Act, in fact, is in many ways the reverse of that characterising the GDPR. Whereas within the GDPR the evaluation and mitigation of risk, as means to protect individuals’ rights to privacy and data protection, were tasks left to the discretion of the data controller and data processor, the AI Act carries out those tasks itself. It is true that, pursuant to Article 9, high-risk AI systems will have to be subject to a risk management system; such a provision, however, plays an ultimately residual role within the framework established by the proposal.
3. Risk as a manifestation of European digital constitutionalism?
The adoption of multiple perspectives on the risk-based regulation of digital technologies may raise concerns about the consistency and coherence of EU digital policies, especially with respect to the AI Act proposal. In fact, the top-down model of the AI Act is arguably the opposite of the bottom-up one of the GDPR. Nonetheless, all three acts (the GDPR, the DSA, and the AI Act) have at least one common feature.
Indeed, all three legal instruments, through the notion of risk, aim to strike a balance between the various interests at stake: on the one hand, the economy-oriented interest in innovation and in the creation of an internationally competitive digital single market; on the other hand, the often-conflicting interest in the protection of democratic values and the rights and freedoms of individuals. Risk, in other words, functions as a proxy for an activity, that of balancing interests and values, which is intrinsically constitutional in nature. What changes, at a deeper level, is the way risk regulation itself is dealt with and the relationship between the regulator and the regulated: whereas within the GDPR the regulatee is responsible for striking such a balance, the decision made in the DSA and the AI Act was to shift that duty progressively from the regulatee to the regulator. The cause of such a development, moreover, may be found in a change of direction that has affected the digital policies of the Union in recent years. The overall legal imprinting of European institutions, starting from the Court of Justice of the EU, has indeed progressively shifted from an eminently liberal (and negative) to a more clearly democratic (and active) approach, as a result of the rise of European digital constitutionalism.
The apparent incoherence of the GDPR, the DSA, and the AI Act, therefore, may be brought back to unity through the adoption of a constitutionally oriented perspective on risk, which sees risk not simply as a regulatory model but, more deeply, as a necessary tool for building a balanced legal system capable of ensuring equal protection of all the interests at stake. In this sense, although the ways in which risk is implemented may differ, the ultimate goal of risk-based regulation within digital policies is one: the protection of the founding values of European digital constitutionalism.
Suggested citation
Pietro Dunn and Giovanni de Gregorio. ‘Risk-based regulation in European digital constitutionalism’ (The Digital Constitutionalist, 21 April 2022). <https://digi-con.org/risk-based-regulation-in-european-digital-constitutionalism/>