The debate around the social, economic and political impacts of the widespread implementation of Artificial Intelligence systems brings together different, and at times irreconcilable, approaches. Yet there seems to be one point of convergence in the current context: the adoption of regulatory initiatives is seen as unavoidable.
Although there are significant divergences around which regulatory instruments would be most appropriate – on a spectrum ranging from pure self-regulation to command-and-control approaches – the growing number of regulatory initiatives in various jurisdictions[i] seems to corroborate this hypothesis of convergence.
What is proposed here, however, is an approach to this phenomenon – the debate around AI regulation – from a dynamic perspective, on the grounds that this is the best way to analyze the feasibility of regulation, as well as the possible limits to regulatory policy’s efficacy and effectiveness.
1. Regulatory waves
The hypothesis is that the characteristics of regulatory waves will influence the capacity of regulation[ii] to modify certain behaviors and, thus, to achieve the results intended by each regulatory initiative.
A regulatory wave is understood here as the gradual process through which a given problem – economic, political and/or social – is brought onto the policy agenda and becomes the object of deliberation in the legislative sphere. Through the resulting institutional innovation, this process entails the creation and/or reform of organizations and the designation of the agencies responsible for enforcing, monitoring and evaluating regulatory instruments. These steps, not necessarily sequential, constitute a regulatory wave, even though its initial and final milestones – in technical terminology, the wave’s equilibrium points – are difficult to identify precisely.
Two characteristics of these waves – length and amplitude[iii] – are essential to analyzing how recent dynamics may affect the future implementation of new regulatory frameworks.
The length of regulatory waves – understood as the distance between two regulatory peaks – has shortened over time, leading to overlapping regulatory debates around complex issues.
The amplitude of regulatory waves – understood both as their intensity, in terms of projection in the public debate, and their depth, in terms of the level of detail in which the regulatory problem is discussed – has expanded over time, leading to an increase in the complexity of the debates and the number of actors involved in the process.
Take the European case as an example[iv]. The regulatory wave involving personal data was followed by the wave related to digital markets and, finally, by the regulatory wave of Artificial Intelligence.
From the beginning of the formal debate on personal data in the European Union in 2012[v], through the final political agreement on December 15, 2015, until May 25, 2018, when the General Data Protection Regulation came into force, almost six years elapsed.
As for the regulation of digital platforms, the Commission’s proposals for the Digital Markets Act and the Digital Services Act were published in December 2020, political agreement was reached in the first half of 2022, and both entered into force on November 1, 2022[vi]: here, the period was about three years.
The regulation of Artificial Intelligence started in 2020, with the publication of the White Paper on Artificial Intelligence: a European approach to excellence and trust and of the Assessment List for Trustworthy AI by the High-Level Expert Group on AI. The Commission’s proposal for the Artificial Intelligence Act was subsequently presented on April 21, 2021, and the European Parliament passed the AI Act on June 14, 2023. Here, the regulatory wave overlapped with the wave of digital platforms and has not yet returned to its equilibrium point.
This compression of regulatory waves – or reduction of their length – within a relatively short period is demonstrated by the timeline outlined above: we observe three regulatory waves (embodied in the GDPR, the DMA/DSA and the future AI Act), which unfold into numerous rules affecting economic agents operating in the technology and innovation market.
And these are not just regulations dealing with socially, economically and politically sensitive problems: they do so comprehensively – in terms of the range of agents affected – and in detail – with regard to the depth of the regulation. In other words, they constitute waves of high amplitude.
And what might this imply, in the field of AI regulation, in terms of the challenges brought about by the implementation of such far-reaching initiatives?
The AI Act is in the final stages of drafting – currently in the EU trilogues, which may conclude in 2023 – and its rules will probably take effect by 2025. By then, the main provisions of the DMA and the DSA, in addition to the GDPR, will already be in effect. Several rules from these four regulatory frameworks will have to be complied with, simultaneously, by the same economic agents and organizations.
2. Salience
Here I consider it important to introduce the concept of salience. Salience bias is a cognitive condition that predisposes individuals to focus on, or pay attention to, stimuli or information that are more prominent, more visible, or that arouse their attention or emotion.
In a context in which several regulatory waves occur within a short period of time – all of them dealing with complex issues that impact the innovation and new technologies market – there is a risk that regulators concentrate their attention on one specific, more salient problem to the detriment of others.
And this situation can be aggravated once regulatees begin to make strategic use of salience. In other words, regulated parties may take advantage of the fact that a certain topic is more prominent in the public debate and thus prioritize compliance with the rules of that specific regulation while neglecting other regulations. Given the existence of simultaneous and/or sequential regulatory waves – which makes it difficult for the agencies responsible for enforcing public policies to consolidate their institutional capacities – the timely identification of such a strategy may prove challenging.
Salience, in this case, can function as a diversionary element that helps regulatees conceal non-compliance with certain rules (the non-salient regulated issues).
The regulation of Artificial Intelligence, in this context – given its salience in the contemporary public debate, as well as its transversal character, which can affect not only companies directly involved in developing the technology but a much broader range of economic agents – should serve as a warning sign of such a risk.
But what can be done to prevent the strategic use of salience by the regulatees?
3. Inter-institutional coordination
The agencies responsible for enforcing, monitoring and evaluating the regulatory policies resulting from the most recent waves need to identify the topics where the various regulatory instruments overlap and, above all, the points of potential divergence between the commands and/or standards of the various policies.
But this is only viable if regulators implement effective inter-institutional coordination initiatives that allow – while respecting each agency’s competences – the development of protocols for joint action and of negotiated solutions to complex, controversial issues.
And inter-institutional coordination becomes all the more urgent (and fundamental) when we consider how recent these regulatory waves are, which necessarily implies a lower degree of maturity in the agencies responsible for implementing the resulting policies. Building institutional capacities takes time, consolidating expertise requires planning, and establishing a track record depends on both time and knowledge.
Therefore, in the initial stages of implementation, regulatory policies for personal data, digital platforms and Artificial Intelligence will require regulators to act in a coordinated manner, avoiding the siloed treatment of problems that have significant lines of intersection.
In other words, regulators will need to be guided more by regulatory objectives and less by the salience of the regulated problems.
[i] According to a report published by Stanford University, from 2016 to 2022, 31 (thirty-one) countries approved a total of 123 (one hundred and twenty-three) laws related to Artificial Intelligence. MASLEJ, N., FATTORINI, L., BRYNJOLFSSON, E. et al. The AI Index 2023 Annual Report. AI Index Steering Committee, Institute for Human-Centered AI, Stanford University. Stanford, CA, 2023, p.267.
[ii] OECD. Measuring Regulatory Performance: Evaluating the Impact of Regulation and Regulatory Policy. (ed. Cary Coglianese). Expert Paper No. 1, August 2012.
[iii] In technical terms, the amplitude corresponds to the height of the wave, measured by the distance between the wave’s equilibrium point (rest) and its highest point (crest). The length represents the distance between two successive crests (or also two troughs, which consist of the distance between the equilibrium point and the lowest point of the wave).
[iv] We therefore assume, as a plausible hypothesis, the existence of the Brussels Effect, that is, the supposition that this reality may be reproduced in other countries. See Anu Bradford, The Brussels Effect, 107 Nw. U. L. Rev. 1 (2012).
[v] On 25 January 2012, the European Commission proposed a comprehensive reform of the EU’s 1995 data protection rules.
[vi] Even though their rules take effect on deferred dates.
Felipe Roquete is a PhD Candidate in Regulation Law (Fundação Getúlio Vargas/Rio de Janeiro) and holds a Master in Political Science (Universidade de Brasília). He is a federal civil servant in Brazil, currently acting as Head of the Antitrust Leniency Unit in the Brazilian Antitrust Authority (Administrative Council for Economic Defense - Cade).