This blog post aims to contribute to the debate raised by Nic Fishman and Leif Hancox-Li in their paper ‘Should attention be all we need? The epistemic and ethical implications of unification in machine learning’.
The paper ‘Should attention be all we need? The epistemic and ethical implications of unification in machine learning’ (henceforth, the paper) by Nic Fishman and Leif Hancox-Li discusses the epistemic and ethical benefits and shortcomings of unifying different areas and tasks in Machine Learning (ML) under the transformer architecture. Machine learning is a subfield of artificial intelligence, broadly defined as the capability of a machine to imitate intelligent human behaviour.
From a broader perspective, unified science is ‘in the philosophy of logical positivism, a doctrine holding that all sciences share the same language, laws, and method or at least one or two of these features.’ According to the authors, ‘[u]nification is often considered a virtue in science.’ In the case of ML systems, since the transformer architecture has proven successful across several domains, such as Natural Language Processing and Computer Vision, some researchers claim that unifying ML areas and tasks under the transformer architecture might be possible and even a plausible way to reach the long-sought Artificial General Intelligence. Fishman and Hancox-Li analyze whether such unification is scientifically/technically possible in the first place, and whether it is ‘desirable’ or advantageous for ML research in the second.
In this context, unification implies homogenizing the approach to ML problems so that they are solved using only the transformer architecture. This way, ontological unification would allow a much easier understanding of the root characteristics of ML systems, bringing us closer to more accurately detecting, understanding and solving the challenges they pose. However, in the end, the authors conclude that ‘on current evidence, the epistemic benefits are not as strong as analogous benefits from unification in the natural sciences.’ They do not deny that there are arguments for unification, such as the ‘ethical benefits from open-source unified models’, but in their words, such unification is currently neither possible nor something to aspire to without further research.
As a legal scholar reading and enjoying the paper, one thought, one question, kept pounding in my head throughout my reading: how will unification affect the regulation of, and policymaking over, ML systems? My intention here is not to run through all the arguments that could be put forward in favour of the benefits that unification might entail for regulation and policymaking. However, I think this is an important point that should be discussed and considered, and I hope this blog post can also contribute to the academic debate that the authors sought to spark with their paper.
The case for unification
What does unification entail (in general and for regulation and policymaking)?
According to Petkov, ‘[t]he core assumption behind the unificationist model is very minimalistic, i.e., that there is a common ground between all forms of (sic) scientific explanation.’ Also, following Bartelborth, unification ‘looks for unifying patterns that are often but not necessarily causal.’ Therefore, the unification of ML systems would entail a simplification of the regulatory/policy-making process. By streamlining all fields and tasks of ML down to their common ground or unifying patterns, regulators and policymakers could focus on aspects that are inherent to their tasks, such as deciding how ML systems should be created to comply with democracy, the rule of law and fundamental rights, and when, where and how they can be used so as to ensure such compliance. Hypothetically speaking, if unification under the transformer model were achievable (something that the authors of the paper deny), regulation would be much more focused and all stakeholders involved in the AI lifecycle would be guided by a single common approach. This tactic has its advantages but also its disadvantages.
Whether regulation should act upon unification under the current transformer model is, I think, more of a technical question. From a legal point of view, for instance, jeopardizing diversity in research approaches might infringe the rights to equality and non-discrimination. If that were the case, then regulation should act. For the same reason, I do not think that regulation should push toward unification. Favouring one research approach above others, when that approach has weaknesses that can impact fundamental rights, does not seem the most strategic choice.
Benefits of unification for regulation and policymaking
The unification of ML systems might help to reduce the divide between regulators/policymakers and ML developers. Such a divide has produced regulatory flaws such as the GDPR’s highly discussed right to explanation: since state-of-the-art ML systems have a black-box nature, even when some information or interpretation can be provided about their outcomes, it is technically impossible to grant a right to explanation for data subjects interacting with ML. The unification of ML systems might concentrate, and therefore accelerate, the race to open the black box. Further, having a common ground or unifying patterns would help regulators and policymakers to understand ML better and more easily, and put them in a better position to detect the ethical and legal problems that such systems pose and discuss them with the people in charge of designing and deploying those systems.
We can find a case for unification in the debate over the definition of Artificial Intelligence (AI) within the proposal for an EU AI Act. Unification of ML systems would allow regulators and policymakers to better grasp exactly what is AI and what is not, and when its use is necessary (also connecting with the principle of proportionality). This would allow a simpler and more comprehensible approach to the definition of AI for all stakeholders, and legal certainty for providers of ML systems.
Also, unification might help to reduce the vagueness of regulation, another point of criticism regarding the GDPR’s application to AI systems. Such vagueness can have crucial consequences, since certain deployments of ML products have catastrophic effects on fundamental rights, as we have already repeatedly seen (for instance, here and here). As previously mentioned, the definition of an AI system would benefit from a common and unifying framing under the transformer architecture. This might contradict the argument for technologically neutral regulation but, since that argument has proven suboptimal in the abovementioned case of the GDPR, unification might be the answer to the uncertainty raised by technological neutrality.
Lastly, unification can help with the auditing of ML systems. Proper auditing requires both ethical/legal and technical skills, and it is very difficult to find people with such a mixed background. This is particularly relevant if we think about the self-assessment of compliance with the EU AI Act’s requirements (Articles 8 to 15) that such a regulatory instrument allows providers to carry out.
Drawbacks of unification for regulation and policymaking
I agree with the authors of the paper that almost all (but not only) the epistemic and ethical risks they raise against the unification of ML systems might be of some relevance. The lack of diversity in ML systems and in the researchers working on such systems, the path dependency, the deskilling, and the tendency to ignore domain experts will not help the regulatory/policy-making cause in general. Lack of diversity in ML research will favour dominant narratives, in both methodological and demographic terms, as the authors of the paper put forward. Further, path dependency will jeopardize ML research, perhaps preventing us from discovering ML systems that would be more compliant with democracy, the rule of law and fundamental rights. Lastly, as has been discussed, the tendency to ignore domain experts in both the regulatory and ML fields will lead to partial and flawed outcomes.
The paper ‘Should attention be all we need? The epistemic and ethical implications of unification in machine learning’ analyses the epistemic and ethical benefits and shortcomings of unifying different areas and tasks in ML under the transformer architecture. The authors conclude that more research is needed before such unification is possible and optimal. I argue that the unification of ML systems might be beneficial from a regulatory/policy-making perspective, since it might reduce the divide between regulators/policymakers and ML developers, increase legal certainty, reduce the vagueness of regulation and policy-making instruments, and help enable better-informed auditing of ML systems. However, from a very different starting point, I arrive at the same place as the authors of the paper: more research is needed to overcome the technical and ethical/legal drawbacks of unification, so as not to neglect the threat that ML systems might pose to democracy, the rule of law and fundamental rights.