"Chaos was the law of nature; order was the dream of man." (Henry Adams)
Digital techniques that manipulate online behavior, increasing connectivity through smart and embedded devices (some of which can connect our brains to a two-way information exchange highway with the internet), and the rising prominence of the metaverse are just some of the digital technologies that greatly impact individuals' fundamental rights, such as privacy, autonomy, and identity, as well as societal values and sometimes democracy itself. These and other digital influences in people's daily lives expose them to standard threats to which "they are vulnerable under typical circumstances of life in a modern world order composed of states", as contemplated by Beitz in his broad and practical conception of human rights. This definition contains important elements to consider when deciding whether to create new human rights in the context of digital influences or neurological technologies, especially to prevent so-called "rights inflation", the tendency to label everything that is morally desirable a "human right".
World War II gave rise to the first codification of human rights in the Universal Declaration of Human Rights (UDHR), but since its adoption in 1948, human life and people's role in society have been dramatically transformed by emerging technologies that were unimaginable when the UDHR was written. It is becoming increasingly clear how emerging digital technologies and their influence on humans and societies are blurring the boundaries between humans and computers, which demands a reconsideration of the applicability and relevance of existing human rights in the digital world. Critics such as Yuste et al. believe that existing international regulatory instruments, such as the Asilomar Artificial Intelligence (AI) statement of cautionary principles, are insufficient to protect humans against digital technologies and, more specifically, neuro-technologies. Although the Asilomar principles provide guidance on AI developmental issues, ethics, and how to develop beneficial AI, they are silent on specific principles regarding the protection of human rights that are unique to the implementation of neuro-technologies.
These principles simply reiterate existing legal and ethical rights, including the responsibility to take moral implications into account when designing and building advanced AI systems (principle 9); aligning highly autonomous AI systems with human values (principle 10); designing AI systems to be "compatible with ideals of human dignity, rights, freedoms, and cultural diversity" (principle 11); personal privacy and the right of people to access, manage, and control the data they generate or give to AI (principle 12); preventing the unreasonable curtailment of people's real or perceived liberty (principle 13); providing freedom of choice in delegating decisions to AI systems to accomplish human-chosen objectives (principle 14); the requirement that advanced AI systems "respect and improve, rather than subvert, the social and civic processes on which the health of society depends" (principle 17); and the requirement that risks posed by AI systems, especially catastrophic or existential risks, be subject to planning and mitigation efforts commensurate with their expected impact (principle 21).
The neurosciences may share many of the legal-ethical issues raised across scientific fields, but certain finer nuances, such as the privacy of our thoughts, threats to our autonomy and self-determination, or how we identify and represent ourselves in a digital society or the metaverse, are unique considerations probed by emerging digital and neuro-technologies.
Against this backdrop, it is critical to acknowledge that the internet (with which brain-computer interfaces will increasingly be connected) is currently, and in practice, run by a transnational private regime of big-tech companies in accordance with their own business models. In contrast, nation-states are desperately trying to govern the internet via modern constitutionalism guided by the sovereign authority of the relevant nation-state. This tension creates a situation in which individuals interacting with the internet ecosystem demand a greater level of fundamental rights protection, but the internet simply lacks the structural feasibility to effectively provide these rights. It is in this context that the concept of digital constitutionalism originated as the dream of man to re-establish order in the chaos of (digital) nature (as per the quote of Henry Adams above). Where powerful private multinational companies are active and dominant role players next to nation-states, it is critical to "seek to articulate a set of political rights, governance norms, and limitations on the exercise of power on the Internet". At present, a wide variety of civil society groups and multinational technology corporations affirm their (subjective) digital rights via a patchwork of legally binding and non-binding instruments, making use of both democratic and institutionalized processes, which simply serve the interests of the "rule makers". Many of these documents have political, financial, or technical aims to protect an ideal-type architecture as opposed to focusing on the protection of fundamental human rights against threats posed by "a historically determined assemblage of design, codes, infrastructures and usages".
However, realistically speaking, a global digital constitution seems almost impossible to achieve. Privacy is one of the fundamental rights most influenced by digital technologies, and debates around it are heavily shaped by socio-technical, economic, and political cultures. For example, contemporary notions of privacy in China, which are based on a combination of the traditional Chinese emphasis on the importance and central role of the family and responsibilities to the state, stand in stark contrast to Western notions of privacy, where the right to privacy is considered a fundamental individual right and an intrinsic good.
Similarly, unequal access to the internet and the information sources it provides also complicates the implementation of a global digital constitution. China, Burma/Myanmar, Vietnam, and Iran count among the countries with the most severe internet censorship, with Russia, Belarus, Pakistan, and the Arab world following closely behind. It is easy to see how governmental influences in these countries will negatively shape concepts of privacy and access to information provided via the internet, and how this will affect people's right to autonomously choose their interactions with internet services or constrain their decisions to governmentally curated information. It is equally easy to see how hard it will be, if not impossible, for these countries to subject their governmental control to a globally binding digital constitution that possibly contradicts their national values and beliefs.
The process of standardizing any form of digital constitutionalism will similarly be influenced by socio-cultural and political dimensions. To prevent the prioritization of one cultural view above another, any standardization proposal must acknowledge and accommodate differences in cultural dimensions of informational privacy. The uncertainty and negotiations around the European Union–United States Privacy Shield regarding cross-border data flows, and the parties' different notions of what the concept of privacy constitutes and how it should be protected, serve as a prime example of how cultural, political, conceptual, and economic interests (to name a few) can complicate the reaching of any form of agreement regarding digital constitutionalism.
Despite these difficulties, proponents of digital constitutionalism argue that it will return political concerns and perspectives, informed by economic and technical realities, to the governance of the internet and ground the political struggle over the internet explicitly in the fundamental rights of individuals. In the current legal discourse, sovereigntists aspire to subject the internet to state-centred instruments such as national laws and governmental policies, whilst digital constitutionalists aspire to a normative framework informed by international human rights law as well as domestic democratic constitutions. To successfully transition from current state-centred instruments to a digital constitution, it seems advisable to align a potential digital constitution with international laws that focus on human rights, so that international law acquires a new constitutional function by supplementing state laws in meeting these global, digital challenges.
The work of Marietjie Botes has been funded by the Luxembourg National Research Fund (FNR)—IS/14717072 ‘Deceptive Patterns Online (Decepticon)’; the European Union’s Horizon 2020 Innovative Training Networks, Legality Attentive Data Scientists (LEADS) under Grant Agreement ID 956562; and REMEDIS Project INTER/FNRS/21/16554939/REMEDIS (Regulatory solutions to MitigatE DISinformation).
Postdoctoral Researcher at the SnT Interdisciplinary Centre for Security, Reliability, and Trust, University of Luxembourg, Luxembourg, European Union; and Honorary Research Fellow, University of KwaZulu-Natal, Durban, South Africa.