Comparing Transparency Requirements from Global Legislative Efforts on Generative AI


Overview

Rising concerns about generative artificial intelligence (GenAI) systems have ramped up global legislative efforts on this emerging technology. As one of the fundamental principles of AI governance, the transparency principle has been embedded in these efforts as a series of requirements. This post surveys the latest legislative efforts of three regulators, namely the United States (U.S.), the European Union (EU), and the People’s Republic of China (China), compares their transparency requirements for GenAI systems, and offers thoughts on each comparison.

1.     Generative AI and Transparency

GenAI, one category of AI, can be used to create new content based on end-user prompts. Over the past year, worries about GenAI’s effects on our lives have grown. For instance, although GenAI-created content often looks convincing, it can be factually wrong, and such errors are difficult for humans to identify. As a result, GenAI is seen as “increas[ing] the spread of disinformation on the internet.” In addition, scammers can use GenAI to clone the voice of a person’s loved one in order to steal money. These existing and potential impacts of GenAI on our lives need to be mitigated. So, how should these concerns be navigated?

Promoting AI transparency can be a helpful step. As one of the widely recognized principles of AI governance, the transparency principle requires AI actors to “provide meaningful information [about AI systems] appropriate to the context…”, including at least: 1) affirmative notification that end users are interacting with an AI system; 2) information, in clear and plain language, about how the system makes decisions, such as the logic by which a GenAI system generates new content; and 3) a redress mechanism (e.g., a right to explanation of decision-making).

Ensuring transparency is closely tied to developing accountable (responsible) and trustworthy AI systems, which is treated as a fundamental commitment in AI development. Specifically, in the context of a GenAI system, ensuring transparency means that stakeholders (e.g., end users, regulators, scholars, or advocates) can understand how the system generates content, on the basis of disclosed information. With that understanding, stakeholders have more opportunities to identify whether the content-generation process or its outcome is reliable, and can hold the appropriate GenAI actor accountable for a particular outcome or process that adversely affects them. In addition, trust between (Gen)AI products and end users is established partly through the products’ engagement with their end users, consistent with what other consumer products or services do. This engagement requires providing sufficient information to end users at the appropriate time in order to persuade them to buy and use the (Gen)AI products.

2.     Introducing and Comparing the Latest “Transparency” Requirements of the U.S., EU, and China

Due to the risks posed by GenAI, regulators across the world, including the U.S., the EU, and China, are stepping up legislation on this emerging technology, as reflected in their latest legislative activities. Importantly, these latest legislative developments include requirements about transparency, recognizing the transparency principle as an approach to governing GenAI.

On September 8, lawmakers in the U.S. published a bipartisan framework for the regulation of AI, described as a “comprehensive legislative blueprint for real, enforceable AI protections.” This summary of the proposed U.S. AI Act includes a section named “Promote Transparency,” which covers several transparency requirements applicable to GenAI systems.

In the EU, the world’s first comprehensive AI law, known as the “EU AI Act,” recently passed its third draft, setting the stage for finalizing the law. The latest draft amends several transparency requirements, imposing new transparency obligations on operators of GenAI systems.

In June, China issued its first legislation on GenAI (the GenAI Provisions), following two other pieces of AI legislation (the AI Provisions and the Deepfake Provisions) adopted last year. Although the two earlier regulations already implied some obligations for GenAI systems, the new legislation applies specifically to GenAI systems, introducing obligations, including transparency requirements, for providers of GenAI systems.

The three legislative efforts contain different transparency provisions for GenAI systems. These provisions can be categorized into six blocks; each block is explained and compared across the three efforts below.

a)    Informing users that they are interacting with GenAI systems

All three legislative efforts include a requirement that end users be made aware that they are interacting with a GenAI system, consistent with one element of the transparency principle. However, the wording differs across the efforts, which raises some questions. For instance, the summary of the U.S. AI Act appears to grant end users “a right to an affirmative notice,” which does not appear in the EU or Chinese provisions; the latter instead require developers of GenAI systems to notify users of the systems’ existence in a “timely, clear, and intelligible manner.” Since the full text of the U.S. AI Act has not been released, it remains unclear how the Act defines “[the] right to an affirmative notice.” Does it really mean that end users have a right to file a lawsuit against the responsible party when they are not affirmatively informed?

b)    Labeling AI-generated content

Labeling content created by GenAI systems is widely regarded as a recognizable way to promote GenAI transparency. All three legislative efforts require providers or other organizations (e.g., API users or importers of GenAI systems) to adopt a technical measure (e.g., a label or watermark) that makes the generated nature of content visible to its recipients. With that, recipients are expected to be cautious about the authenticity of the content (e.g., generated text or voice). This technical approach is currently being adopted and trialed in the private sector in different ways, but its effectiveness is still questionable: some AI experts argue that bad actors may still copy, revise, crop, or blur generated content, obscuring the label or watermark. A rough sketch of what such a labeling measure might look like follows below.
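As an illustration only (no format is prescribed by any of the three legislative texts), the Python sketch below attaches a machine-readable marker to a generated image and a human-visible notice to generated text. The metadata keys and function names are hypothetical assumptions; production systems typically rely on richer provenance standards such as C2PA.

```python
# A minimal, illustrative sketch of labeling AI-generated content.
# The metadata keys ("ai_generated", "generator") are hypothetical;
# none of the laws discussed here prescribes a concrete format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_generated_image(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a machine-readable 'AI-generated' marker in PNG metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical key
    metadata.add_text("generator", generator)
    image.save(dst_path, pnginfo=metadata)

def label_generated_text(text: str, generator: str) -> str:
    """Prepend a human-visible disclosure notice to generated text."""
    return f"[AI-generated content, produced by {generator}]\n{text}"
```

Notably, a metadata label of this kind is fragile: re-encoding or screenshotting the image strips the PNG text chunks entirely, which illustrates the robustness concern the experts above raise about labels and watermarks.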

c)     Disclosing information about GenAI systems

In terms of promoting GenAI transparency, the requirement to disclose information about GenAI systems enjoys broad consensus among global regulators. But the questions involved, such as what particular information must be released, remain pending. The U.S. AI Act lays down limited categories of disclosure, requiring developers of GenAI systems to disclose information about the “training data, limitations, accuracy, and safety of [Gen]A.I. models to users and companies deploying systems.” More general than the U.S. approach, China’s AI Provisions stipulate that GenAI providers shall publish at least the “basic principles, purposes and motives, main operational mechanisms of the [GenAI systems].”

On the EU side, the latest draft of the EU AI Act requires high-risk AI systems to disclose information about “the characteristics, capabilities, and limitations of performance of the systems” so as to achieve “sufficient transparency” for end users. However, the Recitals of the latest draft clarify that developing a GenAI system does not, by itself, lead to a high-risk classification. In other words, under the current draft of the EU AI Act, GenAI providers have no obligation to release information about the GenAI system itself, which some argue is insufficient. For concreteness, a sketch of one possible disclosure format follows below.
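Here is a hypothetical sketch of a machine-readable disclosure record covering the categories named in the U.S. AI Act summary (training data, limitations, accuracy, and safety). The schema and all field values are assumptions for illustration only; none of the legislative texts discussed here prescribes a format.

```python
# A hypothetical disclosure record; the schema is illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class GenAIDisclosure:
    system_name: str
    training_data_summary: str  # e.g., data sources and collection period
    known_limitations: str      # e.g., factual errors, bias
    accuracy_notes: str         # e.g., benchmark results, error rates
    safety_measures: str        # e.g., red-teaming, content filters

disclosure = GenAIDisclosure(
    system_name="ExampleGen-1",  # hypothetical system
    training_data_summary="Public web text crawled through 2023.",
    known_limitations="May produce plausible but incorrect statements.",
    accuracy_notes="Benchmark results not yet published.",
    safety_measures="Output filtering and pre-release red-team review.",
)
print(json.dumps(asdict(disclosure), indent=2))
```

A tiered version of such a record could also serve the point raised in the final section below: different stakeholders (e.g., regulators, end users, researchers) could receive different subsets of the fields, tailored to their transparency needs.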

d)    Independent researchers’ access to data involving GenAI systems

As mentioned in section c), it is uncertain what information about GenAI systems should be disclosed. Part of the reason lies in developers’ protection of trade secrets implicated by the disclosed information: developers may resist disclosure on the ground of protecting their trade secrets. On the other hand, regulating GenAI systems requires understanding them and assessing their performance and risks, which in turn requires access to sufficient information about the systems. Thus, giving independent researchers access to data involving GenAI systems in a controlled environment is treated as a balancing approach that can harmonize providers’ interests with the demand for GenAI regulation.

This approach has been adopted in the U.S. AI Act, which stipulates that “developers should provide independent researchers access to data necessary to evaluate A.I. model performance.” However, there are few details on implementation, such as the standard for qualifying as an independent researcher and the code of conduct researchers should follow to protect potential trade secrets.

e)     Registering GenAI systems in a public database

All three legislative efforts lay out a requirement that AI-system developers register their systems in a database before putting them into use, providing certain information with the registration. However, the efforts differ on whether GenAI systems are subject to registration; the answer depends on how GenAI systems are classified.

Under the latest draft of the EU AI Act, only AI systems categorized as high-risk are subject to the registration requirement. In other words, since GenAI systems are not categorized as high-risk AI systems, they need not go through the registration process. Similarly, China’s GenAI Provisions provide that only GenAI services with “public opinion properties or the capacity for social mobilization” must complete a filing with a database. Although clear standards for evaluating this condition are lacking, most GenAI systems from China’s tech giants (e.g., Huawei, Baidu, Tencent) have completed such filings. In addition, nothing in the summary of the U.S. AI Act suggests that the proposed database applies only to limited categories of AI systems. Based on the summary’s wording, it is reasonable to assume that the U.S. AI Act will cover all kinds of AI systems, including GenAI systems.

f)     Granting a right to explanation of GenAI decision-making

Arguably, granting individuals a right to explanation enables them to understand how an AI system they use makes certain decisions and, more importantly, provides a basis for challenging those decisions and seeking remedies when the decisions adversely affect end users.

The latest draft of the EU AI Act proposes a right to explanation for the first time. Under the relevant provision, “affected persons” may request a clear and meaningful explanation from the developer of an AI system, may complain to the appropriate regulators, and have a right to a judicial remedy if complaints to those regulators go unresolved. Again, this right applies only to persons affected by high-risk AI systems, which do not include GenAI systems.

The U.S. AI Act does not establish a right to explanation, although a global AI policy advocate has called on U.S. lawmakers to include this right in the proposed Act. China, for its part, leaves the issue ambiguous: without using the expression “a right to explanation,” its AI Provisions require (Gen)AI providers to give individuals an explanation when decisions “create a major influence on users’ rights and interests,” leaving open the question of how to identify a “major influence.”

3.     Final Thoughts

The latest legislative efforts regarding transparency requirements are important for GenAI governance. However, much work remains to ensure GenAI transparency. For instance, rather than disclosing the exact same information about GenAI systems to all stakeholders, a more feasible approach may be to provide different information to different stakeholders, tailored to their transparency needs. Stakeholders such as regulators, end users, NGOs, and scholars have different purposes, abilities, and resources for understanding GenAI systems. They will need to work together to identify what a proper understanding of those needs looks like.

Rick Cai
Tech law and policy professional

Rick Cai is a tech law and policy professional specializing in global data privacy and AI governance.

