This post is part of the DigiCon symposium Transparency in Artificial Intelligence Systems? Posts from this symposium will be published on upcoming Thursdays. If you are working on topics related to AI and transparency, follow these posts and take a look at our call for blog posts.
1. The impact of AI systems
Imagine: you have just found out that your subsidy application has been refused, despite your having carefully prepared every document as required. You want to know why. The authority informs you that it cannot tell you, because the instructions for use provided by the artificial intelligence (AI) developers do not explain how to obtain an explanation for a binary ‘yes’ or ‘no’. The authority instead turns to the provider of the AI system, who explains that divulging how this result was reached would require disclosure in breach of its trade secrets. As the situation currently stands, an individual in such a position cannot receive the needed explanation and is thus prevented from challenging the decision.
With the boom of AI systems, the public sector started catching on, developing its own tools or leasing them from the private sector. However, AI systems used in decision-making can have harmful effects. For instance, the recent Dutch child benefits scandal and the preceding SyRI fiasco are illustrative of algorithmic errors, intentional secrecy by the government, and disregard of early warnings by legal advisors, which led to devastating outcomes for affected families whose claims were marked as fraudulent. No explanations were given as to why. Access to information about AI systems is crucial for the exercise of the procedural rights of individuals and for the duties of public bodies, which are required to provide reasons for their decisions. Yet trade secrecy stands in the way: it impedes transparency and makes the detection of illegalities onerous.
2. Trade secrets: a weak form of protection with strong consequences
The most common way to protect AI technology is to keep it secret, avoiding disclosure of the system and its elements. Trade secrets are of great economic importance to commercial enterprises as a means of protecting their innovations. Since AI technology is increasingly excluded from copyright and patent protection, companies developing AI systems and products are left with trade secrecy as the primary means of preserving their economic interests.
The nature of trade secrets has been disputed and contested. They are sometimes considered an aspect of intellectual property (IP) and, in other instances, excluded from it altogether. For instance, whereas trade secrets are specifically classified as an IP right in Italy, other countries rely on unfair competition and tort law. Regardless, companies often opt for this fragile form of protection due to its low costs, the absence of the formalities seen in other IP protection, the value derived from keeping information secret, and its (potentially) indefinite duration.
Information pertaining to an AI system can be protected under the EU Trade Secrets Directive (TSD)—an instrument aimed at protecting against the unlawful acquisition, use and disclosure of trade secrets. Specifically, trade secrets represent information which is (1) secret, (2) commercially valuable and (3) kept secret through reasonable steps taken by the person lawfully in control of the information. To comply with these requirements, companies rely on non-disclosure agreements and licensing clauses which forbid reverse engineering, thereby incurring great costs.[1] The open and indeterminate language of the TSD's provisions reveals great flexibility as a tool for adaptable legal protection of AI systems, with remedies for infringements including injunctions and seizures. However, trade secrets do not constitute exclusive rights; they are only protected against misappropriation, and Member States are left free to enact higher standards of protection. In principle, trade secret protection lasts indefinitely, until the secrecy is broken. This reliance on breach of confidence differs greatly from the protection provided by exclusive IP rights: once the information is exposed, trade secret protection is rendered meaningless. Nonetheless, for as long as they last, trade secrets stand at odds with transparency.
3. Transparency – a contrast to secrecy
In the private sphere, trade secrets are a well-established practice. Their use by public bodies, however, is controversial, as it raises issues different from those pertaining to the private sector. Transparency, as a component of the principle of good administration, should not be undermined by trade secrets in such contexts. At risk is the erosion of the rule of law in the face of automation. When commercial entities cross the boundaries of the private sphere and seek financial gain by providing a public service, disclosure should be available. Unfortunately, trade secrets obstruct this goal and thus stand directly at odds with critical constitutional concepts of legitimate legal systems and governance.
Member States are obliged to dismiss trade secret measures where these would impede the protection of a legitimate interest recognised by EU or national law.[2] However, the TSD provides no guidance for public authorities or judges on how to strike a balance between the private interest in keeping algorithmic processes undisclosed and the public interest; this is left to the courts to interpret. More importantly, the TSD is unspecific as to what constitutes ‘public interest’. Recital (21) merely lists a few examples—namely, public safety, consumer protection, public health, and environmental protection. It remains uncertain whether this list is exhaustive and whether it includes transparency. Consequently, it is questionable whether public bodies could rely on a ‘public interest’ justification when breaching secrecy obligations.
Achieving transparency on a technical level in the operationalisation of an AI system is difficult due to its inherent opacity. Trade secrets represent legal obstacles to achieving transparency and, with it, sufficient explainability.
4. Explainability as a prerequisite for AI transparency
AI should be developed and used responsibly, and to enable responsible AI, we need to know what AI is doing. Transparency is vital to build trust in this area, ensure accountability, prevent harmful consequences, and foster further innovation and improvement for AI's use in society. Access to the underlying information of the AI system is therefore needed (e.g. the parameters of the algorithm, their weights, the architecture, the source code, and testing and validation information). Unfortunately, gaining access to the rules of an AI system might be insufficient to uphold transparency: such access may reveal the technical properties of an AI system but not necessarily how a decision was made. Transparency therefore requires explainability. Specifically, information should be interpretable by humans and contain the logic used in selecting specific inputs and reaching a particular conclusion. Without explainability, public bodies cannot give a reason for a decision, because simply saying ‘because the AI said so’ is not enough. An affected person must be able to discern and understand the information on which a decision is founded so that they can challenge it. The duty to give reasons thus represents an explanation requirement: a specific form of transparency which requires the decision-making process to be explainable. Explainability enables the decision-maker to understand why a certain decision was made by an AI system, which in turn allows them to provide reasons for that decision. In this way, transparency, through explainability, can provide critical tools for respecting the duty of public bodies to give reasons for their decisions.
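To make this concrete, consider a minimal sketch of what interpretable ‘reason-giving’ could look like for a simple model. Everything here is invented for illustration: the feature names, the weights, and the toy logistic-regression ‘subsidy decision’ model are assumptions, not a description of any real system discussed in this post. The point is only that a per-feature breakdown can turn a bare ‘yes’/‘no’ into information an affected person could actually contest.

```python
import math

# Hypothetical model for illustration only: feature names and
# learned coefficients are invented, not taken from any real system.
FEATURES = ["income", "household_size", "prior_claims"]
WEIGHTS = [-0.00005, 0.8, -1.5]
BIAS = -0.5


def decide(inputs):
    """Return the binary decision plus a per-feature contribution breakdown.

    The breakdown is what makes the outcome explainable: each feature's
    contribution to the final score is visible, not just the yes/no output.
    """
    contributions = {
        name: weight * value
        for name, weight, value in zip(FEATURES, WEIGHTS, inputs)
    }
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic function
    approved = probability >= 0.5
    return approved, probability, contributions


approved, probability, reasons = decide([30000, 3, 2])

# Instead of 'because the AI said so', the decision-maker can point to
# how much each input pushed the score towards refusal or approval.
for feature, contribution in sorted(reasons.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {contribution:+.2f}")
print("approved" if approved else "refused", f"(p={probability:.2f})")
```

For genuinely opaque models (deep neural networks, for instance) no such direct decomposition exists, which is precisely why the technical opacity discussed above compounds the legal opacity of trade secrecy.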
One might think that data protection could be of assistance. However, under the General Data Protection Regulation framework, it is still unclear whether data protection rights—even if they are interpreted to encompass a right to explanation—take precedence over trade secret rights. Scholarship has elevated the importance of individual rights over those of companies and argues that data protection rights should take precedence over trade secret rules; the same goes for transparency. Unfortunately, the recent AI Act proposal has not shed much light on this topic, as it elevates trade secrecy above transparency obligations for AI systems by promising compliance with confidentiality standards.[3] Perhaps we will have to see how the recent credit-scoring cases play out to infer the view of the CJEU on this.
5. Impact on innovation, rule of law and due process – the need to (re)consider
Innovation is a chain in constant making: each novel invention in any field builds on prior knowledge. Reliance on trade secrets as a means of protection supplementary to other IP rights represents an obstacle to innovation, because the concealed information is kept outside the public domain. Although trade secrecy protects extant innovations, future innovation is at risk due to restricted knowledge flows. Specifically, previous studies have found that trade secret protection affects future innovation negatively, reducing inventors' productivity and harming innovation in the long run.
The connection between AI and IP is problematic because AI challenges traditional IP perceptions when it comes to trade secrets. The main rationale for IP protection rests on an exchange: creators share their creations with society for the sake of innovation and receive certain rights, compensation, and protection in return. Society benefits from such dissemination, and the rights-holder is allowed to recoup their investment by enjoying a limited monopoly that affords a competitive advantage in the market. AI disrupts this bargain. So far, the protection of AI systems under copyright and patent law has been limited, reducing the incentives for revealing algorithmic information and pushing companies towards trade secrecy. However, trade secrets oppose the traditional utilitarian rationale of IP protection because they obscure innovation by limiting information flows through confidentiality clauses. Although trade secrets are not IP rights, AI systems are nonetheless claimed as IP. Ultimately, such an approach is counterproductive: it entails weaker rights for companies and less algorithmic transparency.
Due to the inherent and intentional opacity of AI technology, trade secret protection has become excessive in relation to its concerning effects on innovation, due process, and the rule of law. Although this effect can be damaging in private industry, the impact of secrecy on the functions of public governance is more troubling. The best-known example illustrating the impact of trade secrecy on due process is the State v Loomis case in the US.[4] In this case, an individual was labelled as having a high risk of recidivism by the COMPAS AI system, which informed the court's sentencing decision. Trade secret protection barred him, the affected party, from assessing the accuracy of the information used in the system's estimation, which in turn prevented him from challenging the decision made. Decisions made by public authorities can face the same obstacles due to secrecy. With the obstruction of transparency, accountability is also affected.
Trust in AI systems is inherently lacking, and trade secrecy exacerbates this mistrust. The methods and purpose of transparent and accountable democratic governance conflict with aims of commercial gain and competition veiled in secrecy. If no steps are taken to address this issue, a decision-making infrastructure underpinned by private interests and the commercial value of secrecy will direct the law, rather than the law imposing the conditions under which decision-making operates. We should not allow private interests to shape the conditions under which the public is impacted.
Is there a way to divulge meaningful information that would enable transparency without encroaching upon trade secrets? The scholarship is making promising strides with various suggestions for compromise. However, such a compromise requires further clarification in law and practice. In that regard, the evolving framework for AI regulation presents an opportunity to bridge the gap and rethink the systems of protection for both public and private interests.
Secrecy does not necessarily guarantee sustainable security. I warn against trade secrecy in AI systems, since its disadvantages outweigh its benefits, to the detriment of both companies and the public interest. Instead, the system of protection for AI systems should be rethought in order to enable algorithmic transparency alongside more secure rights for the companies developing them. This is needed for several reasons. On the one hand, the status of trade secrets is uncertain: they offer less protection than copyright or patents, leaving AI systems susceptible to reverse engineering. On the other hand, public bodies required to provide an explanation for a decision made with the assistance of an AI system face a dilemma: either deny individuals the right to explanation or violate the trade secrets of the company which developed the system. In both cases, impediments to information disclosure, and thereby to transparency, resting on precarious secrecy do not constitute an optimal solution.
Ida Varosanec. ‘Silence is golden, or is it? Trade secrets versus transparency in AI systems’ (The Digital Constitutionalist, 17 November 2022). Available at https://digi-con.org/silence-is-golden-or-is-it/
- [1] Reverse engineering can be described as ‘the process of extracting the design elements from an existing system or an industrially manufactured product by examining its structures, states, and behavioural patterns’; Jasper Siems, ‘Protecting Deep Learning: Could the New EU-Trade Secrets Directive Be an Option for the Legal Protection of Artificial Neural Networks?’ in Martin Ebers and Marta Cantero Gamito (eds), Algorithmic Governance and Governance of Algorithms (Springer 2021) 138-154.
- [2] Article 5(d) Trade Secrets Directive.
- [3] See page 11 and Article 70 of the AI Act proposal.
- [4] State v Loomis, 881 N.W.2d 749 (Wis. 2016).
Ida Varošanec is a PhD researcher at the Faculty of Law of the University of Groningen. Within her doctoral research, she is seeking ways to reconcile opposing public and private interests in the context of artificial intelligence. Specifically, she is working on the facilitation of co-existence between transparency and intellectual property rights within the evolving European regulatory framework for AI. Ida received an LL.M in European Economic Law (cum laude) and LL.B in International and European Law at the University of Groningen in the Netherlands.