AI Transparency and Explainability put to the test


This post, which brings a practitioner perspective on AI transparency policies, is part of the DigiCon symposium Transparency in Artificial Intelligence Systems? Posts from this symposium will be published on Thursdays over the coming weeks.

Open Loop in a nutshell

Open Loop is a global program, supported by Meta, that builds on the collaboration and contributions of a consortium composed of regulators, governments, tech businesses, academics, and civil society representatives. Through experimental governance methods, like policy prototyping, Open Loop members test both novel and existing governance approaches to new and emerging technologies, including proposed legislation, ethical principles, regulatory guidance and technical guidelines.

Open Loop program on AI Transparency and Explainability in the APAC region

As part of Open Loop’s mission to connect policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies, we published a new Open Loop report. This time around we are presenting the findings and recommendations of our policy prototyping program on AI transparency and explainability (T&E), which was rolled out in the Asia-Pacific (APAC) region in partnership with Singapore’s Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC).

The Open Loop consortium also benefited from the generous participation and active engagement of AI Singapore, Aicadium, Plug and Play, Craig Walker Design and Research, and TTC Labs.

Supported by this stellar team of partners, we worked with 12 companies across the APAC region to co-develop and test a policy prototype on AI transparency & explainability based on Singapore’s Model AI Governance Framework (MF) as well as its Implementation and Self-Assessment Guide for Organizations (ISAGO).

Participating companies: Meta (United States); Evercomm, Deloitte, Ngee Ann Polytechnic, Qiscus, and Trabble (Singapore); Bukalapak, Halosis, Nodeflux, and Traveloka (Indonesia); Qsearch (Taiwan); and Travelflan (Hong Kong).

Program phases

The overall program was structured into three chronological phases: foundational, procedural, and delivery. The working assumption underlying this structure and methodology is that there is no one-size-fits-all approach for explanations of AI outputs; this is a deeply contextual effort. The development, design, and delivery of AI explanations need to: take into account a number of contextual factors (among which are audience and purpose); select the specific XAI techniques appropriate for the use case; reflect on and make trade-off decisions among competing or conflicting values; and choose among different visualization and presentation interfaces in order to be clear and meaningful.

The program's three sequential phases: the Foundational phase covers the preparatory work companies need to do before providing an explanation; the Procedural phase covers the selection of explainability techniques; and the Delivery phase covers designing the interface for presenting the explanation.

Throughout the program, we supported companies with a comprehensive technical assistance package, which included dedicated mentoring sessions, the use of a machine learning operations (MLOps) platform, and a comprehensive technical guidance toolkit that gave participants an overview of the latest AI explainability techniques, along with examples and illustrations.
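
To give a concrete sense of what such an explainability technique looks like in practice, the sketch below computes per-feature attributions for a single prediction using the open-source SHAP library and a scikit-learn model. The library, dataset and model are illustrative choices of ours, not tools prescribed by the program or the report.

```python
# Minimal sketch of a post-hoc feature-attribution explanation, using
# scikit-learn and the open-source SHAP library purely as an example;
# the Open Loop toolkit is not tied to any particular package.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a public dataset (a stand-in for a company's own model).
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Estimate per-feature Shapley values for a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # shape: (1, n_features)

# Rank features by the magnitude of their contribution to this prediction.
ranked = sorted(zip(data.feature_names, shap_values[0]),
                key=lambda pair: abs(pair[1]), reverse=True)
for name, value in ranked[:5]:
    print(f"{name}: {value:+.2f}")
```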

Methodology

In order to test the policy prototype, we used an innovative methodological approach. We acknowledged AI explainability (XAI) as a multidimensional concept and operationalized a set of XAI practices according to its four fundamental elements: audience (to whom is the explanation provided?), context (in what context is the explanation provided?), purpose (what goals is the explanation seeking to achieve?), and content (what content will the explanation include?). These four elements, each of which was in turn broken down into different categories, enabled participants to shape and map their explanations to specific use cases, which we called "explainability scenarios".

Companies built their own path towards AI explainability solutions by identifying the recipient, the circumstances, the reason, and the main message they wanted to convey about their AI product, service or feature.

These scenarios served as points of departure to help the participants build explainability solutions, forming a set of personalized pathways that companies would follow when building their explainability features.
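
As a purely illustrative sketch of how such a scenario could be captured in a structured form, the record below encodes the four elements for a hypothetical loan-approval feature; the field names and example values are our own assumptions, not taken from the report.

```python
# Illustrative-only sketch of an "explainability scenario" record built around
# the four elements described above (audience, context, purpose, content);
# field names and example values are hypothetical, not drawn from the report.
from dataclasses import dataclass

@dataclass
class ExplainabilityScenario:
    audience: str   # whom the explanation is provided to
    context: str    # the circumstances in which it is provided
    purpose: str    # the goal the explanation seeks to achieve
    content: str    # the main message the explanation conveys

# A hypothetical scenario for a loan-approval feature.
scenario = ExplainabilityScenario(
    audience="loan applicant",
    context="notification screen shown after an automated decision",
    purpose="help the applicant understand and contest the outcome",
    content="the top factors that most influenced this decision",
)
print(scenario)
```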

An interaction diagram combines the dimensions from the previous figures: the three chronological phases and the four elements of audience, context, purpose, and content.

Through this methodological approach, we captured the experience of participants receiving, handling and following the policy prototype, testing its clarity, effectiveness and actionability.

We evaluated the policy prototype along three dimensions: clarity, effectiveness, and actionability. Policy clarity refers to the extent to which the policy text can be meaningfully understood. Policy effectiveness refers to the extent to which following the policy guidance ensures that AI decision-making processes are explainable, transparent and fair, and that AI solutions are human-centric. Policy actionability refers to the extent to which the policy prototype supplies the means to act upon it and implement its instructions.

Testing insights, Tradeoffs, and Technical, Policy and Usability considerations

Regarding policy clarity, participants found the text to be clear, accessible, and understandable. As a suggestion for further improvement, participants recommended restructuring the policy prototype at a more granular level, tailoring its guidance to different stakeholders and specific use cases.

Concerning policy effectiveness, the policy prototype was deemed to be effective in two aspects. First, it raised awareness of the role and responsibility of AI developers in building trustworthy AI. Second, it provided high-level guidelines that enabled participants to identify the main risks of building and deploying AI systems, paving the path for them to design their products in a way that meets the goals of explainability, transparency and human-centricity.

Regarding policy actionability, participants expressed doubts and anticipated implementation difficulties. They argued that it would be hard to translate the policy text into concrete outputs, as doing so would require more detailed and practical instructions. To address this issue, participants suggested mapping the policy guidance to the stages of the AI product lifecycle, articulating and connecting specific policy recommendations to the distinct technical steps involved in designing, developing and deploying an AI system, including its explainability components.
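
A hypothetical sketch of what such a mapping could look like is shown below; the lifecycle stages and guidance snippets are illustrative assumptions, not text from the policy prototype.

```python
# Hypothetical sketch of mapping policy guidance onto AI product lifecycle
# stages, as participants suggested; stage names and guidance text are
# illustrative and not quoted from the policy prototype.
LIFECYCLE_GUIDANCE = {
    "problem framing": "record the intended audience and purpose of explanations",
    "data collection": "document data provenance and known quality limitations",
    "model development": "select explainability techniques suited to the model class",
    "evaluation": "test explanations with representative users for comprehension",
    "deployment": "integrate the explanation into the product interface",
    "monitoring": "log explanations alongside predictions for traceability",
}

for stage, guidance in LIFECYCLE_GUIDANCE.items():
    print(f"{stage}: {guidance}")
```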

Throughout this process, we observed how the participants made use of the policy prototype to build and deploy AI explainability solutions in practice, in the context of their specific products and services. As a result, we learned about the tensions and challenges that the participants encountered when delving into this technical endeavor, capturing four main trade-offs:

The four trade-offs set transparency and explanation against: security; effectiveness and accuracy; the disclosure of potential IP issues; and meaningfulness and actual understanding.

When tasked with building an interface design for their AI explainability solution, participants also shared a number of important technical, policy and usability considerations that we documented in this report.

Technical considerations:

  • assess whether explaining AI decisions and recommendations is feasible in the first place
  • ensure the quality of training data sets
  • implement traceability mechanisms as part of the explainability building process (see the sketch after this list)
  • build XAI solutions that are not only cost-efficient, but can also scale easily and adequately
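
To illustrate the traceability consideration, the sketch below logs a minimal record alongside each prediction so that an explanation can later be reproduced and audited; the fields are our own assumptions rather than requirements from the report.

```python
# Minimal, assumed sketch of a traceability record logged alongside each
# prediction so that an explanation can later be reproduced and audited;
# the exact fields are illustrative, not prescribed by the report.
import hashlib
import json
from datetime import datetime, timezone

def trace_record(model_version: str, features: dict, prediction, attributions: dict) -> str:
    """Serialize the inputs, output, and attributions of one prediction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "features": features,
        "prediction": prediction,
        "attributions": attributions,  # e.g. per-feature contribution scores
    }
    return json.dumps(record)

# Example: log one hypothetical decision together with its explanation inputs.
print(trace_record(
    model_version="credit-model-1.4.2",
    features={"income": 52000, "tenure_months": 18},
    prediction="approved",
    attributions={"income": 0.42, "tenure_months": 0.11},
))
```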

Policy Considerations:

  • XAI’s range and depth: when going through the process of presenting and delivering an explainability solution, companies often asked themselves how detailed an explanation should be and how far they should go in opening up their books and explaining their technical modus operandi. Apart from important considerations regarding trade secrecy and the innovation incentives associated with the protection of IP rights, there are also other relevant aspects in terms of the comprehensiveness and meaningfulness of the XAI solution: what should an explanation include so that it reflects the system’s complexity accurately while still being accessible and comprehensible? This is to be decided on a case-by-case basis, but best practices should be developed to assist companies in this delicate exercise.
  • The impact of XAI on policy making: XAI should not only comply with and meet the expectations of policymakers and regulators, but should also inform future policy making and regulation, contributing examples of practices that can then be fostered and adopted by future policy guidance and policy-making processes.
  • The human factor in XAI: two specific roles of the human in the delivery of an AI explanation were highlighted:
    • The human as a user of the AI system, empowered to adjust its parameters and to actively participate in the decisions and recommendations made by these systems
    • The human as one who monitors and enforces the correct use of the AI system, intervening in its operation in specific cases

Usability considerations:

  • Visualization: A recurrent piece of feedback from our cohort of participants highlighted the importance of using visualizations (graphical images, animations and interactive modules) when presenting and delivering XAI solutions (a minimal sketch follows this list)
  • Customization: Inspired by the scenario-based approach of our program, participants recommended tailoring the explanation to specific audiences
  • Simplicity: XAI solutions need to be short, simple and clear in order to foster meaningful understanding and increase users’ adoption of the corresponding products and services. In this process, there is a delicate balance to strike in formulating explanations that capture the complexity of the AI systems in a simple but not overly simplified manner
  • Perfectly imperfect: An explanation should include a reference to its limitations. As a number of our participants reminded us, AI predictions are never 100% accurate. AI is not a silver bullet for human problems and needs, and showing its limitations is a way to elicit and build trust with the people who use and interact with this technology
  • Seamless flow: XAI solutions should be integrated into the product or service in a way that flows naturally and creates a seamless experience for the user. The explanation should not be seen as an accessory to the product; it should be an almost indistinguishable part of the product
  • User empowerment: Another design feature for XAI solutions proposed in this phase is to empower users and provide them with control options over the decisions and recommendations produced by AI/ML systems.
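
As a minimal illustration of the visualization point, the sketch below renders hypothetical per-feature attribution scores as a horizontal bar chart; the feature names and scores are invented for the example.

```python
# Hedged sketch: rendering hypothetical per-feature attribution scores as a
# horizontal bar chart, one common way to visualize an explanation.
# The feature names and scores below are made up for illustration.
import matplotlib.pyplot as plt

features = ["payment history", "account age", "order frequency", "basket size"]
scores = [0.38, 0.21, -0.12, 0.05]  # signed contributions to one prediction

fig, ax = plt.subplots(figsize=(6, 3))
colors = ["tab:blue" if s >= 0 else "tab:red" for s in scores]
ax.barh(features, scores, color=colors)
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("Contribution to this prediction")
ax.set_title("Why did the system make this recommendation?")
fig.tight_layout()
plt.show()
```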

Policy Recommendations

Based on the results of this Open Loop program, and the feedback received from its participating companies, we advise policymakers who are dealing with the question of how to regulate AI Transparency and Explainability to:

  • “Get practical”: Develop best practices on assessing the added value of XAI for companies and calculating its estimated implementation cost. This would help the industry plan for and prepare its journey towards AI explainability, doing so in a more confident and well-informed manner
  • “Get personal”: Make XAI guidance more personalized and context-relevant, that is, better tailored to specific types of companies, stakeholders and areas of activity. Being more explicit and granular about whom the policy is addressed to in the first place, and drafting policy guidance in a way that relates and maps to operational day-to-day company practices, could help ensure that the policy guidance is unpacked at the right layer in the company, while increasing its overall adoption and use
  • “Connect the dots”: Create new or leverage existing toolkits, certifications and educational training modules to ensure the practical implementation of XAI policy goals. When these resources are connected to AI policy frameworks and regulatory guidance, companies will have a more concrete idea of the gaps they need to fill in terms of human and technical resources, skills and competences, as well as implementation challenges and costs
  • “Get creative together”: Explore new interactive ways to co-create and disseminate policy and to increase public-private collaboration, such as citizen participation, strategic foresight, crowdsourcing and use case compilations
  • “Test and experiment”: Demonstrate the value and realize the potential of policy experimentation, leveraging policy prototyping as a way to help build effective policies that allow tech businesses to better absorb and integrate normative provisions in product development stages.

Conclusion

The entire program, as reflected in its final report, succeeded in achieving the goals we had defined at its outset:

  • Test Singapore’s AI governance framework and accompanying guide (MF and ISAGO) in the field of AI T&E, for policy clarity, effectiveness and actionability.
  • Make recommendations to improve specific XAI elements of Singapore’s AI governance framework and accompanying guide, and contribute to their wider adoption.
  • Provide clarity and guidance on how companies can develop explanations for how their specific products and services leverage AI/ML to produce decisions, recommendations or predictions (XAI solutions).
  • Showcase best XAI practices, contributing to the actual operationalization of this responsible AI principle and normative requirement, while offering evidence-based recommendations for AI T&E in the APAC region.
  • Demonstrate Open Loop’s role as an effective platform for the formulation of evidence-based policy recommendations aimed at informing and shaping AI governance.

With Open Loop’s experimental and multi-stakeholder, consortium-driven approach, we hope to continue broadening the perspectives involved in the debate on responsible AI and wider AI governance by enriching it with input grounded in qualitative and community-generated evidence. In this process, we encourage policymakers to join our efforts and embark on similar experimental governance programs.

Read the full report here.

The positions developed above are the responsibility of the authors. We, the symposium editors, believe the post brings important issues and experiences to the debate. Publication, however, should not be taken as an endorsement of particular policies.

Suggested citation

Antonella Zarra, Norberto Andrade and Laura Galindo, ‘AI Transparency and Explainability put to the test: overview of the Open Loop Singapore program’ (The Digital Constitutionalist, 15 December 2022). Available at https://digi-con.org/ai-transparency-and-explainability-put-to-the-test/

Antonella Zarra
Norberto Andrade
Laura Galindo
