As members of a multidisciplinary research team spanning several UK universities, working under the name “AGENCY”, we often reflect on the question, “How can Generative AI (GAI) be framed in terms of agency?”, as our research focuses on assuring digital citizen agency in a world of complex online harms. In this blog post, we draw on insights from our research to argue for a holistic way of framing innovations such as GAI in terms of digital citizen agency, empowering users with the control and confidence to navigate complex online harms.
1. What do we mean by digital citizen agency and GAI?
The AGENCY research team defines digital citizen agency as the ability of people and society to be empowered through technology. It offers a framework in which we can empower people while balancing that empowerment against societal concerns (such as public health, safety, and security) and respecting crucial principles such as freedom of expression. GAI, by contrast, refers to a form of artificial intelligence (AI) that learns patterns from large volumes of training data and uses them to generate new content, including text, images, speech and other complex data types. Given its potential to facilitate complex online harms, such as disinformation, deepfakes, and autonomous cyber-attacks, digital citizens need to be empowered to engage critically with the technology and to be protected from such harms.
Therefore, to align GAI with digital citizen agency, we prioritise two foundational tenets:
1. Individuals should be central to the development process of technologies like GAI. Their interests and well-being should be a primary consideration in the software development life cycle.
2. A multidisciplinary approach is indispensable for addressing the complex online harms posed by GAI throughout its lifecycle.
By adhering to these guiding principles, we can frame GAI in a manner that accentuates its potential to serve digital citizen agency while acknowledging and addressing the complex implications it could have for society. We discuss each principle below.
2. Co-creation of GAI
The first tenet of our proposal centres on the need to co-create GAI with individuals. A digital citizen-centred design approach, one that emphasises user control and trusted interactions, can address the psychological and embodied barriers to meaningful engagement with technology and help overcome the complex online harms posed by GAI. Such an approach would ensure responsible innovation and promote user well-being by assessing unintended consequences (i.e., complex online harms) throughout the GAI lifecycle.
3. Multi-disciplinary Approach
We argue that embracing a multidisciplinary approach is indispensable for understanding the intricate risks associated with GAI. Such an approach not only refines the scope of digital citizen agency but also mitigates the deficiencies present in existing regulatory structures designed to manage harm. Conventional legal frameworks typically rely on an ex-post methodology, addressing harm after it has occurred. This model proves inadequate for tackling the multi-faceted, socially influenced harms that arise from GAI, as it is often insufficient for unambiguously determining culpability and allocating liability. While emerging regulations like the European Union’s AI Act are advancing towards an ex-ante risk-based model to protect against the complex online harms posed by GAI, we assert that this effort should be complemented by a multidisciplinary approach. This would enable the development of holistic strategies that empower digital citizens to navigate and mitigate complex online harms effectively throughout a GAI system’s lifecycle.
The core of a multidisciplinary approach is to tackle the question from different perspectives (for example, social sciences, law, human-computer interaction, and computer science) and bring these perspectives together in a way that creates a nuanced and insightful understanding of how to enhance digital citizen agency and mitigate complex online harms. As an illustration, a first step might adopt a computer science perspective, which provides a systematic approach to preventing harm through design. Here, harm is defined as an unintended consequence of a protection mechanism. Framed in terms of GAI, it is helpful to apply the simple dependability framework of Fault → Error → Failure.
Below is an example of how this approach may be applied to disinformation.
- Fault: Biased training data. The fault often lies in the training data: if it does not cover a sufficiently wide range of sources, the model encodes biased or misleading information, which can lead to an eventual failure.
- Error: False confidence. The system, or its users, may place undue confidence in the reliability of the data used in the GAI model and of the outputs it produces.
- Failure: Disinformation. The result is AI-generated disinformation that may cause complex online harms, such as voter or market manipulation driven by inaccurate information, i.e., ‘fake news’.
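To make the chain above concrete, the following is a minimal sketch, in Python, of how a team might record a Fault → Error → Failure trace for a GAI pipeline and attach preventive strategies at the fault and error stages. All names, thresholds, and metrics here (`dataset_coverage`, `confidence_threshold`) are hypothetical illustrations, not part of any real auditing tool or standard.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    """One Fault -> Error -> Failure trace, plus planned mitigations."""
    fault: str
    error: str
    failure: str
    mitigations: list = field(default_factory=list)

def trace_disinformation_risk(dataset_coverage: float,
                              confidence_threshold: float) -> RiskRecord:
    """Build the disinformation trace from the blog's example.

    dataset_coverage and confidence_threshold are illustrative proxies
    (0.0-1.0) for data breadth and output calibration, not real metrics.
    """
    record = RiskRecord(
        fault="biased training data",
        error="false confidence in model outputs",
        failure="AI-generated disinformation",
    )
    # Preventive strategy aimed at the fault: curate broader data.
    if dataset_coverage < 0.8:
        record.mitigations.append("broaden and audit training datasets")
    # Preventive strategy aimed at the error: calibrate confidence.
    if confidence_threshold < 0.9:
        record.mitigations.append("calibrate confidence and flag uncertain outputs")
    return record

risk = trace_disinformation_risk(dataset_coverage=0.6, confidence_threshold=0.7)
print(risk.fault, "->", risk.error, "->", risk.failure)
print(risk.mitigations)
```

The point of the sketch is that mitigations attach to the fault and the error, not to the failure: once disinformation has been generated, the ex-post options discussed earlier are all that remain.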
Upon identifying the underlying ‘fault’, we can initiate preventive strategies that directly address the root causes of failure. These strategies gain further robustness when combined with supplementary tools such as educational and training initiatives and the adoption of Corporate Digital Responsibility principles, which facilitate meaningful digital citizen engagement with GAI. This allows us to understand the complexity, interdependence, and unpredictability that define the social contexts in which GAI operates, and to create holistic solutions to the complex online harms it poses. In addition, we propose that multidisciplinary perspectives on framing GAI through digital citizen agency, together with consideration of unintended complex online harms, should underpin the training of teams working with GAI, to increase user trust in the technology and ensure that citizens are protected.
4. Conclusion
We contend that GAI should be conceptualised within the framework of digital citizen agency, given its propensity for unintended complex online harms. Tackling the multifaceted harms associated with GAI necessitates a multidisciplinary approach. By actively involving a diverse range of stakeholders in framing GAI through the lens of digital citizen agency, we can introduce fresh perspectives often absent from conventional discussions. This inclusive approach opens up non-regulatory options for facilitating digital citizen agency, paving the way for a holistic solution to the unintended consequences of this new technology.