1. Navigating the Digital Landscape Through a Gendered Lens
In the intricate tapestry of our hyperconnected era, Artificial Intelligence (AI) is no longer a speculative element of futuristic novels and sci-fi films but a tangible, evolving reality that deeply impacts humankind across the globe. As stated in the Decolonial Manyfesto, “AI is a technology, a science, a business, a knowledge system, a set of narratives, of relationships, an imaginary”. As its influence expands, it brings about significant and transformative changes in domains ranging from healthcare and education to entertainment, advertising, financial services, law enforcement and crime prevention.
The genuine potential of AI technologies to improve numerous facets of human existence should not be eclipsed by the prevailing “AI hype”. Yet the ever-growing deployment of AI systems also brings challenges that demand prompt consideration. Among the most pressing is the risk of reinforcing longstanding societal disparities, which is especially concerning when AI amplifies biases and prejudices faced by historically marginalized groups, including the LGBTQ+ community, racial minorities, people with disabilities, and women.
Within this context, biases are deeply woven threads that shape the fabric of our online interactions, decisions, and perceptions. Often subtle and unnoticed, these biases play a pivotal role in perpetuating societal divisions, influencing perspectives and shaping the course of humankind. Because new technologies are bound up with power dynamics and dominant understandings of the world, biases can derive from societal institutions, practices, and common sense that extend well beyond the mere programming of a computational system.
Disparities in access and ability further intensify the gender gap globally. Research has found that women, especially in the Global South, encounter obstacles in obtaining the educational tools necessary for AI expertise. These disparities, whether in financial support, technological infrastructure, or even foundational resources like computers and internet access, widen the skill gap between genders worldwide. Compounding this issue is the reality that many AI systems are trained primarily on datasets representing a limited demographic: predominantly Western, Caucasian, male, and affluent. Consequently, such algorithms tend to misinterpret or overlook other communities.
In the sections that follow, we will delve deeper into some poignant examples that shed light on the politically charged landscape of AI and the gendered dimensions of surveillance and control. These cases further illustrate the profound societal and geopolitical implications of biases embedded within AI systems.
2. The Gendered Gaze of Surveillance in Iran: AI as the New Enforcer
AI systems can be used to disproportionately target women and track their whereabouts for law enforcement purposes, which is particularly concerning under totalitarian political regimes. The Iranian government’s recent move to deploy AI-powered facial recognition systems in public transportation is not merely a technical choice; it is a political statement. Designed to track and monitor women who do not adhere to the strict hijab-wearing mandates, these systems build upon the country’s existing informational infrastructure, wherein the profiles of Iranian citizens have been integrated into a national biometric database containing vast amounts of personal data. This surveillance apparatus, which combines traditional forms of state control with advanced AI tools, underscores the growing complexity of state control and enforcement measures, particularly when women’s rights are at stake.
This case is a testament to how technology can be weaponized to monitor and control women’s behaviors and choices, amplifying existing societal oppressions and inequalities. The very act of monitoring and regulating women in public spaces exemplifies the broader challenges of balancing technological advancements with ethical considerations and human rights.
3. Generative AI and the Surveillance of the Intimate: Image Generator Apps and the Impact on Women’s Privacy
Beyond the public sphere, concerns related to AI and surveillance also permeate the private lives of individuals, significantly impacting even the most intimate aspects of daily life. Within this context, Generative AI (a subset of the broad AI realm) stands at the forefront of technological innovation: in essence, it consists of algorithmic systems that train on vast amounts of human-generated content, learn the patterns underlying human creativity, and then produce new, original content, be it images, text, music, or videos. While this offers immense potential for industries like entertainment, advertising, and design, it also poses significant ethical concerns.
For women, AI’s potential harms become particularly conspicuous when related to the freedom of choice over their bodies. A pertinent example of the ethical challenges posed by Generative AI is the application Lensa: designed to generate creative avatars based on user-uploaded photos, it leveraged the power of AI to produce images that caught the attention of the online world. However, the app created inappropriate, overtly sexualised, and non-consensual nude renditions of women, particularly those of Asian descent. Male users, in turn, were more frequently depicted in professional or heroic roles such as soldiers and astronauts. This not only underscores the gendered biases embedded in AI systems but also raises alarming questions about consent, privacy, and the potential misuse of generated content.
Historically, privacy was perceived and rooted in notions of reclusion and secrecy — encapsulated in the “right to be left alone” —, while the contemporary perspective is substantially broader. As societies have transformed into intricate, hyperconnected networks of digital interactions, privacy has come to signify an individual’s right to exercise control over their personal information. In this context, privacy relates to the broad protection of the individual within the collective, going from a mere prerogative to a recognised fundamental right.
Moreover, scholars indicate that women are generally more cautious about privacy on the internet than men, revealing significant disparities in social media habits depending on the user’s gender. These heightened concerns can be attributed to the differential risks that women face online, as they are more frequently targeted by intrusive and violent online practices and are more susceptible to dire moral repercussions, especially where intimate details or physical attributes are involved. This discrepancy accentuates the gendered nuances in the perception of notions such as ‘privacy’ and ‘consent’, thereby eroding the very essence of women’s data protection rights.
4. The Symbolic Dimension of Generative AI and Gender Representation
To navigate the symbolic implications of AI, we must thoroughly examine its role in shaping representativeness, identity, and collective discourse. The symbiotic relationship between technology and gender is often illuminated and perpetuated by various forms of media and visual content — from movies and literature to mainstream narratives, all acting as potent catalysts in solidifying gender norms.
The implications can be quite graphic: when Generative AI tools, like OpenAI’s DALL-E 2, are tasked with generating images from ostensibly gender-neutral prompts such as “CEO”, the resulting images primarily depict men, often exuding an aura of confidence and authority. Conversely, a prompt like “assistant” tends to generate images that predominantly feature women, occasionally in settings or poses that could be perceived as degrading, sexualized, or cartoonish, thereby perpetuating a narrative of ineptitude or triviality. This cycle further embeds these negative notions both into the technological fabric and the collective psyche.
A hands-on exploration was undertaken in August 2023, using DALL-E 2 to illustrate these observations:
Figure: DALL-E 2 responses for the prompt “CEO”, Test 1 | Saboya (2023)
Figure: DALL-E 2 responses for the prompt “assistant”, Test 2 | Saboya (2023)
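Observations like these can also be quantified rather than merely eyeballed: generate a batch of images per prompt, have reviewers annotate the perceived gender of each depiction, and compare the resulting shares across prompts. The following is a minimal Python sketch of such a tally; the function name and the sample annotations are purely illustrative and do not reproduce the actual August 2023 test results:

```python
from collections import Counter

def depiction_rates(annotations):
    """Given (prompt, perceived_gender) pairs from a manual review of
    generated images, return the share of each gender per prompt."""
    counts = {}
    for prompt, gender in annotations:
        counts.setdefault(prompt, Counter())[gender] += 1
    return {
        prompt: {g: n / sum(tally.values()) for g, n in tally.items()}
        for prompt, tally in counts.items()
    }

# Hypothetical annotations for two prompts (10 images each):
sample = (
    [("CEO", "man")] * 9 + [("CEO", "woman")] * 1
    + [("assistant", "woman")] * 8 + [("assistant", "man")] * 2
)
print(depiction_rates(sample))
# e.g. {'CEO': {'man': 0.9, 'woman': 0.1}, 'assistant': {'woman': 0.8, 'man': 0.2}}
```

A systematic audit of this kind, repeated across prompts and model versions, turns anecdotal impressions into comparable figures that can support regulatory or design interventions.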
In essence, these visual representations underscore the pressing need for a more nuanced and equitable approach in AI development. As we continue to integrate AI into our societal fabric, it becomes imperative to challenge and rectify these embedded biases, ensuring that the technology truly mirrors the diverse and multifaceted world it serves.
5. Towards a More Ethical and Inclusive AI Governance
What emerges from this exploration is a pattern that cannot be ignored. AI, in its current state, is not merely a tool or a distant vision from a sci-fi movie, but a reflection of the human biases embedded within its algorithms. To address the issue effectively, one must look not only at the technology itself, but also at the broader socio-political context in which it operates.
The intertwining of AI, identity and surveillance paints a complex picture, especially when viewed through a gendered lens. From the streets of Tehran to the concerns emerging from Generative AI tools, the societal implications are profound. The case studies discussed are a vivid backdrop against which to discuss the broader issues of state control, women’s rights, and the potential dangers of unregulated AI design and deployment.
As society continues to intertwine with technology, the call for robust governance and regulation becomes not just a recommendation but a necessity. It is a call for an AI that respects and serves all of humanity equitably, prioritizing ethics and accountability and recognizing the unique challenges and perspectives each individual brings.
The politics of technology and gender intersect in intricate ways, demanding a well-balanced comprehension towards achieving the common good. Because biases are a vivid manifestation of societal norms, practices, and power dynamics, tackling the issue means examining both the technology and the societal structures it operates within, resisting oversimplified gender perceptions and bringing intersectionality to the forefront.
In that sense, the journey of AI should not be perceived as a race to achieve the once-deemed impossible. Rather, it is imperative to consider the real objectives behind the deployment of new technologies, the inherent political disputes, and the sociotechnical imaginaries they foster. As AI systems evolve, so too must their governance structures, ensuring they remain reflexive, inclusive, and grounded in the multifaceted realities they affect.
In a nutshell, acknowledging and integrating gender as a pivotal aspect of the AI debate is essential to fostering more inclusive and equitable propositions to numerous societal challenges in the digital era. This requires a collective responsibility to ensure that these technologies are harnessed for the greater good, without compromising the rights, freedoms, and dignity of oppressed and marginalized groups.
Maria Beatriz Saboya
Maria Beatriz Saboya is a Brazilian certified privacy professional (FIP, CIPP/E, CIPM, CDPO/BR) and an attorney specialising in IT Law, Privacy and Data Protection. Currently pursuing an LL.M. in Law & Technology at King’s College London, Beatriz's research focuses on gender-based bias in AI systems and the regulatory challenges of new technologies from a transversal perspective. She is the founder of the networking group Mulheres na Privacidade (‘Women in Privacy’), an initiative dedicated to connecting women, combating gender inequality, and advocating for a more inclusive digital future.