
Toward a Theorization of Digital Constitutionalism


This post is a contribution to the symposium What is Digital Constitutionalism? and not an official editorial position. We nevertheless welcome the author’s contribution and encourage further posts probing the meaning of digital constitutionalism and its limits as an analytical approach.

I. Introduction

That the nature of the state is undergoing one of its intermittent phase shifts, and that technologies of data processing, analysis, and prediction lie at the crux of this metamorphosis–neither of these ideas is today seriously in doubt. When bureaucrats decide whether to grant benefits to disabled and seriously ill people or to withhold them, they now often do so based on a machine prediction. Child welfare agencies decide whether or not to take children away from parents depending on what a machine prediction directs. School districts, following firms such as Unilever and Goldman Sachs, use machine-learning instruments to determine which teachers to hire and fire. Police deploy predictive tools to anticipate where crimes will happen, and even who will likely commit them. Chicago, where I live, has a machine-generated ‘heat list’ of likely murderers. When a suspect is flagged, officers use facial recognition to track them down, leveraging the rich and detailed digital tapestries of the urban environment created by public cameras and microphones. The list could go on. These examples, which could be drawn from either side of the Atlantic, illustrate only some of the many ways digital technologies are scrambling the terms on which the state interacts with those subject to its jurisdiction.

How, then, to think productively about the causes, effects, and normative stakes of these forms of ‘digital constitutionalism,’ the ‘AI state,’ or the like? What does such an inquiry look like? I want in this blog post to offer some generalizations about the way in which (it seems to me) such theorization now proceeds within legal scholarship, and the way in which it might henceforth profitably be extended. In capsule form, my core point is that the current discourse in the legal academy turns mainly on the problem of ‘rights translation,’ where it might more profitably train on the problem of ‘structural diagnostics.’ Among its merits, the latter frames the salient questions with more acuity than rights talk. And it is more porous to critical theory’s important questions of power, knowledge, and material, substructural determinants.

II. The Dominant Mode: Rights Translation

The dominant mode of scholarship on digital technology today takes, it seems to me, the following form. It is motivated, at the threshold, by the observation that a specific computational tool is being deployed in ways that are prima facie worrisome. A police prediction tool has disparate error rates for people of different races or ethnicities. An app for making decisions about welfare denials does not give applicants a chance to explain or contextualize, under conditions where such voice might plainly make a difference. Or a public surveillance tool used to deter crime becomes the training data for a facial recognition tool that seems to invade personal privacy or control political dissidents. A specific policy problem is taken, then, as the launching point for inquiry into the way in which the state’s technologies impinge upon individual interests. These interests are usually configured as ‘rights.’ The scholarly project then aligns with the task of adapting or reconceptualizing that right within a new terrain marked out by the use of digital tools. The result is a translation. It is, for instance, a new definition of ‘discrimination’ tailored to the algorithmic context. Or it is a ‘right’ to an explanation or a right to transparency in machine decision-making. Or instead, it is a right against the use of a technology, which in the limit pitches into an argument for proscribing a specific technology, or the diffuse category of ‘artificial intelligence’ or ‘robots’ more generally.

Much can be said for this style of scholarship. Not least, it responds to the quiddities of human suffering or misfortune in a fairly direct way. Consistent with a Rawlsian spirit (epitomized by his famous difference principle), it takes seriously the plight of those most grievously harmed by state power. By hypostatizing a response to such harm in the verbal form of a ‘right,’ this literature resonates with the dominant style of moral claim-making of the late twentieth and early twenty-first century. Yet this approach also has serious gaps. In a conference paper presented in 2020, Mariano-Florentino Cuéllar and I drew a distinction between ‘rights’ and ‘policy.’ We argued that it did not make sense to theorize the problem of digital constitutionalism in terms of rights for a number of reasons: 

Not least, the governance of AI systems is not well pursued through the management of binary interpersonal relations. Changes to a reward function or an interface, for example, are almost certain to propagate complex and plural effects across the whole population subject to regulation. Efforts to reduce rates of false negatives, for instance, are mathematically certain to change the rate (and the distribution) of false positives. Rights are an inapt lens for thinking about AI systems because of those systems’ entangled quality, which makes it implausible to isolate just one pair of actors for distinctive treatment. As has long been apparent, rights–especially when enforced by courts–are not an ideal vehicle for managing what Lon Fuller called polycentric disputes. There is every reason to think Fuller’s worries have equal weight in this novel technological context, where an intervention to improve the lot of one subject can have complex ramifications for many others. Moreover, the manner in which normative concerns about equality, privacy, and due process arise out of AI systems is not well captured by the idea of a right standing on its own. [T]he technical choices of algorithmic design and also their embedding in institutional consequences can entail a range of contestable normative judgments. The manner in which predictions are reported, the feasibility of verifying the basis for predictions, and the nature of any dynamic updating all depend on normative judgments as much as the choice of training data and reward function. Worse, technical judgments (say, about what reward function is used) can be entangled in complex ways with system design choices (say, the manner in which predictions are expressed in a user interface). Picking out a single thread of interaction between the state and an individual as a “right” may not even be sensible–let alone practically effective. 
The effects of an AI system, moreover, are often spread across aggregations of people who experience a classification rather than concentrated on individuals. At the margin, the size of those effects will also depend on the prior institutional and policy landscape in place when an AI system is adopted.

These critiques, it seems to me, apply with some force to the dominant modality of legal scholarship on digital constitutionalism. 

III. From ‘Rights Translation’ to ‘Structural Diagnostics’

In thinking about the alternative theoretical frame that might be deployed in lieu of rights translation, I want to suggest a kind of ‘structural diagnostics.’ This is complementary to, albeit more ambitious than, the focus on ‘policy for AI systems’ that Cuéllar and I advanced in the 2020 paper. (In a forthcoming piece in Daedalus, we deepen that approach in terms supplied by administrative law. I don’t propose to take that line of thought further, but rather to say something new here.) 

‘Structural diagnostics’ takes its initial coordinates from insights offered by the historian David Edgerton and the political scientist David Stasavage. First, Edgerton’s stimulating The Shock of the Old challenges histories of technology that are ‘innovation-centric’–histories that assume novel technologies are necessarily better than, or necessarily displace, older ones–and warns against technological determinism. Stories of technological ‘retrogression’ and horizontal ‘drift,’ he explains, are as common and consequential as stories of forward progress. Second, in The Decline and Rise of Democracy, Stasavage provides compelling evidence that democracy is more likely to emerge when rulers lack detailed knowledge of, and hence the ability to influence, what their subjects are doing. Where rulers are relatively weak, Stasavage suggests, democracy is more likely to arise as a strategy for state maintenance and state-building.  

In thinking about how technology influences the forms of state power, Edgerton and Stasavage suggest that we should not assume the new simply displaces the old, and that we should pay particular attention to the specific political and social context in which a new technology arises. There is always such a context, always a grid of public and private forces jockeying for advantage. How technology enters this mêlée, and whether those forces with the upper hand at a given moment see fit to adopt that technology, disseminate it, or suppress it–all this depends on the contingent dynamics of a given moment in time. 

This has implications for how to theorize about digital constitutionalism. Most importantly, normative ideas such as rights do not rise above this contest; they are best understood as potential moves within that pell-mell struggle–moves for privileging one side or another, not an exit from context or a path to its cartography. Legal rights, moreover, emerge in a durable and effective form only where they rest on a sustaining institutional bedrock. Hence, they are best understood as by-products of the same volatile mix that determines when and how new technologies are adopted or suppressed. (I am neuralgic, perhaps, about rights’ fragile institutional foundations: In the United States in particular, I have argued in a recent book, overly hasty declarations of fundamental rights have crumbled from within because of the ability of reactionary social mobilizations to capture the institutional redoubts necessary for those rights’ defence.) Like democracy in Stasavage’s telling, rights flourish only when there is apt institutional soil. Even for those whose academic agenda is primarily normative and reformist, there is hence good reason to set aside a mechanical, narrow-bore model of rights translation and to adopt a broader one.

A new technology, then, doesn’t write on a blank slate. Its effects are instead shaped by the ex-ante state of affairs. How an authoritarian state such as China leverages a new analytic capacity will be dramatically different from the way that a social democracy such as Sweden uses it–or the way that a post-social democracy such as the United States does. Across these contexts, both the state and the array of market and community forces ranged around it diverge dramatically. A ‘structural diagnostics’ theory takes aim at the specific contextual political economy in which technology is adopted (as Stasavage suggests) and resists teleological models of technological advance (per Edgerton). It further attends to broader structural principles that regulate the fluid dynamics of state and society. 

Consider, by way of brief provocation in closing, three structural principles that warrant particular attention now. The first, again following Stasavage, is democracy. Pathways of democratic control are necessary means to align the aims of rulers and the ruled. But background inequalities of wealth, influence, and respect–all exacerbated in the last four decades–thwart that alignment. In this context, digital technology may impact the democratic process less by enabling state domination (as in China and Russia, say) than by allowing private adoption to exacerbate the political payoffs of unequal economic or social power. Second, the rule of law aims to temper the discretion of state officials and hence render state action more predictable, and so more legible to the public (reversing the dynamic described by Scott). Machine decisions, it is said, risk being opaque and seemingly arbitrary; they also imply a possibility of increasingly asymmetrical state power over the public. Finally, a central function of the state is to create ‘public goods’ that would be under-supplied if left to private hands: Education, physical and economic security, and public health are obvious (if not uncontroversial) examples of goods without which the ordinary citizenry is profoundly disabled. New digital technology is being used to chip away at such provision, even as opportunities to enlarge the provision of public goods with those same tools are systematically ducked.

For my money, a profitable theoretical agenda for digital constitutionalism would ask how these three principles are vindicated despite the pressures engendered by new technologies. At a moment when the national state has been weakened by four decades of critique and crisis–and when that state is increasingly beholden to, and incapable of mastering, private, corporate uses of the same technological tools–this effort is hardly the only way of cashing out the idea of structural diagnostics, but it is one that, to my mind, seems very much worthwhile.

Aziz Z. Huq
Frank and Bernice J. Greenberg Professor of Law at University of Chicago Law School