Part I
In the famous Broadway puppet show Avenue Q, Kate Monster addresses the audience with enthusiasm. Her singing is contagious as she goes on about the marvellous features of the Internet. Everything is jolly and fun until she is suddenly interrupted by a grotesque monster that yells: ‘The Internet is for Porn!’
A lovely, catchy song follows, constantly interrupted by that same phrase: ‘The Internet is for Porn’. One ends up singing along with strangers, together with many puppets, about how marvellous the Internet is at providing pornographic content. It is a good show.
In reality, however, one finds hardly any pornographic content when opening the Internet (unless one really looks for it). It is certainly not what the Internet is for. As a matter of fact, ‘bad’ content – meaning offensive, harmful or even illegal content – is only a small part of what is online. We can surf the web most of the time without really fearing for our eyes.
This prompts a simple question: if the Internet is ‘for porn’, and communities tend to say and do their worst when given the chance, how is it that we have such a clean and neat Internet? Who is cleaning it up for us?
1. Who controls online content?
In April 1995, Mr Zeran’s phone started ringing. The calls were mostly insulting, accusing him of partaking in an online movement that condoned the Oklahoma City bombing. Mr Zeran had nothing to do with it. Confused and upset, he took some time to discover that a message had been posted on AOL, offering offensive T-shirts and listing his number as the commercial contact. Angry at this point, he asked AOL to take the post down. And so they did, only for the phone to keep ringing. Another post, another takedown. The phone rang again; same drill. For 20 days, Mr Zeran could barely use his phone as the threatening calls continued.
The now-famous case of Zeran v. AOL introduces us to the two golden questions of freedom of expression on the Internet: who is responsible for what we say or do on the Internet? And under which standards?
Those are not easy questions, and some of the brightest minds of the 1990s clashed over them. Some said it should ultimately be people, responsible individuals, defining their own norms. The Internet should be a space which no sovereign should enter, where no territorial jurisdiction should exist and where we could say or do what we wanted. The arguments were strong and persuasive: no borders meant responsibilities were hard to delimit; no centralised authority meant private individuals could join together in communities and choose their own future. Not everyone was convinced, however. Many refused to believe in this utopia. They said: the Internet is just another space, so we should apply the same laws we have in the real world. Others replied, arguing that the ‘horse’ was genuinely new and that a lex informatica was needed. It sounded fancy and opened up promising avenues for research (including for lawyers at Digi-Con!). It acknowledged the fundamental differences in how norms were built in cyberspace: it was not about paper and ink; it was about code.
What is perhaps most extraordinary about this debate is how thoroughly it was ignored. We read it with enthusiasm now, but the average person could not have cared less about it. What they did care about, however, was protecting their children. In the U.S., when the famous Section 230 was incorporated into the proposal for a Communications Decency Act (CDA), the eyes of Congress were mostly focused on the decency part of the bill. They had made a decision: the Internet was not for Porn.
2. The Digital Equilibrium Hypothesis
When the time came for the EU to answer the same golden questions (who is responsible for what we say or do on the Internet? And under which standards?), its answer was more nuanced. Instead of freeing platforms from almost all responsibility (as Section 230 CDA does), the EU turned the tables.
Through an implicit system of notice-and-takedown – the text never refers to it by name – article 14 of the E-Commerce Directive (ECD) created a regime in which online companies would be responsible for cleaning up illegal speech. In simple terms, if they knew about the problem, they had to take action or face the consequences. Moreover, they had to do it fast (expeditiously), or unbelievably fast (24 hours in some cases). Otherwise, liability would follow.
This decision paved the way for how online content is governed in Europe. Authors use different terms for it (‘devolved enforcement’, ‘intermediarisation’, or ‘privatised enforcement’), but they all describe the same reality: the EU had decided to hand online moderation to corporations. This represented a middle way between letting any content pass through these companies unchecked and having state agencies or courts examine every piece of content said or written on the Internet. Companies knew they had to moderate in any case (as Klonick discovered with Facebook), or they would risk losing their customers. It was simply about nudging them in the right direction.
I suggest that such a decision opened up a new way to think about online content moderation in Europe. I call it the Digital Equilibrium Hypothesis.
It postulates that, with the approval of the E-Commerce Directive, a regulatory equilibrium was achieved between the two most important actors of the Internet, namely the State (including the EU) and corporate actors. This equilibrium marked a comfortable position on a spectrum of regulatory possibilities, one in which both sides were happy with the bargain they got.
Let us reflect for a second. In theory, there were many conceivable ways of governing the Internet. For example, one could have had a fully publicly controlled Internet: a nationally owned grid where public agencies or the State controlled online content. On the other side of the spectrum, one could have had a fully privatised Internet, controlled by associations of individuals and moderated by internally elected officials. Perhaps it could even have been anarchic, a space in which everyone could say or do everything at any time (a true ‘Internet is for Porn’!). None of this happened. Why?
I do not think that a formal or conscious agreement was ever reached between the parties. I have reason to believe (via empirical research) that a dialectic relationship arose out of the ECD, in which corporate and State power ultimately found a healthy equilibrium between competing positions. It was made of pulls and pushes from both sides, arguing for more or less control of online speech. One can see, for example, how this equilibrium was crafted within the ECD itself. While article 14 ECD favours the public side of regulation by injecting public values into the moderation equation (via the threat of sanctions), article 15 ECD protects companies from having to monitor the entirety of the Internet. Companies agree to do the job of public power in controlling speech, but they set limits to the bargain.
Such pulls and pushes can be described as two regulatory movements.
First – privatisation of public power. The State agreed that companies would be the first line of enforcement of digital rights. This privatisation stems from article 14 ECD but runs through the broader legal system: copyright (art. 17), terrorism (art. 5), the right to be forgotten, and disinformation. In all of these areas, the EU is increasing the responsibilities of online companies, asking them to enforce rules and take precautionary measures. This privatisation, however, came with a demand: if those companies were to enforce digital rights, they should consider public values – meaning State-defined values – and not only their internal community standards. They should be the enforcers of public policy.
Second – publicisation of corporate power. This is perhaps the most exciting side of it. It constituted an approximation of corporate structures to public institutions and principles. I dare to suggest that what explains talk of digital constitutionalism is not so much that these entities are becoming Facebookistan, new governors, autonomous legal orders or special-purpose sovereigns. I suggest something simpler: they are mimicking State structures and principles, such as due process or the separation of powers, because that was part of the original equilibrium. Upon receiving the mandate to enforce digital rights via the ECD, companies had to step up and upgrade their moderation structures to ensure they could execute the task at hand. This explains the constant comparisons to public institutions (governments, public squares, common carriers, Supreme Courts) and to due process principles. There is no emergent liberal constitutionalism there, but simply an exercise in copy-pasting existing constitutional narratives. This exercise not only confers a sense of comfort on their communities (building on the historical trust placed in the constitutional State) but, above all, keeps hard regulation at bay. That, ultimately, is the justification for adopting constitutional narratives, not any sort of jus-generative constitutive moment.
This equilibrium ruled Europe for more than fifteen years. That is no longer the case: the equilibrium is broken and is now under revision.
Part II will highlight how and why.
Suggested citation
Francisco de Abreu Duarte, ‘The Internet is for Porn! (but is it?) – The Digital Equilibrium Hypothesis’ (The Digital Constitutionalist, 02 March 2022) <https://digi-con.org/the-internet-is-for-porn-but-is-it-the-digital-equilibrium-hypothesis/>
Law & Tech PhD Researcher @EUI; Consultant for Health Disinformation @WHO; Co-founder and webdesigner @DigiCon.