“Where ignorance is bliss, ’tis folly to be wise”: with this adage, the two majority judges in People v Smith (Cal. Ct. App. June 23, 1958) argued that booksellers should be criminally liable for selling obscene materials regardless of their knowledge of the contents, for otherwise they might choose ignorance over knowledge of the materials they sell. In this classic case contemplating the liability of intermediaries for the content they carry or sell, the Supreme Court ultimately overturned the Appeals Court’s decision to hold Smith liable for selling a romantic novel deemed obscene. Indeed, it found that a knowledge component was required for liability: only if Smith was aware of the contents of the novel he sold could he be held liable.
More than sixty years on, legislators and judges around the world still grapple with the same questions. The Digital Services Act contains a similar knowledge component for hosting platforms in article 6: it exempts platforms from liability for illegal activities and content when they have no actual knowledge of those, nor awareness of facts or circumstances from which the illegal activities or content are apparent. A similar knowledge component was already present in the e-Commerce Directive, art. 14(1)(a) and (b). Under the e-Commerce Directive, the knowledge component is construed as exempting any activity involving a technical process of storing, ordering and transmitting content, provided that activity is of a mere technical, automatic and passive nature (e-Commerce Directive, rec. 42). A similar structure is provided in article 17 of Directive 2019/790 on Copyright in the Digital Single Market, under which intermediary service providers are liable for unauthorised communications to the public of copyright-protected works unless they have (i) attempted to acquire authorisation, (ii) made best efforts to ensure the unavailability of copyright-protected works, and (iii) acted expeditiously upon receiving notice from rightholders to disable access to such works (article 17(4)). The assessment of ‘best efforts’ looks at the type of service provided, the size of the audience reached and the availability of effective means; in short, at proportionality. The knowledge component has been interpreted in European case law, inter alia in Google France (paras 113–120), L’Oréal v eBay (paras 118–124) and Peterson v YouTube (paras 118, 143). In essence, these developments in case law attempt to frame the content moderation process as not interfering with the knowledge component, even though it is not entirely technical, fully automated or neutral. Article 7 of the Digital Services Act follows that trend.
Article 7 Digital Services Act
Article 7 of the Digital Services Act adds an exemption to the knowledge component. When an intermediary service carries out voluntary own-initiative investigations or takes measures aimed at identifying illegal content, or acts in compliance with national and Union law, its resulting awareness of illegal content does not make it ineligible for the exemptions of articles 4, 5 or 6; nor do such measures change the passive and automated character of the intermediary’s content moderation process. This exemption is a nudge to encourage platforms to voluntarily and actively monitor for illegal content. The reasoning is the opposite of that of the Appeals Court in People v Smith: by not making platforms liable even upon obtaining knowledge of illegal content, the EU legislator hopes that platforms will choose wisdom over ignorance and aid EU-wide efforts against misinformation, hate speech and cybercrime.
The exemption of article 7 Digital Services Act is conditional on the platform acting in good faith and diligently. Recital 26 explains how those principles should be interpreted: “The condition of acting in good faith and in a diligent manner should include acting in an objective, non-discriminatory and proportionate manner, with due regard to the rights and legitimate interests of all parties involved, and providing the necessary safeguards against unjustified removal of legal content, in accordance with the objective and requirements of this Regulation. To that aim, the providers concerned should, for example, take reasonable measures to ensure that, where automated tools are used to conduct such activities, the technology is sufficiently reliable to limit to the maximum extent possible the rate of errors”.
The principles of good faith and diligence relied upon in this provision strike a balance between the interests of the content moderator and the fundamental rights of internet service users: freedom of expression may be endangered if content moderation is encouraged through widened immunity without proper regard for the risk of over-removal that this widened immunity creates. The interpretation of the principle of ‘good faith’, and its explanation in Recital 26, is therefore crucial to understanding how that principle mitigates the risk to freedom of expression that is inherent in content moderation. Good faith has historically been interpreted differently across jurisdictions. The following paragraphs examine the various components of good faith listed in Recital 26.
Components of Good Faith in Article 7 DSA
The components of good faith in Recital 26 are objectivity, non-discrimination, proportionality, due regard for the rights and interests of users, and necessary safeguards to ensure that automated technologies are sufficiently reliable. Objectivity as a principle is not clearly defined in EU law, certainly not for content moderation. It can be seen in two dimensions. Firstly, the requirement of objectivity can pertain to the standards by which platforms moderate, meaning that those standards apply irrespective of, inter alia, the religion or ideology expressed in the content moderated, and simply reflect the types of speech that are excluded from free speech protection. In a European context this is helpful, but given the possible exceptions to freedom of expression provided in article 10(2) ECHR and article 11(2) CFREU, this interpretation of objectivity leaves social media platforms considerable room for interpretation. For grey-area content types such as hate speech it is difficult to draw an objective line; the further question is whether platforms are well equipped to do so. The second dimension of objectivity lies in the application of the abovementioned standards: the standards are applied irrespective of the user generating the content. This relates closely to the principle of non-discrimination. The workability of the principle of objectivity therefore remains vague: if the exemption from liability for content moderation in good faith exists only where that moderation is objective, the question arises who determines the objectivity of the standards applied or of the moderation itself. Certainly in the United States there is plenty of case law, and there have even been attempts at legislation, to ensure the objectivity of social media platforms. So far, few of those cases have succeeded, for it is impossible to strip platforms of all editorial discretion in their content moderation policies.
That editorial discretion is protected by Section 230 of the Communications Decency Act, a piece of legislation somewhat resembling article 7 DSA. The conviction underpinning Section 230, as well as article 7 DSA, is that it is undesirable to limit platforms excessively in their standard-setting for content moderation.
The principle of non-discrimination can be derived from administrative law as well as international human rights law (CFREU arts. 20 and 21). In this context, content moderation can be compared to other automated decision-making processes. An oft-discussed issue in algorithmic decision-making is that algorithms appear objective, while their training strongly affects how they treat different groups. When algorithms are trained on certain datasets, the outcomes of their monitoring may disproportionately affect minority groups that were previously disadvantaged. Automated moderation thus puts the principle of non-discrimination at risk. A thorough investigation of the implicit biases in moderating algorithms is therefore necessary, including their potential for direct and indirect discrimination. This has been incorporated into the DSA: assessing the risk of discriminating against certain groups of the user population is a mandatory part of the risk assessment for fundamental rights required by Article 26.
The risk assessment relates directly to the good faith component of ‘necessary safeguards in place to ensure technologies used are sufficiently reliable’. The EU mandates a risk assessment, but objective standards of what constitutes a necessary safeguard for a reliable algorithm and what does not are currently lacking (although there are certainly efforts to develop such assessments, see for example the fundamental rights and algorithms impact assessment). This leaves large social media platforms to their own devices in assessing the risks of their algorithms and designing safeguards around them. Further, without objective standards it is difficult for courts to decide whether sufficient safeguards have been put in place: content moderation is notoriously opaque, so assessing whether sufficient effort has been made to ensure reliable algorithms requires an above-average understanding of automated content moderation.
Lastly, there is the principle of proportionality. Proportionality is well developed in European Union law as well as in international human rights law. The benefit of a measure must be proportionate to the damage incurred by the person affected by it: in international human rights law, this means the measure is in line with its legitimate aim and does not place an excessive burden on the person affected; in European law, it means the measure is suitable, necessary and not excessively burdensome to the individual. In the context of content moderation, two preliminary remarks can be made. The first is that measures taken by social media platforms should not be excessive in relation to the user’s violation of the terms of service. Remedies should not be seen as a binary choice between carrying content on the one hand and removing it and blocking the user on the other; they should be interpreted as covering a range of in-between options, for example warnings, demonetisation, reduced visibility and temporary bans. This gives platforms a wider range of options to remedy illegal content without excessively harming the user’s freedom of expression. The second remark is more of a concern. Proportionality means that measures taken should not be excessively burdensome, yet the effect of blocking a social media account or reducing its visibility differs from user to user. Some run their business via a social media platform, some use it to run a political campaign, whereas others simply use it to post cat pictures. The proportionality of a measure thus depends on the circumstances of the user: blocking an account may mean the loss of a livelihood for one person and the loss of an outlet for feline photography for another. It is impossible for social media platforms to assess such impacts before acting on an illegal piece of content, nor is it desirable that they possess the full knowledge needed to make such an assessment – although they very well might.
This means that, although well intended, the principle of proportionality does not suit the fast-paced environment of automated decision-making in content moderation very well. A lack of proportionality in the content moderation process might mean that the good faith requirement of Article 7 DSA is not fulfilled, creating liability exactly where platforms seek to avoid it. In theory, then, it is an unwieldy requirement.
The Future of Article 7 DSA
Article 7 DSA fits the general tendency of the European Union to encourage social media platforms to take responsibility for the content they host. It relieves the burden of potential liability for the content hosted on their platforms by removing the knowledge component of illegal content, as long as the content moderation process is conducted in good faith. Recital 26 provides some guidance as to how ‘good faith’ is to be interpreted in the context of article 7, relying on several established and less established principles of European, administrative and international human rights law. Each of those components of good faith has interesting implications in the context of content moderation. Further reflection upon those components, as well as guidance by the European Union and its courts, may help crystallise the full extent of ‘good faith’ in the context of article 7. This is necessary to ensure that the European Union’s well-intended nudge for social media platforms to moderate more actively does not unnecessarily harm the freedom of expression of its users.
Jacob van de Kerkhof, ‘Good Faith in Article 6 Digital Services Act (Good Samaritan Exemption)’ (The Digital Constitutionalist, 15 February 2023). Available at: https://digi-con.org/good-faith-in-article-6-digital-services-act-good-samaritan-exemption/
Jacob van de Kerkhof is a Ph.D. candidate with the Montaigne Centre at Utrecht University. His research focuses on the protection of freedom of expression on social media platforms.