MODERATING ONLINE PLATFORMS AFTER THE DSA: FROM DESIGNING RULES TO ENABLING ENFORCEMENT 

Reading Time: 8 minutes

1. Introduction

On October 12th 2023, Thierry Breton, the European Union’s Commissioner for the Internal Market, shared on X his letter to the CEO of TikTok regarding the risk of dissemination of illegal content and disinformation through the platform after the terrorist attacks carried out by Hamas against Israel. The letter invites TikTok to comply with the “obligations concerning content moderation” of the Digital Services Act (DSA), in particular those aimed at preventing harm to “children and teenagers”, who extensively use the platform. Besides urging the prompt removal of content that violates the DSA, Breton calls for “effective mitigation measures to tackle the risks to public security and civic discourse stemming from disinformation”, which are increasingly posed by the circulation of “fake and manipulated images and facts” on the platform. The letter specifies that TikTok’s answer will be included in “our assessment file on your compliance with the DSA”.

This letter, which was preceded by a similar one sent to X on September 5th 2023, anticipated the submission of various requests for information (RFIs) to very large online platforms (VLOPs), including X, Facebook, Instagram, AliExpress, TikTok and YouTube, starting from October 2023. The RFI is the first stage of an investigation under the DSA, which can be followed by access to VLOPs’ data and algorithms, interviews with informed individuals and inspections at the platforms’ premises. The information gathered through RFIs has already motivated the opening of two proceedings, against X and TikTok, in December 2023 and February 2024 respectively.

The aim of this post is to disentangle the procedures that follow an RFI to an online platform, which the Commission has not yet publicly explained in their practical details. To do so, given that the full text of the RFIs submitted so far has not been disclosed by the Commission, I will compare the press release about a specific RFI with its first legal consequence, i.e. the Commission decision on initiating proceedings against X, published on December 18th 2023, which is the only legal document about investigations under the DSA currently available. Indeed, of the two Commission decisions on the initiation of proceedings, only the one regarding X can be read in full, as the document concerning TikTok is just a summary note (with the related press release).

2. The content and consequences of a request for information

According to the press release, the RFI sent to X concerned “the assessment and mitigation of risks related to the dissemination of illegal content, disinformation, gender-based violence, and any negative effects on the exercise of fundamental rights, rights of the child, public security and mental well-being”. The Commission aimed to scrutinize X’s “policies and actions regarding notices on illegal content, complaint handling, risk assessment and measures to mitigate the risks identified”. Following the submission of the RFI, the Commission decision on initiating proceedings against X (hereafter Commission decision) identifies five areas of concern in which the platform is suspected of having infringed the DSA’s provisions. A closer look at this decision can help highlight the content and consequences of an RFI to a VLOP.

Firstly, Twitter International Unlimited Company (TIUC), “the main establishment of the provider of X in the European Union”, failed to “diligently assess certain systemic risks in the European Union stemming from the design and functioning of X and its related systems, including algorithmic systems, or from the use made of their services” (Commission decision, 2023). In particular, the company did not “put in place reasonable, proportionate and effective mitigation measures” for “the actual and foreseeable negative effects on civic discourse and electoral process stemming from the design and functioning of X in the European Union”: in fact, the current solutions “appear inadequate […] notably in the absence of well-defined and objectively verifiable performance metrics” (ibidem). This failure is particularly evident in the moderation of content in languages other than English or pertaining to specific local and regional contexts. Suspicions that “insufficient resources [are] dedicated to mitigation measures” (ibidem) focus on the role of Community Notes, the collaborative feature that allows users to “leave notes on any post and, if enough contributors from different points of view rate that note as helpful, the note will be publicly shown on a post” (X Help Center).
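
To make this concern more concrete, the sketch below illustrates the “bridging” logic on which Community Notes rests: a note becomes publicly visible only if raters holding different points of view agree that it is helpful. This is a deliberately simplified Python illustration, not X’s actual implementation (the open-sourced production algorithm estimates helpfulness via matrix factorization over the full rating matrix rather than explicit viewpoint clusters); the thresholds and the pre-assigned viewpoint clusters are assumptions made for clarity.

```python
# A minimal sketch of a "bridging" visibility rule in the spirit of
# Community Notes. An illustrative simplification, not X's actual algorithm.
from collections import defaultdict


def note_is_shown(ratings, min_ratings=5, min_support=0.7):
    """Decide visibility from (viewpoint_cluster, rated_helpful) pairs.

    The viewpoint clusters and thresholds are assumptions made for clarity.
    """
    if len(ratings) < min_ratings:
        return False  # too few ratings to decide either way
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    # Require "helpful" agreement from at least two distinct viewpoint
    # clusters, so one-sided consensus is not enough to surface a note.
    supportive = [
        c for c, votes in by_cluster.items()
        if sum(votes) / len(votes) >= min_support
    ]
    return len(supportive) >= 2


# Unanimous support from a single cluster does not surface the note...
print(note_is_shown([("A", True)] * 6))                      # False
# ...while cross-cluster agreement does.
print(note_is_shown([("A", True)] * 3 + [("B", True)] * 3))  # True
```

Framed in these terms, the Commission’s doubt is whether such a volunteer-driven visibility rule can compensate for a suspected lack of dedicated moderation staff, especially for content in less widely spoken languages, where few raters are available.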

Secondly, the company would systematically fail to process notices efficiently, to “take decisions in a diligent, non-arbitrary, and objective manner” and to answer “without undue delay” to the “notices […] of allegedly illegal content hosted on X” (Commission decision, 2023): X’s content moderation would therefore be insufficient even when the input comes from users. Thirdly, the recent possibility of purchasing the blue checkmark that once marked an account as verified is considered deceptive and manipulative towards users of X, who “are led to interpret […] checkmarks as an indication that they are interacting with an account whose identity has been verified or is otherwise more trustworthy, when in fact no such verification or confirmation of trustworthiness appear to have taken place” (ibidem). Fourthly, the company did not abide by the transparency requirements for online advertising, “by not providing searchable and reliable tools that allow multicriteria queries and application programming interfaces to obtain all the information on such advertisements as required by Article 39(2)” of the DSA (ibidem). Lastly, the VLOP seems “to have denied access to data that are publicly accessible on X’s online interface to qualified researchers” (ibidem) by imposing costs for using the API and prohibiting the scraping of publicly accessible data, in violation of Article 40(12) of the DSA.
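
The fourth area of concern becomes easier to picture by imagining what compliance with Article 39(2) would look like in practice: a public ad repository queryable by several criteria at once (advertiser, country of display, time period) through a documented API. The following Python snippet is a purely hypothetical sketch of such a query; the base URL, endpoint and parameter names are invented for illustration and do not correspond to any interface X currently offers.

```python
# Hypothetical sketch of the kind of multicriteria query over a public ad
# repository that Article 39(2) DSA envisages. All names are placeholders.
import requests


def search_ad_repository(base_url, **criteria):
    """Query a (hypothetical) Article 39 ad archive with several criteria."""
    response = requests.get(f"{base_url}/ads", params=criteria, timeout=30)
    response.raise_for_status()
    return response.json()


# Example: all ads paid for by a given advertiser, shown in Germany during
# October 2023 (advertiser, URL and parameter names are invented).
ads = search_ad_repository(
    "https://ads.example-vlop.eu/api/v1",
    advertiser="ACME GmbH",
    country="DE",
    date_from="2023-10-01",
    date_to="2023-10-31",
)
```

The Commission’s suspicion is precisely that X offers no searchable tool of this kind allowing “multicriteria queries”.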

As these excerpts from the Commission decision highlight, the proceedings against X were initiated as a follow-up to the platform’s response to the RFI submitted on October 12th 2023: in particular, the problematic issues regarding the handling of notices of illegal content, the measures to mitigate systemic risks and the moderation policy, indicated in the press release about the RFI, correspond to the main areas of concern addressed by the decision. The direct link between the RFI and the decision is also underlined by points 4 and 5 of the latter: while point 4 refers to the submission of the RFI and the response provided by X, point 5 states that, according to Article 66(1) of the DSA, “the Commission may initiate proceedings in view of the possible adoption of decisions […] in respect of the relevant conduct by the provider of the very large online platform or of the very large online search engine that the Commission suspects of having infringed any of the provisions” (ibidem).

3. What to expect from the next steps

The areas of concern addressed by the Commission decision include issues that emerge both from the bottom-up interaction with the platform and from the top-down governance of its socio-technical ecosystem: data access by researchers and users’ notices of illegal content pertain to the former, while the identification of systemic risks and the measures adopted to mitigate them pertain to the latter. The co-existence of top-down and bottom-up perspectives is not only motivated by the necessity to address issues coming from different sources (e.g. complaints filed by recipients of the service versus a letter from the Commission to the platform requesting clarifications); it also serves to investigate different aspects of the same area of concern (e.g. the functioning of Community Notes to support content moderation, whose effectiveness is questioned both by users and by the Commission, as it may mask a lack of specialized personnel). The structure and content of the decision indicate that, now that the DSA is being enforced, VLOPs and very large online search engines (VLOSEs) may no longer be able to treat prima facie compliance as separate from the obligation to provide reasonably prompt and reliable answers to the issues raised by individual users and researchers, traditionally the disadvantaged side in interactions with the platform.

The investigation initiated by the Commission decision aims to establish whether the failures outlined above “would constitute infringements of Articles 34(1), 34(2) and 35(1)” of the DSA as regards the first area of concern (inadequate assessment and mitigation of systemic risks), of Articles 16(5) and 16(6) as regards the second (handling of notices of illegal content), of Article 25(1) as regards the third (deceptive design of checkmarks), of Article 39 as regards the fourth (lack of tools to ensure ad transparency) and of Article 40 as regards the fifth (denied data access to researchers). According to the press release about the Commission decision, the next steps of the investigation will include gathering further “evidence, for example by sending additional requests for information, conducting interviews or inspections”. Following the opening of formal infringement proceedings, the Commission will be empowered “to take further enforcement steps, such as interim measures, and non-compliance decisions” and “to accept any commitment made by X to remedy on the matters subject to the proceeding”.

Interestingly, the “DSA does not set any legal deadline” for the end of the proceedings, whose duration will depend on several “factors, including the complexity of the case, the extent to which the company concerned cooperate[s] with the Commission and the exercise of the rights of defence”. The responsibility for carrying out the investigation lies with the Commission alone, whose decision “relieves Digital Services Coordinators, or any other competent authority of EU Member States, of their powers to supervise and enforce the DSA in relation to the suspected infringements of Articles 16(5), 16(6) and 25(1)”. In this context, it is difficult to hypothesize a timeline for the conclusion of the proceedings against X, as this investigation will probably be accompanied by similar ones originating from the RFIs submitted to other VLOPs and VLOSEs.

4. The intertwined role of ECAT and DG Connect

Given the amount of investigative work on which the Commission will embark in the coming months, it is useful to focus on the entities involved in this process: on the one side, the European Centre for Algorithmic Transparency (ECAT), within the Joint Research Centre (JRC), and, on the other, the DSA enforcement team, within Directorate F (Platforms Policy and Enforcement) of DG Connect. While the DSA enforcement team should have the responsibility of enforcing the regulation, some crucial investigative procedures, like the on-site inspections at the platforms’ premises, would be carried out by ECAT: it is therefore unclear how the collaboration and the division of duties between the two institutional bodies will be managed.

On the one hand, the mission of ECAT is to “provide technical assistance and practical guidance for the enforcement of the DSA” and “research the long-running impact of algorithmic systems to inform policy-making and contribute to the public discussion”: its activity will include inspecting and testing the algorithmic systems used by VLOPs to understand their functioning, studying their “short, mid and long-term societal impact” and developing “practical methodologies towards fair, transparent and accountable algorithmic approaches, with a focus on recommender systems”. On the other hand, the DSA enforcement team will be composed of “multi-disciplinary teams [of legal, policy and technology specialists] dealing with designated services and co-operating with regulatory authorities in the Member States”. These teams “will engage with stakeholders and gather knowledge and evidence to support the application of the DSA and to detect, investigate and analyse potential infringements of the DSA” (European Commission website).

As can be seen, the information provided by the Commission is currently not sufficient to understand how the work of ECAT and that of the DSA enforcement team will specifically intertwine: in fact, both entities focus on gathering knowledge and evidence about VLOPs and VLOSEs to support the enforcement of the DSA, to the point that ECAT features an “Algorithm inspections & DSA enforcement” team, whose work overlaps even nominally with that carried out at DG Connect. For example, the profile of a technology specialist in the DSA enforcement team at DG Connect is, if not similar, at least complementary to that of an inspector at ECAT: a call for applications for 40 positions in the enforcement team at DG Connect specifies that technology specialists “will work in seamless cooperation with the European Centre for Algorithmic Transparency (ECAT) and facilitate interactions with technical teams at very large online platforms and search engines”.

5. Open questions

The application of the DSA is proceeding at a fast pace: on December 20th 2023, the Commission designated three pornographic platforms as VLOPs, thereby introducing more stringent requirements for them; on January 18th 2024, it sent RFIs to seventeen VLOPs and VLOSEs focusing on the measures they have taken to ensure data access for eligible researchers; on February 19th 2024, it announced the opening of proceedings against TikTok. While the DSA has been fully applicable to every online platform since February 17th 2024, the main questions about the modalities and timeline of its application, and about the actions that platforms will need to take to ensure compliance, remain open. In particular: which regulatory mechanism undergirds the progression from an RFI to the initiation of proceedings against a platform? On the basis of which criteria can the Commission move from RFIs to further and more invasive investigative procedures, such as inspections at the platform’s premises? Given that there is no legal deadline for the end of the proceedings, how can a platform obtain an estimate of their duration? The DSA has the potential to change the interaction between digital companies and European citizens by enhancing public accountability and user empowerment. However, for this potential to be realized, regulatory principles need to be translated into viable and transparent enforcement practices.

Matteo Fabbri
PhD candidate in Cybersecurity at IMT School for Advanced Studies and the University of Florence

I am a PhD candidate in Cybersecurity at IMT School for Advanced Studies and the University of Florence, Italy. My research, situated within the ethics of AI, concerns the impact of digital nudging through recommender systems on individuals’ decision-making. After obtaining a bachelor’s degree in Philosophy from the University of Bologna (2020), I completed an MSc in Social Science of the Internet at the University of Oxford (2022), an MA in Sociology and Global Challenges at the University of Florence (2022) and a Diploma di Licenza in Political and Social Sciences at Scuola Normale Superiore (2022). Moreover, I spent periods as a visiting student at the University of Warwick, the Ecole Normale Supérieure in Paris (ENS-PSL) and Imperial College London, and worked as an intern in AI ethics and governance at BMW AG, Munich.
