Being Misled by a Google Ad: Who is Responsible?


Introduction

Two recent Dutch district court decisions contemplate the responsibilities platforms have as intermediaries between advertisers and consumers. The cases decide disputes between, on the one hand, Stichting Vladimir, a non-profit for victims of misinformation, and a number of Dutch media personalities and, on the other hand, Google and Twitter respectively. The litigation concerns a number of malicious advertisements for cryptocurrencies and other financial products that used the portraits of some of the media personalities without their permission (see this entertaining video of the plaintiff confronting the advertisers). The District Court had to contemplate whether the platforms were liable for the damage caused by the products promoted in the advertisements and whether they moderated sufficiently to prevent this type of content from appearing. The cases differ on the advertising service offered: Google acts as an intermediary matching advertisers with available space online, whereas Twitter offers advertising space on its own platform. The decisions do not hold revolutionary developments for intermediary liability, but they raise questions about the future-proofness of intermediary liability for these types of advertisements and about a possible parallel with the famous Glawischnig-Piesczek decision. These questions are addressed after a brief recap of the decisions.

Case I: Rechtbank Amsterdam 18 May 2022, Stichting Vladimir c.s. v Google c.s.

Google is predominantly known for its search engine, but in essence it is an advertising company: almost 89% of its revenue comes from advertising. It sells advertisements via search advertising and display advertising. In search advertising, it places ads directly alongside the search results from Google Search. In display advertising, Google acts as an intermediary: through its ‘Google Display Network’ it matches advertisers with publishers that have space to place ads, for example on websites or in mobile phone applications. Google also offers help with creating ads: advertisers can upload their own ads, but they can also upload separate parts of an advertisement and have Google create the rest through an automated process. A variant of this is the native ad, in which the advertisement adjusts itself to the style of the website on which it is published. The Google Display Network was used to create malicious advertisements for cryptocurrencies and other investment opportunities. One of the plaintiffs recalls how, upon clicking such an advertisement, he received an anonymous phone call asking him to install translation software. That software was subsequently used to take control of his computer and steal money from his savings account.

Stichting Vladimir alleges that Google acted unlawfully against one of its principals by showing him malicious advertisements, and it claims compensation for the damage incurred as a consequence. The court deliberates that Google does not create the malicious advertisements; it merely acts as an intermediary. The mere possibility of using automated means to create native ads does not make Google, as an intermediary, liable. Whether Google does enough to prevent malicious use of its advertising services depends on its awareness of the advertisers’ illegal acts, how onerous preventive measures would be, what measures it has taken, the possibility that users misled by the advertisements were themselves negligent, and the chance that damage arises from that negligence. Google takes responsibility for preventing misleading advertisements: it prohibits clickbaiting and cloaking, techniques used to mislead consumers, and enforces those prohibitions through proactive and reactive automated detection methods using artificial intelligence. With those methods, Google has satisfied its due diligence in monitoring, the court finds.

Further, the court finds that it is the subsequent steps taken by the consumer after clicking the advertisement that caused the damage, not the misleading advertisement in the first place (this approach to the conditio sine qua non seems short-sighted; after all, without the misleading advertisement the user would not have made the investment). Nor does the court find that labelling future content as ‘advertisement’ would prevent future damage, as that damage only arises after the user has been lured into the swindle by the malicious ad.

Case II: Rechtbank Amsterdam 18 May 2022, Jort Kelder v Twitter

Twitter has a different advertising structure. On its platform, it offers advertisers the opportunity to create ‘promoted tweets’. These appear on the timelines of the account’s followers, but also on the timelines of accounts that match the selection criteria; the advertiser can choose the territory in which the tweet appears, for how long, and which target group it reaches. Twitter users used the portrait of a Dutch media personality, Jort Kelder, in malicious promoted tweets advertising cryptocurrencies and other investment opportunities. Kelder claims that Twitter acted unlawfully by displaying advertising using his portrait and requests damages.

The court finds that the malicious advertisements are indeed unlawful. It does question whether Twitter, as an intermediary, can be liable for content it did not create. Similar to the Google case, the court finds that merely providing the opportunity to advertise does not qualify as an unlawful act or negligence. The court is further convinced that Twitter fulfilled its due diligence to prevent malicious advertising: Twitter has a number of automated systems in place, but the advertisements used cloaking and undetectable language to circumvent those systems. As such, Twitter is not liable for the damages incurred by Kelder.

Moving Forward

So where does this leave us? In an environment where social media platforms give malicious advertising unprecedented reach, aided by the relative anonymity of the advertising process, it is obvious that the digital advertising environment harbours more dangers than the analogue world. The Dutch District Court decisions do not mark a revolutionary legal development in platform liability, but they do raise questions about how to tackle similar problems in the future. Two of these questions are briefly addressed below:

Monitoring Obligations for Equivalent Content?

The first question is how these judgments relate to the well-known (and perhaps notorious) Glawischnig-Piesczek v Facebook decision of the Court of Justice of the European Union. In that decision, the Court found that Article 15(1) of the e-Commerce Directive does not preclude a court from ordering a platform to remove content which is identical or equivalent to content previously held unlawful, as long as the order does not include active monitoring obligations for content beyond the identical or equivalent level. In fact, for injunctions to be effective in defamation cases, it is necessary that they extend to the monitoring of similar types of content, because otherwise circumvention is inevitable by simply recombining elements of the original insult (para 46; Opinion of the A-G, para 70). Monitoring obligations could therefore extend to equivalent content, a departure from earlier case law on trademark protection in L’Oréal, which only allowed for identical content. The nature of the Google Display Network’s advertisement structure, as discussed above, raises the question whether a similar monitoring obligation could be extended to malicious advertisements. In this case, the malicious advertisements all used similar elements and portraits, which were subsequently assembled into an ad through Google’s automated process. This resembles the defamation case in Glawischnig-Piesczek: because elements of the insulting remarks can be shuffled to create a similar insult, it is necessary to monitor for equivalent content for the injunction to be effective. The District Court does not follow this route: the current monitoring practices by Google were deemed enough. It would have been an exciting avenue for the District Court to take a ‘Glawischnig-Piesczek-esque’ approach to the moderation of advertisements by an intermediary, especially because there are some similarities between the automated creation of ads and defamation cases. This would create clarity about the responsibility that intermediaries, in this case one of the world’s largest companies, have to take for the content they help intermediate for commercial purposes.

How Do We Judge the Monitoring Requirements for Platforms?

This raises a follow-up question: to what degree is this monitoring requirement one of result or one of due diligence? The answer depends on the circumstances of the case, especially the scale on which the illegal content is spread. In these cases, the district court finds that Google and Twitter fulfilled their due diligence with the monitoring systems they currently have in place: there was no widespread distribution of the illegal advertisements, and upon notice they were taken down relatively swiftly, thus abiding by the standards expected under Article 14(6) of the Digital Services Act. In these cases, that is a reasonable judgment. However, content monitoring systems have long been opaque to consumers, academics (perhaps this will change in the future: see DSA Article 31(2) on access to data for vetted researchers) and policymakers alike. They are complex systems whose impact is often not fully visible. How do we assess whether a due diligence requirement is met when the monitoring systems themselves are opaque? And who would be equipped to make that assessment? Can this assessment go beyond the self-assessment mandated by Article 26 of the DSA? The Digital Services Act provides us with a framework for assessing the moderation process through a risk assessment (due diligence) and the removal of illegal content (result). Under that framework, the Rechtbank Amsterdam seems to have made the right decision, finding that, once notified, the platforms removed this type of content in a timely manner.

Further, under the DSA framework, Google and Twitter would not be liable for damages under Article 5. However, the Rechtbank Amsterdam does not even consider that liability, since in its view the damage was not caused by the hosting of the information, but by the actions consumers subsequently took after clicking on the advertisement. That reasoning seems unnecessarily strained when liability could have been excluded under intermediary immunity regardless of this appreciation of the conditio sine qua non. It also shuts the door on potential future cases in which that immunity might not apply, for example when content is not removed expeditiously.

Future Regulation

A promoted tweet might not be labelled as an advertisement, but there are some clues as to its commercial nature. In an influencer economy, however, where ‘regular people’ use their content to promote goods and services, the line between regular and commercial content fades even further. In most cases, this is not problematic: after all, we choose whom we follow on social media (or at least cherish that illusion), and thus consent to some extent to being targeted with advertisements. But what happens when the promoted product turns out to be faulty or risky, such as cryptocurrencies? Worldwide, several attempts have been made to regulate influencers providing financial advice online, but this happens predominantly at the influencer level. As in the Dutch court decisions, the rationale is that influencers create the advertisement and should therefore make sure that consumers are warned of the dangers of the promoted product. However, with the fading line between user content and commercial content, it makes sense to increase regulation at the platform level as well, for example by requiring additional indicators when a piece of content promotes financial products. This would better protect unwitting consumers being advised by their favourite influencer on which cryptocurrency is going to ‘moon’ next.

Conclusion

Jort Kelder, the media personality whose portrait was misused in the malicious advertisements, was unhappy with the decisions of the Amsterdam District Court. To his mind, they confirmed the status quo of big tech firms being able to provide services without being accountable for the damage that follows. That view is not entirely correct: in the circumstances of the case, the District Court was reasonable in finding that, having removed the illegal content expeditiously, the platforms as advertising intermediaries could not be held liable for it. However, the suggestion that the damage is not caused by platforms displaying misleading ads seems misguided. Had the content not been removed expeditiously, it would have been worthwhile to consider liability for the damage caused by those advertisements. In an age where advertisements are more easily published and reach consumers more easily through social media, it makes sense to consider the responsibilities intermediaries have in monitoring an automated advertising process. The grey area between user content and advertising only adds to the need for this re-evaluation.

Jacob van de Kerkhof
PhD Candidate at Utrecht University

Jacob van de Kerkhof is a Ph.D. candidate with the Montaigne Centre at Utrecht University. His research focuses on the protection of freedom of expression on social media platforms.
