Advisor to Europe’s top court favors regional limit to ‘right to be forgotten’

Google will be cheered by the view of an influential advisor to Europe’s top court on the territorial scope of the so-called ‘right to be forgotten’.

Since a 2014 Court of Justice decision, search engines operating in Europe have been required to accept and review requests from private citizens to delist outdated or irrelevant search results associated with their name, balancing decisions against any public right to know.

Google has been carrying out these delistings on regional European subdomains, rather than globally. But in 2016 the French data protection agency, CNIL, fined it for failing to delist results globally — arguing that regional delistings were not strong enough to comply with the law.

Google filed an appeal against the CNIL’s order for global delisting, and a French court later decided to refer questions on the scope of the rtbf to the Court of Justice of the EU.

The CJEU heard the case last fall, with Google arguing that global delistings would damage free speech, and enable authoritarian regimes to get stuff they don’t like scrubbed off the Internet.

On the flip side, those who advocate for global delistings argue that without them there’s a trivial workaround to the rtbf: simply searching via a non-European domain such as Google.com.

The intent of the rtbf ruling was never to remove information from the Internet, but rather to allow old and erroneous data to sediment naturally (rather than be artificially kept in public view by algorithms). And given that most web users don’t look past the first page (or even the first few pages) of search results, regional delistings seem a fair enough balance, at least as things stand.

That balanced view is also now the published opinion of an influential advisor to Europe’s top court.

Advocate general Maciej Szpunar’s opinion, released today ahead of the court making its own judgement on the matter, proposes that the rtbf should be limited in scope to local subdomains, rather than applied globally as the French data protection agency has been pushing for several years.

In a press release summarizing the AG’s opinion, the court writes that Szpunar believes “a distinction must be made depending on the location from which the search is performed” and that “[h]e is therefore not in favour of giving the provisions of EU law such a broad interpretation that they would have effects beyond the borders of the 28 Member States”.

“[I]f worldwide de-referencing were permitted, the EU authorities would not be able to define and determine a right to receive information, let alone balance it against the other fundamental rights to data protection and to privacy,” it continues.

“This is all the more so since such a public interest in accessing information will necessarily vary from one third State to another depending on its geographic location. There would be a risk, if worldwide de-referencing were possible, that persons in third States would be prevented from accessing information and, in turn, that third States would prevent persons in the EU Member States from accessing information.”

That said, the AG is not ruling out the possibility that “in certain situations” a search engine operator may need to delist something “at the worldwide level”.

Rather, the court emphasizes, “he takes the view that the situation at issue in the present case does not justify this”.

So his current advice to the court is summarized as follows:

… the search engine operator is not required, when acceding to a request for de-referencing, to carry out that de-referencing on all the domain names of its search engine in such a way that the links in question no longer appear, irrespective of the location from which the search on the basis of the requesting party’s name is performed.

At the same time the AG emphasizes that — for valid requests — search engines must “take every measure available to it to ensure full and effective de-referencing within the EU, including by use of the ‘geo-blocking’ technique, in respect of an IP address deemed to be located in one of the Member States, irrespective of the domain name used by the internet user who performs the search”.
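
For the technically curious, the practical difference between the two scopes is easy to see in pseudo-implementation terms. Below is a minimal Python sketch (entirely hypothetical names and a stubbed geo-IP lookup; not Google’s actual code) contrasting domain-scoped delisting, which a searcher can dodge by switching domains, with the geo-scoped filtering the AG describes:

```python
# Illustrative contrast between the two delisting scopes at issue in
# this case. All names here are hypothetical; this is not Google's code.

EU_COUNTRY_CODES = {"FR", "DE", "IE", "PL"}        # abridged for brevity
EU_DOMAIN_SUFFIXES = (".fr", ".de", ".ie", ".pl")  # likewise abridged

# (name, url) pairs delisted after accepted rtbf requests
delisted = {("jane doe", "https://example.com/old-story")}

def lookup_country(ip_address: str) -> str:
    """Stub: a real system would query a geo-IP database here."""
    return "FR"

def domain_scoped_results(name: str, results: list[str], domain: str) -> list[str]:
    """Google's approach to date: suppress links only on EU subdomains,
    so searching on google.com instead is a trivial workaround."""
    if not domain.endswith(EU_DOMAIN_SUFFIXES):
        return results
    return [url for url in results if (name, url) not in delisted]

def geo_scoped_results(name: str, results: list[str], client_ip: str) -> list[str]:
    """The approach the AG endorses: suppress links for any IP deemed
    to be in a Member State, irrespective of the domain searched."""
    if lookup_country(client_ip) not in EU_COUNTRY_CODES:
        return results
    return [url for url in results if (name, url) not in delisted]
```

The key design difference is the predicate: the first gates on the property of the site being queried, the second on the (inferred) location of the person querying it.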

While the AG’s opinion is not binding on the CJEU, the court tends to take a similar view, so it’s a good indicator of where the final judgement will land, likely in three to six months’ time.

We reached out to Google for comment and a spokesperson emailed us the following statement, attributed to Peter Fleischer, its senior privacy counsel:

Public access to information, and the right to privacy, are important to people all around the world, as demonstrated by the number of global human rights, media and other organisations that have made their views known in this case. We’ve worked hard to ensure that the right to be forgotten is effective for Europeans, including using geolocation to ensure 99% effectiveness.

The search giant, which remains massively dominant in the European market, publishes a transparency report detailing the proportion of requests it accepts and declines, which shows both steady growth in requests and that Google continues to grant only a minority of delisting requests.

Since the original 2014 rtbf decision, the EU has doubled down on the right — extending the principle by baking it into an updated data protection framework, the GDPR, which came into force in May last year and gives EU citizens rights to ask data controllers to rectify or delete their personal information.

Europe to push for one-hour takedown law for terrorist content

The European Union’s executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today for all websites to monitor uploads in order to be able to quickly remove terrorist content.

The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It’s now proposing to turn that into law, in a bid to stop violent propaganda spreading over the Internet.

For now the ‘rule of thumb’ regime continues to apply. But the Commission is putting meat on the bones of its thinking, fleshing out a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

As per usual EU processes, the Commission’s proposal would need to gain the backing of Member States and the EU parliament before it could be cemented into law.

One major point to note here is that existing EU law does not allow Member States to impose a general obligation on hosting service providers to monitor the information that users transmit or store. But in the proposal the Commission argues that, given the “grave risks associated with the dissemination of terrorist content”, states could be allowed to “exceptionally derogate from this principle under an EU framework”.

So it’s essentially suggesting that Europeans’ fundamental rights might not, in fact, be so fundamental. (Albeit, European judges might well take a different view — and it’s very likely the proposals could face legal challenges should they be cast into law.)

What is being suggested would also apply to any hosting service provider that offers services in the EU — “regardless of their place of establishment or their size”. So, seemingly, not just large platforms, like Facebook or YouTube, but — for example — anyone hosting a blog that includes a free-to-post comment section.

Websites that fail to promptly take down terrorist content would face fines — with the level of penalties being determined by EU Member States (Germany has already legislated to enforce social media hate speech takedowns within 24 hours, setting the maximum fine at €50M).

“Penalties are necessary to ensure the effective implementation by hosting service providers of the obligations pursuant to this Regulation,” the Commission writes, envisaging the most severe penalties being reserved for systematic failures to remove terrorist material within one hour. 

It adds: “When determining whether or not financial penalties should be imposed, due account should be taken of the financial resources of the provider.” So — for example — individuals with websites who fail to moderate their comment section fast enough might not be served the very largest fines, presumably.

The proposal also encourages platforms to develop “automated detection tools” so they can take what it terms “proactive measures proportionate to the level of risk and to remove terrorist material from their services”.

So the Commission’s continued push for Internet pre-filtering is clear. (This is also a feature of its copyright reform, which is being voted on by MEPs later today.)

Albeit, it’s not alone on that front. Earlier this year the UK government went so far as to pay an AI company to develop a terrorist propaganda detection tool that used machine learning algorithms trained to automatically detect propaganda produced by the Islamic State terror group — with a claimed “extremely high degree of accuracy”. (At the time it said it had not ruled out forcing tech giants to use it.)
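
Neither the Commission nor the UK government has published technical detail, but “proactive detection tools” of this kind are typically text classifiers. Purely as an illustration, here is a toy sketch using scikit-learn with placeholder data; it is emphatically not the actual system, whose labelled training corpus (and claimed accuracy) is the genuinely hard and contested part:

```python
# Toy illustration of an automated detection tool as a text classifier.
# Not the Commission's or the UK's actual system; placeholder data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "join our cause and commit acts of violence",  # placeholder positive
    "recipe for a quiet sunday roast",             # placeholder negative
]
labels = [1, 0]  # 1 = flag for review, 0 = leave alone

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def triage(upload_text: str, threshold: float = 0.9) -> bool:
    """Flag an upload for human review only when the classifier is
    confident; a high threshold trades recall for fewer erroneous
    removals, which is exactly the trade-off critics worry about."""
    return model.predict_proba([upload_text])[0][1] >= threshold
```

However the threshold is set, such a classifier will inevitably misfire on borderline content, which is why the proposal’s complaint and review mechanisms (discussed below) matter.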

What is terrorist content for the purposes of this proposal? The Commission refers to an earlier EU directive on combating terrorism, which defines the material as “information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.

And on that front you do have to wonder whether, for example, some of U.S. president Donald Trump’s comments last year after the far-right rally in Charlottesville, where a counter-protestor was murdered by a white supremacist (comments in which he suggested there were “fine people” among those same murderous and violent white supremacists), might fall under that ‘glorifying the commission of terrorist offences’ umbrella, should, say, someone repost them to a comment section that was viewable in the EU…

Safe to say, even terrorist propaganda can be subjective. And the proposed regime will inevitably encourage borderline content to be taken down — having a knock-on impact upon online freedom of expression.

The Commission also wants websites and platforms to share information with law enforcement and other relevant authorities and with each other — suggesting the use of “standardised templates”, “response forms” and “authenticated submission channels” to facilitate “cooperation and the exchange of information”.

It tackles the problem of what it refers to as “erroneous removal” — i.e. content that’s removed after being reported or erroneously identified as terrorist propaganda but which is subsequently, under requested review, determined not to be — by placing an obligation on providers to have “remedies and complaint mechanisms to ensure that users can challenge the removal of their content”.

So platforms and websites will be obligated to police and judge speech, which they already do, of course, but the proposal doubles down on turning online content hosts into judges and arbiters of that same content.

The regulation also includes transparency obligations on the steps being taken against terrorist content by hosting service providers — which the Commission claims will ensure “accountability towards users, citizens and public authorities”. 

Other perspectives are of course available… 

The Commission envisages all taken down content being retained by the host for a period of six months so that it could be reinstated if required, i.e. after a valid complaint — to ensure what it couches as “the effectiveness of complaint and review procedures in view of protecting freedom of expression and information”.

It also sees the retention of takedowns helping law enforcement — meaning platforms and websites will continue to be co-opted into state law enforcement and intelligence regimes, getting further saddled with the burden and cost of having to safely store and protect all this sensitive data.

(On that the EC just says: “Hosting service providers need to put in place technical and organisational safeguards to ensure the data is not used for other purposes.”)
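
In engineering terms, the proposal implies something like a quarantine store: removed content held for six months, reinstatable if a complaint succeeds, and purged once the window lapses. The sketch below is a minimal illustration under those assumptions (hypothetical names; the regulation prescribes no particular design):

```python
# Illustrative sketch of the retention regime the proposal describes:
# removed content is quarantined for six months, can be reinstated if
# a complaint succeeds, and is purged once the window lapses.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

RETENTION = timedelta(days=182)  # roughly six months

@dataclass
class TakedownRecord:
    content_id: str
    content: bytes
    removed_at: datetime = field(default_factory=datetime.utcnow)

class QuarantineStore:
    def __init__(self) -> None:
        self._records: dict[str, TakedownRecord] = {}

    def quarantine(self, content_id: str, content: bytes) -> None:
        """Hold removed content instead of deleting it outright."""
        self._records[content_id] = TakedownRecord(content_id, content)

    def reinstate(self, content_id: str) -> bytes | None:
        """Return the content if a complaint succeeds within the window."""
        record = self._records.pop(content_id, None)
        if record and datetime.utcnow() - record.removed_at <= RETENTION:
            return record.content
        return None

    def purge_expired(self) -> None:
        """Delete anything past the retention window."""
        now = datetime.utcnow()
        self._records = {
            cid: rec for cid, rec in self._records.items()
            if now - rec.removed_at <= RETENTION
        }
```

Note that the safeguarding burden the article describes lives in how such a store is secured and access-controlled, none of which the proposal specifies.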

The Commission would also create a system for monitoring the monitoring it’s proposing platforms and websites undertake, thereby further extending the proposed bureaucracy. It says it would establish a “detailed programme for monitoring the outputs, results and impacts” within one year of the regulation being applied; report on the implementation and the transparency elements within two years; and evaluate the entire functioning of the regime four years after it comes into force.

The executive body says it consulted widely ahead of forming the proposals — including running an open public consultation, carrying out a survey of 33,500 EU residents, and talking to Member States’ authorities and hosting service providers.

“By and large, most stakeholders expressed that terrorist content online is a serious societal problem affecting internet users and business models of hosting service providers,” the Commission writes. “More generally, 65% of respondents to the Eurobarometer survey considered that the internet is not safe for its users and 90% of the respondents consider it important to limit the spread of illegal content online.

“Consultations with Member States revealed that while voluntary arrangements are producing results, many see the need for binding obligations on terrorist content, a sentiment echoed in the European Council Conclusions of June 2018. While overall, the hosting service providers were in favour of the continuation of voluntary measures, they noted the potential negative effects of emerging legal fragmentation in the Union.

“Many stakeholders also noted the need to ensure that any regulatory measures for removal of content, particularly proactive measures and strict timeframes, should be balanced with safeguards for fundamental rights, notably freedom of speech. Stakeholders noted a number of necessary measures relating to transparency, accountability as well as the need for human review in deploying automated tools.”

Google back in court arguing against a global ‘right to be forgotten’

Google’s lawyers are in Europe’s top court today arguing against applying the region’s so-called ‘right to be forgotten’ ruling globally, across all of its domains, rather than only geo-limiting delistings to European subdomains (as it does now).

The original rtbf ruling was also a European Court of Justice (ECJ) decision.

Back in 2014 the court ruled search engines must respect Europeans’ privacy rights, and — on request — remove erroneous, irrelevant and/or outdated information about a private citizen.

Google was not at all happy with the judgement, and kicked off a major lobbying effort against it — enlisting help from free speech champions like Wikipedia’s Jimmy Wales.

But it also complied with the ruling, after a fashion (after all, it is EU law), applying delistings on local European domains but not across Google.com. Which means EU law can be trivially circumvented by searching from a non-European domain.

That has displeased European data protection agencies, who say Google is flouting the law and EU citizens’ fundamental rights are not being respected. France’s data protection agency challenged Google’s approach. In May 2016 it ordered the company to make delistings global, and fined it €100,000 for non-compliance.

Google appealed and last year the French court decided to refer questions to the ECJ for a ruling on the scope of the delisting — saying it “poses a serious difficulty in interpreting the Law of the European Union”.

And so now we’re back in Europe’s top court with Google’s lawyers arguing against making delistings global — contending it would damage free speech, and enable authoritarian regimes to get stuff they don’t like scrubbed off the Internet.

“We — and a wide range of human rights and media organizations, and others, like Wikimedia — believe that this runs contrary to the basic principles of international law: No one country should be able to impose its rules on the citizens of another country, especially when it comes to linking to lawful content,” wrote Google’s Kent Walker, in 2015. “Adopting such a rule would encourage other countries, including less democratic regimes, to try to impose their values on citizens in the rest of the world.”

The extraterritoriality problem was also chewed over by Google’s self-appointed ‘advisory council’ on the rtbf issue at that time.

And while the majority of this Google-appointed body aligned with Google’s view that there should not be global delistings, there was one dissenting voice: the German politician Sabine Leutheusser-Schnarrenberger, who wrote at the time: “The internet is global, the protection of the user’s rights must also be global. Any circumvention of these rights must be prevented.”

Google and CNIL declined to comment on today’s hearing.

A second (separate) rtbf case also being heard by the ECJ today concerns whether search engines should have to remove references to any sensitive personal information about individuals. Which would represent a significant expansion of the right, if granted.

However the case is not supported by EU data protection agencies. The individuals bringing the case had their application rejected by CNIL, leading them to pursue a legal appeal.

The key point on this case is that the current right to delist is not absolute; the rtbf only applies to private individuals, not to public figures (e.g. politicians and journalists); and also only applies where the information in question is outdated or irrelevant. So it is bounded and balanced, and absolutely does not apply to every individual and every piece of sensitive personal data.

The current implementation of the rtbf also means Google must review requests, to balance the public right to know against individual privacy rights.
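
Those bounds can be read as a rough decision rule. The sketch below is purely illustrative (hypothetical field names; real delisting reviews are case-by-case human judgments, not code), but it captures the triage the ruling describes:

```python
# Sketch of the rtbf triage criteria described above -- illustrative
# only; actual reviews are nuanced human judgment calls.
from dataclasses import dataclass

@dataclass
class DelistingRequest:
    requester_is_public_figure: bool   # e.g. politicians, journalists
    info_outdated_or_irrelevant: bool  # erroneous, old, no longer relevant
    strong_public_interest: bool       # the 'public right to know' side

def should_delist(req: DelistingRequest) -> bool:
    """The right is bounded: private individuals only, outdated or
    irrelevant information only, balanced against the public interest."""
    if req.requester_is_public_figure:
        return False
    if not req.info_outdated_or_irrelevant:
        return False
    return not req.strong_public_interest
```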

The company actually denies the majority of requests, i.e. those it does not believe fall under the scope of the law (Google publishes a Transparency Report on delistings, showing it has, to date, agreed to delist less than half of the requests it receives).

Individuals denied delisting can appeal to a national data protection agency, and indeed challenge a DPA decision in court — as in this case.

But Google’s lawyers said today that only a tiny fraction of rtbf request decisions are ever appealed, and further claimed its decisions largely align with those of DPAs…

There’s no fixed timeline for the ECJ to hand down a ruling on the two cases but a spokeswoman for the court told us that on average the opinion of the Advocate General comes 2 to 4 months after the hearing, and the Court’s judgment around 3 to 6 months after that. So a court verdict does not look likely before 2019.

Since the 2014 judgement, the EU has doubled down on the rtbf — extending the principle by baking it into its recently updated data protection framework, the GDPR, which gives EU citizens rights to ask data controllers to rectify or delete their personal information, for example.