Europe agrees platform rules to tackle unfair business practices

The European Union’s political institutions have reached agreement over new rules designed to boost transparency around online platform businesses and curb unfair practices to support traders and other businesses that rely on digital intermediaries for discovery and sales.

The European Commission proposed a regulation for fairness and transparency in online platform trading last April. And late yesterday the European Parliament, Council of the EU and Commission reached a political deal on regulating the business environment of platforms, announcing the accord in a press release today.

The political agreement paves the way for adoption and publication of the regulation, likely later this year. The rules will apply 12 months after that point.

Online platform intermediaries such as ecommerce marketplaces and search engines are covered by the new rules if they provide services to businesses that are established in the EU and that offer goods or services to consumers located in the EU.

The Commission estimates there are some 7,000 such platforms and marketplaces which will be covered by the regulation, noting this includes “world giants as well as very small start-ups”.

Under the new rules, sudden and unexpected account suspensions will be banned — with the Commission saying platforms will have to provide “clear reasons” for any termination and also possibilities for appeal.

Terms and conditions must also be “easily available and provided in plain and intelligible language”.

There must also be advance notice of changes — of at least 15 days, with longer notice periods applying for more complex changes.

For search engines the focus is on ranking transparency. And on that front dominant search engine Google has attracted more than its fair share of criticism in Europe from a range of rivals (not all of whom are European).

In 2017, the search giant was also slapped with a $2.7BN antitrust fine related to its price comparison service, Google Shopping. The EC found Google had systematically given prominent placement to its own search comparison service while also demoting rival services in search results. (Google rejects the findings and is appealing.)

Given the history of criticism of Google’s platform business practices, and the multi-year regulatory tug of war over anti-competitive impacts, the new transparency provisions look intended to make it harder for a dominant search player to use its market power against rivals.

Changing the online marketplace

The importance of legislating for platform fairness was flagged by the Commission’s antitrust chief, Margrethe Vestager, last summer — when she handed Google another very large fine ($5BN) for anti-competitive behavior related to its mobile platform Android.

Vestager said then she wasn’t sure breaking Google up would be an effective competition fix, preferring to push for remedies to support “more players to have a real go”, as her Android decision attempts to do. But she also stressed the importance of “legislation that will ensure that you have transparency and fairness in the business to platform relationship”.

If businesses have legal means to find out why, for example, their traffic has stopped and what they can do to get it back that will “change the marketplace, and it will change the way we are protected as consumers but also as businesses”, she argued.

Just such a change is now in sight thanks to EU political accord on the issue.

The regulation represents the first such rules for online platforms in Europe and — commissioners contend — anywhere in the world.

“Our target is to outlaw some of the most unfair practices and create a benchmark for transparency, at the same time safeguarding the great advantages of online platforms both for consumers and for businesses,” said Andrus Ansip, VP for the EU’s Digital Single Market initiative, in a statement.

Elżbieta Bieńkowska, commissioner for internal market, industry, entrepreneurship, and SMEs, added that the rules are “especially designed with the millions of SMEs in mind”.

“Many of them do not have the bargaining muscle to enter into a dispute with a big platform, but with these new rules they have a new safety net and will no longer worry about being randomly kicked off a platform, or intransparent ranking in search results,” she said in another supporting statement.

In a factsheet about the new rules, the Commission specifies they cover third-party ecommerce market places (e.g. Amazon Marketplace, eBay, Fnac Marketplace, etc.); app stores (e.g. Google Play, Apple App Store, Microsoft Store etc.); social media for business (e.g. Facebook pages, Instagram used by makers/artists etc.); and price comparison tools (e.g. Skyscanner, Google Shopping etc.).

The regulation does not target every online platform. For example, it does not cover online advertising (or b2b ad exchanges), payment services, SEO services or services that do not intermediate direct transactions between businesses and consumers.

The Commission also notes that online retailers which sell only their own brand products and/or don’t rely on third party sellers on their own platform — such as brand retailers or supermarkets — are excluded from the regulation.

Where transparency is concerned, the rules require that regulated marketplaces and search engines disclose the main parameters they use to rank goods and services on their site “to help sellers understand how to optimise their presence” — with the Commission saying the aim is to support sellers without allowing gaming of the ranking system.
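To make that balance concrete, here’s a minimal sketch of what ‘disclose the main parameters, but not the recipe’ could look like in practice — the parameter names, weights and scoring function are entirely hypothetical, not drawn from the regulation or any real marketplace:

```python
# Hypothetical: what a marketplace might publish vs. keep private under the
# transparency rules. The main ranking parameters are named and described;
# the exact weights and scoring formula stay unpublished to deter gaming.
DISCLOSED_RANKING_PARAMETERS = {
    "relevance_to_query": "how closely the listing matches the search terms",
    "seller_rating": "aggregate buyer feedback for the seller",
    "delivery_speed": "estimated dispatch and shipping time",
    "price_competitiveness": "price relative to similar listings",
}

_PRIVATE_WEIGHTS = {
    "relevance_to_query": 0.5,
    "seller_rating": 0.2,
    "delivery_speed": 0.2,
    "price_competitiveness": 0.1,
}

def rank_score(listing_features: dict) -> float:
    """Internal scoring stays private; only the parameter list is public."""
    return sum(w * listing_features[name] for name, w in _PRIVATE_WEIGHTS.items())
```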

Some platform business practices will also require mandatory disclosure — such as for platforms that not only provide a marketplace for sellers but sell on their platform themselves, as does Amazon for example.

The ecommerce giant’s use of merchant data remains under scrutiny in the EU. Vestager revealed a preliminary antitrust probe of Amazon last fall — when she said her department was gathering information to “try to get a full picture”. She said her concern is that dual platforms could gain an unfair advantage as a consequence of access to merchants’ data.

And, again, the incoming transparency rules look intended to shrink that risk — requiring what the Commission couches as exhaustive disclosure of “any advantage” a platform may give to their own products over others.

“They must also disclose what data they collect, and how they use it — and in particular how such data is shared with other business partners they have,” it continues, noting also that: “Where personal data is concerned, the rules of the GDPR [General Data Protection Regulation] apply.”

(GDPR of course places further transparency requirements on platforms by, for example, empowering individuals to request any personal data held on them, as well as the reasons why their information is being processed.)

The platform regulation also includes new avenues for dispute resolution by requiring platforms to set up an internal complaint-handling system to assist business users.

“Only the smallest platforms in terms of head count or turnover will be exempt from this obligation,” the Commission notes. (The exemption limit is set at fewer than 50 staff and less than €10M revenue.)
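For illustration, the Commission’s exemption reduces to a simple two-condition check — this is a sketch based on the stated limits, with the ‘and’ combination and the names being our reading rather than the regulation’s final text:

```python
def exempt_from_complaint_handling(headcount: int, annual_turnover_eur: float) -> bool:
    """Smallest platforms -- fewer than 50 staff AND under EUR 10M turnover --
    are exempt from the internal complaint-handling obligation (our reading
    of the Commission's note, not the regulation's final wording)."""
    return headcount < 50 and annual_turnover_eur < 10_000_000

# A 30-person marketplace turning over EUR 4M would be exempt...
assert exempt_from_complaint_handling(30, 4_000_000)
# ...while crossing either threshold removes the exemption.
assert not exempt_from_complaint_handling(120, 4_000_000)
```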

It also says: “Platforms will have to provide businesses with more options to resolve a potential problem through mediators. This will help resolve more issues out of court, saving businesses time and money.”

But, at the same time, the new rules allow business associations to take platforms to court to stop any non-compliance — mirroring a provision in the GDPR which also allows for collective enforcement and redress of individual privacy rights (where Member States adopt it).

“This will help overcome fear of retaliation, and lower the cost of court cases for individual businesses, when the new rules are not followed,” the Commission argues.

“In addition, Member States can appoint public authorities with enforcement powers, if they wish, and businesses can turn to those authorities.”

One component of the regulation that appears to have been left up to EU Member States to tackle is penalties for non-compliance — with no clear regime of fines set out (as there is in GDPR). So the platform regulation may turn out to have rather more bark than bite, at least initially.

“Member States shall need to take measures that are sufficiently dissuasive to ensure that the online intermediation platforms and search engines comply with the requirements in the Regulation,” the Commission writes in a section of its factsheet dealing with how to make sure platforms respect the new rules.

It also points again to the provision allowing business associations or organisations to take action in national courts on behalf of members — saying this offers a legal route to “stop or prohibit non-compliance with one or more of the requirements of the Regulation”. So, er, expect lawsuits.

The Commission says the rules will be subject to review within 18 months after they come into force — in a bid to ensure the regulation keeps pace with fast-paced tech developments.

A dedicated Online Platform Observatory has been established in the EU for the purpose of “monitoring the evolution of the market and the effective implementation of the rules”, it adds.

Is Europe closing in on an antitrust fix for surveillance technologists?

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.
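To see the shape of the remedy, here’s a minimal sketch of consent-gated data linking — the class and field names are hypothetical, and this has nothing to do with Facebook’s actual systems:

```python
from dataclasses import dataclass

@dataclass
class PlatformAccount:
    platform: str                       # e.g. "facebook", "instagram", "whatsapp"
    profile_data: dict
    consented_to_linking: bool = False  # must be freely given, per the FCO order

def build_cross_platform_profile(accounts: list) -> dict:
    """Join one person's data across platforms ONLY where they have freely
    consented to the linking; unconsented data stays siloed per platform."""
    merged = {}
    for account in accounts:
        if account.consented_to_linking:
            merged.update(account.profile_data)
    return merged

# Without consent, nothing crosses the silo boundary:
accounts = [PlatformAccount("facebook", {"likes": ["hiking"]}),
            PlatformAccount("whatsapp", {"contacts": 312})]
assert build_cross_platform_profile(accounts) == {}
```

The point of the order is architectural: absent freely given consent, the join simply never happens.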

Yes, Facebook could still manipulate the outcome it wants from users but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook’s (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external) sources, in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. Nor, therefore, in terms of surveillance power.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer carry a suggestive punch. Regulators are routinely asked whether it’s time. As the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled, wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags; called to attend awkward grillings by hard-grafting committees, or viciously taken to task at the highest profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

Facebook is launching political ad checks in Nigeria, Ukraine, EU and India in coming months

Facebook is launching some of its self-styled ‘election security’ initiatives into more markets in the coming months ahead of several major votes in countries around the world.

In an interview with Reuters the social networking giant confirmed it’s launching checks on political adverts on its platform in Nigeria, Ukraine and the European Union, reiterating too that ad transparency measures will launch in India ahead of its general election.

Although it still hasn’t confirmed how it will respond in other countries with looming votes this year, including Australia, Indonesia, Israel and the Philippines.

Concern about election interference in the era of mass social media has stepped up sharply since revelations about the volume of disinformation targeted at the 2016 U.S. presidential election (and amplified by Facebook et al).

More than two years later Facebook’s approach to election security remains ad hoc, with different policy and transparency components being launched in different markets — as it says it’s still in a learning mode.

It also claims its variable approach reflects local laws and conversations with governments and civil society groups. Although it says it’s also hoping to have a set of tools that applies to advertisers globally by the end of June.

“Our goal was to get to a global solution. And so, until we can get to that in June, we had to look at the different elections and what we think we can do,” Facebook’s director of global politics and outreach, Katie Harbath, told Reuters.

Many markets where Facebook’s platform operates still have no checks on who can buy and target political ads — and the same is true of many smaller votes, such as local elections.

Meanwhile, the checks and balances the company does offer in other markets remain partial and far from perfect. For instance, Facebook does not always offer meaningful checks on issue-based political advertising because, in some markets, it draws the definition narrowly — as relating to parties and candidates only — which limits the policy’s effectiveness.

(And plenty of Kremlin propaganda targeted at the 2016 US presidential election was focused on weaponizing issues to whip up social divisions, for example, such as by playing up racial tensions, rather than promoting or attacking particular candidates.)

Facebook told Reuters it’s launching an authorization process for political advertisers in Nigeria today, ahead of a presidential election on February 16, which requires those running political ads to be located in the country.

It said the same policy will apply to Ukraine next month, ahead of elections on March 31.

Facebook also reiterated that election security measures it announced last month are incoming ahead of India’s general election. From next month it will launch a searchable online library for election ads in India, which votes for a new parliament this spring. The ads will be held in the library for seven years.

It has already launched searchable political ad archives in the U.S., Brazil and the U.K. But again its narrow definition of what constitutes a political ad limits the scope of the transparency measure in the U.K., for example. (Whereas in the U.S. the archive can include ads about much debated issues such as immigration and climate change.)

The Indian archive will contain contact information for some ad buyers or official regulatory certificates, according to Reuters.

And in the case of individuals buying political ads, Facebook said it would ensure their listed name matches government-issued identity documents.

The European Union, which goes to the polls in May to elect MEPs for the European Parliament, will also get a version of the Indian authorization and transparency system ahead of that vote.

The European Commission has stepped up pressure on tech platforms over election security, announcing a package of measures last month intended to combat democracy-denting disinformation which included pressing platforms to increase transparency around political ads and purge fake accounts.

The EC also said it would be monitoring platforms’ efforts — warning that it wants to see “real progress”, not more “excuses” and “foot-dragging”.

We contacted Facebook for further comment on its international election security efforts but at the time of writing it said it had nothing more to add.

Europe issues a deadline for US’ Privacy Shield compliance

The European Commission has finally given the U.S. a deadline related to the much criticized data transfer mechanism known as the EU-US Privacy Shield.

But it’s only asking for the U.S. to nominate a permanent ombudsperson — to handle any EU citizens’ complaints — by February 28, 2019.

If a permanent ombudsperson is not appointed by then the Commission says it will “consider taking appropriate measures, in accordance with the General Data Protection Regulation”.

So not an out-and-out threat to suspend the mechanism — which is what critics and MEPs have been calling for.

But still a fixed deadline at last.

“We now expect our American partners to nominate the Ombudsperson on a permanent basis, so we can make sure that our EU-US relations in data protection are fully trustworthy,” said Andrus Ansip, Commission VP for the Digital Single Market, in a statement.

“All elements of the Shield must be working at full speed, including the Ombudsperson,” added Věra Jourová, the commissioner for justice and consumers.

It’s the first sign the Commission is losing patience with its U.S. counterparts.

Although there’s no doubt the EC remains fully committed to the survival of the business-friendly mechanism which it spent years negotiating after the prior arrangement, Safe Harbor, was struck down by Europe’s top court following NSA whistleblower Edward Snowden’s disclosures of US government surveillance programs.

Its problem is it has to contend with Trump administration priorities — which naturally don’t align with privacy protection for non-US citizens.

While the EU-US Privacy Shield is over two years old at this point, President Trump has failed to nominate a permanent ombudsperson to a key oversight role.

The acting civil servant (Judith Garber, principal deputy assistant secretary for the Bureau of Oceans and International Environmental and Scientific Affairs) was also nominated as U.S. ambassador to Cyprus this summer, suggesting a hard limit to her already divided attention on EU citizens’ data privacy.

Despite this problematic wrinkle, the EU’s executive today professed itself otherwise satisfied that the mechanism is ensuring “an adequate level of protection for personal data”, announcing the conclusion of its second annual Privacy Shield review.

The data transfer mechanism is now used by more than 4,000 companies to simplify flows of EU citizens’ personal data to the US.

And the Commission clearly wants to avoid a repeat of the scramble that kicked off when, three years ago, Safe Harbor was struck down and businesses had to find alternative legal means for authorizing essential data flows.

But at the same time Privacy Shield has been under growing pressure. This summer the EU parliament called for the mechanism to be suspended until the U.S. comes into compliance.

The parliament’s Libe committee also said better monitoring of data transfers was clearly required in light of the Cambridge Analytica Facebook data misuse scandal. (Both companies had been signed up to Privacy Shield.)

The mechanism has also been looped into a separate legal challenge to another data transfer tool after the Irish High Court referred a series of questions to the European Court of Justice — setting the stage for another high stakes legal drama if fundamental European privacy rights are again deemed incompatible with U.S. national security practices.

A decision on that referral remains for the future. But in the meanwhile the Commission looks to be doing everything it can to claim it’s ‘business as usual’ for EU-US data flows.

In a press release today, it lauds steps taken by the U.S. authorities to implement recommendations it made in last year’s Privacy Shield review — saying they have “improved the functioning of the framework”.

Albeit, the detail of these slated ‘improvements’ shows how very low its starting bar was set — with the Commission listing, for example:

  • the strengthening by the Department of Commerce of the certification process and of its proactive oversight over the framework — including setting up mechanisms such as a system of spot checks (it says that 100 companies have been checked; and 21 had “issues that have now been solved” — suggesting a fifth of claimed compliance was, er, not actually compliance)
  • additional “compliance review procedures” such as analysis of Privacy Shield participants’ websites “to ensure that links to privacy policies are correct”; so previously we must assume no one in the U.S. was bothering to check
  • the Department of Commerce put in place a system to identify false claims which the Commission now claims “prevents companies from claiming their compliance with the Privacy Shield, when they have not been certified”; so again, prior to this system being set up certifications weren’t necessarily worth the pixels they were painted in

The Commission also claims the Federal Trade Commission has shown “a more proactive approach” to enforcement by monitoring the principles of the Privacy Shield — noting that, for example, it has issued subpoenas to request information from participating companies.

Another change it commends — related to the sticky issue of access to personal data by U.S. public authorities for national security purposes (which is what did for Safe Harbor) — is the appointment of new members of the Privacy and Civil Liberties Oversight Board (PCLOB) — to restore the Board’s quorum.

The denuded PCLOB has been a long running bone of contention for Privacy Shield critics.

“The Board’s report on the implementation of Presidential Policy-Directive No. 28 (PPD-28, which provides for privacy protections for non-Americans) has been made publicly available,” the Commission writes, referring to a key Obama era directive that it has previously said the Shield depends upon. “It confirms that these privacy protections for non-Americans are implemented across the U.S. intelligence community.”

It says it also took into account relevant developments in the U.S. legal system in the area of privacy during the review, noting that: “The Department of Commerce launched a consultation on a federal approach to data privacy to which the Commission contributed and the US Federal Trade Commission is reflecting on its current powers in this area.”

“In the context of the Facebook/Cambridge Analytica scandal, the Commission noted the Federal Trade Commission’s confirmation that its investigation of this case is ongoing,” it adds, kicking the can down the road on that particular data scandal.

Meanwhile, as you’d expect, business groups have welcomed another green light for data to keep flowing.

In a statement responding to the conclusion of the review, the Computer & Communications Industry Association said: “We commend the European Commission for its thorough review. Privacy Shield is a robust framework, with strong data protections, that allows for the daily transfers of commercial data between the world’s two biggest trading partners.”

Europe dials up pressure on tech giants over election security

The European Union has announced a package of measures intended to step up efforts and pressure on tech giants to combat democracy-denting disinformation ahead of the EU parliament elections next May.

The European Commission Action Plan, which was presented at a press briefing earlier today, has four areas of focus: 1) Improving detection of disinformation; 2) Greater co-ordination across EU Member States, including by sharing alerts about threats; 3) Increased pressure on online platforms, including to increase transparency around political ads and purge fake accounts; and 4) Raising awareness and critical thinking among EU citizens.

The Commission says 67% of EU citizens are worried about their personal data being used for political targeting, and 80% want improved transparency around how much political parties spend to run campaigns on social media.

And it warned today that it wants to see rapid action from online platforms to deliver on pledges they’ve already made to fight fake news and election interference.

The EC’s plan follows a voluntary Code of Practice launched two months ago, which signed up tech giants including Facebook, Google and Twitter, along with some ad industry players, to some fairly fuzzy commitments to combat the spread of so-called ‘fake news’.

They also agreed to hike transparency around political advertising. But efforts so far remain piecemeal, with — for example — no EU-wide roll out of Facebook’s political ads disclosure system.

Facebook has only launched political ad identification checks plus an archive library of ads in the US and the UK so far, leaving the rest of the world to rely on the more limited ‘view ads’ functionality that it has rolled out globally.

The EC said it will be stepping up its monitoring of platforms’ efforts to combat election interference — with the new plan including “continuous” monitoring.

This will take the form of monthly progress reports, starting with a Commission progress report in January and then monthly reports thereafter (against what it slated as “very specific targets”) to ensure signatories are actually purging and disincentivizing bad actors and inauthentic content from their platform, not just saying they’re going to.

As we reported in September the Code of Practice looked to be a pretty dilute first effort. But ongoing progress reports could at least help concentrate minds — coupled with the ongoing threat of EU-wide legislation if platforms fail to effectively self-regulate.

Digital economy and society commissioner Mariya Gabriel said the EC would have “measurable and visible results very soon”, warning platforms: “We need greater transparency, greater responsibility both on the content, as well as the political approach.”

Security union commissioner, Julian King, came in even harder on tech firms — warning that the EC wants to see “real progress” from here on in.

“We need to see the Internet platforms step up and make some real progress on their commitments. This is stuff that we believe the platforms can and need to do now,” he said, accusing them of “excuses” and “foot-dragging”.

“The risks are real. We need to see urgent improvement in how adverts are placed,” he continued. “Greater transparency around sponsored content. Fake accounts rapidly and effectively identified and deleted.”

King pointed out Facebook admits that between 3% and 4% of its entire user-base is fake.

“That is somewhere between 60M and 90M fake accounts,” he continued. “And some of those accounts are the most active accounts. A recent study found that 80% of the Twitter accounts that spread disinformation during the 2016 US election are still active today — publishing more than a million tweets a day. So we’ve got to get serious about this stuff.”
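King’s range checks out as back-of-the-envelope arithmetic — assuming a monthly user base of roughly 2.2BN, which is our approximation of Facebook’s then-reported figure, not a number he cited:

```python
monthly_active_users = 2_200_000_000      # approx. Facebook MAU (our assumption)
low = 0.03 * monthly_active_users         # 3% fake -> 66M accounts
high = 0.04 * monthly_active_users        # 4% fake -> 88M accounts
print(f"{low / 1e6:.0f}M to {high / 1e6:.0f}M fake accounts")  # "66M to 88M"
```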

Twitter declined to comment on today’s developments but a spokesperson told us its “number one priority is improving the health of the public conversation”.

“Tackling co-ordinated disinformation campaigns is a key component of this. Disinformation is a complex, societal issue which merits a societal response,” Twitter’s statement said. “For our part, we are already working with our industry partners, Governments, academics and a range of civil society actors to develop collaborative solutions that have a meaningful impact for citizens. For example, Twitter recently announced a global partnership with UNESCO on media and information literacy to help equip citizens with the skills they need to critically analyse content they are engaging with online.”

We’ve also reached out to Facebook and Google for comment on the Commission plan.

King went on to press for “clearer rules around bots”, saying he would personally favor a ban on political content being “disseminated by machines”.

The Code of Practice does include a commitment to address both fake accounts and online bots, and “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”. And Twitter has previously said it’s considering labelling bots; albeit with the caveat “as far as we can detect them”.

But action is still lacking.

“We need rapid corrections, which are given the same prominence and circulation as the original fake news. We need more effective promotion of alternative narratives. And we need to see overall greater clarity around how the algorithms are working,” King continued, banging the drum for algorithmic accountability.

“All of this should be subject to independent oversight and audit,” he added, suggesting the self-regulation leash here will be a very short one.

He said the Commission will make a “comprehensive assessment” of how the Code is working next year, warning: “If the necessary progress is not made we will not hesitate to reconsider our options — including, eventually, regulation.”

“We need to be honest about the risks, we need to be ready to act. We can’t afford an Internet that is the wild west where anything goes, so we won’t allow it,” he concluded.

Commissioner Věra Jourová also attended the briefing and used her time at the podium to press platforms to “immediately guarantee the transparency of political advertising”.

“This is a quick fix that is necessary and urgent,” she said. “It includes properly checking and clearly indicating who is behind online advertisement and who paid for it.”

In Spain, regional elections took place in Andalusia on Sunday. And — as noted above — while Facebook has launched a political ad authentication process and ad archive library in the US and the UK, the company confirmed to us that no such system was up and running in Spain in time for that regional vote.

In the vote in Andalusia a tiny Far Right party, Vox, broke pollsters’ predictions to take twelve seats in the parliament — the first such gains for the far right since the country’s return to democracy after the death of the dictator Francisco Franco in 1975.

Zooming in on election security risks, Jourova warned that “large-scale organized disinformation campaigns” have become “extremely efficient and spread with the speed of light” online. She also warned that non-transparent ads “will be massively used to influence opinions” in the run up to the EU elections.

Hence the pressing need for a transparency guarantee.

“When we allow the machines to massively influence free decisions of democracy I think that we have appeared in a bad science fiction,” she added. “The electoral campaign should be the competition of ideas, not the competition of dirty money, dirty methods, and hidden advertising where the people are not informed and don’t have a clue that they are influenced by some hidden powers.”

Jourova urged Member States to update their election laws so existing requirements on traditional media to observe a pre-election period also apply online.

“We all have roles to play, not only Member States, also social media platforms, but also traditional political parties. [They] need to make public the information on their expenditure for online activities as well as information on any targeting criteria used,” she concluded.

A report by the UK’s DCMS committee, which has been running an enquiry into online disinformation for the best part of this year, made similar recommendations in its preliminary report this summer.

Though the committee also went further — calling for a levy on social media to defend democracy. Albeit, the UK government did not leap into the recommended actions.

Also speaking at today’s presser, EC VP, Andrus Ansip, warned of the ongoing disinformation threat from Russia but said the EU does not intend to respond to the threat from propaganda outlets like RT, Sputnik and IRA troll farms by creating its own pro-EU propaganda machine.

Rather he said the plan is to focus efforts on accelerating collaboration and knowledge-sharing to improve detection and indeed debunking of disinformation campaigns.

“We need to work together and co-ordinate our efforts — in a European way, protecting our freedoms,” he said, adding that the plan sets out “how to fight back against the relentless propaganda and information weaponizing used against our democracies”.

Under the action plan, the budget of the European External Action Service (EEAS) — which bills itself as the EU’s diplomatic service — will more than double next year, to €5M, with the additional funds intended for strategic comms to “address disinformation and raise awareness about its adverse impact”, including beefing up headcount.

“This will help them to use new tools and technologies to fight disinformation,” Ansip suggested.

Another new measure announced today is a dedicated Rapid Alert System which the EC says will facilitate “the sharing of data and assessments of disinformation campaigns and to provide alerts on disinformation threats in real time”, with knowledge-sharing flowing between EU institutions and Member States.

The EC also says it will boost resources for national multidisciplinary teams of independent fact-checkers and researchers to detect and expose disinformation campaigns across social networks — working towards establishing a European network of fact-checkers.

“Their work is absolutely vital in order to combat disinformation,” said Gabriel, adding: “This is very much in line with our principles of pluralism of the media and freedom of expression.”

Investments will also go towards supporting media education and critical awareness, with Gabriel noting that the Commission will run a European media education week next March to draw attention to the issue and gather ideas.

She said the overarching aim is to “give our citizens a whole array of tools that they can use to make a free choice”.

“It’s high time we give greater visibility to this problem because we face this on a day to day basis. We want to provide solutions — so we really need a bottom up approach,” she added. “It’s not up to the Commission to say what sort of initiatives should be adopted; we need to give stakeholders and citizens their possibility to share best practices.”

Google files appeal against Europe’s $5BN antitrust fine for Android

Google has lodged its legal appeal against the European Commission’s €4.34 billion (~$5BN) antitrust ruling against its Android mobile OS, according to Reuters — the first step in a process that could keep its lawyers busy for years to come.

“We have now filed our appeal of the EC’s Android decision at the General Court of the EU,” it told the news agency, via email.

We’ve reached out to Google for comment on the appeals process.

Rulings made by the EU’s General Court in Luxembourg can be appealed to the top court, the Court of Justice of the European Union, but only on points of law.

Europe’s competition commissioner, Margrethe Vestager, announced the record-breaking antitrust penalty for Android in July, following more than two years of investigation of the company’s practices around its smartphone operating system.

Vestager said Google had abused the regional dominance of its smartphone platform by requiring that manufacturers pre-install other Google apps as a condition for being able to license the Play Store.

She also found the company had made payments to some manufacturers and mobile network operators in exchange for them exclusively pre-installing Google Search on their devices, and used Google Play licensing to prevent manufacturers from selling devices based on Android forks — which would not have to include Google services and, in Vestager’s view, “could have provided a platform for rival search engines as well as other app developers to thrive”.

Google rejected the Commission’s findings and said it would appeal.

In a blog post at the time, Google CEO Sundar Pichai argued the contrary — claiming the Android ecosystem has “created more choice, not less” for consumers, and saying the Commission ruling “ignores the new breadth of choice and clear evidence about how people use their phones today”.

According to Reuters the company reiterated its earlier arguments in reference to the appeal.

A spokesperson for the EC told us simply: “The Commission will defend its decision in Court.”

Europe to push for one-hour takedown law for terrorist content

The European Union’s executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today for all websites to monitor uploads in order to be able to quickly remove terrorist content.

The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It’s now proposing to turn that into law, in a bid to stop such violent propaganda spreading over the Internet.

For now the ‘rule of thumb’ regime continues to apply. But it’s putting meat on the bones of its thinking, fleshing out a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

As per usual EU processes, the Commission’s proposal would need to gain the backing of Member States and the EU parliament before it could be cemented into law.

One major point to note here is that existing EU law does not allow Member States to impose a general obligation on hosting service providers to monitor the information that users transmit or store. But in the proposal the Commission argues that, given the “grave risks associated with the dissemination of terrorist content”, states could be allowed to “exceptionally derogate from this principle under an EU framework”.

So it’s essentially suggesting that Europeans’ fundamental rights might not, in fact, be so fundamental. (Albeit, European judges might well take a different view — and it’s very likely the proposals could face legal challenges should they be cast into law.)

What is being suggested would also apply to any hosting service provider that offers services in the EU — “regardless of their place of establishment or their size”. So, seemingly, not just large platforms, like Facebook or YouTube, but — for example — anyone hosting a blog that includes a free-to-post comment section.

Websites that fail to promptly take down terrorist content would face fines — with the level of penalties being determined by EU Member States (Germany has already legislated to enforce social media hate speech takedowns within 24 hours, setting the maximum fine at €50M).

“Penalties are necessary to ensure the effective implementation by hosting service providers of the obligations pursuant to this Regulation,” the Commission writes, envisaging the most severe penalties being reserved for systematic failures to remove terrorist material within one hour. 

It adds: “When determining whether or not financial penalties should be imposed, due account should be taken of the financial resources of the provider.” So — for example — individuals with websites who fail to moderate their comment section fast enough might not be served the very largest fines, presumably.
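In operational terms the proposal turns the one-hour rule into a hard deadline on the gap between a removal order landing and the content coming down. Here’s a sketch of the kind of compliance check a host might run — the function and field names are ours, since the proposal doesn’t prescribe any implementation:

```python
from datetime import datetime, timedelta
from typing import Optional

ONE_HOUR = timedelta(hours=1)  # the proposed takedown deadline

def breaches_one_hour_rule(order_received_at: datetime,
                           removed_at: Optional[datetime],
                           now: datetime) -> bool:
    """True if flagged terrorist content was not removed within one hour of
    the removal order; content still live past the hour counts as a breach."""
    effective_removal = removed_at if removed_at is not None else now
    return (effective_removal - order_received_at) > ONE_HOUR
```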

The proposal also encourages platforms to develop “automated detection tools” so they can take what it terms “proactive measures proportionate to the level of risk and to remove terrorist material from their services”.

So the Commission’s continued push for Internet pre-filtering is clear. (This is also a feature of its copyright reform — which is being voted on by MEPs later today.)

Albeit, it’s not alone on that front. Earlier this year the UK government went so far as to pay an AI company to develop a terrorist propaganda detection tool that used machine learning algorithms trained to automatically detect propaganda produced by the Islamic State terror group — with a claimed “extremely high degree of accuracy”. (At the time it said it had not ruled out forcing tech giants to use it.)
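Neither the Commission nor the UK government has published how such detection tools work under the hood, but the general shape of a supervised text classifier is well known. Here’s a toy sketch using scikit-learn — the library choice, labels and training data are ours, purely to illustrate the train/score loop behind “automated detection tools”:

```python
# Toy supervised classifier: TF-IDF features + logistic regression. Real
# propaganda-detection systems are far larger and multi-modal; this only
# shows the basic mechanics of training on labelled examples and scoring
# new uploads.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["example extremist propaganda text", "ordinary news report",
               "another propaganda example", "benign blog comment"]
train_labels = [1, 0, 1, 0]  # 1 = flag for review, 0 = leave alone

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

# Output is a probability, not a verdict -- which is why the human review
# and complaint mechanisms the proposal mandates still matter.
print(classifier.predict_proba(["some uploaded text to screen"])[0])
```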

What is terrorist content for the purposes of this proposal? The Commission refers to an earlier EU directive on combating terrorism — which defines the material as “information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.

And on that front you do have to wonder whether, for example, some of U.S. president Donald Trump’s comments last year after the far right rally in Charlottesville, where a counter-protestor was murdered by a white supremacist — comments in which he suggested there were “fine people” among those same murderous and violent white supremacists — might fall under that ‘glorifying the commission of terrorist offences’ umbrella, should, say, someone repost them to a comment section that was viewable in the EU…

Safe to say, even terrorist propaganda can be subjective. And the proposed regime will inevitably encourage borderline content to be taken down — having a knock-on impact upon online freedom of expression.

The Commission also wants websites and platforms to share information with law enforcement and other relevant authorities and with each other — suggesting the use of “standardised templates”, “response forms” and “authenticated submission channels” to facilitate “cooperation and the exchange of information”.

It tackles the problem of what it refers to as “erroneous removal” — i.e. content that’s removed after being reported or erroneously identified as terrorist propaganda but which is subsequently, under requested review, determined not to be — by placing an obligation on providers to have “remedies and complaint mechanisms to ensure that users can challenge the removal of their content”.

So platforms and websites will be obligated to police and judge speech — which they already do, of course, but the proposal doubles down on turning online content hosts into judges and arbiters of that same content.

The regulation also includes transparency obligations on the steps being taken against terrorist content by hosting service providers — which the Commission claims will ensure “accountability towards users, citizens and public authorities”. 

Other perspectives are of course available… 

The Commission envisages all taken down content being retained by the host for a period of six months so that it could be reinstated if required, i.e. after a valid complaint — to ensure what it couches as “the effectiveness of complaint and review procedures in view of protecting freedom of expression and information”.

It also sees the retention of takedowns helping law enforcement — meaning platforms and websites will continue to be co-opted into state law enforcement and intelligence regimes, getting further saddled with the burden and cost of having to safely store and protect all this sensitive data.

(On that the EC just says: “Hosting service providers need to put in place technical and organisational safeguards to ensure the data is not used for other purposes.”)

The Commission would also create a system for monitoring the monitoring it’s proposing platforms and websites undertake — thereby further extending the proposed bureaucracy. It says it would establish a “detailed programme for monitoring the outputs, results and impacts” within one year of the regulation being applied; report on implementation and the transparency elements within two years; and evaluate the entire functioning of the regulation four years after it comes into force.

The executive body says it consulted widely ahead of forming the proposals — including running an open public consultation, carrying out a survey of 33,500 EU residents, and talking to Member States’ authorities and hosting service providers.

“By and large, most stakeholders expressed that terrorist content online is a serious societal problem affecting internet users and business models of hosting service providers,” the Commission writes. “More generally, 65% of respondents to the Eurobarometer survey considered that the internet is not safe for its users and 90% of the respondents consider it important to limit the spread of illegal content online.

“Consultations with Member States revealed that while voluntary arrangements are producing results, many see the need for binding obligations on terrorist content, a sentiment echoed in the European Council Conclusions of June 2018. While overall, the hosting service providers were in favour of the continuation of voluntary measures, they noted the potential negative effects of emerging legal fragmentation in the Union.

“Many stakeholders also noted the need to ensure that any regulatory measures for removal of content, particularly proactive measures and strict timeframes, should be balanced with safeguards for fundamental rights, notably freedom of speech. Stakeholders noted a number of necessary measures relating to transparency, accountability as well as the need for human review in deploying automated tools.”