Europe agrees platform rules to tackle unfair business practices

The European Union’s political institutions have reached agreement over new rules designed to boost transparency around online platform businesses and curb unfair practices to support traders and other businesses that rely on digital intermediaries for discovery and sales.

The European Commission proposed a regulation for fairness and transparency in online platform trading last April. And late yesterday the European Parliament, Council of the EU and Commission reached a political deal on regulating the business environment of platforms, announcing the accord in a press release today.

The political agreement paves the way for adoption and publication of the regulation, likely later this year. The rules will apply 12 months after that point.

Online platform intermediaries such as ecommerce marketplaces and search engines are covered by the new rules if they provide services to businesses established in the EU that offer goods or services to consumers located in the EU.

The Commission estimates there are some 7,000 such platforms and marketplaces which will be covered by the regulation, noting this includes “world giants as well as very small start-ups”.

Under the new rules, sudden and unexpected account suspensions will be banned — with the Commission saying platforms will have to provide “clear reasons” for any termination and also possibilities for appeal.

Terms and conditions must also be “easily available and provided in plain and intelligible language”.

There must also be advance notice of changes — of at least 15 days, with longer notice periods applying for more complex changes.

For search engines the focus is on ranking transparency. And on that front dominant search engine Google has attracted more than its fair share of criticism in Europe from a range of rivals (not all of whom are European).

In 2017, the search giant was also slapped with a $2.7BN antitrust fine related to its price comparison service, Google Shopping. The EC found Google had systematically given prominent placement to its own search comparison service while also demoting rival services in search results. (Google rejects the findings and is appealing.)

Given the history of criticism of Google’s platform business practices, and the multi-year regulatory tug of war over anti-competitive impacts, the new transparency provisions look intended to make it harder for a dominant search player to use its market power against rivals.

Changing the online marketplace

The importance of legislating for platform fairness was flagged by the Commission’s antitrust chief, Margrethe Vestager, last summer — when she handed Google another very large fine ($5BN) for anti-competitive behavior related to its mobile platform Android.

Vestager said then she wasn’t sure breaking Google up would be an effective competition fix, preferring to push for remedies to support “more players to have a real go”, as her Android decision attempts to do. But she also stressed the importance of “legislation that will ensure that you have transparency and fairness in the business to platform relationship”.

If businesses have legal means to find out why, for example, their traffic has stopped and what they can do to get it back that will “change the marketplace, and it will change the way we are protected as consumers but also as businesses”, she argued.

Just such a change is now in sight thanks to EU political accord on the issue.

The regulation represents the first such rules for online platforms in Europe and, commissioners contend, anywhere in the world.

“Our target is to outlaw some of the most unfair practices and create a benchmark for transparency, at the same time safeguarding the great advantages of online platforms both for consumers and for businesses,” said Andrus Ansip, VP for the EU’s Digital Single Market initiative in a statement.

Elżbieta Bieńkowska, commissioner for internal market, industry, entrepreneurship, and SMEs, added that the rules are “especially designed with the millions of SMEs in mind”.

“Many of them do not have the bargaining muscle to enter into a dispute with a big platform, but with these new rules they have a new safety net and will no longer worry about being randomly kicked off a platform, or intransparent ranking in search results,” she said in another supporting statement.

In a factsheet about the new rules, the Commission specifies they cover third-party ecommerce marketplaces (e.g. Amazon Marketplace, eBay, Fnac Marketplace); app stores (e.g. Google Play, Apple App Store, Microsoft Store); social media for business (e.g. Facebook Pages, Instagram accounts used by makers/artists); and price comparison tools (e.g. Skyscanner, Google Shopping).

The regulation does not target every online platform. For example, it does not cover online advertising (or b2b ad exchanges), payment services, SEO services or services that do not intermediate direct transactions between businesses and consumers.

The Commission also notes that online retailers which sell their own-brand products and/or don’t rely on third-party sellers on their own platform, such as brand retailers or supermarkets, are excluded from the regulation.

Where transparency is concerned, the rules require that regulated marketplaces and search engines disclose the main parameters they use to rank goods and services on their site “to help sellers understand how to optimise their presence” — with the Commission saying the aim is to support sellers without allowing gaming of the ranking system.

Some platform business practices will also require mandatory disclosure — such as for platforms that not only provide a marketplace for sellers but sell on their platform themselves, as does Amazon for example.

The ecommerce giant’s use of merchant data remains under scrutiny in the EU. Vestager revealed a preliminary antitrust probe of Amazon last fall, when she said her department was gathering information to “try to get a full picture”. She said her concern is that dual platforms could gain an unfair advantage as a consequence of access to merchants’ data.

And, again, the incoming transparency rules look intended to shrink that risk — requiring what the Commission couches as exhaustive disclosure of “any advantage” a platform may give to their own products over others.

“They must also disclose what data they collect, and how they use it — and in particular how such data is shared with other business partners they have,” it continues, noting also that: “Where personal data is concerned, the rules of the GDPR [General Data Protection Regulation] apply.”

(GDPR of course places further transparency requirements on platforms by, for example, empowering individuals to request any personal data held on them, as well as the reasons why their information is being processed.)

The platform regulation also includes new avenues for dispute resolution, requiring platforms to set up an internal complaint-handling system to assist business users.

“Only the smallest platforms in terms of head count or turnover will be exempt from this obligation,” the Commission notes. (The exemption limit is set at fewer than 50 staff and less than €10M revenue.)
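Those limits are concrete enough to sketch in code. A minimal illustration, following the article’s reading of the thresholds (fewer than 50 staff and under €10M turnover); the function name is my own, not from the regulation text:

```python
def exempt_from_complaint_handling(staff_count: int, annual_turnover_eur: float) -> bool:
    """Return True if a platform falls under the small-platform exemption
    described above: fewer than 50 staff AND less than EUR 10M in annual
    turnover. Crossing either threshold triggers the obligation to run an
    internal complaint-handling system."""
    return staff_count < 50 and annual_turnover_eur < 10_000_000

# A 12-person marketplace turning over EUR 2M would escape the obligation;
# a 500-person platform with EUR 50M turnover would not.
```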

It also says: “Platforms will have to provide businesses with more options to resolve a potential problem through mediators. This will help resolve more issues out of court, saving businesses time and money.”

But, at the same time, the new rules allow business associations to take platforms to court to stop any non-compliance — mirroring a provision in the GDPR which also allows for collective enforcement and redress of individual privacy rights (where Member States adopt it).

“This will help overcome fear of retaliation, and lower the cost of court cases for individual businesses, when the new rules are not followed,” the Commission argues.

“In addition, Member States can appoint public authorities with enforcement powers, if they wish, and businesses can turn to those authorities.”

One component of the regulation that appears to be left up to EU Member States is penalties for non-compliance, with no clear regime of fines set out (as there is in GDPR). So the platform regulation may have rather more bark than bite, at least initially.

“Member States shall need to take measures that are sufficiently dissuasive to ensure that the online intermediation platforms and search engines comply with the requirements in the Regulation,” the Commission writes in a section of its factsheet dealing with how to make sure platforms respect the new rules.

It also points again to the provision allowing business associations or organisations to take action in national courts on behalf of members — saying this offers a legal route to “stop or prohibit non-compliance with one or more of the requirements of the Regulation”. So, er, expect lawsuits.

The Commission says the rules will be subject to review within 18 months after they come into force — in a bid to ensure the regulation keeps pace with fast-paced tech developments.

A dedicated Online Platform Observatory has been established in the EU for the purpose of “monitoring the evolution of the market and the effective implementation of the rules”, it adds.

Facebook urged to offer an API for political ad transparency research

Facebook has been called upon to provide good faith researchers with an API to enable them to study how political ads are spreading and being amplified on its platform.

A coalition of European academics, technologists and human and digital rights groups, led by Mozilla, has signed an open letter to the company demanding far greater transparency about how Facebook’s platform distributes and amplifies political ads ahead of elections to the European parliament which will take place in May.

We’ve reached out to Facebook for a reaction to the open letter.

The company had already announced it will launch some of its self-styled ‘election security’ measures in the EU before then — specifically an authorization and transparency system for political ads.

Last month its new global comms guy — former European politician and one time UK deputy prime minister, Nick Clegg — also announced that, from next month, it will have human-staffed operations centers up and running to monitor how localised political news gets distributed on its platform, with one of the centers located within the EU, in Dublin, Ireland.

But signatories to the letter argue the company’s heavily PR’ed political ad transparency measures don’t go far enough.

They also point out that some of the steps Facebook has taken have blocked independent efforts to monitor its political ad transparency claims.

Last month the Guardian reported on changes Facebook had made to its platform that restricted the ability of an external political transparency campaign group, called WhoTargetsMe, to monitor and track the flow of political ads on its platform.

The UK-based campaign group is one of more than 30 groups that have signed the open letter — calling for Facebook to stop what they couch as “harassment of good faith researchers who are building tools to provide greater transparency into the advertising on your platform”.

Other signatories include the Center for Democracy and Technology, the Open Data Institute and Reporters Without Borders.

“By restricting access to advertising transparency tools available to Facebook users, you are undermining transparency, eliminating the choice of your users to install tools that help them analyse political ads, and wielding control over good faith researchers who try to review data on the platform,” they write.

“Your alternative to these third party tools provides simple keyword search functionality and does not provide the level of data access necessary for meaningful transparency.”

The letter calls on Facebook to roll out “a functional, open Ad Archive API that enables advanced research and development of tools that analyse political ads served to Facebook users in the EU” — and do so by April 1, to enable external developers to have enough time to build transparency tools before the EU elections.
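The letter doesn’t spell out what such an API should look like, but its asks (machine-readable access to ads plus sponsor identity, spend and targeting criteria) imply roughly this shape of query. Everything below, the endpoint, parameter and field names, is a hypothetical illustration of the researchers’ request, not a real Facebook interface:

```python
from urllib.parse import urlencode

def build_ad_archive_query(search_terms: str, country: str, fields: list) -> str:
    """Assemble a query URL for a hypothetical political-ad archive endpoint
    exposing the transparency data the letter asks for. The host and all
    parameter names are invented for illustration."""
    params = {
        "search_terms": search_terms,
        "ad_reached_countries": country,  # ISO code of an EU member state
        "fields": ",".join(fields),
    }
    return "https://example.invalid/ads/archive?" + urlencode(params)

# e.g. all 'election' ads shown in Ireland, with the disclosure
# fields the signatories want:
url = build_ad_archive_query(
    "election", "IE",
    ["sponsor_identity", "spend", "targeting_criteria"],
)
```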

Signatories also urge the company to ensure that all political ads are “clearly distinguished from other content”, as well as being accompanied by “key targeting criteria such as sponsor identity and amount spent on the platform in all EU countries”.

Last year UK policymakers investigating the democratic impacts of online disinformation pressed Facebook about the information it provides users on the targeting criteria for political ads. They also asked the company why it doesn’t offer users a complete opt-out from receiving political ads. Facebook’s CTO Mike Schroepfer was unable, or unwilling, to provide clear answers, instead deflecting questions by reiterating the tidbits of data that Facebook has decided it will provide.

Close to a year later, Facebook users in the majority of European markets are still waiting for even a basic layer of political transparency, as the company has been allowed to continue self-regulating at its own pace and, crucially, by getting to define what ‘transparency’ means (and therefore how much of it users get).

Facebook launched some of these self-styled political ad transparency measures in the UK last fall — adding ‘paid for by’ disclaimers, and saying ads would be retained in an archive for seven years. (Though its verification checks had to be revised after they were quickly shown to be trivially easy to circumvent.)

Earlier in the year it also briefly suspended accepting ads paid for by foreign entities during a referendum on abortion in Ireland.

However other European elections — such as regional elections — have taken place without Facebook users getting access to any information about the political ads they’re seeing or who’s paying for them.

The EU’s executive body has its eye on the issue. Late last month the European Commission published the first batch of monthly ‘progress reports’ from platforms and ad companies that signed up to a voluntary code of conduct on political disinformation that was announced last December — saying all signatories need to do a lot more and fast.

On Facebook specifically, the Commission said it needs to provide “greater clarity” on how it will deploy consumer empowerment tools, and also boost its cooperation with fact-checkers and the research community across the whole EU — with commissioner Julian King singling the company out for failing to provide independent researchers with access to its data.

Today’s open letter from academics and researchers backs up the Commission’s assessment of feeble first efforts from Facebook and offers further fuel to feed its next monthly assessment.

The Commission has continued to warn it could legislate on the issue if platforms fail to step up their efforts to tackle political disinformation voluntarily.

Pressuring platforms to self-regulate has its own critics too, of course — who point out that it does nothing to tackle the core underlying problem of platforms having too much power in the first place…

Is Europe closing in on an antitrust fix for surveillance technologists?

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still manipulate the outcome it wants from users but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services: to play, you still have to agree to hand over your personal data so it can sell your attention to advertisers. Legal experts contend that’s neither privacy by design nor privacy by default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack, antitrust and privacy law, threatening Facebook’s (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external) sources, in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. A Facebook that can’t join data dots across its sprawling social empire, or indeed across the mainstream web, wouldn’t be such a massive giant in terms of data insights, nor, therefore, of surveillance reach.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer carry a merely suggestive punch. Regulators are routinely asked whether it’s time, as the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags: called to attend awkward grillings by hard-grafting committees, or taken verbally to task at the highest-profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea change from the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals — transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (though other Commission roles remain tantalizingly in contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing — a power that could similarly be used to reshape a rights-eroding business model, or snuff out such a business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

German antitrust office limits Facebook’s data-gathering

A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent. The investigation of Facebook data-gathering practices began in March 2016. The decision by Germany’s Federal Cartel Office, announced […]

A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent.

The investigation of Facebook data-gathering practices began in March 2016.

The decision by Germany’s Federal Cartel Office, announced today, also prohibits Facebook from gathering data on users from third party websites — such as via tracking pixels and social plug-ins — without their consent.

The decision does not yet have legal force, however, and Facebook has said it’s appealing. The BBC reports that the company has a month to challenge the decision before it comes into force in Germany.

In both cases — i.e. Facebook collecting and linking user data from its own suite of services; and from third party websites — the Bundeskartellamt asserts that consent to data processing must be voluntary, so cannot be made a precondition of using Facebook’s service.

The company must therefore “adapt its terms of service and data processing accordingly”, it warns.

“Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved,” it writes, couching Facebook’s conduct as “exploitative abuse”.

“Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data,” it continues.

“This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).”

Commenting further in a statement, Andreas Mundt, president of the Bundeskartellamt, added: “In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.

“The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users.

“Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Mundt added. 

Facebook has responded to the Bundeskartellamt’s decision with a blog post setting out why it disagrees. The company did not respond to specific questions we put to it.

One key consideration is that Facebook also tracks non-users via third party websites. Aka, the controversial issue of ‘shadow profiles’ — which both US and EU politicians questioned founder Mark Zuckerberg about last year.

Which raises the question of how it could comply with the decision on that front, if its appeal fails, given it has no obvious conduit for seeking consent from non-users to gather their data. (Facebook’s tracking of non-users has already previously been judged illegal elsewhere in Europe.)

The German watchdog says that if Facebook intends to continue collecting data from outside its own social network to combine with users’ accounts without consent, that collection “must be substantially restricted” — suggesting a number of different criteria are feasible, including restrictions on the amount of data; purpose of use; type of data processing; additional control options for users; anonymization; processing only upon instruction by third party providers; and limitations on data storage periods.

Should the decision come to be legally enforced, the Bundeskartellamt says Facebook will be obliged to develop proposals for possible solutions and submit them to the authority which would then examine whether or not they fulfil its requirements.

While there’s lots to concern Facebook in this decision — the company, it recently emerged, has plans to unify the technical infrastructure of its messaging platforms — it isn’t all bad news. Or, rather, it could have been worse.

The authority makes a point of saying the social network can continue to make the use of each of its messaging platforms subject to the processing of data generated by their use, writing: “It must be generally acknowledged that the provision of a social network aiming at offering an efficient, data-based business model funded by advertising requires the processing of personal data. This is what the user expects.”

Although it also does not close the door on further scrutiny of that dynamic, either under data protection law (there is a current challenge to so-called ‘forced consent’ under Europe’s GDPR) or indeed under competition law.

“The issue of whether these terms can still result in a violation of data protection rules and how this would have to be assessed under competition law has been left open,” it emphasizes.

It also notes that it did not investigate how Facebook subsidiaries WhatsApp and Instagram collect and use user data — leaving the door open for additional investigations of those services.

On the wider EU competition law front, in recent years the European Commission’s competition chief has voiced concerns about data monopolies — going so far as to suggest, in an interview with the BBC last December, that restricting access to data might be a more appropriate solution to addressing monopolistic platform power vs breaking companies up.

In its blog post rejecting the German Federal Cartel Office’s decision, Facebook’s Yvonne Cunnane, head of data protection for its international business, Facebook Ireland, and Nikhil Shanbhag, director and associate general counsel, make three points to counter the decision, writing that: “The Bundeskartellamt underestimates the fierce competition we face in Germany, misinterprets our compliance with GDPR and undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”

On the competition point, Facebook claims in the blog post that “popularity is not dominance” — suggesting the Bundeskartellamt found 40 per cent of social media users in Germany don’t use Facebook. (Not that that would stop Facebook from tracking those non-users around the mainstream Internet, of course.)

Although, in its announcement of the decision today, the Federal Cartel Office emphasizes that it found Facebook to have a dominant position in the German market — with (as of December 2018) 23M daily active users and 32M monthly active users, which it said constitutes a market share of more than 95 per cent (daily active users) and more than 80 per cent (monthly active users).

It also says it views social services such as Snapchat, YouTube and Twitter, and professional networks like LinkedIn and Xing, as only offering “parts of the services of a social network” — saying it therefore excluded them from its consideration of the market.

Though it adds that “even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process”.

The mainstay of Facebook’s argument against the Bundeskartellamt decision appears to fix on the GDPR — with the company both seeking to claim it’s in compliance with the pan-EU data-protection framework (although its business faces multiple complaints under GDPR), while simultaneously arguing that the privacy regulation supersedes regional competition authorities.

So, as ever, Facebook is underlining that its regulator of choice is the Irish Data Protection Commission.

“The GDPR specifically empowers data protection regulators – not competition authorities – to determine whether companies are living up to their responsibilities. And data protection regulators certainly have the expertise to make those conclusions,” Facebook writes.

“The GDPR also harmonizes data protection laws across Europe, so everyone lives by the same rules of the road and regulators can consistently apply the law from country to country. In our case, that’s the Irish Data Protection Commission. The Bundeskartellamt’s order threatens to undermine this, providing different rights to people based on the size of the companies they do business with.”

The final plank of Facebook’s rebuttal focuses on pushing the notion that pooling data across services enhances the consumer experience and increases “safety and security” — the latter point being the same argument Zuckerberg used last year to defend ‘shadow profiles’ (not that he called them that) — with the company now claiming that it needs to pool user data across services to identify abusive behavior online and disable accounts linked to terrorism, child exploitation and election interference.

So the company is essentially seeking to leverage (you could say ‘legally weaponize’) a smorgasbord of antisocial problems — many of which have scaled into major societal issues in recent years, at least in part as a consequence of the size and scale of Facebook’s social empire — as arguments for defending the size and operational sprawl of its business. Go figure.

In a statement provided to us last month ahead of the ruling, Facebook also said: “Since 2016, we have been in regular contact with the Bundeskartellamt and have responded to their requests. As we outlined publicly in 2017, we disagree with their views and the conflation of data protection laws and antitrust laws, and will continue to defend our position.” 

Separately, a 2016 privacy policy reversal by WhatsApp to link user data with Facebook accounts, including for marketing purposes, attracted the ire of EU privacy regulators — and most of these data flows remain suspended in the region.

An investigation by the UK’s data watchdog was only closed last year after Facebook committed not to link user data across the two services until it could do so in a way that complies with the GDPR.

Although the company does still share data for business intelligence and security purposes — which has drawn continued scrutiny from the French data watchdog.


Online platforms still not clear enough about hate speech takedowns: EC

In its latest monitoring report of a voluntary Code of Conduct on illegal hate speech, which platforms including Facebook, Twitter and YouTube signed up to in Europe back in 2016, the European Commission has said progress is being made on speeding up takedowns but tech firms are still lagging when it comes to providing feedback […]

In its latest monitoring report of a voluntary Code of Conduct on illegal hate speech, which platforms including Facebook, Twitter and YouTube signed up to in Europe back in 2016, the European Commission has said progress is being made on speeding up takedowns but tech firms are still lagging when it comes to providing feedback and transparency around their decisions.

Tech companies are now assessing 89% of flagged content within 24 hours, with 72% of content deemed to be illegal hate speech being removed, according to the Commission — compared to just 40% and 28% respectively when the Code was first launched more than two years ago.

However, it said today that platforms still aren’t giving users enough feedback on their reports, and has urged more transparency from platforms — pressing for progress “in the coming months” and warning it could still legislate for a pan-EU regulation if it believes that’s necessary.

Giving her assessment of how the (still) voluntary code on hate speech takedowns is operating at a press briefing today, commissioner Vera Jourova said: “The only real gap that remains is transparency and the feedback to users who sent notifications [of hate speech].

“On average about a third of the notifications do not receive a feedback detailing the decision taken. Only Facebook has a very high standard, sending feedback systematically to all users. So we would like to see progress on this in the coming months. Likewise the companies should be more transparent towards the general public about what is happening in their platforms. We would like to see them make more data available about the notices and removals.”

“The fight against illegal hate speech online is not over. And we have no signs that such content has decreased on social media platforms,” she added. “Let me be very clear: The good results of this monitoring exercise don’t mean the companies are off the hook. We will continue to monitor this very closely and we can always consider additional measures if efforts slow down.”

Jourova flagged additional steps taken by the Commission to support the overarching goal of clearing what she dubbed a “sewage of words” off of online platforms, such as facilitating data-sharing between tech companies and police forces to help investigations and prosecutions of hate speech purveyors move forward.

She also noted it continues to provide Member States’ justice ministers with briefings on how the voluntary code is operating, warning again: “We always discuss that we will continue but if it slows down or it stops delivering the results we will consider some kind of regulation.”

Germany passed its own social media hate speech takedown law, the so-called ‘NetzDG’, in 2017, with full compliance required from the start of 2018. The law provides for fines as high as €50M for companies that fail to remove illegal hate speech within 24 hours, and has led social media platforms like Facebook to plough greater resources into locally sited moderation teams.

In the UK, meanwhile, the government announced a plan to legislate around safety and social media last year, although it has yet to publish a White Paper setting out the detail of its policy plan.

Last week a UK parliamentary committee which has been investigating the impacts of social media and screen use among children recommended the government legislate to place a legal ‘duty of care’ on platforms to protect minors.

The committee also called for platforms to be more transparent, urging them to provide bona fide researchers with access to high quality anonymized data to allow for robust interrogation of social media’s effects on children and other vulnerable users.

Debate about the risks and impacts of social media platforms for children has intensified in the UK in recent weeks, following reports of the suicide of a 14-year-old schoolgirl — whose father blamed Instagram for exposing her to posts encouraging self-harm, saying he had no doubt content she’d been exposed to on the platform had helped kill her.

During today’s press conference, Jourova was asked whether the Commission intends to extend the Code of Conduct on illegal hate speech to other types of content that are attracting concern, such as bullying and suicide. But she said the executive body is not intending to expand into such areas.

She said the Commission’s focus remains on addressing content that’s judged illegal under existing European legislation on racism and xenophobia — saying it’s a matter for individual Member States to choose to legislate in additional areas if they feel a need.

“We are following what the Member States are doing because we see… to some extent a fragmented picture of different problems in different countries,” she noted. “We are focusing on what is our obligation to promote the compliance with the European law. Which is the framework decision against racism and xenophobia.

“But we have the group of experts from the Member States, in the so-called Internet forum, where we speak about other crimes or sources of hatred online. And we see the determination on the side of the Member States to take proactive measures against these matters. So we expect that if there is such a worrying trend in some Member State that will address it by means of their national legislation.”

“I will always tell you I don’t like the fragmentation of the legal framework, especially when it comes to digital because we are faced with, more or less, the same problems in all the Member States,” she added. “But it’s true that when you [take a closer look] you see there are specific issues in the Member States, also maybe related with their history or culture, which at some moment the national authorities find necessary to react on by regulation. And the Commission is not hindering this process.

“This is the sovereign decision of the Member States.”

Four more tech platforms joined the voluntary code of conduct on illegal hate speech last year — namely Google+, Instagram, Snapchat and Dailymotion — while French gaming platform Webedia (jeuxvideo.com) also announced its participation today.

Drilling down into the performance of specific platforms, the Commission’s monitoring exercise found that Facebook assessed 92.6% of hate speech reports in less than 24 hours, with a further 5.1% assessed in less than 48 hours. The corresponding figures were 83.8% and 7.9% for YouTube, and 88.3% and 7.3% for Twitter, respectively.

Instagram, meanwhile, managed to assess 77.4% of notifications in less than 24 hours. And Google+, which in any case closes to consumers this April, managed just 60%.

In terms of removals, the Commission found YouTube removed 85.4% of reported content, Facebook 82.4% and Twitter 43.5% (the latter a slight decrease on last year’s performance), while Google+ removed 80.0% of the content and Instagram 70.6%.

It argues that despite social media platforms removing illegal content “more and more rapidly”, as a result of the code, this has not led to an “over-removal” of content — pointing to variable removal rates as an indication that “the review made by the companies continues to respect freedom of expression”.

“Removal rates varied depending on the severity of hateful content,” the Commission writes. “On average, 85.5% of content calling for murder or violence against specific groups was removed, while content using defamatory words or pictures to name certain groups was removed in 58.5% of the cases.”

“This suggests that the reviewers assess the content scrupulously and with full regard to protected speech,” it adds.

It is also crediting the code with helping foster partnerships between civil society organisations, national authorities and tech platforms — on key issues such as awareness raising and education activities.

Facebook warned over privacy risks of merging messaging platforms

Facebook’s lead data protection regulator in Europe has asked the company for an “urgent briefing” regarding plans to integrate the underlying infrastructure of its three social messaging platforms. In a statement posted to its website late last week the Irish Data Protection Commission writes: “Previous proposals to share data between Facebook companies have given rise […]

Facebook’s lead data protection regulator in Europe has asked the company for an “urgent briefing” regarding plans to integrate the underlying infrastructure of its three social messaging platforms.

In a statement posted to its website late last week the Irish Data Protection Commission writes: “Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal.”

Last week the New York Times broke the news that Facebook intends to unify the backend infrastructure of its three separate products, couching it as Facebook founder Mark Zuckerberg asserting control over acquisitions whose founders have since left the building.

Instagram founders, Kevin Systrom and Mike Krieger, left Facebook last year, as a result of rising tensions over reduced independence, according to our sources.

WhatsApp’s founders left Facebook earlier: Brian Acton departed in late 2017, and Jan Koum stuck it out until spring 2018. The pair reportedly clashed with Facebook execs over user privacy and differences over how to monetize the end-to-end encrypted platform.

Acton later said Facebook had coached him to tell European regulators assessing whether to approve the 2014 merger that it would be “really difficult” for the company to combine WhatsApp and Facebook user data.

In the event, Facebook went on to link accounts across the two platforms just two years after the acquisition closed. It was later hit with a $122M penalty from the European Commission for providing “incorrect or misleading” information at the time of the merger. Though Facebook claimed it had made unintentional “errors” in the 2014 filing.

A further couple of years on and Facebook has now graduated to seeking full platform unification of separate messaging products.

“We want to build the best messaging experiences we can; and people want messaging to be fast, simple, reliable and private,” a spokesperson told us when we asked for a response to the NYT report. “We’re working on making more of our messaging products end-to-end encrypted and considering ways to make it easier to reach friends and family across networks.”

“As you would expect, there is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work,” the spokesperson added, confirming the substance of the NYT report.

There certainly would be a lot of detail to be worked out. Not least the feasibility of legally merging user data across distinct products in Europe, where a controversial 2016 privacy u-turn by WhatsApp — when it suddenly announced it would after all share user data with parent company Facebook (despite previously saying it would never do so), including sharing data for marketing purposes — triggered swift regulatory intervention.

Facebook was forced to suspend marketing-related data flows in Europe. Though it has continued sharing data between WhatsApp and Facebook for security and business intelligence purposes, leading the French data watchdog to issue a formal notice at the end of 2017 warning that the latter transfers also lack a legal basis.

A court in Hamburg, Germany, also officially banned Facebook from using WhatsApp user data for its own purposes.

Early last year, following an investigation into the data-sharing u-turn, the UK’s data watchdog obtained an undertaking from WhatsApp that it would not share personal data with Facebook until the two services could do so in a way that’s compliant with the region’s strict privacy framework, the General Data Protection Regulation (GDPR).

Facebook only avoided a fine from the UK regulator because it froze data flows after the regulatory intervention. But the company clearly remains on watch — and any fresh moves to further integrate the platforms would trigger instant scrutiny, evidenced by the shot across the bows from the DPC in Ireland (Facebook’s international HQ is based in the country).

The 2016 WhatsApp-Facebook privacy u-turn also occurred prior to Europe’s GDPR coming into force. And the updated privacy framework includes a regime of substantially larger maximum fines for any violations.

Under the regulation watchdogs also have the power to ban companies from processing data. Which, in the case of a revenue-rich data-mining giant like Facebook, could be a far more potent disincentive than even a billion dollar fine.

We’ve reached out to Facebook for comment on the Irish DPC’s statement and will update this report with any response.

Here’s the full statement from the Irish watchdog:

While we understand that Facebook’s proposal to integrate the Facebook, WhatsApp and Instagram platforms is at a very early conceptual stage of development, the Irish DPC has asked Facebook Ireland for an urgent briefing on what is being proposed. The Irish DPC will be very closely scrutinising Facebook’s plans as they develop, particularly insofar as they involve the sharing and merging of personal data between different Facebook companies. Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal. It must be emphasised that ultimately the proposed integration can only occur in the EU if it is capable of meeting all of the requirements of the GDPR.

Facebook may be hoping that extending end-to-end encryption to Instagram as part of its planned integration effort, per the NYT report, could offer a technical route to stop any privacy regulators’ hammers from falling.

Though use of e2e encryption still does not shield metadata from being harvested. And metadata offers a rich source of inferences about individuals which, under EU law, would certainly constitute personal data. So even with robust encryption across Instagram, Facebook and WhatsApp, the unified messaging platforms could still collectively leak plenty of personal data to their data-mining parent.
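By way of illustration, here’s a minimal Python sketch of that point (the field names are hypothetical and a toy XOR cipher stands in for real encryption): the message body travels as opaque ciphertext, but the envelope’s routing metadata stays readable to whichever platform relays it.

```python
from secrets import token_bytes

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream for illustration only -- not a real e2e cipher.
    # Applying it twice with the same key recovers the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = token_bytes(32)  # held only by the two chat endpoints, not the platform

envelope = {
    "sender": "alice@example.net",    # readable by the platform
    "recipient": "bob@example.net",   # readable by the platform
    "sent_at": "2019-02-01T09:00Z",   # readable by the platform
    "body": toy_encrypt(b"meet at 6pm", key).hex(),  # opaque ciphertext
}

# The platform can mine who talks to whom, and when, without ever holding the key...
print(envelope["sender"], "->", envelope["recipient"], "at", envelope["sent_at"])

# ...while only the endpoints can recover the content itself.
print(toy_encrypt(bytes.fromhex(envelope["body"]), key).decode())
```

Real messengers use proper ciphers (WhatsApp uses the Signal Protocol), but the envelope shape is the same: the content is protected, while the social graph and timing are not.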

Facebook’s apps are also not open source. So even WhatsApp, which uses the respected Signal Protocol for its e2e encryption, remains under its control — with no ability for external audits to verify exactly what happens to data inside the app (such as checking what data gets sent back to Facebook). Users still have to trust Facebook’s implementation but regulators might demand actual proof of bona fide messaging privacy.

Nonetheless, the push by Facebook to integrate separate messaging products onto a single unified platform could be a defensive strategy — intended to throw dust in the face of antitrust regulators as political scrutiny of its market position and power continues to crank up. Though knitting separate platforms more tightly together would certainly be an aggressive defence.

But if the risk Facebook is trying to shrink is being forced, by competition regulators, to sell off one or two of its messaging platforms it may feel it has nothing to lose by making it technically harder to break its business apart.

At the time of the acquisitions of Instagram and WhatsApp Facebook promised autonomy to their founders. Zuckerberg has since changed his view, according to the NYT — believing integrating all three will increase the utility of each and thus provide a disincentive for users to abandon each service.

It may also be a hedge against any one of the three messaging platforms decreasing in popularity, by furnishing the business with internal levers it can pull to artificially juice activity on a less popular app by encouraging cross-platform usage.

And given the staggering size of Facebook’s messaging empire, which sprawls to 2.5BN+ humans globally, user resistance to that kind of centralized button-pushing to drive cross-platform engagement may be futile without regulatory intervention.

Facebook, Google and Twitter told to do more to fight fake news ahead of European elections

A first batch of monthly progress reports from tech giants and advertising companies on what they’re doing to help fight online disinformation have been published by the European Commission. Platforms including Facebook, Google and Twitter signed up to a voluntary EU code of practice on the issue last year. The first reports cover measures taken […]

A first batch of monthly progress reports from tech giants and advertising companies on what they’re doing to help fight online disinformation have been published by the European Commission.

Platforms including Facebook, Google and Twitter signed up to a voluntary EU code of practice on the issue last year.

The first reports cover measures taken by platforms up to December 31, 2018.

The implementation reports are intended to detail progress towards the goal of putting the squeeze on disinformation — such as by proactively identifying and removing fake accounts — but the European Commission has today called for tech firms to intensify their efforts, warning that more needs to be done in the run up to the 2019 European Parliament elections, which take place in May.

The Commission announced a multi-pronged action plan on disinformation two months ago, urging greater co-ordination on the issue between EU Member States and pushing for efforts to raise awareness and encourage critical thinking among the region’s people.

But it also heaped pressure on tech companies, especially, warning it wanted to see rapid action and progress.

A month on and it sounds less than impressed with tech giants’ ‘progress’ on the issue.

Mozilla also signed up to the voluntary Code of Practice, and all the signatories committed to take broad-brush action to try to combat disinformation.

Although, as we reported at the time, the code suffered from a failure to nail down terms and requirements — suggesting not only that measuring progress would be tricky but that progress itself might prove an elusive and slippery animal.

The first response certainly looks to be a mixed bag. Which is perhaps expected given the overarching difficulty of attacking a complex and multi-faceted problem like disinformation quickly.

Though there’s also little doubt that opaque platforms used to getting their own way with data and content are going to be dragged kicking and screaming towards greater transparency. Hence it suits their purpose to be able to produce multi-page chronicles of ‘steps taken’, which allows them to project an aura of action — while continuing to indulge in their preferred foot-drag.

The Guardian reports especially critical comments made by the Commission vis-à-vis Facebook’s response, for example — with Julian King saying at today’s press conference that the company still hasn’t given independent researchers access to its data.

“We need to do something about that,” he added.

Here’s the Commission’s brief rundown of what’s been done by tech firms but with emphasis firmly placed on what’s yet to be done:

  • Facebook has taken or is taking measures towards the implementation of all of the commitments but now needs to provide greater clarity on how the social network will deploy its consumer empowerment tools and boost cooperation with fact-checkers and the research community across the whole EU.
  • Google has taken steps to implement all its commitments, in particular those designed to improve the scrutiny of ad placements, transparency of political advertisement and providing users with information, tools and support to empower them in their online experience. However some tools are only available in a small number of Member States. The Commission also calls on the online search engine to support research actions on a wider scale.
  • Twitter has prioritised actions against malicious actors, closing fake or suspicious accounts and automated systems/bots. Still, more information is needed on how this will restrict persistent purveyors of disinformation from promoting their tweets.
  • Mozilla is about to launch an upgraded version of its browser to block cross-site tracking by default but the online browser should be more concrete on how this will limit the information revealed about users’ browsing activities, which could potentially be used for disinformation campaigns.

Commenting in a statement, Mariya Gabriel, commissioner for digital economy and society, said: “Today’s reports rightly focus on urgent actions, such as taking down fake accounts. It is a good start. Now I expect the signatories to intensify their monitoring and reporting and increase their cooperation with fact-checkers and research community. We need to ensure our citizens’ access to quality and objective information allowing them to make informed choices.”

Strip out the diplomatic fillip and the message boils down to: Must do better, fast.

All of which explains why Facebook got out ahead of the Commission’s publication of the reports by putting its fresh-in-post European politician turned head of global comms, Nick Clegg, on a podium in Brussels yesterday — in an attempt to control the PR message about what it’s doing (or rather not doing, as the EC sees it) to boot fake activity into touch.

Clegg (re)announced more controls around the placement of political ads, and said Facebook would set up new human-staffed operations centers — in Dublin and Singapore — to monitor how localised political news is distributed on its network.

Although the centers won’t launch until March. So, again, not something Facebook has done.

The staged press event with Clegg making his maiden public speech for his new employer may have backfired a bit because he managed to be incredibly boring. Although making a hot button political issue as tedious as possible is probably a key Facebook strategy.

Anything to drain public outrage to make the real policymakers go away.

(The Commission’s brandished stick remains the threat that, if it doesn’t see enough voluntary progress from platforms via the Code, it could move towards regulating to tackle disinformation.)

Advertising groups are also signed up to the voluntary code. And the World Federation of Advertisers (WFA), European Association of Communication Agencies and Interactive Advertising Bureau Europe have also submitted reports today.

In its report, the WFA writes that the issue of disinformation has been incorporated into its Global Media Charter, which it says identifies “key issues within the digital advertising ecosystem”, as its members see it. It adds that the charter makes the following two obligation statements:

We [advertisers] understand that advertising can fuel and sustain sites which misuse and infringe upon Intellectual Property (IP) laws. Equally advertising revenue may be used to sustain sites responsible for ‘fake news’ content or ‘disinformation’. Advertisers commit to avoiding (and support their partners in the avoidance of) the funding of actors seeking to influence division or seeking to inflict reputational harm on business or society and politics at large through content that appears false and/or misleading.

While the Code of Practice doesn’t contain a great deal of quantifiable substance, some have read its tea-leaves as a sign that signatories are committing to bot detection and identification — by promising to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.

But while Twitter has previously suggested it’s working on a system for badging bots on its platform (i.e. to help distinguish them from human users) nothing of the kind has yet seen the light of day as an actual Twitter feature. (The company is busy experimenting with other kinds of stuff.) So it looks like it also needs to provide more info on that front.

We reached out to the tech companies for comment on the Commission’s response to their implementation reports.

Google emailed us the following statement, attributed to Lie Junius, its director of public policy: 

Supporting elections in Europe and around the world is hugely important to us. We’ll continue to work in partnership with the EU through its Code of Practice on Disinformation, including by publishing regular reports about our work to prevent abuse, as well as with governments, law enforcement, others in our industry and the NGO community to strengthen protections around elections, protect users, and help combat disinformation.

A Twitter spokesperson also told us:

Disinformation is a societal problem and therefore requires a societal response. We continue to work closely with the European Commission to play our part in tackling it. We’ve formed a global partnership with UNESCO on media literacy, updated our fake accounts policy, and invested in better tools to proactively detect malicious activity. We’ve also provided users with more granular choices when reporting platform manipulation, including flagging a potentially fake account.

At the time of writing Facebook had not responded to a request for comment.