Is Europe closing in on an antitrust fix for surveillance technologists?


The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still manipulate the outcome it wants from users but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social network to hold a dominant position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook’s (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external sources), in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. Nor, therefore, would it wield such sweeping surveillance power.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: a technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer sound outlandish. Regulators are routinely asked whether it’s time. As the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled, wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags; called to attend awkward grillings by hard-grafting committees, or verbally taken to vicious task at the highest profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: structural remedies that focus on controlling access to data, which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

German antitrust office limits Facebook’s data-gathering


A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent.

The investigation of Facebook data-gathering practices began in March 2016.

The decision by Germany’s Federal Cartel Office, announced today, also prohibits Facebook from gathering data on users from third party websites — such as via tracking pixels and social plug-ins — without their consent.

The decision does not yet have legal force, however, and Facebook has said it’s appealing. The BBC reports that the company has a month to challenge the decision before it comes into force in Germany.

In both cases — i.e. Facebook collecting and linking user data from its own suite of services; and from third party websites — the Bundeskartellamt asserts that consent to data processing must be voluntary, so cannot be made a precondition of using Facebook’s service.

The company must therefore “adapt its terms of service and data processing accordingly”, it warns.

“Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved,” it writes, couching Facebook’s conduct as “exploitative abuse”.

“Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data,” it continues.

“This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).”

Commenting further in a statement, Andreas Mundt, president of the Bundeskartellamt, added: “In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.

“The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users.

“Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Mundt added. 

Facebook has responded to the Bundeskartellamt’s decision with a blog post setting out why it disagrees. The company did not respond to specific questions we put to it.

One key consideration is that Facebook also tracks non-users via third party websites. Aka, the controversial issue of ‘shadow profiles’ — which both US and EU politicians questioned founder Mark Zuckerberg about last year.

Which raises the question of how it could comply with the decision on that front, if its appeal fails, given it has no obvious conduit for seeking consent from non-users to gather their data. (Facebook’s tracking of non-users has already previously been judged illegal elsewhere in Europe.)

The German watchdog says that if Facebook intends to continue collecting data from outside its own social network to combine with users’ accounts without consent it “must be substantially restricted”, suggesting a number of different criteria are feasible — such as restrictions on the amount of data; purpose of use; type of data processing; additional control options for users; anonymization; processing only upon instruction by third party providers; and limitations on data storage periods.

Should the decision come to be legally enforced, the Bundeskartellamt says Facebook will be obliged to develop proposals for possible solutions and submit them to the authority which would then examine whether or not they fulfil its requirements.

While there’s lots to concern Facebook in this decision — the company, it recently emerged, has plans to unify the technical infrastructure of its messaging platforms — it isn’t all bad for Facebook. Or, rather, it could have been worse.

The authority makes a point of saying the social network can continue to make the use of each of its messaging platforms subject to the processing of data generated by their use, writing: “It must be generally acknowledged that the provision of a social network aiming at offering an efficient, data-based business model funded by advertising requires the processing of personal data. This is what the user expects.”

Although it also does not close the door on further scrutiny of that dynamic, either under data protection law (as indeed, there is a current challenge to so called ‘forced consent‘ under Europe’s GDPR); or indeed under competition law.

“The issue of whether these terms can still result in a violation of data protection rules and how this would have to be assessed under competition law has been left open,” it emphasizes.

It also notes that it did not investigate how Facebook subsidiaries WhatsApp and Instagram collect and use user data — leaving the door open for additional investigations of those services.

On the wider EU competition law front, in recent years the European Commission’s competition chief has voiced concerns about data monopolies — going so far as to suggest, in an interview with the BBC last December, that restricting access to data might be a more appropriate solution to addressing monopolistic platform power vs breaking companies up.

In its blog post rejecting the German Federal Cartel Office’s decision, Facebook’s Yvonne Cunnane, head of data protection for its international business, Facebook Ireland, and Nikhil Shanbhag, director and associate general counsel, make three points to counter the decision, writing that: “The Bundeskartellamt underestimates the fierce competition we face in Germany, misinterprets our compliance with GDPR and undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”

On the competition point, Facebook claims in the blog post that “popularity is not dominance” — suggesting the Bundeskartellamt found 40 per cent of social media users in Germany don’t use Facebook. (Not that that would stop Facebook from tracking those non-users around the mainstream Internet, of course.)

Although, in its announcement of the decision today, the Federal Cartel Office emphasizes that it found Facebook to have a dominant position in the German market — with (as of December 2018) 23M daily active users and 32M monthly active users, which it said constitute a market share of more than 95 per cent (daily active users) and more than 80 per cent (monthly active users).

It also says it views social services such as Snapchat, YouTube and Twitter, and professional networks like LinkedIn and Xing, as only offering “parts of the services of a social network” — saying it therefore excluded them from its consideration of the market.

Though it adds that “even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process”.

The mainstay of Facebook’s argument against the Bundeskartellamt decision appears to fix on the GDPR — with the company both seeking to claim it’s in compliance with the pan-EU data-protection framework (although its business faces multiple complaints under GDPR), while simultaneously arguing that the privacy regulation supersedes regional competition authorities.

So, as ever, Facebook is underlining that its regulator of choice is the Irish Data Protection Commission.

“The GDPR specifically empowers data protection regulators – not competition authorities – to determine whether companies are living up to their responsibilities. And data protection regulators certainly have the expertise to make those conclusions,” Facebook writes.

“The GDPR also harmonizes data protection laws across Europe, so everyone lives by the same rules of the road and regulators can consistently apply the law from country to country. In our case, that’s the Irish Data Protection Commission. The Bundeskartellamt’s order threatens to undermine this, providing different rights to people based on the size of the companies they do business with.”

The final plank of Facebook’s rebuttal focuses on pushing the notion that pooling data across services enhances the consumer experience and increases “safety and security” — the latter point being the same argument Zuckerberg used last year to defend ‘shadow profiles’ (not that he called them that) — with the company claiming now that it needs to pool user data across services to identify abusive behavior online and disable accounts linked to terrorism, child exploitation and election interference.

So the company is essentially seeking to leverage (you could say ‘legally weaponize’) a smorgasbord of antisocial problems many of which have scaled to become major societal issues in recent years, at least in part as a consequence of the size and scale of Facebook’s social empire, as arguments for defending the size and operational sprawl of its business. Go figure.

In a statement provided to us last month ahead of the ruling, Facebook also said: “Since 2016, we have been in regular contact with the Bundeskartellamt and have responded to their requests. As we outlined publicly in 2017, we disagree with their views and the conflation of data protection laws and antitrust laws, and will continue to defend our position.” 

Separately, a 2016 privacy policy reversal by WhatsApp to link user data with Facebook accounts, including for marketing purposes, attracted the ire of EU privacy regulators — and most of these data flows remain suspended in the region.

An investigation by the UK’s data watchdog was only closed last year after Facebook committed not to link user data across the two services until it could do so in a way that complies with the GDPR.

Although the company does still share data for business intelligence and security purposes — which has drawn continued scrutiny from the French data watchdog.

German antitrust office limits Facebook’s data-gathering

A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent. The investigation of Facebook data-gathering practices began in March 2016. The decision by Germany’s Federal Cartel Office, announced […]

A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent.

The investigation of Facebook data-gathering practices began in March 2016.

The decision by Germany’s Federal Cartel Office, announced today, also prohibits Facebook from gathering data on users from third party websites — such as via tracking pixels and social plug-ins — without their consent.

Although the decision does not yet have legal force and Facebook has said it’s appealing. The BBC reports that the company has a month to challenge the decision before it comes into force in Germany.

In both cases — i.e. Facebook collecting and linking user data from its own suite of services; and from third party websites — the Bundeskartellamt asserts that consent to data processing must be voluntary, so cannot be made a precondition of using Facebook’s service.

The company must therefore “adapt its terms of service and data processing accordingly”, it warns.

“Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved,” it writes, couching Facebook’s conduct as “exploitative abuse”.

“Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data,” it continues.

“This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).”

Commenting further in a statement, Andreas Mundt, president of the Bundeskartellamt, added: “In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.

“The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users.

“Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Mundt added. 

Facebook has responded to the Bundeskartellamt’s decision with a blog post setting out why it disagrees. The company did not respond to specific questions we put to it.

One key consideration is that Facebook also tracks non-users via third party websites. Aka, the controversial issue of ‘shadow profiles’ — which both US and EU politicians questioned founder Mark Zuckerberg about last year.

Which raises the question of how it could comply with the decision on that front, if its appeal fails, given it has no obvious conduit for seeking consent from non-users to gather their data. (Facebook’s tracking of non-users has already previously been judged illegal elsewhere in Europe.)

The German watchdog says that if Facebook intends to continue collecting data from outside its own social network to combine with users’ accounts without consent it “must be substantially restricted”, suggesting a number of different criteria are feasible — such as restrictions including on the amount of data; purpose of use; type of data processing; additional control options for users; anonymization; processing only upon instruction by third party providers; and limitations on data storage periods.

Should the decision come to be legally enforced, the Bundeskartellamt says Facebook will be obliged to develop proposals for possible solutions and submit them to the authority which would then examine whether or not they fulfil its requirements.

While there’s lots to concern Facebook in this decision — which, it recently emerged, has plans to unify the technical infrastructure of its messaging platforms — it isn’t all bad for the company. Or, rather, it could have been worse.

The authority makes a point of saying the social network can continue to make the use of each of its messaging platforms subject to the processing of data generated by their use, writing: “It must be generally acknowledged that the provision of a social network aiming at offering an efficient, data-based business model funded by advertising requires the processing of personal data. This is what the user expects.”

Although it also does not close the door on further scrutiny of that dynamic, either under data protection law (as indeed, there is a current challenge to so called ‘forced consent‘ under Europe’s GDPR); or indeed under competition law.

“The issue of whether these terms can still result in a violation of data protection rules and how this would have to be assessed under competition law has been left open,” it emphasizes.

It also notes that it did not investigate how Facebook subsidiaries WhatsApp and Instagram collect and use user data — leaving the door open for additional investigations of those services.

On the wider EU competition law front, in recent years the European Commission’s competition chief has voiced concerns about data monopolies — going so far as to suggest, in an interview with the BBC last December, that restricting access to data might be a more appropriate solution to addressing monopolistic platform power vs breaking companies up.

In its blog post rejecting the German Federal Cartel Office’s decision, Facebook’s Yvonne Cunnane, head of data protection for its international business, Facebook Ireland, and Nikhil Shanbhag, director and associate general counsel, make three points to counter the decision, writing that: “The Bundeskartellamt underestimates the fierce competition we face in Germany, misinterprets our compliance with GDPR and undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”

On the competition point, Facebook claims in the blog post that “popularity is not dominance” — suggesting the Bundeskartellamt found 40 per cent of social media users in Germany don’t use Facebook. (Not that that would stop Facebook from tracking those non-users around the mainstream Internet, of course.)

Although, in its announcement of the decision today, the Federal Cartel Office emphasizes that it found Facebook to have a dominant position in the German market — with (as of December 2018) 23M daily active users and 32M monthly active users, which it said constitutes a market share of more than 95 per cent (daily active users) and more than 80 per cent (monthly active users).

It also says it views social services such as Snapchat, YouTube and Twitter, and professional networks like LinkedIn and Xing, as only offering “parts of the services of a social network” — saying it therefore excluded them from its consideration of the market.

Though it adds that “even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process”.

The mainstay of Facebook’s argument against the Bundeskartellamt decision appears to fix on the GDPR — with the company both seeking to claim it’s in compliance with the pan-EU data-protection framework (although its business faces multiple complaints under GDPR), while simultaneously arguing that the privacy regulation supersedes regional competition authorities.

So, as ever, Facebook is underlining that its regulator of choice is the Irish Data Protection Commission.

“The GDPR specifically empowers data protection regulators – not competition authorities – to determine whether companies are living up to their responsibilities. And data protection regulators certainly have the expertise to make those conclusions,” Facebook writes.

“The GDPR also harmonizes data protection laws across Europe, so everyone lives by the same rules of the road and regulators can consistently apply the law from country to country. In our case, that’s the Irish Data Protection Commission. The Bundeskartellamt’s order threatens to undermine this, providing different rights to people based on the size of the companies they do business with.”

The final plank of Facebook’s rebuttal focuses on pushing the notion that pooling data across services enhances the consumer experience and increases “safety and security” — the latter point being the same argument Zuckerberg used last year to defend ‘shadow profiles’ (not that he called them that) — with the company claiming now that it needs to pool user data across services to identify abusive behavior online, and to disable accounts linked to terrorism, child exploitation and election interference.

So the company is essentially seeking to leverage (you could say ‘legally weaponize’) a smorgasbord of antisocial problems — many of which have scaled into major societal issues in recent years, at least in part as a consequence of the size and scale of Facebook’s social empire — as arguments for defending the size and operational sprawl of its business. Go figure.

In a statement provided to us last month ahead of the ruling, Facebook also said: “Since 2016, we have been in regular contact with the Bundeskartellamt and have responded to their requests. As we outlined publicly in 2017, we disagree with their views and the conflation of data protection laws and antitrust laws, and will continue to defend our position.” 

Separately, a 2016 privacy policy reversal by WhatsApp to link user data with Facebook accounts, including for marketing purposes, attracted the ire of EU privacy regulators — and most of these data flows remain suspended in the region.

An investigation by the UK’s data watchdog was only closed last year after Facebook committed not to link user data across the two services until it could do so in a way that complies with the GDPR.

Although the company does still share data for business intelligence and security purposes — which has drawn continued scrutiny from the French data watchdog.

Facebook has poached the DoJ’s Silicon Valley antitrust chief

Facebook has recruited Kate Patchen, a veteran of the U.S. Department of Justice who led its antitrust office in Silicon Valley, to be a director and associate general counsel of litigation. Patchen takes up her post amid ongoing scandals and reputation crises for her new employer, joining Facebook this month, according to her LinkedIn profile. The […]

Facebook has recruited Kate Patchen, a veteran of the U.S. Department of Justice who led its antitrust office in Silicon Valley, to be a director and associate general counsel of litigation.

Patchen takes up her post amid ongoing scandals and reputation crises for her new employer, joining Facebook this month, according to her LinkedIn profile.

The move was spotted earlier by the FT, which reports that Facebook also posted a job listing on LinkedIn for a “lead counsel” in Washington to handle competition issues two weeks ago — suggesting a broader effort to bulk up its in-house expertise.

Patchen brings to her new employer a wealth of experience on the antitrust topic, having spent 16 years at the DoJ, where she began as a trial attorney before becoming an assistant chief in the antitrust division in 2014. Two years later she was made chief.

We reached out to Facebook about the hire and it acknowledged our email but did not immediately provide comment on its decision to recruit a specialist in antitrust enforcement.

The social media giant certainly has plenty playing on its mind on this front.

In 2016 it landed firmly on lawmakers’ radar and in hot political waters when the extent of Kremlin-funded election interference activity on the platform first emerged. Since then a string of security and data misuse scandals have only dialed up the political pressure on Facebook.

Domestic lawmakers are now most actively discussing how to regulate social media. Although competition scrutiny is increasing on big tech in general, with calls from some quarters to break up platform giants as a fix for a range of damaging impacts.

The FT notes, for example, that Democratic lawmakers recently introduced legislation to address “the threat of economic concentration.” And the sight of Democrats pushing for tougher competition enforcement suggests the party’s love affair with Silicon Valley tech giants is well and truly over.

In Europe, competition regulators have already moved against big tech, issuing two very large fines in recent years against Google products, with more investigations ongoing.

Amazon is also now on the Commission’s radar. At a national level, EU competition regulators have been paying increasing attention to how the adtech industry is dominated by the duopoly of Google and Facebook.

Patchen, meanwhile, joins Facebook at the same time as some long-serving veterans are headed out the door — including public policy chief Elliot Schrage.

Schrage’s departure has been in train for some months, but a leaked internal memo we obtained this week suggests he’s being packaged up as a convenient fall guy for a freshly cracked public relations scandal.

Last month Facebook announced it was hiring more new blood: Former deputy prime minister of the U.K., Nick Clegg, to be its new head of global policy and comms — with Schrage slated then to be staying on in an advisory capacity.

In other recent senior leadership moves, Facebook CSO Alex Stamos also left the company this summer, while chief legal officer Colin Stretch announced he would leave at the end of the year.

But according to a Recode report this month, Stretch has now put his exit on hold — until at least next summer — apparently deciding to stay to help out with ongoing legal and political crises.

Big tech must not reframe digital ethics in its image

Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data. The eponymous social network has been at the center of a privacy storm this year. And […]

Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.

The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.

The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.

So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.

As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”

As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.

Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.

But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR manoeuvering that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.

(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)

And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.

This was Facebook trying to reframe the idea of digital ethics — making it so very big-picture-y that it could embrace the company’s people-tracking, ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.

“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”

Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.

And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)

It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.

A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.

“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.

Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.


The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.

“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.

“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society. That’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”

Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.

It’s not merely as if the Internet’s adtech duopoly is singing from the same ‘ads for greater good trumping the bad’ hymn sheet; the Internet’s adtech duopoly is doing exactly that.

The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.

They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.

Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.

It’s a burden neither company will address any other way — because the real solution would be for their platforms not to hoard people’s data in the first place.

The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.

Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.

Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.

A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.

They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.

“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.

“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”

Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).

She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.

She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.

But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.

Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.

But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.

Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.

This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.

Given that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.

Not so long as the platform retains the power to cause damage at scale.

It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.

“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.

“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”

Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years now, is working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.

On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.

Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would-be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.

The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.

His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.

“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.

“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”

“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”

For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.

Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.

The adtech modus operandi sums to “surveillance”, Cook asserted.

Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.

“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.

Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.

The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.

So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.

“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.

It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.

No profit-making entity should be used as the model for where the ethical line should lie.

Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.

One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.

DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.

But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.

There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.

One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.

Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.

And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time’ — when ‘better’ is not even close to being enough.

Call for collective action

Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts, that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.

Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)

Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.

“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think will be a good idea as a component of our society.”

The penny looks to be dropping for privacy watchdogs in Europe: the idea that assessing fairness — not just legal compliance — must be a key component of their thinking, going forward, and so set the direction of regulatory travel.

Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”

“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.

“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”

So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.

Rising automation and AI makes ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.

The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.

Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.

Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)

Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.

The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.

The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.

In the meanwhile civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.

It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.

Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.

People and civic society must teach them.

A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.

She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.

The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.

She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.

This is vital work to defend values and rights against the overreach of the digital here and now.

“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”

If the conference had one short sharp message it was this: Society must wake up to technology — and fast.

“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.

“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”

This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.

But, equally, not doing the job is unthinkable — because there’s no putting the AI genie back in the bottle.

Audit Facebook and overhaul competition law, say MEPs responding to breach scandals

After holding a series of hearings in the wake of the Facebook-Cambridge Analytica data misuse scandal this summer, and attending a meeting with Mark Zuckerberg himself in May, the European Union parliament’s civil liberties committee has called for an update to competition rules to reflect what it dubs “the digital reality”, urging EU institutions to look into the “possible monopoly” of big tech social media platforms.

Top level EU competition law has not touched on the social media axis of big tech yet, with the Commission concentrating recent attention on mobile chips (Qualcomm); and mobile and ecommerce platforms (mostly Google; but Amazon’s use of merchant data is in its sights too); as well as probing Apple’s tax structure in Ireland.

But last week Europe’s data protection supervisor, Giovanni Buttarelli, told us that closer working between privacy regulators and the EU’s Competition Commission is on the cards, as regional lawmakers look to evolve their oversight frameworks to respond to growing ethical concerns about use and abuse of big data, and indeed to be better positioned to respond to fast-paced technology-fuelled change.

Local EU antitrust regulators, including in Germany and France, have also been investigating the Google-Facebook adtech duopoly on several fronts in recent years.

The Libe committee’s call is the latest political push to spin up and scale up antitrust effort and attention around social media.

The committee also says it wants to see much greater accountability and transparency on “algorithmic-processed data by any actor, be it private or public” — signalling a belief that GDPR does not go far enough on that front.

Libe committee chair and rapporteur, MEP Claude Moraes, has previously suggested the Facebook-Cambridge Analytica scandal could help inform and shape an update to Europe’s ePrivacy rules, which remain at the negotiation stage with disagreements over scope and proportionality.

But every big tech data breach and security scandal lends weight to the argument that stronger privacy rules are indeed required.

In yesterday’s resolution, the Libe committee also called for an audit of the advertising industry on social media — echoing a call made by the UK’s data protection watchdog, the ICO, this summer for an ‘ethical pause’ on the use of online ads for political purposes.

The ICO made that call right after announcing it planned to issue Facebook with the maximum fine possible under UK data protection law — again for the Cambridge Analytica breach.

While the Cambridge Analytica scandal — in which the personal information of as many as 87 million Facebook users was extracted from the platform without their knowledge or consent, and passed to the now defunct political consultancy (which used it to create psychographic profiles of US voters for election campaigning purposes) — has triggered this latest round of political scrutiny of the social media behemoth, last month Facebook revealed another major data breach, affecting at least 50 million users. That breach underlines the ongoing challenge the company faces in living up to claims of having ‘locked the platform down’.

In light of both breaches, the Libe committee has now called for EU bodies to be allowed to fully audit Facebook — to independently assess its data protection and security practices.

Buttarelli also told us last week that it’s his belief none of the tech giants are directing adequate resource at keeping user data safe.

And with Facebook having already revealed a second breach that’s potentially even larger than Cambridge Analytica, fresh focus and political attention is falling on the substance of its security practices, not just its claims.

While the Libe committee’s MEPs say they have taken note of steps Facebook made in the wake of the Cambridge Analytica scandal to try to improve user privacy, they point out it has still not carried out the promised full internal audit.

Facebook has never said how long this historical app audit will take. Though it has given some progress reports, such as detailing additional suspicious activity it has found to date, with 400 apps suspended at the last count. (One app, called myPersonality, also got banned for improper data controls.)

The Libe committee is now urging Facebook to allow the EU Agency for Network and Information Security (ENISA) and the European Data Protection Board, which plays a key role in applying the region’s data protection rules, to carry out “a full and independent audit” — and present the findings to the European Commission and Parliament and national parliaments.

It has also recommended that Facebook makes “substantial modifications to its platform” to comply with EU data protection law.

We’ve reached out to Facebook for comment on the recommendations — including specifically asking the company whether it’s open to an external audit of its platform.

At the time of writing Facebook had not responded to our question but we’ll update this report with any response.

Commenting in a statement, Libe chair Moraes said: “This resolution makes clear that we expect measures to be taken to protect citizens’ right to private life, data protection and freedom of expression. Improvements have been made since the scandal, but, as the Facebook data breach of 50 million accounts showed just last month, these do not go far enough.”

The committee has also made a series of proposals for reducing the risk of social media being used as an attack vector for election interference — including:

  • applying conventional “off-line” electoral safeguards, such as rules on transparency and limits to spending, respect for silence periods and equal treatment of candidates;
  • making it easy to recognize online political paid advertisements and the organisation behind them;
  • banning profiling for electoral purposes, including use of online behaviour that may reveal political preferences;
  • requiring social media platforms to label content shared by bots, and to speed up the process of removing fake accounts;
  • making post-campaign audits compulsory, to ensure personal data are deleted;
  • enabling investigations by member states, with the support of Eurojust if necessary, into alleged misuse of the online political space by foreign forces.

A couple of weeks ago, the Commission unveiled a voluntary industry Code of Practice aimed at tackling online disinformation, which several tech platforms and adtech companies had agreed to sign up to, and which also presses for action in some of the same areas — including fake accounts and bots.

However the code is not only voluntary, it also does not bind signatories to any specific policy steps or processes. So its effectiveness will be difficult to quantify, and its accountability will lack bite.

A UK parliamentary committee that has been probing political disinformation this year also put out a report this summer with a package of proposed measures, including some similar ideas as well as a suggested levy on social media to ‘defend democracy’.

Meanwhile Facebook itself has been working on increasing transparency around advertisers on its platform, and putting in place some authorization requirements for political advertisers (though starting in the US).

But few politicians appear ready to trust that the steps Facebook is taking will be enough to avoid a repeat of, for example, the mass Kremlin propaganda smear campaign that targeted the 2016 US presidential election.

The Libe committee has also urged all EU institutions, agencies and bodies to verify that their social media pages, and any analytical and marketing tools they use, “should not by any means put at risk the personal data of citizens”.

And it goes as far as suggesting that EU bodies could even “consider closing their Facebook accounts” — as a measure to protect the personal data of every individual contacting them.

The committee’s full resolution was passed by 41 votes to 10, with one abstention, and will be put to a vote by the full EU Parliament during the next plenary session later this month.

In it, the Libe committee also renews its call for the suspension of the EU-US Privacy Shield.

The data transfer arrangement, which is used by thousands of businesses to authorize transfers of EU users’ personal data across the Atlantic, is under growing pressure ahead of an annual review this month, as the Trump administration has entirely failed to respond as EU lawmakers had hoped their US counterparts would when the agreement was inked in the Obama era, back in 2016.

The EU parliament also called for Privacy Shield to be suspended this summer. And while the Commission did not act on those calls, pressure has continued to mount from MEPs and EU consumer and digital and civil rights bodies.

During the Privacy Shield review process this month the Commission will be pressuring US counterparts to try to gain concessions that it can sell back home as ‘compliance’.

But without very major concessions — and who would bank on that, given the priorities of the current US administration — the future of the precariously placed mechanism looks increasingly uncertain.

Even as more oversight of social media platforms looks all but inevitable in Europe.

Google files appeal against Europe’s $5BN antitrust fine for Android

Google has lodged its legal appeal against the European Commission’s €4.34 billion (~$5BN) antitrust ruling against its Android mobile OS, according to Reuters — the first step in a process that could keep its lawyers busy for years to come.

“We have now filed our appeal of the EC’s Android decision at the General Court of the EU,” it told the news agency, via email.

We’ve reached out to Google for comment on the appeals process.

Rulings made by the EU’s General Court in Luxembourg can be appealed to the top court, the Court of Justice of the European Union, but only on points of law.

Europe’s competition commissioner, Margrethe Vestager, announced the record-breaking antitrust penalty for Android in July, following more than two years of investigation of the company’s practices around its smartphone operating system.

Vestager said Google had abused the regional dominance of its smartphone platform by requiring that manufacturers pre-install other Google apps as a condition for being able to license the Play Store.

She also found the company had made payments to some manufacturers and mobile network operators in exchange for them exclusively pre-installing Google Search on their devices, and used Google Play licensing to prevent manufacturers from selling devices based on Android forks — which would not have to include Google services and, in Vestager’s view, “could have provided a platform for rival search engines as well as other app developers to thrive”.

Google rejected the Commission’s findings and said it would appeal.

In a blog post at the time, Google CEO Sundar Pichai argued the contrary — claiming the Android ecosystem has “created more choice, not less” for consumers, and saying the Commission ruling “ignores the new breadth of choice and clear evidence about how people use their phones today”.

According to Reuters the company reiterated its earlier arguments in reference to the appeal.

A spokesperson for the EC told us simply: “The Commission will defend its decision in Court.”

Europe is drawing fresh battle lines around the ethics of big data

It’s been just over four months since Europe’s tough new privacy framework came into force. You might believe that little of substance has changed for big tech’s data-hungry smooth operators since then — beyond firing out a wave of privacy policy update spam, and putting up a fresh cluster of consent pop-ups that are just as aggressively keen for your data.

But don’t be fooled. This is the calm before the storm, according to the European Union’s data protection supervisor, Giovanni Buttarelli, who says the law is being systematically flouted on a number of fronts right now — and that enforcement is coming.

“I’m expecting, before the end of the year, concrete results,” he tells TechCrunch, sounding angry on every consumer’s behalf.

Though he chalks up some early wins for the General Data Protection Regulation (GDPR) too, suggesting its 72-hour breach notification requirement is already bearing fruit.

He also points to geopolitical pull, with privacy regulation rising up the political agenda outside Europe — describing, for example, California’s recently passed privacy law, which is not at all popular with tech giants, as having “a lot of similarities to GDPR”; as well as noting “a new appetite for a federal law” in the U.S.

Yet he’s also already looking beyond GDPR — to the wider question of how European regulation needs to keep evolving to respond to platform power and its impacts on people.

Next May, on the anniversary of GDPR coming into force, Buttarelli says he will publish a manifesto for a next-generation framework that envisages active collaboration between Europe’s privacy overseers and antitrust regulators. Which will probably send a shiver down the tech giant spine.

Notably, the Commission’s antitrust chief, Margrethe Vestager — who has shown an appetite to take on big tech, and has so far fined Google twice ($2.7BN for Google Shopping and a staggering $5BN for Android), and who is continuing to probe its business on a number of fronts while simultaneously eyeing other platforms’ use of data — is scheduled to give a keynote at an annual privacy commissioners’ conference that Buttarelli is co-hosting in Brussels later this month.

Her presence hints at the potential of joint-working across historically separate regulatory silos that have nonetheless been showing increasingly overlapping concerns of late.

See, for example, Germany’s Federal Cartel Office accusing Facebook of using its size to strong-arm users into handing over data. And the French Competition Authority probing the online ad market — aka Facebook and Google — and identifying a raft of problematic behaviors. Last year the Italian Competition Authority also opened a sector inquiry into big data.

Traditional competition law theories of harm would need to be reworked to accommodate data-based anticompetitive conduct — essentially the idea that data holdings can bestow an unfair competitive advantage if they cannot be matched. Which clearly isn’t the easiest stinging jellyfish to nail to the wall. But Europe’s antitrust regulators are paying increasing mind to big data; looking actively at whether and even how data advantages are exclusionary or exploitative.

In recent years, Vestager has been very public with her concerns about dominant tech platforms and the big data they accrue as a consequence, saying, for example in 2016, that: “If a company’s use of data is so bad for competition that it outweighs the benefits, we may have to step in to restore a level playing field.”

Buttarelli’s belief is that EU privacy regulators will be co-opted into that wider antitrust fight by “supporting and feeding” competition investigations in the future. A future that can be glimpsed right now, with the EC’s antitrust lens swinging around to zoom in on what Amazon is doing with merchant data.

“Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” Buttarelli tells TechCrunch. 

“Monopolies are quite recent. And therefore once again, as it was the case with social networks, we have been surprised,” he adds, when asked whether the law can hope to keep pace. “And therefore the legal framework has been implemented in a way to do our best but it’s not in my view robust enough to consider all the relevant implications… So there is space for different solutions. But first joint enforcement and better co-operation is key.”

From a regulatory point of view, competition law is hampered by the length of time investigations take. A characteristic of the careful work required to probe and prove out competitive harms that’s nonetheless especially problematic set against the blistering pace of technological innovation and disruption. The law here is very much the polar opposite of ‘move fast and break things’.

But on the privacy front at least, there will be no 12 year wait for the first GDPR enforcements, as Buttarelli notes was the case when Europe’s competition rules were originally set down in 1957’s Treaty of Rome.

He says the newly formed European Data Protection Board (EDPB), which is in charge of applying GDPR consistently across the bloc, is fixed on delivering results “much more quickly”. And so the first enforcements are penciled in for around half a year after GDPR ‘Day 1’.

“I think that people are right to feel more impassioned about enforcement,” he says. “We see awareness and major problems with how the data is treated — which are systemic. There is also a question with regard to the business model, not only compliance culture.

“I’m expecting concrete first results, in terms of implementation, before the end of this year.”

“No blackmailing”

Tens of thousands of consumers have already filed complaints under Europe’s new privacy regime. The GDPR updates the EU’s longstanding data protection rules, bringing proper enforcement for the first time in the form of much larger fines for violations — to prevent privacy being the bit of the law companies felt they could safely ignore.

The EDPB tells us that more than 42,230 complaints have been lodged across the bloc since the regulation began applying, on May 25. The board is made up of the heads of EU Member States’ national data protection agencies, with Buttarelli serving as its current secretariat.

“I did not appreciate the tsunami of legalistic notices landing on the account of millions of users, written in an obscure language, and many of them were entirely useless, and in a borderline even with spamming, to ask for unnecessary agreements with a new privacy policy,” he tells us. “Which, in a few cases, appear to be in full breach of the GDPR — not only in terms of spirit.”

He also professes himself “not surprised” about Facebook’s latest security debacle — describing the massive new data breach the company revealed on Friday as “business as usual” for the tech giant. And indeed for “all the tech giants” — none of whom he believes are making adequate investments in security.

“In terms of security there are much less investments than expected,” he also says of Facebook specifically. “Lot of investments about profiling people, about creating clusters, but much less in preserving the [security] of communications. GDPR is a driver for a change — even with regard to security.”

Asked what systematic violations of the framework he’s seen so far, from his pan-EU oversight position, Buttarelli highlights instances where service operators are relying on consent as their legal basis to collect user data — saying this must allow for a free choice.

Or “no blackmailing”, as he puts it.

Facebook, for example, does not offer any of its users, even its users in Europe, the option to opt out of targeted advertising. Yet it leans on user consent, gathered via dark pattern consent flows of its own design, to sanction its harvesting of personal data — claiming people can just stop using its service if they don’t agree to its ads.

It also claims to be GDPR compliant.

It’s pretty easy to see the disconnect between those two positions.

WASHINGTON, DC – APRIL 11: Facebook co-founder, Chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill April 11, 2018 in Washington, DC. This is the second day of testimony before Congress by Zuckerberg, 33, after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

“In cases in which it is indispensable to build on consent it should be much more than in the past based on exhaustive information; much more details, written in a comprehensive and simple language, accessible to an average user, and it should be really freely given — so no blackmailing,” says Buttarelli, not mentioning any specific tech firms by name as he reels off this list. “It should be really freely revoked, and without expecting that the contract is terminated because of this.

“This is not respectful of at least the spirit of the GDPR and, in a few cases, even of the legal framework.”

His remarks — which chime with what we’ve heard before from privacy experts — suggest the first wave of complaints filed by veteran European data protection campaigner and lawyer, Max Schrems, via his consumer focused data protection non-profit noyb, will bear fruit. And could force tech giants to offer a genuine opt-out of profiling.

The first noyb complaints target so-called ‘forced consent’, arguing that Facebook; Facebook-owned Instagram; Facebook-owned WhatsApp; and Google’s Android are operating non-compliant consent flows in order to keep processing Europeans’ personal data because they do not offer the aforementioned ‘free choice’ opt-out of data collection.

Schrems also contends that this behavior is additionally problematic because dominant tech giants are gaining an unfair advantage over small businesses — which simply cannot throw their weight around in the same way to get what they want. So that’s another spark being thrown in on the competition front.

Discussing GDPR enforcement generally, Buttarelli confirms he expects to see financial penalties not just investigatory outcomes before the year is out — so once DPAs have worked through the first phase of implementation (and got on top of their rising case loads).

Of course it will be up to local data protection agencies to issue any fines. But the EDPB and Buttarelli are the glue between Europe’s (currently) 28 national data protection agencies — playing a highly influential co-ordinating and steering role to ensure the regulation gets consistently applied.

He doesn’t say exactly where he thinks the first penalties will fall but notes a smorgasbord of issues that are being commonly complained about, saying: “Now we have an obvious trend and even a peak, in terms of complaints; different violations focusing particularly, but not only, on social media; big data breaches; rights like right of access to information held; right to erasure.”

He illustrates his conviction of incoming fines by pointing to the recent example of the ICO’s interim report into Cambridge Analytica’s misuse of Facebook data, in July — when the UK agency said it intended to fine Facebook the maximum possible (just £500k, because the breach took place before GDPR).

A similarly concluded data misuse investigation under GDPR would almost certainly result in much larger fines because the regulation allows for penalties of up to 4% of a company’s annual global turnover. (So in Facebook’s case the maximum suddenly balloons into the billions.)
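To make that scale concrete, here is a rough, hypothetical sketch of the Article 83(5) calculation. The revenue figure is an assumption based on Facebook’s reported 2017 turnover of roughly $40.7B, and the regulation’s EUR 20M floor is treated in USD purely for simplicity:

```python
def gdpr_max_fine(annual_global_turnover: float) -> float:
    """Rough upper bound on a GDPR fine under Article 83(5):
    the greater of a 20M floor or 4% of annual global turnover.
    (The real floor is EUR 20M; currency conversion is ignored here.)"""
    FLOOR = 20_000_000
    return max(FLOOR, 0.04 * annual_global_turnover)

# Assumed figure: Facebook's 2017 global revenue, roughly $40.7B
print(f"${gdpr_max_fine(40_700_000_000) / 1e9:.2f}B")  # prints "$1.63B"
```

Which lines up with the $1.63BN figure floated for the breach Facebook disclosed, and dwarfs the £500k maximum available under the pre-GDPR UK regime.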

The GDPR’s article 83 sets out general conditions for calculating fines — saying penalties should be “effective, proportionate and dissuasive”; and they must take into account factors such as whether an infringement was intentional or negligent; the categories of personal data affected; and how co-operative the data controller is as the data supervisor investigates.

For the security breach Facebook disclosed last week the EU’s regulatory oversight process will involve an assessment of how negligent the company was; what response steps it took when it discovered the breach, including how it communicated with data protection authorities and users; and how comprehensively it co-operates with the DPC’s investigation. (In a not-so-great sign for Facebook, the Irish DPC has already criticized its breach notification for lacking detail.)

As well as evaluating a data controller’s security measures against GDPR standards, EU regulators can “prescribe additional safeguards”, as Buttarelli puts it. Which means enforcement is much more than just a financial penalty; organizations can be required to change their processes and priorities too.

And that’s why Schrems’ forced consent complaints are so interesting.

Because a company as revenue-heavy as Facebook can view a fine, even a large one, as just another business cost to suck up as it keeps on truckin’. But GDPR’s follow-on enforcement prescriptions could force privacy law breakers to actively reshape their business practices to continue doing business in Europe.

And if the privacy problem with Facebook is that it’s forcing people-tracking ads on everyone, the solution is surely a version of Facebook that does not require users to accept privacy intrusive advertising to use it. Other business models are available, such as subscription.

But ads don’t have to be hostile to privacy. It’s possible to display advertising without persistently profiling users — as pro-privacy search engine DuckDuckGo does, for example. Other startups are exploring privacy-by-design on-device ad-targeting architectures for delivering targeted ads without needing to track users. Alternatives to Facebook’s targeted ads certainly exist — and innovating in lock-step with privacy is clearly possible. Just ask Apple.

So — at least in theory — GDPR could force the social network behemoth to revise its entire business model.

Which would make even a $1.63BN fine the company could face as a result of Friday’s security breach pale into insignificance.

Accelerating ethics

There’s a wrinkle here though. Buttarelli does not sound convinced that GDPR alone will be remedy enough to fix all privacy hostile business models that EU regulators are seeing. Hence his comment about a “question with regard to the business model”.

And also why he’s looking ahead and talking about the need to evolve the regulatory landscape — to enable joint working between traditionally discrete areas of law. 

“We need structural remedies to make the digital market fairer for people,” he says. “And therefore this is [why] we’ve been successful in persuading our colleagues of the Board to adopt a position on the intersection of consumer protection, competition rules and data protection. None of the independent regulators’ three areas, not speaking about audio-visual deltas, can succeed in their sort of old fashioned approach.

“We need more interaction, we need more synergies, we need to look to the future of these sectoral legislations.”

“People are targeted with content to make them behave in a certain way. To predict but also to react. This is not the kind of democracy we deserve.”
– Giovanni Buttarelli, European Data Protection Supervisor

The challenge posed by the web’s currently dominant privacy-hostile business models is also why, in a parallel track, Europe’s data protection supervisor is actively pushing to accelerate innovation and debate around data ethics — to support efforts to steer markets and business models in, well, a more humanitarian direction.

When we talk he highlights that Sir Tim Berners-Lee will be keynoting at the same European privacy conference at which Vestager will appear — which has an overarching discussion frame of “Debating Ethics: Dignity and Respect in Data Driven Life” as its theme.

Accelerating innovation to support the development of more ethical business models is also clearly the Commission’s underlying hope and aim.

Berners-Lee, the creator of the World Wide Web, has been increasingly strident in his criticism of how commercial interests have come to dominate the Internet by exploiting people’s personal data, including warning earlier this year that platform power is crushing the web as a force for good.

He has also just left his academic day job to focus on commercializing the pro-privacy, decentralized web platform he’s been building at MIT for years — via a new startup, called Inrupt.

Doubtless he’ll be telling the conference all about that.

“We are focusing on the solutions for the future,” says Buttarelli on ethics. “There is a lot of discussion about people becoming owners of their data, and ‘personal data’, and we call that personal because there’s something to be respected, not traded. And on the contrary we see a lot of inequality in the tech world, and we believe that the legal framework can be of an help. But will not give all the relevant answers to identify what is legally and technically feasible but morally untenable.”

Also just announced as another keynote speaker at the same conference later this month: Apple’s CEO Tim Cook.

In a statement on Cook’s addition to the line-up, Buttarelli writes: “We are delighted that Tim has agreed to speak at the International Conference of Data Protection and Privacy Commissioners. Tim has been a strong voice in the debate around privacy, as the leader of a company which has taken a clear privacy position, we look forward to hearing his perspective. He joins an already superb line up of keynote speakers and panellists who want to be part of a discussion about technology serving humankind.”

So Europe’s big fight to rein in the damaging impacts of big data just got another big gun behind it.

Apple CEO Tim Cook looks on during a visit of the shopfitting company Dula that delivers tables for Apple stores worldwide in Vreden, western Germany, on February 7, 2017. (Photo: BERND THISSEN/AFP/Getty Images)
“Question is [how do] we go beyond the simple requirements of confidentiality, security, of data,” Buttarelli continues. “Europe after such a successful step [with GDPR] is now going beyond the lawful and fair accumulation of personal data — we are identifying a new way of assessing market power when the services delivered to individuals are not mediated by a binary. And although competition law is still a powerful instrument for regulation — it was invented to stop companies getting so big — but I think together with our efforts on ethics we would like now Europe to talk about the future of the current dominant business models.

“I’m… concerned about how these companies, in compliance with GDPR in a few cases, may collect as much data as they can. In a few cases openly, in other secretly. They can constantly monitor what people are doing online. They categorize excessively people. They profile them in a way which cannot be contested. So we have in our democracies a lot of national laws in an anti-discrimination mode but now people are to be discriminated depending on how they behave online. So people are targeted with content to make them behave in a certain way. To predict but also to react. This is not the kind of democracy we deserve. This is not our idea.”

White House says a draft executive order reviewing social media companies is not “official”

A draft executive order circulating around the White House “is not the result of an official White House policymaking process,” according to deputy White House press secretary, Lindsay Walters. According to a report in The Washington Post, Walters denied that White House staff had worked on a draft executive order that would require every federal agency to […]

A draft executive order circulating around the White House “is not the result of an official White House policymaking process,” according to deputy White House press secretary, Lindsay Walters.

According to a report in The Washington Post, Walters denied that White House staff had worked on a draft executive order that would require every federal agency to study how social media platforms moderate user behavior and refer any instances of perceived bias to the Justice Department for further study and potential legal action.

Bloomberg first reported the draft executive order and a copy of the document was acquired and published by Business Insider.

Here’s the relevant text of the draft (from Business Insider):

Section 2. Agency Responsibilities. (a) Executive departments and agencies with authorities that could be used to enhance competition among online platforms (agencies) shall, where consistent with other laws, use those authorities to promote competition and ensure that no online platform exercises market power in a way that harms consumers, including through the exercise of bias.

(b) Agencies with authority to investigate anticompetitive conduct shall thoroughly investigate whether any online platform has acted in violation of the antitrust laws, as defined in subsection (a) of the first section of the Clayton Act, 15 U.S.C. § 12, or any other law intended to protect competition.

(c) Should an agency learn of possible or actual anticompetitive conduct by a platform that the agency lacks the authority to investigate and/or prosecute, the matter should be referred to the Antitrust Division of the Department of Justice and the Bureau of Competition of the Federal Trade Commission.

While there are several reasonable arguments to be made for and against the regulation of social media platforms, “bias” is probably the least among them.

That hasn’t stopped the steady drumbeat of bias accusations, framed as claims of anticompetitive conduct, against platforms like Facebook, Google, YouTube and Twitter from growing in volume and tempo in recent months.

Bias was the key concern Republican lawmakers brought up when Mark Zuckerberg was called to testify before Congress earlier this year. And bias was front and center in Republican lawmakers’ questioning of Jack Dorsey, Sheryl Sandberg, and Google’s empty chair when they were called before Congress earlier this month to testify in front of the Senate Intelligence Committee.

The Justice Department has even called in the attorneys general of several states to review the legality of the moderation policies of social media platforms later this month (spoiler alert: they’re totally legal).

With all of this activity focused on tech companies, it’s no surprise that the administration would turn to the executive order — a favored weapon of presidents who find their agenda stalled in the face of an uncooperative legislature (or the prevailing rule of law).

However, as the Post reported, aides in the White House said there’s little chance of this becoming actual policy.

… three White House aides soon insisted they didn’t write the draft order, didn’t know where it came from, and generally found it to be unworkable policy anyway. One senior White House official confirmed the document had been floating around the White House but had not gone through the formal process, which is controlled by the staff secretary.

EU fines Asus, Denon & Marantz, Philips and Pioneer $130M for online price fixing

The European Union’s antitrust authorities have issued a series of penalties, fining consumer electronics companies Asus, Denon & Marantz, Philips and Pioneer more than €110 million (~$130M) in four separate decisions for imposing fixed or minimum resale prices on their online retailers in breach of EU competition rules. It says the four companies engaged in so called […]

The European Union’s antitrust authorities have issued a series of penalties, fining consumer electronics companies Asus, Denon & Marantz, Philips and Pioneer more than €110 million (~$130M) in four separate decisions for imposing fixed or minimum resale prices on their online retailers in breach of EU competition rules.

The Commission says the four companies engaged in so-called “fixed or minimum resale price maintenance (RPM)” by restricting the ability of their online retailers to set their own retail prices for widely used consumer electronics products — such as kitchen appliances, notebooks and hi-fi products.

Asus has been hit with the largest fine (€63.5M), followed by Philips (€29.8M). The other two fines were €10.1M for Pioneer and €7.7M for Denon & Marantz.

The Commission found the manufacturers put pressure on ecommerce outlets that offered their products at low prices, writing: “If those retailers did not follow the prices requested by manufacturers, they faced threats or sanctions such as blocking of supplies. Many, including the biggest online retailers, use pricing algorithms which automatically adapt retail prices to those of competitors. In this way, the pricing restrictions imposed on low pricing online retailers typically had a broader impact on overall online prices for the respective consumer electronics products.”
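The dynamic the Commission describes can be illustrated with a toy model (retailer names, prices and the matching rule below are invented for illustration): if competitor-matching pricing algorithms track the cheapest seller, then forcing the discounter up to a minimum resale price drags every algorithm-following retailer up with it.

```python
# Illustrative sketch only: a toy market where each retailer's pricing
# algorithm matches the lowest price on offer, and RPM "floors" are the
# minimum resale prices imposed on specific retailers by a manufacturer.

def simulate(prices, floors, rounds=3):
    """prices: retailer -> current price; floors: retailer -> imposed minimum."""
    for _ in range(rounds):
        # RPM step: a floored retailer cannot price below its imposed minimum
        prices = {r: max(p, floors.get(r, 0)) for r, p in prices.items()}
        # Algorithmic repricing step: everyone matches the market minimum
        low = min(prices.values())
        prices = {r: max(low, floors.get(r, 0)) for r in prices}
    return prices

# "C" is the aggressive discounter; without a floor the market settles low
print(simulate({"A": 100, "B": 100, "C": 90}, {}))          # all at 90
# A 99 floor imposed on C alone pulls the whole market up via matching
print(simulate({"A": 100, "B": 100, "C": 90}, {"C": 99}))   # all at 99
```

This is the Commission’s point in miniature: a restriction aimed at a handful of low-price retailers has a market-wide price effect once rivals reprice algorithmically.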

It also notes that use of “sophisticated monitoring tools” by the manufacturers allowed them to “effectively track resale price setting in the distribution network and to intervene swiftly in case of price decreases”.

“The price interventions limited effective price competition between retailers and led to higher prices with an immediate effect on consumers,” it added.

In particular, Asus was found to have monitored the resale prices of retailers for certain computer hardware and electronics products, such as notebooks and displays, in two EU Member States (Germany and France) between 2011 and 2014.

Denon & Marantz, meanwhile, was found to have engaged in resale price maintenance for audio and video consumer products, such as headphones and speakers, of the brands Denon, Marantz and Boston Acoustics in Germany and the Netherlands between 2011 and 2015.

Philips was found to have done the same in France between the end of 2011 and 2013 — but for a range of consumer electronics products, including kitchen appliances, coffee machines, vacuum cleaners, home cinema and home video systems, electric toothbrushes, hair driers and trimmers.

In Pioneer’s case, the resale price maintenance covered products including home theatre devices, iPod speakers, speaker sets and hi-fi products.

The Commission said the company also limited the ability of its retailers to sell cross-border to EU consumers in other Member States in order to sustain different resale prices in different Member States, for example by blocking orders from retailers who sold cross-border. Its conduct lasted from the beginning of 2011 to the end of 2013 and concerned 12 countries (Germany, France, Italy, the United Kingdom, Spain, Portugal, Sweden, Finland, Denmark, Belgium, the Netherlands and Norway).

In all four cases, the Commission said the fines were reduced — by 50% in Pioneer’s case and 40% for each of the others — due to the companies’ co-operation with its investigations, specifying that they had provided evidence with “significant added value” and had “expressly acknowledg[ed] the facts and the infringements of EU antitrust rules”.
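Since the announced figures are the post-reduction fines, a quick back-calculation gives a rough sense of the pre-cooperation amounts (illustrative arithmetic only, assuming each stated discount was applied to a single base figure):

```python
# Back-of-the-envelope arithmetic (figures in EUR millions, from the
# Commission's announced fines and stated cooperation discounts).
final = {"Asus": 63.5, "Philips": 29.8, "Pioneer": 10.1, "Denon & Marantz": 7.7}
cut = {"Asus": 0.40, "Philips": 0.40, "Pioneer": 0.50, "Denon & Marantz": 0.40}

# base * (1 - cut) = final  =>  base = final / (1 - cut)
base = {c: round(final[c] / (1 - cut[c]), 1) for c in final}
print(base)  # Pioneer's 10.1M after a 50% cut implies a ~20.2M base fine
```

On that assumption, Asus alone would have faced a base fine north of €100M before its cooperation discount.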

Commenting in a statement, commissioner Margrethe Vestager, who heads up the bloc’s competition policy, said: “The online commerce market is growing rapidly and is now worth over 500 billion euros in Europe every year. More than half of Europeans now shop online. As a result of the actions taken by these four companies, millions of European consumers faced higher prices for kitchen appliances, hair dryers, notebook computers, headphones and many other products. This is illegal under EU antitrust rules. Our decisions today show that EU competition rules serve to protect consumers where companies stand in the way of more price competition and better choice.”

We’ve reached out to all the companies for comment.

The fines follow the Commission’s ecommerce sector inquiry, which reported in May 2017, and showed that resale-price related restrictions are by far the most widespread restrictions of competition in ecommerce markets, making competition enforcement in this area a priority — as part of the EC’s wider Digital Single Market strategy.

The Commission further notes that the sector inquiry shed light on the increased use of automatic software applied by retailers for price monitoring and price setting.

Separate investigations were launched in February 2017 and June 2017 to assess if certain online sales practices are preventing, in breach of EU antitrust rules, consumers from enjoying cross-border choice and from being able to buy products and services online at competitive prices. The Commission adds that those investigations are ongoing.

Commenting on today’s EC decision, a spokesman for Philips told us: “Since the start of the EC investigation in late 2013, which Philips reported in its Annual Reports, the company has fully cooperated with the EC. Philips initiated an internal investigation and addressed the matter in 2014.”

“It is good that we can now leave this case behind us, and focus on the positive impact that our products and solutions can have on people,” he added. “Let me please stress that Philips attaches prime importance to full compliance with all applicable laws, rules and regulations. Being a responsible company, everyone in Philips is expected to always act with integrity. Philips rigorously enforces compliance of its General Business Principles throughout the company. Philips has a zero tolerance policy towards non-compliance in relation to breaches of its General Business Principles.”

Anticipating the EC’s decision, he said the company had already recognized a €30M provision in its Q2 2018 results.