Is Europe closing in on an antitrust fix for surveillance technologists?

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make the use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.
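For readers curious about the mechanics: a tracking pixel is just a tiny image request. Here is a minimal sketch of the general technique in Python — our own illustration, not Facebook’s code, with a hypothetical host and port — showing how any page that embeds an image served by a tracker makes the visitor’s browser call the tracker’s server, handing over the referring page URL and a persistent identifying cookie:

```python
# Minimal sketch of a third-party "tracking pixel" endpoint (illustrative
# only; not Facebook's actual implementation). Any page embedding
#   <img src="http://tracker.example:8000/px.gif">
# makes visiting browsers request this URL, revealing which page was
# visited (Referer) and a persistent identifier (Cookie).
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF: the classic tracking-pixel payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The Referer header names the third-party page being viewed;
        # the Cookie header ties this visit to earlier ones.
        print("visited page:", self.headers.get("Referer"))
        print("tracking id: ", self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        # A long-lived cookie makes future visits linkable — account
        # holder or not.
        self.send_header("Set-Cookie", "uid=anon-12345; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()
```

Social plugins — Like and Share buttons — work on the same principle, with an embedded script rather than an image, which is how non-users get swept into the data-gathering too.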

The order is not yet in force and Facebook is appealing, but should it survive that challenge the social network faces being de facto shrunk, with its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still try to engineer the outcome it wants from users — but doing so would open it to further challenge under EU data protection law, given its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor by default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook’s (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external) sources, in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. Nor, therefore, in terms of surveillance power.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: a technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full flush of dominant social networks (Facebook Messenger, Instagram and WhatsApp) is a rising drumbeat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer sound far-fetched. Regulators are routinely asked whether it’s time. As the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech to election interference, child exploitation, bullying and abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals: highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags: called to attend awkward grillings by hard-grafting committees, or taken to task verbally at the highest-profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure: a CEO so powerful he doesn’t feel answerable to anyone, neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change from the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: structural remedies that focus on controlling access to data, which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.

German antitrust office limits Facebook’s data-gathering

A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent.

The investigation of Facebook data-gathering practices began in March 2016.

The decision by Germany’s Federal Cartel Office, announced today, also prohibits Facebook from gathering data on users from third party websites — such as via tracking pixels and social plug-ins — without their consent.

The decision does not yet have legal force, though, and Facebook has said it’s appealing. The BBC reports that the company has a month to challenge the decision before it comes into force in Germany.

In both cases — i.e. Facebook collecting and linking user data from its own suite of services; and from third party websites — the Bundeskartellamt asserts that consent to data processing must be voluntary, so cannot be made a precondition of using Facebook’s service.

The company must therefore “adapt its terms of service and data processing accordingly”, it warns.

“Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved,” it writes, couching Facebook’s conduct as “exploitative abuse”.

“Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data,” it continues.

“This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).”

Commenting further in a statement, Andreas Mundt, president of the Bundeskartellamt, added: “In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.

“The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users.

“Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Mundt added. 
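In practical terms the remedy amounts to a consent gate on data-merging. The sketch below is our own illustration of the logic the order describes — not anything from the Bundeskartellamt or Facebook: data from other sources may only be merged into a Facebook account given voluntary opt-in consent, and withholding that consent cannot cost the user access to the service.

```python
# Illustrative sketch of the consent gate the FCO order describes
# (our simplification, not the authority's text or Facebook's code).
from dataclasses import dataclass, field

@dataclass
class User:
    consented_to_combining: bool = False
    facebook_data: list = field(default_factory=list)
    # Data from WhatsApp, Instagram, tracking pixels, plugins etc.
    other_source_data: list = field(default_factory=list)

def profile_for_ads(user: User) -> list:
    """Return the data ad targeting may draw on for this user."""
    if user.consented_to_combining:
        return user.facebook_data + user.other_source_data
    # No consent: the account still works, but only data generated by
    # use of Facebook itself may be assigned to it.
    return user.facebook_data

def can_use_service(user: User) -> bool:
    # Consent must be voluntary, so it cannot gate access to the service.
    return True
```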

Facebook has responded to the Bundeskartellamt’s decision with a blog post setting out why it disagrees. The company did not respond to specific questions we put to it.

One key consideration is that Facebook also tracks non-users via third party websites. Aka, the controversial issue of ‘shadow profiles’ — which both US and EU politicians questioned founder Mark Zuckerberg about last year.

Which raises the question of how it could comply with the decision on that front, if its appeal fails, given it has no obvious conduit for seeking consent from non-users to gather their data. (Facebook’s tracking of non-users has already previously been judged illegal elsewhere in Europe.)

The German watchdog says that if Facebook intends to continue collecting data from outside its own social network to combine with users’ accounts without consent it “must be substantially restricted”, suggesting a number of different restrictions are feasible — such as limits on the amount of data; purpose of use; type of data processing; additional control options for users; anonymization; processing only upon instruction by third party providers; and limits on data storage periods.

Should the decision come to be legally enforced, the Bundeskartellamt says Facebook will be obliged to develop proposals for possible solutions and submit them to the authority which would then examine whether or not they fulfil its requirements.

While there’s lots here to concern Facebook — which, it recently emerged, plans to unify the technical infrastructure of its messaging platforms — the decision isn’t all bad for the company. Or, rather, it could have been worse.

The authority makes a point of saying the social network can continue to make the use of each of its messaging platforms subject to the processing of data generated by their use, writing: “It must be generally acknowledged that the provision of a social network aiming at offering an efficient, data-based business model funded by advertising requires the processing of personal data. This is what the user expects.”

It also does not close the door on further scrutiny of that dynamic, though — either under data protection law (indeed, there is a current challenge to so-called ‘forced consent‘ under Europe’s GDPR) or under competition law.

“The issue of whether these terms can still result in a violation of data protection rules and how this would have to be assessed under competition law has been left open,” it emphasizes.

It also notes that it did not investigate how Facebook subsidiaries WhatsApp and Instagram collect and use user data — leaving the door open for additional investigations of those services.

On the wider EU competition law front, in recent years the European Commission’s competition chief has voiced concerns about data monopolies — going so far as to suggest, in an interview with the BBC last December, that restricting access to data might be a more appropriate solution to addressing monopolistic platform power vs breaking companies up.

In its blog post rejecting the German Federal Cartel Office’s decision, Facebook’s Yvonne Cunnane, head of data protection for its international business, Facebook Ireland, and Nikhil Shanbhag, director and associate general counsel, make three points to counter the decision, writing that: “The Bundeskartellamt underestimates the fierce competition we face in Germany, misinterprets our compliance with GDPR and undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”

On the competition point, Facebook claims in the blog post that “popularity is not dominance” — suggesting the Bundeskartellamt found 40 per cent of social media users in Germany don’t use Facebook. (Not that that would stop Facebook from tracking those non-users around the mainstream Internet, of course.)

Although, in its announcement of the decision today, the Federal Cartel Office emphasizes that it found Facebook to have a dominant position in the German market — with (as of December 2018) 23M daily active users and 32M monthly active users, which it said constitutes a market share of more than 95 per cent (daily active users) and more than 80 per cent (monthly active users).

It also says it views social services such as Snapchat, YouTube and Twitter, and professional networks like LinkedIn and Xing, as only offering “parts of the services of a social network” — saying it therefore excluded them from its consideration of the market.

Though it adds that “even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process”.

The mainstay of Facebook’s argument against the Bundeskartellamt decision appears to focus on the GDPR — with the company claiming it’s in compliance with the pan-EU data protection framework (even as its business faces multiple complaints under the GDPR), while simultaneously arguing that the privacy regulation supersedes regional competition authorities.

So, as ever, Facebook is underlining that its regulator of choice is the Irish Data Protection Commission.

“The GDPR specifically empowers data protection regulators – not competition authorities – to determine whether companies are living up to their responsibilities. And data protection regulators certainly have the expertise to make those conclusions,” Facebook writes.

“The GDPR also harmonizes data protection laws across Europe, so everyone lives by the same rules of the road and regulators can consistently apply the law from country to country. In our case, that’s the Irish Data Protection Commission. The Bundeskartellamt’s order threatens to undermine this, providing different rights to people based on the size of the companies they do business with.”

The final plank of Facebook’s rebuttal pushes the notion that pooling data across services enhances the consumer experience and increases “safety and security” — the latter being the same argument Zuckerberg used last year to defend ‘shadow profiles’ (not that he called them that) — with the company now claiming it needs to pool user data across services to identify abusive behavior online and to disable accounts linked to terrorism, child exploitation and election interference.

So the company is essentially seeking to leverage (you could say ‘legally weaponize’) a smorgasbord of antisocial problems — many of which have scaled into major societal issues in recent years, at least in part as a consequence of the size and sprawl of Facebook’s social empire — as arguments for defending the size and operational sprawl of its business. Go figure.

In a statement provided to us last month ahead of the ruling, Facebook also said: “Since 2016, we have been in regular contact with the Bundeskartellamt and have responded to their requests. As we outlined publicly in 2017, we disagree with their views and the conflation of data protection laws and antitrust laws, and will continue to defend our position.” 

Separately, a 2016 privacy policy reversal by WhatsApp to link user data with Facebook accounts, including for marketing purposes, attracted the ire of EU privacy regulators — and most of these data flows remain suspended in the region.

An investigation by the UK’s data watchdog was only closed last year after Facebook committed not to link user data across the two services until it could do so in a way that complies with the GDPR.

Although the company does still share data for business intelligence and security purposes — which has drawn continued scrutiny from the French data watchdog.

Cambridge Analytica’s parent pleads guilty to breaking UK data law

Cambridge Analytica’s parent company, SCL Elections, has been fined £15,000 in a UK court after pleading guilty to failing to comply with an enforcement notice issued by the national data protection watchdog, the Guardian reports.

While the fine itself is a small and rather symbolic one, given the political data analytics firm went into administration last year, the implications of the prosecution are more sizeable.

Last year the Information Commissioner’s Office ordered SCL to hand over all the data it holds on U.S. academic David Carroll within 30 days. After the company failed to do so it was taken to court by the ICO.

Prior to Cambridge Analytica gaining infamy for massively misusing Facebook user data, the company, which was used by the Trump campaign, claimed to have up to 7,000 data points on the entire U.S. electorate — circa 240M people.

So Carroll’s attempt to understand exactly what data the company had on him, and how that information was processed to create a voter profile of him, has much wider relevance.

Under EU law, citizens can file a Subject Access Request (SAR) to obtain personal data held on them. So Carroll, a U.S. citizen, decided to bring a test case by requesting his data even though he is not a UK citizen — having learnt Cambridge Analytica had processed his personal data in the U.K.

He lodged his original SAR in January 2017 after becoming suspicious about the company’s claim to have built profiles of every U.S. voter.

Cambridge Analytica responded to the SAR in March 2017 but only sent partial data. So Carroll complained to the ICO which backed his request — issuing an enforcement notice on SCL Elections in May 2018, days after the (now) scandal-hit company announced it was shutting down.

The company pulled the plug on its business in the wake of the Facebook data misuse scandal, when it emerged SCL had paid an academic with developer access to Facebook’s platform to harvest data on millions of users without proper consent in a bid to create psychological profiles of U.S. voters for election campaign purposes.

The story snowballed into a global scandal for Facebook and triggered a major (and still ongoing) investigation by the ICO into how online data is used for political campaigning.

It also led the ICO to hit Facebook with a £500,000 fine last year (the maximum possible under the relevant UK data protection law), although the company is appealing.

The SCL prosecution is an important one, cementing the fact that anyone who requests their personal information from a U.K.-based company or organisation is legally entitled to have that request answered, in full, under national data protection law — regardless of whether they’re a British citizen or not.

Commenting in a statement, information commissioner Elizabeth Denham said: “This prosecution, the first against Cambridge Analytica, is a warning that there are consequences for ignoring the law. Wherever you live in the world, if your data is being processed by a UK company, UK data protection laws apply.

“Organisations that handle personal data must respect people’s legal privacy rights. Where that does not happen and companies ignore ICO enforcement notices, we will take action.”

The Daily Beast reports that at today’s hearing at Hendon magistrates’ court, the court was told that the administrators of Cambridge Analytica and its related companies have now provided relevant passwords to the ICO.

Cambridge Analytica had previously failed to supply these passwords.

This means the regulator should be able to gain access to more of the data it seized when it raided the company’s London offices in March last year. So it’s at least possible Carroll’s SAR might eventually be fulfilled that way, i.e. by the regulator sifting through the circa 700TB of data it seized.

However Carroll told TechCrunch he’s hoping for a faster route to get to the truth of exactly what the company did with his data, telling us there’s still “a March court event that could yield our end goal: Disclosure”.

“Why would they rather plead guilty to a criminal offense instead of complying with disclosure required by UK DPA ‘98? What are they hiding? Why has it come to this?” he added.

“Testing the Subject Access Request in this way is an important exercise. Do regulators and companies really know how to fully execute a Subject Access Request? How about when it escalates to a matter of international importance?”

Seized cache of Facebook docs raise competition and consent questions

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained through a legal discovery process by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of key issues, as the committee sees them after reviewing the documents, in which he draws attention to six issues.

Here is his summary of the key issues:

  1. White Lists Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.
  2. Value of friends data It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.
  3. Reciprocity Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  4. Android Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  5. Onavo Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  6. Targeting competitor Apps The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

The timing of Facebook’s policy shift announcement hardly looks incidental, though — given Collins said last week that the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent‘, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

In one email dated November 15, 2013, Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection, writing: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook which the documents could stir up afresh relates to its repeat claim — including under questions from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.

Zuckerberg rejects facetime call for answers from five parliaments

Facebook has declined once again to send its CEO to the UK parliament — this time turning down an invitation to face questions from a grand committee comprised of representatives from five international parliaments.

MPs from Argentina, Australia, Canada, Ireland and the UK have joined forces to try to pile pressure on the company’s founder, Mark Zuckerberg, to answer questions related to his “platform’s malign use in world affairs and democratic process”.

The UK’s Digital, Culture, Media and Sport committee, which has been running an enquiry into online disinformation for the best part of this year, revealed the latest Facebook snub yesterday. It put out the grand committee call for facetime with Zuckerberg last week.

In the latest rejection letter to DCMS, Facebook writes: “Thank you for the invitation to appear before your Grand Committee. As we explained in our letter of November 2nd, Mr Zuckerberg is not able to be in London on November 27th for your hearing and sends his apologies.”

“We remain happy to cooperate with your inquiry as you look at issues related to false news and elections,” the company’s UK head of public policy, Rebecca Stimson, adds, before going on to summarize “some of the things we have been doing at Facebook over the last year”.

This boils down to a list of Facebook activities and related research that intersects with the topics of election interference, political ads, disinformation and security, but without offering any new information of substance or data points that could be used to measure and quantify the company’s actions.

The letter does not explain why Zuckerberg is unavailable to speak to the committee remotely, e.g. via video call.

Responding to the latest snub, DCMS chair Damian Collins expressed disappointment and vowed to keep up the pressure.

“Facebook’s letter is, once again, hugely disappointing,” he writes. “We believe Mark Zuckerberg has important questions to answer about what he knew about breaches of data protection law involving their customers’ personal data and why the company didn’t do more to identify and act against known sources of disinformation; and in particular those coming from agencies in Russia.

“The fact that he has continually declined to give evidence, not just to my committee, but now to an unprecedented international grand committee, makes him look like he’s got something to hide.”

“We will not let the matter rest there, and are not reassured in any way by the corporate puff piece that passes off as Facebook’s letter back to us,” Collins adds. “The fact that the University of Michigan believes that Facebook’s ‘Iffy Quotient’ scores have recently improved means nothing to the victims of Facebook data breaches.

“We will continue with our planning for the international grand committee on 27th November, and expect to announce shortly the names of additional representatives who will be joining us and our plans for the hearing.”

Facebook must change and policymakers must act on data, warns UK watchdog

The UK’s data watchdog has warned that Facebook must overhaul its privacy-hostile business model or risk burning user trust for good.

Comments she made today have also raised questions over the legality of so-called lookalike audiences to target political ads at users of its platform.

Information commissioner Elizabeth Denham was giving evidence to the Digital, Culture, Media and Sport committee in the UK parliament this morning. She’s just published her latest report to parliament, on the ICO’s (still ongoing) investigation into the murky world of data use and misuse in political campaigns.

Since May 2017 the watchdog has been pulling on myriad threads attached to the Cambridge Analytica Facebook data misuse scandal — to, in the regulator’s words, “follow the data” across an entire ecosystem of players; from social media firms to data brokers to political parties, and indeed beyond to other still unknown actors with an interest in also getting their hands on people’s data.

Denham readily admitted to the committee today that the sprawling piece of work had opened a major can of worms.

“I think we were astounded by the amount of data that’s held by all of these agencies — not just social media companies but data companies like Cambridge Analytica; political parties the extent of their data; the practices of data brokers,” she said.

“We also looked at universities, and the data practices in the Psychometric Centre, for example, at Cambridge University — and again I think universities have more to do to control data between academic researchers and the same individuals that are then running commercial companies.

“There’s a lot of switching of hats across this whole ecosystem — that I think there needs to be clarity on who’s the data controller and limits on how data can be shared. And that’s a theme that runs through our whole report.”

“The major concern that I have in this investigation is the very disturbing disregard that many of these organizations across the entire ecosystem have for personal privacy of UK citizens and voters. So if you look across the whole system that’s really what this report is all about — and we have to improve these practices for the future,” she added. “We really need to tighten up controls across the entire ecosystem because it matters to our democratic processes.”

Asked whether she would personally trust her data to Facebook, Denham told the committee: “Facebook has a long way to go to change practices to the point where people have deep trust in the platform. So I understand social media sites and platforms and the way we live our lives online now is here to stay but Facebook needs to change, significantly change their business model and their practices to maintain trust.”

“I understand that platforms will continue to play a really important role in people’s lives but they need to take much greater responsibility,” she added when pressed to confirm that she wouldn’t trust Facebook.

A code of practice for lookalike audiences

In another key portion of the session Denham confirmed that inferred data is personal data under the law. (Although of course Facebook has a different legal interpretation of this point.)

Inferred data refers to inferences made about individuals based on data-mining their wider online activity — such as identifying a person’s (non-stated) political views by examining which Facebook Pages they’ve liked. Facebook offers advertisers an interests-based tool to do this — by creating so-called lookalike audiences comprised of users with similar interests.

But if the information commissioner’s view of data protection law is correct, it implies that use of such tools to infer individuals’ political views could be in breach of European privacy law, unless explicit consent is gained beforehand for people’s personal data to be used for that purpose.
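For the mechanically curious, here is a rough sketch of how that sort of inference can work. To be clear: this is an illustrative toy in Python, not Facebook’s actual algorithm (which is not public), and every user and Page in it is invented. It simply shows how a handful of Page likes can make someone targetable as if they had declared a political view.

    # Illustrative toy only -- not Facebook's actual (unpublished) algorithm.
    # All users and Pages below are invented.

    def jaccard(a, b):
        """Overlap between two sets of liked Pages (0 = none, 1 = identical)."""
        return len(a & b) / len(a | b)

    # Which Pages each (fictional) user has liked.
    likes = {
        "alice": {"GreenFuture", "BikeCity", "FairTradeNow"},
        "bob":   {"GreenFuture", "BikeCity", "LocalNews"},
        "carol": {"LowTaxLeague", "LocalNews", "GolfWeekly"},
        "dave":  {"GreenFuture", "FairTradeNow", "VeganEats"},
    }

    def lookalikes(seed_users, all_users, threshold=0.3):
        """Rank non-seed users by how closely their likes match the seed audience."""
        seed_likes = set().union(*(all_users[u] for u in seed_users))
        scored = [
            (user, round(jaccard(pages, seed_likes), 2))
            for user, pages in all_users.items()
            if user not in seed_users and jaccard(pages, seed_likes) >= threshold
        ]
        return sorted(scored, key=lambda pair: -pair[1])

    # Seed the tool with one known supporter; similar users fall out.
    print(lookalikes({"alice"}, likes))
    # -> [('bob', 0.5), ('dave', 0.5)] -- neither user ever stated a political
    #    view, yet their likes make them targetable as if they had.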

“What’s happened here is the model that’s familiar to people in the commercial sector — or behavioural targeting — has been transferred, I think transformed, into the political arena,” said Denham. “And that’s why I called for an ethical pause so that we can get this right.

“I don’t think that we want to use the same model that sells us holidays and shoes and cars to engage with people and voters. I think that people expect more than that. This is a time for a pause, to look at codes, to look at the practices of social media companies, to take action where they’ve broken the law.”

She told MPs that the use of lookalike audiences should be included in a Code of Practice which she has previously called for vis-a-vis political campaigns’ use of data tools.

Social media platforms should also disclose the use of lookalike audiences for targeting political ads at users, she said today — a data-point that Facebook has nonetheless omitted to include in its newly launched political ad disclosure system.

“The use of lookalike audiences should be made transparent to the individuals,” she argued. “They need to know that a political party or an MP is making use of lookalike audiences, so I think the lack of transparency is problematic.”

Asked whether the use of Facebook lookalike audiences to target political ads at people who have chosen not to publicly disclose their political views is legal under current EU data protection laws, she declined to make an instant assessment — but told the committee: “We have to look at it in detail under the GDPR but I’m suggesting the public is uncomfortable with lookalike audiences and it needs to be transparent.”

We’ve reached out to Facebook for comment.

Links to known cyber security breaches

The ICO’s latest report to parliament and today’s evidence session also lit up a few new nuggets of intel on the Cambridge Analytica saga, including the fact that some of the misused Facebook data — which had found its way to Cambridge University’s Psychometric Centre — was not only accessed from IP addresses that resolve to Russia, but that some of those IP addresses have been linked to other known cyber security breaches.

“That’s what we understand,” Denham’s deputy, James Dipple-Johnstone told the committee. “We don’t know who is behind those IP addresses but what we understand is that some of those appear on lists of concern to cyber security professionals by virtue of other types of cyber incidents.”

“We’re still examining exactly what data that was, how secure it was and how anonymized,” he added, saying it’s “part of an active line of enquiry”.

The ICO has also passed the information on “to the relevant authorities”, he added.
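The mechanics of that sort of cross-check are mundane enough. Purely as an illustration, and with no knowledge of the ICO’s actual tooling, here is a minimal Python sketch of matching access-log IPs against published threat-intelligence lists; all addresses and flagged ranges are invented:

    # Invented addresses and ranges -- a sketch, not the ICO's actual process.
    import ipaddress

    # IPs seen accessing the data set (hypothetical).
    access_log_ips = ["203.0.113.7", "198.51.100.23", "192.0.2.41"]

    # Network ranges flagged in (fictional) threat-intelligence feeds
    # for involvement in previous cyber incidents.
    flagged_networks = [
        ipaddress.ip_network("203.0.113.0/24"),
        ipaddress.ip_network("192.0.2.0/25"),
    ]

    for ip_str in access_log_ips:
        ip = ipaddress.ip_address(ip_str)
        matches = [net for net in flagged_networks if ip in net]
        if matches:
            print(f"{ip} appears in flagged range(s): {matches}")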

The regulator also revealed that it now knows exactly who at Facebook was aware of the Cambridge Analytica breach at the earliest instance — saying it has internal emails related to the issue which have “quite a large distribution list”. Although it’s still not been made public whether or not Mark Zuckerberg’s name is on that list.

Facebook’s CTO previously told the committee the person with ultimate responsibility where data misuse is concerned is Zuckerberg — a point the Facebook founder has also made personally (just never to this committee).

When pressed on whether Zuckerberg was on the distribution list for the breach emails, Denham declined to confirm today, saying “we just don’t want to get it wrong”.

The ICO said it would pass the list to the committee in due course.

Which means it shouldn’t be too long before we know exactly who at Facebook was responsible for not disclosing the Cambridge Analytica breach to relevant regulators (and indeed parliamentarians) sooner.

The committee is pressing on this point because Facebook gave earlier evidence to its online disinformation enquiry yet omitted to mention the Cambridge Analytica breach entirely. (Hence its accusation that senior management at Facebook deliberately withheld pertinent information.)

Denham agreed it would have been best practice for Facebook to notify relevant regulators at the time it became aware of the data misuse — even without the GDPR’s new legal requirement being in force then.

She also agreed with the committee that it would be a good idea for Zuckerberg to personally testify to the UK parliament.

Last week the committee issued yet another summons for the Facebook founder — this time jointly with a Canadian committee which has also been investigating the same knotted web of social media data misuse.

Though Facebook has yet to confirm whether or not Zuckerberg will make himself available this time.

How to regulate Internet harms?

This summer the ICO announced it would be issuing Facebook with the maximum penalty possible under the country’s old data protection regime for the Cambridge Analytica data breach.

At the same time Denham also called for an ethical pause on the use of social media microtargeting of political ads, saying there was an urgent need for “greater and genuine transparency” about the use of such technologies and techniques to ensure “people have control over their own data and that the law is upheld”.

She reiterated that call for an ethical pause today.

She also said the fine the ICO handed Facebook last month for the Cambridge Analytica breach would have been “significantly larger” under the rebooted privacy regime ushered in by the pan-EU GDPR framework this May — adding that it would be interesting to see how Facebook responds to the fine (i.e. whether it pays up or tries to appeal).

“We have evidence… that Cambridge Analytica may have partially deleted some of the data but even as recently as 2018, Spring, some of the data was still there at Cambridge Analytica,” she told the committee. “So the follow up was less than robust. And that’s one of the reasons that we fined Facebook £500,000.”

Data deletion assurances that Facebook had sought from various entities after the data misuse scandal blew up don’t appear to be worth the paper they’re written on — with the ICO also noting that some of these confirmations had not even been signed.

Dipple-Johnstone also said the ICO believes that a number of additional individuals and academic institutions received “parts” of the Cambridge Analytica Facebook data-set — i.e. additional to the multiple known entities in the saga so far (such as GSR’s Aleksandr Kogan, and CA whistleblower Chris Wylie).

“We’re examining exactly what data has gone where,” he said, noting the ICO is looking into “about half a dozen” entities — but declining to name names while its enquiry remains ongoing.

Asked for her views on how social media should be regulated by policymakers to rein in data abuses and misuses, Denham suggested a system-based approach that looks at effectiveness and outcomes — saying it boils down to accountability.

“What is needed for tech companies — they’re already subject to data protection law but when it comes to the broader set of Internet harms that your committee is speaking about — misinformation, disinformation, harm to children in their development, all of these kinds of harms — I think what’s needed is an accountability approach where parliament sets the objectives and the outcomes that are needed for the tech companies to follow; that a Code of Practice is developed by a regulator; backstopped by a regulator,” she suggested.

“What I think’s really important is the regulators looking at the effectiveness of systems like takedown processes; recognizing bots and fake accounts and disinformation — rather than the regulator taking individual complaints. So I think it needs to be a system approach.”

“I think the time for self regulation is over. I think that ship has sailed,” she also told the committee.

On the regulatory powers front, Denham was generally upbeat about the potential of the new GDPR framework to curb bad data practices — pointing out that not only does it allow for supersized fines but companies can be ordered to stop processing data, which she suggested is an even more potent tool to control rogue data-miners.

She also suggested another new power — to go in and inspect companies and conduct data audits — will help it get results.

But she said the ICO may need to ask parliament for another tool to be able to carry out effective data investigations. “One of the areas that we may be coming back to talk to parliament, to talk to government about is the ability to compel individuals to be interviewed,” she said, adding: “We have been frustrated by that aspect of our investigation.”

Both the former CEO of Cambridge Analytica, Alexander Nix, and Kogan, the academic who built the quiz app used to extract Facebook user data so it could be processed for political ad targeting purposes, had refused to appear for an interview with the regulator under caution, she said today.

On the wider challenge of regulating a full range of “Internet harms” — spanning the spread of misinformation, disinformation and also offensive user-generated content — Denham suggested a hybrid regulatory model might ultimately be needed to tackle this, adding that the ICO and communications regulator Ofcom might work together.

“It’s a very complex area. No country has tackled this yet,” she conceded, noting the controversy around Germany’s social media take down law, and adding: “It’s very challenging for policymakers… Balancing privacy rights with freedom of speech, freedom of expression. These are really difficult areas.”

Asked what her full ‘can of worms’ investigation has highlighted for her, Denham summed it up as: “A disturbing amount of disrespect for personal data of voters and prospective voters.”

“The main purpose of this [investigation] is to pull back the curtain and show the public what’s happening with their personal data,” she added. “The politicians, the policymakers need to think about this too — stronger rules and stronger laws.”

One committee member suggestively floated the idea of social media platforms being required to have an ICO officer inside their organizations — to grease their compliance with the law.

Smiling, Denham responded that it would probably make for an uncomfortable prospect on both sides.

Big tech must not reframe digital ethics in its image

Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data. The eponymous social network has been at the center of a privacy storm this year. And […]

Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.

The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.

The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.

So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.

As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”

As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.

Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.

But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR manoeuvring that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.

(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)

And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.

This was Facebook trying to reframe the idea of digital ethics — to make it so very big-picture-y that it could embrace its people-tracking ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.

“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”

Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.

And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)

It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.

A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.

“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.

Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.

The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.

“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.

“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society. That’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”

Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.

It’s almost as if the Internet’s adtech duopoly is singing from the same ‘ads for greater good trumping the bad’ hymn sheet. Except there’s no ‘as if’ about it: the duopoly is doing exactly that.

The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.

They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.

Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.

This is a burden neither company can address in any other fashion, because the real solution would be for their platforms not to hoard people’s data in the first place.

The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.

Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.

Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.

A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.

They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.

“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.

“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”

Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).

She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.

She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.

But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.

Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.

But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.

Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.

This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.

Given that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.

Not so long as the platform retains the power to cause damage at scale.

It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.

“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.

“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”

Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years, is now working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.

On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.

Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.

The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.

His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.

“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.

“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”

“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”

For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.

Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.

The adtech modus operandi amounts to “surveillance”, Cook asserted.

Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.

“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.

Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.

The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.

So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.

“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.

It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.

No profit making entity should be used as the model for where the ethical line should lie.

Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.

One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.

DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.

But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.

There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.

One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.

Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.

And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time’ — when ‘better’ is not even close to being enough.

Call for collective action

Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts: that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.

Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)

Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.

“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think will be a good idea as a component of our society.”

The penny looks to be dropping for privacy watchdogs in Europe: the idea that assessing fairness — not just legal compliance — must be a key component of their thinking going forward, and so shape the direction of regulatory travel.

Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”

“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.

“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”

So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.

Rising automation and AI make ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.

The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.

Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.

Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)

Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.

The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.

The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.

In the meantime civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.

It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.

Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.

People and civic society must teach them.

A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.

She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.

The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.

She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.

This is vital work to defend values and rights against the overreach of the digital here and now.

“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”

If the conference had one short sharp message it was this: Society must wake up to technology — and fast.

“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.

“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”

This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.

But, equally, not doing the job is unthinkable — because there’s no putting the AI genie back in the bottle.

Audit Facebook and overhaul competition law, say MEPs responding to breach scandals

After holding a series of hearings in the wake of the Facebook -Cambridge Analytica data misuse scandal this summer, and attending a meeting with Mark Zuckerberg himself in May, the European Union parliament’s civil liberties committee has called for an update to competition rules to reflect what it dubs “the digital reality”, urging EU institutions […]

After holding a series of hearings in the wake of the Facebook-Cambridge Analytica data misuse scandal this summer, and attending a meeting with Mark Zuckerberg himself in May, the European Union parliament’s civil liberties committee has called for an update to competition rules to reflect what it dubs “the digital reality”, urging EU institutions to look into the “possible monopoly” of big tech social media platforms.

Top level EU competition law has not touched on the social media axis of big tech yet, with the Commission concentrating recent attention on mobile chips (Qualcomm); and mobile and ecommerce platforms (mostly Google; but Amazon’s use of merchant data is in its sights too); as well as probing Apple’s tax structure in Ireland.

But last week Europe’s data protection supervisor, Giovanni Buttarelli, told us that closer working between privacy regulators and the EU’s Competition Commission is on the cards, as regional lawmakers look to evolve their oversight frameworks to respond to growing ethical concerns about use and abuse of big data, and indeed to be better positioned to respond to fast-paced technology-fuelled change.

Local EU antitrust regulators, including in Germany and France, have also been investigating the Google-Facebook adtech duopoly on several fronts in recent years.

The Libe committee’s call is the latest political call to spin up and scale up antitrust effort and attention around social media. 

The committee also says it wants to see much greater accountability and transparency on “algorithmic-processed data by any actor, be it private or public” — signalling a belief that GDPR does not go far enough on that front.

Libe committee chair and rapporteur, MEP Claude Moraes, has previously suggested the Facebook Cambridge Analytica scandal could help inform and shape an update to Europe’s ePrivacy rules, which remain at the negotiation stage with disagreements over scope and proportionality.

But every big tech data breach and security scandal lends weight to the argument that stronger privacy rules are indeed required.

In yesterday’s resolution, the Libe committee also called for an audit of the advertising industry on social media — echoing a call made by the UK’s data protection watchdog, the ICO, this summer for an ‘ethical pause‘ on the use of online ads for political purposes.

The ICO made that call right after announcing it planned to issue Facebook with the maximum fine possible under UK data protection law — again for the Cambridge Analytica breach.

While the Cambridge Analytica scandal — in which the personal information of as many as 87 million Facebook users was extracted from the platform without people’s knowledge or consent, and passed to the now defunct political consultancy (which used it to create psychographic profiles of US voters for election campaigning purposes) — has triggered this latest round of political scrutiny of the social media behemoth, last month Facebook revealed another major data breach, affecting at least 50M users — underlining the ongoing challenge it faces to live up to claims of having ‘locked the platform down’.

In light of both breaches, the Libe committee has now called for EU bodies to be allowed to fully audit Facebook — to independently assess its data protection and security practices.

Buttarelli also told us last week that it’s his belief none of the tech giants are directing adequate resource at keeping user data safe.

And with Facebook having already revealed a second breach that’s potentially even larger than Cambridge Analytica, fresh focus and political attention is falling on the substance of its security practices, not just its claims.

While the Libe committee’s MEPs say they have taken note of steps Facebook made in the wake of the Cambridge Analytica scandal to try to improve user privacy, they point out it has still not carried out the promised full internal audit.

Facebook has never said how long this historical app audit will take. Though it has given some progress reports, such as detailing additional suspicious activity it has found to date, with 400 apps suspended at the last count. (One app, called myPersonality, also got banned for improper data controls.)

The Libe committee is now urging Facebook to allow the EU Agency for Network and Information Security (ENISA) and the European Data Protection Board, which plays a key role in applying the region’s data protection rules, to carry out “a full and independent audit” — and present the findings to the European Commission and Parliament and national parliaments.

It has also recommended that Facebook makes “substantial modifications to its platform” to comply with EU data protection law.

We’ve reached out to Facebook for comment on the recommendations — including specifically asking the company whether it’s open to an external audit of its platform.

At the time of writing Facebook had not responded to our question but we’ll update this report with any response.

Commenting in a statement, Libe chair Moraes said: “This resolution makes clear that we expect measures to be taken to protect citizens’ right to private life, data protection and freedom of expression. Improvements have been made since the scandal, but, as the Facebook data breach of 50 million accounts showed just last month, these do not go far enough.”

The committee has also made a series of proposals for reducing the risk of social media being used as an attack vector for election interference — including:

  • applying conventional “off-line” electoral safeguards, such as rules on transparency and limits to spending, respect for silence periods and equal treatment of candidates;
  • making it easy to recognize online political paid advertisements and the organisation behind them;
  • banning profiling for electoral purposes, including use of online behaviour that may reveal political preferences;
  • requiring social media platforms to label content shared by bots, and to speed up the process of removing fake accounts;
  • compulsory post-campaign audits to ensure personal data are deleted;
  • investigations by member states with the support of Eurojust if necessary, into alleged misuse of the online political space by foreign forces.

A couple of weeks ago, the Commission outed a voluntary industry Code of Practice aimed at tackling online disinformation, which several tech platforms and adtech companies had agreed to sign up to, and which also presses for action in some of the same areas — including fake accounts and bots.

However the code is not only voluntary, it does not bind signatories to any specific policy steps or processes. So its effectiveness will likely be difficult to quantify, and its accountability will lack bite.

A UK parliamentary committee which has been probing political disinformation this year also put out a report this summer with a package of proposed measures — including some similar ideas, plus a suggested levy on social media to ‘defend democracy’.

Meanwhile Facebook itself has been working on increasing transparency around advertisers on its platform, and putting in place some authorization requirements for political advertisers (though starting in the US first).

But few politicians appear ready to trust that the steps Facebook is taking will be enough to avoid a repeat of, for example, the mass Kremlin propaganda smear campaign that targeted the 2016 US presidential election.

The Libe committee has also urged all EU institutions, agencies and bodies to verify that their social media pages, and any analytical and marketing tools they use, “should not by any means put at risk the personal data of citizens”.

And it goes as far as suggesting that EU bodies could even “consider closing their Facebook accounts” — as a measure to protect the personal data of every individual contacting them.

The committee’s full resolution was passed by 41 votes to 10, with 1 abstention, and will be put to a vote by the full EU Parliament during the next plenary session later this month.

In it, the Libe committee also renews its call for the suspension of the EU-US Privacy Shield.

The data transfer arrangement, which is used by thousands of businesses to authorize transfers of EU users’ personal data across the Atlantic, is under growing pressure ahead of an annual review this month, as the Trump administration has entirely failed to respond in the way EU lawmakers had hoped their US counterparts would when the agreement was inked in the Obama era, back in 2016.

The EU parliament also called for Privacy Shield to be suspended this summer. And while the Commission did not act on those calls, pressure has continued to mount from MEPs and EU consumer and digital and civil rights bodies.

During the Privacy Shield review process this month the Commission will be pressuring US counterparts to try to gain concessions that it can sell back home as ‘compliance’.

But without very major concessions — and who would bank on that, given the priorities of the current US administration — the future of the precariously placed mechanism looks increasingly uncertain.

Even as more oversight of social media platforms, with new rules coming down the pipe, looks all but inevitable in Europe.

ePrivacy: An overview of Europe’s other big privacy rule change

Gather round. The EU has a plan for a big update to privacy laws that could have a major impact on current Internet business models. Um, I thought Europe just got some new privacy rules? They did. You’re thinking of the General Data Protection Regulation (GDPR), which updated the European Union’s 1995 Data Protection Directive […]

Gather round. The EU has a plan for a big update to privacy laws that could have a major impact on current Internet business models.

Um, I thought Europe just got some new privacy rules?

They did. You’re thinking of the General Data Protection Regulation (GDPR), which updated the European Union’s 1995 Data Protection Directive — most notably by making the penalties for compliance violations much larger.

But there’s another piece of the puzzle — intended to ‘complete’ GDPR but which is still in train.

Or, well, sitting in the sidings being mobbed by lobbyists, as seems to currently be the case.

It’s called the ePrivacy Regulation.

ePrivacy Regulation, eh? So I guess that means there’s already an ePrivacy Directive then…

Indeed. Clever cookie. That’s the 2002 ePrivacy Directive to be precise, which was amended in 2009 (but is still just a directive).

Remind me what’s the difference between an EU Directive and a Regulation again… 

A regulation is a more powerful legislative instrument for EU lawmakers as it’s binding across all Member States and immediately comes into legal force on a set date, without needing to be transposed into national laws. In a word it’s self-executing.

Whereas, with a directive, Member States get a bit more flexibility because it’s up to them how they implement the substance of the thing. They could adapt an existing law or create a new one, for example.

With a regulation the deliberation happens among EU institutions and, once that discussion and negotiation process has concluded, the agreed text becomes law across the bloc — at the set time, and without necessarily requiring further steps from Member States.

So regulations are powerful.

So there’s more legal consistency with a regulation? 

In theory. Greater harmonization of data protection rules is certainly an impetus for updating the EU’s legal framework around privacy.

Although, in the case of GDPR, Member States did in fact need to update their national data protection laws to make certain choices allowed for in the framework, and identify competent national data enforcement agencies. So there’s still some variation.

Strengthening the rules around privacy and making enforcement more effective are other general aims for the ePrivacy Regulation.

Europe has had robust privacy rules for many years but enforcement has been lacking.

Another point of note: Where data protection law is concerned, national agencies need to be properly resourced to be able to enforce rules, or that could undermine the impact of regulation.

It’s up to Member States to do this, though GDPR essentially requires it (and the Commission is watching).

Europe’s data protection supervisor, Giovanni Buttarelli, sums up the current resourcing situation for national data protection agencies, as: “Not bad, not enough. But much better than before.”

But why does Europe need another digital privacy law? Why isn’t GDPR enough?

There is some debate about that, and not everyone agrees with the current approach. But the general idea is that GDPR deals with general (personal) data.

Whereas the proposed update to ePrivacy rules is intended to supplement GDPR — addressing in detail the confidentiality of electronic communications, and the tracking of Internet users more broadly.

So the (draft) ePrivacy Regulation covers marketing, and a whole raft of tracking technologies (including but not just cookies); and is intended to combat problems like spam, as well as respond to rampant profiling and behavioral advertising by requiring transparency and affirmative consent.

One major impulse behind the reform of the rules is to expand the scope to not just cover telcos but reflect how many communications now travel ‘over the top’ of cellular networks, via Internet services.

This means ePrivacy could apply to all sorts of tech firms in future — be it Skype, Facebook or Google — and quite possibly plenty more, given how many apps and services include some ability for users to communicate with each other.

But scope remains one of the contested areas, with critics arguing the regulation could have a disproportionate impact if — for example — every app with a chat function is going to be caught by the rules.

On the communications front, the updated rules would not just cover message content but metadata too (to respond to how that gets tracked). Aka pieces of data that might not be personal data per se yet certainly pertain to privacy once they are wrapped up in and/or associated with people’s communications.

Although metadata tracking is also used for analytics, for wider business purposes than just profiling users, so you can see the challenge of trying to fashion rules to fit around all this granular background activity.
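To make ‘metadata’ concrete, here is a purely illustrative sketch, in Python since it reads like annotated data, of the sort of record an over-the-top messaging service might hold for a single message. Every field name and value below is invented; no particular service is being described.

    # Purely illustrative: the sort of record an 'over the top' messaging
    # service might hold for one message, even with the content encrypted.
    message = {
        "content": "<end-to-end encrypted blob>",  # already confidential
        "metadata": {                              # what the update would also cover
            "sender": "user-4821",
            "recipient": "user-9377",
            "timestamp": "2018-10-24T09:14:03Z",
            "approx_location": "Brussels, BE",
            "device": "iPhone",
        },
    }

    # No field above contains message content, yet together the fields can
    # reveal who talks to whom, when, from where and how often.
    print(sorted(message["metadata"]))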

Simplifying problematic existing EU cookie consent rules — which have been widely mocked for generating pretty pointless web page clutter — has also been a core part of the Commission’s intention for the update.

EU lawmakers also want the regulation to cover machine to machine comms — to regulate privacy around the still emergent IoT (Internet of Things), to keep pace with the rise of smart home technologies.

Those are some of the high level aims but there have been multiple proposed texts and revisions at this point so goalposts have been shifting around.

So whereabouts in the process are we?

The Commission’s original reform proposal came out in January 2017. More than a year and a half later EU institutions are still stuck trying to reach a consensus. It’s not even 100% certain whether ePrivacy will pass or founder in the attempt at this point.

The underlying problem is really the scope of exploitation of consumers’ online activity going on in the areas ePrivacy seeks to regulate — which is now firmly baked into dominant digital business models — so trying to rule over all that after the fact of mainstream operational execution is a recipe for co-ordinated industry objection and frenzied lobbying. Of which there has been an awful lot.

At the same time, consumer protection groups in Europe are more clear than ever that ePrivacy should be a vehicle for further strengthening the data protection framework put in place by GDPR — pointing out, for example, that data misuse scandals like the Facebook-Cambridge Analytica debacle show that data-driven business models need closer checks to protect consumers and ensure people’s rights are respected.

Safe to say, the two sides couldn’t be further apart.

Like GDPR, the proposed ePrivacy Regulation would also apply to companies offering services in Europe not only those based in Europe. And it also includes major penalties for violations (of up to 2% or 4% of a company’s global annual turnover) — similarly intended to bolster enforcement and support more consistently applied EU privacy rules.
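For a sense of scale, here is a quick worked example, assuming the draft mirrors GDPR’s two-tier penalty structure (a fixed ceiling or a percentage of global annual turnover, whichever is higher). The turnover figure is invented:

    # Assumes the draft mirrors GDPR's two-tier penalty structure; the
    # turnover figure is invented for illustration.
    def max_fine(turnover_eur, tier="upper"):
        """Ceiling of a fine for a company with the given global annual turnover."""
        tiers = {
            "lower": (10_000_000, 0.02),  # up to EUR 10M or 2% of turnover
            "upper": (20_000_000, 0.04),  # up to EUR 20M or 4% of turnover
        }
        floor, pct = tiers[tier]
        return max(floor, turnover_eur * pct)  # whichever is higher

    # A company turning over EUR 40BN a year faces a top-tier ceiling of
    # EUR 1.6BN -- versus the GBP 500k maximum under the UK's pre-GDPR regime.
    print(f"{max_fine(40_000_000_000):,.0f}")  # 1,600,000,000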

But given the complexity of the proposals, and disagreements over scope and approach, having big fines baked in further complicates the negotiations — because lobbyists can argue that substantial financial penalties should not be attached to ‘ambiguous’ laws and disputed regulatory mechanisms.

The high cost of getting the update wrong is not so much concentrating minds as causing alarms to be yanked and brakes applied. With the risk of no progress at all looking like an increasing possibility.

One thing is clear: The existing ePrivacy rules are outdated and it’s not helpful to have old rules undermining a state-of-the-art data protection framework.

Telcos have also rightly complained it’s not fair for tech giants to be able to operate messaging empires without shouldering the same compliance burdens that telcos themselves carry.

Just don’t assume telcos love the proposed update either. It’s complicated.

Sounds very messy. 

Indeed.

EU lawmakers could probably have dealt with updating both privacy-related directives together, or even in one ‘super regulation’, but they decided to separate the work to try to simplify the process. In retrospect that looks like a mistake.

On the plus side, it means GDPR is now locked in place — with Buttarelli saying the new framework is intended to stand for as long as its predecessor.

Less good: One shiny world-class data protection framework is having to work alongside a set of rules long past their sell-by date.

So, so much for consistency.

Buttarelli tells us he thinks it was a mistake not to do both updates together, describing the blocks being thrown up to try to derail ePrivacy reform as “unacceptable”.

“I would like to say very clearly that the EU made a mistake in not updating earlier the rules for confidentiality for electronic communications at the same time as general data protection,” he told us during an interview this week, about GDPR enforcement, data ethics and the future of EU privacy regulation.

He argues the patchwork of new and old rules “doesn’t work for data controllers” either, as they’re the ones saddled with dealing with the legal inconsistency.

As Europe’s data protection supervisor, Buttarelli is of course trying to apply pressure on key parties — to “get to the table and start immediately trilogue negotiations to identify a sustainable outcome”.

But the nature of lawmaking across a bloc of 28 Member States is often slow and painful. Certainly no single entity can force progress; it must be achieved via negotiated consensus and compromise across the various institutions and interests.

And when the interest groups are so far apart, well, it’s sweaty toil, to put it mildly.

Entities that don’t want to play ball with a particular legal reform can also throw a spanner in the works by delaying negotiations. Which is what looks to be going on with ePrivacy right now.

The EU parliament confirmed its negotiating mandate on the reform almost a year ago now. But MEPs were then stuck waiting for Member States to take a position and get around the discussion table.

Except Member States seemingly weren’t so keen. Some were probably a bit preoccupied with Brexit.

Currently implicated as an ePrivacy blocker: Austria, which holds the six-month rotating presidency of the EU Council — meaning it gets to set priorities, and can thus kick issues into the long grass (as its right-wing government appears to be doing with ePrivacy). And so the wait goes on.

It now looks like a bit of a divide-and-conquer situation for anti-privacy lobbyists, who — having failed to derail GDPR — are throwing all their energies at blocking, diluting or even derailing the ePrivacy reform.

Some Member States appear to be trying to attack ePrivacy to weaken the overarching framework of GDPR too. So yes, it’s got very messy indeed.

There’s an added complication around timing because the EU parliament is up for re-election next Spring, and a few months after that the executive Commission will itself turn over, as the current president does not intend to seek reappointment. So it will be all change for the EU, politically speaking, in 2019.

A reconfigured political landscape could then change the entire conversation around ePrivacy. So the current delay could prove fatal unless agreement can be reached in early 2019.

Some EU lawmakers had hoped the reform could be done and dusted in time to come into force at the same time as GDPR, this May.

That was certainly a major miscalculation.

But what’s all the disagreement about?

That depends on who you ask. There are many contested issues, depending on the interests of the group you’re talking to.

Media and publishing industry associations are terrified about what they say ePrivacy could do to their ad-supported business models, given their reliance on cookies and tracking technologies to try to monetize free content via targeted ads — and so they claim it could destroy journalism as we know it if consumers have to opt in to being tracked.

The ad industry is also of course screaming about ePrivacy as if its hair’s on fire. Big tech included, though it has generally preferred to lobby via proxies on this issue.

Anything that could impede adtech’s ability to track and thus behaviourally target ads at web users is clearly enemy number one, given the current modus operandi. So ePrivacy is a major lobbying target for the likes of the IAB who don’t want it to upend their existing business models.

Even telcos aren’t happy, despite the regulation’s potential to level the playing field somewhat with the tech giants — suggesting they will end up with double the regulatory burden, as well as moaning that it will make it harder for them to make the investments needed to roll out 5G networks.

Plus, as I say, there also seem to be some efforts to use ePrivacy as a vector to attack and weaken GDPR itself.

Buttarelli had comments to make on this front too, describing some data controllers as being in post-GDPR “revenge mode”.

“They want to move in sort of a vendetta, vendetta — and get back what they lose with the GDPR. But while I respect honest lobbying about which pieces of ePrivacy are not necessary I think ePrivacy will help first small businesses, and not necessarily the big tech startups. And where done properly ePrivacy may give more power to individuals. It may make harder for big tech to snoop on private conversations without meaningful consent,” he told us, appealing to Europe’s publishing industry to get behind the reform process, rather than applying pressure at the Member State level to try to derail it — given the media hardly feels well treated by big tech.

He even makes this appeal to local adtech players — which aren’t exactly enamoured with the dominance of big tech either.

“I see space for market incentives,” he added. “For advertisers and publishers to, let’s say, re-establish direct relations with their readers and customers. And not have to accept the terms dictated by the major platform intermediaries. So I don’t see any other argument to discourage that we have a deal before the elections in May next year of the European legislators.”

There’s no doubt this is a challenging sell though, given how embedded all these players are with the big platforms. So it remains to be seen whether ePrivacy can be talked back on track.

Major progress is certainly very unlikely before 2019.

I’m still not sure why it’s so important though.  

The privacy of personal communications is a fundamental right in Europe. So there’s a need for the legal framework to defend against technological erosion of citizens’ rights.

Add to that, a big part of the problem with the modern adtech industry — aside from the core lack of genuine consent — is its opacity. Who’s doing what; for what specific purposes; and with what exact outcomes.

Existing European privacy rules like GDPR mean there’s more transparency than there’s ever been about what’s going on — if you know how to, and can be bothered to, dig down into privacy policies and their stated purposes.

If you do, you might, for example, discover a very long list of companies that your data is being shared with (and even be able to switch off that sharing) — entities with weird sounding names like Outbrain and OpenX.

A privacy policy might even state a per-company purpose like ‘Advertising exchange’ or ‘Advertising’. Or ‘Customer interaction’, whatever that means.

Thing is, it’s often still very difficult for a consumer to understand what a lot of these companies are really doing with their data.

Thanks to current EU laws, we now have the greatest level of transparency there has ever been about the mechanisms underpinning Internet business models. And yet so much remains murky.

The average Internet user is very likely none the wiser. Can profiling them without proper consent really be fair?

GDPR sets out an expectation of privacy by design and default. Following that principle, you could argue that cookie consent should be default opt-out — with any website required to gain an affirmative opt-in from a visitor before setting any tracking cookies. The adtech industry would certainly disagree, though.
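To make that default opt-out idea concrete, here’s a minimal sketch of consent-gated tracking on a web page — the storage key, helper names and script URL are all illustrative assumptions, not any particular vendor’s API:

```typescript
// Minimal 'privacy by default' consent gate. No tracking code loads
// unless the visitor has affirmatively opted in; the absence of any
// stored choice behaves exactly like a refusal.
const CONSENT_KEY = "tracking-consent"; // illustrative storage key

function hasOptedIn(): boolean {
  return localStorage.getItem(CONSENT_KEY) === "granted";
}

function recordConsentChoice(granted: boolean): void {
  // Wired up to a consent banner's accept/decline buttons.
  localStorage.setItem(CONSENT_KEY, granted ? "granted" : "denied");
  if (granted) loadAnalytics();
}

function loadAnalytics(): void {
  // Hypothetical tracker; the URL is a placeholder.
  const script = document.createElement("script");
  script.src = "https://analytics.example/tracker.js";
  document.head.appendChild(script);
}

// On page load: tracking runs only if consent was previously granted.
if (hasOptedIn()) loadAnalytics();
```

The design point is that the no-choice state and the ‘denied’ state are indistinguishable to the tracker: in both cases, nothing loads.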

The original ePrivacy proposal took a somewhat mixed approach to consent, which was accused of being too overbearing for some technologies and not strong enough for others.

It’s not just creepy tech giants implicated here either. Publishers and the media (TechCrunch included) are very much caught up in the unpleasant tracking mess, complicit in darting users with cookies and trackers to try to increase what remain fantastically low conversion rates for digital ads.

Most of the time, most Internet users ignore most ads. So — with horribly wonky logic — the behavioral advertising industry, which has been able to grow like a weed because EU privacy rights have not previously been actively enforced, has made it its mission to suck up (and indeed buy up) more and more user data to try to move the ad conversion needle a fraction.

The media is especially desperate because the web has also decimated traditional business models. And European lawmakers can be very sensitive to publishing industry concerns (see, for example, their backing of the controversial copyright reforms publishers have been pushing for).

Meanwhile Google and Facebook are gobbling up the majority of online ad spending, leaving publishers fighting for crumbs and stuck having to do business with the platforms that have so sorely disrupted them.

Platforms they can’t at all control but which are now so popular and powerful they can (and do) algorithmically control the visibility of publishers’ content.

It’s not a happy combination. Well, unless you’re Facebook or Google.

Meanwhile, for web users just wanting to go about their business and do all the stuff people can (and sometimes need to) do online, things have got very bad indeed.

Unless, that is, you ignore the fact you’re being creeped on almost all the time by snoopy entities that double as intelligence traders, selling info on what you like or don’t, so that an unseen adtech collective can build highly detailed profiles of you to try to manipulate your online transactions and purchasing decisions — sometimes with discriminatory impacts.

The rise in popularity of ad blockers illustrates quite how little consumers enjoy being ad-stalked around the Internet.

More recently, tracker blockers have been springing up to try to beat back the adtech vampire octopus, which lards the average webpage with myriad data-sucking tentacles — impeding page load times and gobbling bandwidth in the process, in addition to abusing people’s privacy.
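For a rough sense of how such blockers work under the hood, here’s a minimal sketch of a browser extension that cancels requests to known tracker domains via the WebExtensions webRequest API — the domain list is purely illustrative, and real blockers use far more sophisticated filter lists and matching:

```typescript
// Sketch of a tracker blocker using the WebExtensions webRequest API.
// The domains below are placeholders, not a real blocklist.
declare const browser: any; // WebExtensions global (e.g. via webextension-polyfill)

const TRACKER_DOMAINS = ["tracker.example", "pixel.example"];

browser.webRequest.onBeforeRequest.addListener(
  (details: { url: string }) => {
    const host = new URL(details.url).hostname;
    // Cancel any request to a listed tracker domain or its subdomains,
    // so the tracking script or pixel never loads at all.
    const blocked = TRACKER_DOMAINS.some(
      (d) => host === d || host.endsWith("." + d)
    );
    return { cancel: blocked };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```

Blocking at the network-request level is also why these tools speed pages up: the third-party scripts and pixels simply never get fetched.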

There’s also out-and-out malicious stuff to be found here, as the increasing complexity, opacity and sprawl of the adtech industry’s surveillance apparatus (combined with its general lack of interest in, and/or focus on, security) offers rich and varied vectors for cyber attack.

And so ads and gnarly page elements sometimes come bundled or injected with actual malware, as hackers exploit all this infrastructure for their own ends and launch man-in-the-middle attacks to grab user data as it’s being routinely siphoned off for tracking purposes.

It’s truly a layer cake of suck.

Ouch. 

The ePrivacy Regulation could, in theory, help change this by supporting alternative business models that don’t use people-tracking as their fuel — putting the emphasis back where it should be: respect for privacy.

The (seemingly) radical idea underlying all these updates to European privacy legislation is that respecting people’s privacy increases consumers’ trust in online services — and that trust actually greases the wheels of ecommerce and innovation, because web users are more comfortable doing stuff online when they don’t feel they’re under creepy surveillance.

More than that — you can lay down a solid foundation of trust for the next generation of disruptive technologies to build on.

Technologies like IoT and driverless cars.

Because, well, if consumers hate to feel like websites are spying on them, imagine how disgusted they’ll be to realize their fridge, toaster, kettle and TV are all complicit in snitching. Ditto their connected car.

‘I see you’re driving past McDonald’s. Great news! They have a special on those chocolate donuts you scoffed a whole box of last week…’

Ugh. 

Yeah…

So what are ePrivacy’s chances at this point? 

It’s hard to say but things aren’t looking great right now.

Buttarelli describes himself as “relatively optimistic” about getting an agreement by May, i.e. before the EU parliament elections, but that may well be wishful thinking.

Even if he’s right, there would likely still need to be an implementation period before the regulation comes into force — so the new rules aren’t likely to be up and running before 2020.

Yet he also describes the ePrivacy Regulation as “an essential missing piece of the jigsaw”.

Getting that piece in place is not going to be easy though.