Seized cache of Facebook docs raises competition and consent questions

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained through a legal discovery process by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of the key issues, as the committee sees them after reviewing the documents.

Here is that summary:

  1. White Lists: Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.
  2. Value of friends data: It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers’ relationship with Facebook is a recurring feature of the documents.
  3. Reciprocity: Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  4. Android: Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  5. Onavo: Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  6. Targeting competitor apps: The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Though the timing of Facebook’s policy shift announcement hardly looks coincidental — given Collins said last week that the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent’, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.
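For a sense of what that permission surface actually looks like, here is a minimal, hypothetical Kotlin sketch of the runtime prompt an Android app must trigger (on Android 6.0 and later) before it can read call logs or SMS metadata; on apps targeting older Android API levels these grants were bundled into the install-time consent screen, which is broadly the behaviour critics pointed to. The class and method names below are illustrative only and are not taken from Facebook’s code.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Hypothetical example only, not Facebook's code. The permissions must also be
// declared in AndroidManifest.xml for the request to succeed.
class ContactSyncActivity : AppCompatActivity() {

    private val permissionRequestCode = 42

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        requestCallAndSmsAccess()
    }

    private fun requestCallAndSmsAccess() {
        val wanted = arrayOf(
            Manifest.permission.READ_CALL_LOG,
            Manifest.permission.READ_SMS
        )
        // Find which of the sensitive permissions have not yet been granted.
        val missing = wanted.filter {
            ContextCompat.checkSelfPermission(this, it) != PackageManager.PERMISSION_GRANTED
        }
        if (missing.isNotEmpty()) {
            // On Android 6.0+ this pops the system permission dialog. Bundling it
            // inside a broader "keep syncing your contacts" opt-in flow is the kind
            // of design pattern the committee's fourth point describes.
            ActivityCompat.requestPermissions(this, missing.toTypedArray(), permissionRequestCode)
        }
    }
}
```

The point of the sketch is simply that the grants themselves are two short identifiers in a system dialog; whether users understand what they are agreeing to depends almost entirely on how the surrounding screens are worded.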

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

In one email, dated November 15, 2013, Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection, writing: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook which the documents could stir up afresh relates to its repeated claim — including under questioning from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should have been.

“The problem is Facebook,” lawmakers from nine countries tell Zuckerberg’s accountability stand-in

A grand committee of international parliamentarians empty-chaired Mark Zuckerberg at a hearing earlier today, after the Facebook founder snubbed repeat invitations to face questions about malicious, abusive and improper uses of his social media platform — including the democracy-denting impacts of so-called ‘fake news’.

The UK’s DCMS committee has been leading the charge to hold Facebook to account for data misuse scandals and election interference — now joined in the effort by international lawmakers from around the world. But still not by Zuckerberg himself.

In all, parliamentarians from nine countries were in the room to put awkward questions to Zuckerberg’s stand-in, policy VP Richard Allan — including asking what Facebook is doing to stop WhatsApp being used as a vector to spread political disinformation in South America; why Facebook refused to remove a piece of highly inflammatory anti-Muslim hate speech in Sri Lanka until the country blocked access to its platform; how Facebook continues to track non-users in Belgium and how it justifies doing so under Europe’s tough new GDPR framework; and, more generally, why anyone should have any trust in anything the company says at this point — with the company neck-deep in privacy and trust scandals.

The elected representatives were collectively speaking up for close to 450 million people across the UK, Argentina, Belgium, Brazil, Canada, France, Ireland, Latvia and Singapore. The most oft-repeated question on their lips: why wasn’t Zuckerberg there?

Allan looked uncomfortable on his absentee boss’ behalf and spent the best part of three hours running the gamut of placatory hand gestures as he talked about wanting to work with regulators to find “the right regulation” to rein in social media’s antisocial, anti-democratic impacts.

Canadian MP Bob Zimmer spoke for the room, cutting into another bit of Allan’s defensive pablum with: “Here we are again hearing another apology from Facebook — ‘look trust us, y’all regulate us etc but we really don’t have that much influence in the global scheme of things’. In this room we regulate over 400M people and to not have your CEO sit in that chair there is an offence to all of us in this room and really our citizens as well.”

“[BlackBerry co-founder] Jim Balsillie said, when I asked him on our committee, is our democracy at risk if we don’t change the laws in Canada to deal with surveillance capitalism?” Zimmer continued. “He said without a doubt. What do you think?” — which Allan took as a cue to ummm his way into another series of “we need tos”, and talk of “a number of problematic vectors” Facebook is trying to address with a number of “tools”.

The session was largely filled up with such frustratingly reframed waffle, as Allan sought to deflect, defang and defuse the committee’s questions — leading it to accuse him more than once of repeating the ‘delay, deny, deflect’ tactics recently reported on by the New York Times.

Allan claimed not — saying he was there “acknowledging” problems. But that empty chair beside him sure looked awkward.

At the close, Canada’s Charlie Angus sought to sweep Facebook’s hot air away by accusing Allan of distracting with symptoms — to draw the regulatory eye away from the root cause of the problem which he sharply defined as Facebook itself.

“The problem we have with Facebook is there’s never accountability — so I would put it to you when we talk about regulation that perhaps the best regulation would be antitrust,” he said. “Because people who don’t like Facebook — oh they could go to WhatsApp. But oh we have some problems in South America, we have problems in Africa, we have to go back to Mr Zuckerberg who’s not here.

“My daughters could get off Facebook. But they’d go to Instagram. But that’s now controlled by Facebook. Perhaps the simplest form of regulation would be to break Facebook up — or treat it as a utility so that we could all then feel that when we talk about regulation we’re talking about allowing competition, counting metrics that are actually honest and true, and that Facebook has broken so much trust to allow you to simply gobble up every form of competition is probably not in the public interest.

“So when we’re talking about regulation would you be interested in asking your friend Mr Zuckerberg if we could have a discussion about antitrust?”

Allan reached for an “it depends upon the problem we’re trying to solve” reply.

“The problem is Facebook,” retorted Angus. “We’re talking about symptoms but the problem is the unprecedented economic control of every form of social discourse and communication. That it’s Facebook. That that is the problem that we need to address.”

Committee chair Damian Collins also gave short shrift to Allan’s attempt to muddily reframe this line of questioning — as regulators advocating “turning off the Internet” (instead of what Angus was actually advocating: a way to get “credible democratic responses from a corporation”) — by interjecting: “I think we would also distinguish between the Internet and Facebook to say they’re not necessarily the same thing.”

The room affirmed its accord with that.

At the start of the session Collins revealed the committee would not — at least for now — be publishing the cache of documents it dramatically seized this weekend from the founder of a startup that’s been suing Facebook since 2015, saying it was “not in a position to do that”.

Although at several points during the session DCMS committee members appeared to tease some new details derived from these documents, asking for example whether Facebook had ever made API decisions for developers contingent on them taking advertising on its platform.

Allan said it had not — and appeared to be attempting to suggest that the emails the committee might have been reading were the result of ‘normal’ internal business discussions about how to evolve Facebook’s original desktop-based business model for the mobile-first era.

Collins did detail one piece of new information that he categorically identified as having been sourced from the seized documents — and specifically from an internal email sent by a Facebook engineer, dating from October 2014 — describing this to be of significant public interest.

“An engineer at Facebook notified the company in October 2014 that entities with Russian IP addresses had been using a Pinterest API key to pull over 3BN data points a day through the ordered friends API,” he revealed, asking Allan whether that was “reported to any external body at the time”.

The Facebook VP responded by characterizing the information contained in the seized documents as “partial”, on account of being sourced via a “hostile litigant”.

“I don’t want you to use this opportunity just to attack the litigant,” retorted Collins. “I want you to address the question… what internal process [Facebook] ran when this was reported to the company by an engineer? And did they notify external agencies of this activity? Because if Russian IP addresses were pulling down a huge amount of data from the platform — was that reported or was that just kept, as so often seems to be the case, just kept within the family and not talked about.”

“Any information you have seen that’s contained within that cache of emails is at best partial and at worst potentially misleading,” responded Allan.

“On the specific question of whether or not we believe, based on our subsequent investigations, that there was activity by Russians at that time I will come back to you.”

We reached out to Pinterest to ask whether Facebook ever informed it about such an abuse of its API key. At the time of writing it had not responded to our request for comment.

UK parliament seizes cache of internal Facebook documents to further privacy probe

Facebook founder Mark Zuckerberg may yet regret underestimating a UK parliamentary committee that’s been investigating the democracy-denting impact of online disinformation for the best part of this year — and whose repeat requests for facetime he’s just as repeatedly snubbed.

In the latest high gear change, reported in yesterday’s Observer, the committee has used parliamentary powers to seize a cache of documents pertaining to a US lawsuit to further its attempt to hold Facebook to account for misuse of user data.

Facebook’s oversight — or rather lack of it — where user data is concerned has been a major focus for the committee, as its enquiry into disinformation and data misuse has unfolded and scaled over the course of this year, ballooning in scope and visibility since the Cambridge Analytica story blew up into a global scandal this April.

The internal documents now in the committee’s possession are alleged to contain significant revelations about decisions made by Facebook senior management vis-a-vis data and privacy controls — including confidential emails between senior executives and correspondence with Zuckerberg himself.

This has been a key line of enquiry for parliamentarians. And an equally frustrating one — with committee members accusing Facebook of being deliberately misleading and concealing key details from it.

The seized files pertain to a US lawsuit that predates mainstream publicity around political misuse of Facebook data, with the suit filed in 2015 by a US startup called Six4Three after Facebook removed developer access to friend data. (As we’ve previously reported, Facebook was actually being warned about data risks related to its app permissions as far back as 2011 — yet it didn’t fully shut down the friends data API until May 2015.)

The core complaint is an allegation that Facebook enticed developers to create apps for its platform by implying they would get long-term access to user data in return. The claim is that, by later cutting off that data access, Facebook was effectively defrauding developers.

Since lodging the complaint, the plaintiffs have seized on the Cambridge Analytica saga to try to bolster their case.

And in a legal motion filed in May Six4Three’s lawyers claimed evidence they had uncovered demonstrated that “the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones”.

The startup used legal powers to obtain the cache of documents — which remain under seal on order of a California court. But the UK parliament used its own powers to swoop in and seize the files from the founder of Six4Three during a business trip to London when he came under the jurisdiction of UK law, compelling him to hand them over.

According to the Observer, parliament sent a serjeant at arms to the founder’s hotel — giving him a final warning and a two-hour deadline to comply with its order.

“When the software firm founder failed to do so, it’s understood he was escorted to parliament. He was told he risked fines and even imprisonment if he didn’t hand over the documents,” it adds, apparently revealing how Facebook lost control over some more data (albeit, its own this time).

In comments to the newspaper yesterday, DCMS committee chair Damian Collins said: “We are in uncharted territory. This is an unprecedented move but it’s an unprecedented situation. We’ve failed to get answers from Facebook and we believe the documents contain information of very high public interest.”

Collins later tweeted the Observer’s report on the seizure, teasing “more next week” — likely a reference to the grand committee hearing in parliament already scheduled for November 27.

But it could also be a hint the committee intends to reveal and/or make use of information locked up in the documents, as it puts questions to Facebook’s VP of policy solutions…

That said, the documents are subject to the Californian superior court’s seal order, so — as the Observer points out — cannot be shared or made public without risk of being found in contempt of court.

A spokesperson for Facebook made the same point, telling the newspaper: “The materials obtained by the DCMS committee are subject to a protective order of the San Mateo Superior Court restricting their disclosure. We have asked the DCMS committee to refrain from reviewing them and to return them to counsel or to Facebook. We have no further comment.”

Facebook’s spokesperson added that Six4Three’s “claims have no merit”, further asserting: “We will continue to defend ourselves vigorously.”

And, well, the irony of Facebook asking for its data to remain private also shouldn’t be lost on anyone at this point…

Another irony: In July, the Guardian reported that as part of Facebook’s defence against Six4Three’s suit the company had argued in court that it is a publisher — seeking to have what it couched as ‘editorial decisions’ about data access protected by the US’ first amendment.

Which is — to put it mildly — quite the contradiction, given Facebook’s long-standing public characterization of its business as just a distribution platform, never a media company.

So expect plenty of fireworks at next week’s public hearing as parliamentarians once again question Facebook over its various contradictory claims.

It’s also possible the committee will have been sent an internal email distribution list by then, detailing who at Facebook knew about the Cambridge Analytica breach in the earliest instance.

This list was obtained by the UK’s data watchdog, over the course of its own investigation into the data misuse saga. And earlier this month information commissioner Elizabeth Denham confirmed the ICO has the list and said it would pass it to the committee.

The accountability net does look to be closing in on Facebook management.

Even as Facebook continues to deny international parliaments any face-time with its founder and CEO (the EU parliament remains the sole exception).

Last week the company refused to even have Zuckerberg do a video call to take the committee’s questions — offering its VP of policy solutions, Richard Allan, to go before what’s now a grand committee comprised of representatives from seven international parliaments instead.

The grand committee hearing will take place in London on Tuesday morning, British time — followed by a press conference in which parliamentarians representing Facebook users from across the world will sign a set of ‘International Principles for the Law Governing the Internet’, making “a declaration on future action”.

So it’s also ‘watch this space’ where international social media regulation is concerned.

As noted above, Allan is just the latest stand-in for Zuckerberg. Back in April the DCMS committee spent the best part of five hours trying to extract answers from Facebook CTO Mike Schroepfer.

“You are doing your best but the buck doesn’t stop with you does it? Where does the buck stop?” one committee member asked him then.

“It stops with Mark,” replied Schroepfer.

But Zuckerberg definitely won’t be stopping by on Tuesday.

Zuckerberg rejects facetime call for answers from five parliaments

Facebook has declined once again to send its CEO to the UK parliament — this time turning down an invitation to face questions from a grand committee comprised of representatives from five international parliaments.

MPs from Argentina, Australia, Canada, Ireland and the UK have joined forces to try to pile pressure on the company’s founder, Mark Zuckerberg, to answer questions related to his “platform’s malign use in world affairs and democratic process”.

The UK’s Digital, Culture, Media and Sport committee, which has been running an enquiry into online disinformation for the best part of this year, revealed the latest Facebook snub yesterday. It put out the grand committee call for facetime with Zuckerberg last week.

In the latest rejection letter to DCMS, Facebook writes: “Thank you for the invitation to appear before your Grand Committee. As we explained in our letter of November 2nd, Mr Zuckerberg is not able to be in London on November 27th for your hearing and sends his apologies.”

“We remain happy to cooperate with your inquiry as you look at issues related to false news and elections,” the company’s UK head of public policy, Rebecca Stimson, adds, before going on to summarize “some of the things we have been doing at Facebook over the last year”.

This boils down to a list of Facebook activities and related research that intersects with the topics of election interference, political ads, disinformation and security, but without offering any new information of substance or data points that could be used to measure and quantify the company’s actions.

The letter does not explain why Zuckerberg is unavailable to speak to the committee remotely, e.g. via video call.

Responding to the latest snub, DCMS chair Damian Collins expressed disappointment and vowed to keep up the pressure.

“Facebook’s letter is, once again, hugely disappointing,” he writes. “We believe Mark Zuckerberg has important questions to answer about what he knew about breaches of data protection law involving their customers’ personal data and why the company didn’t do more to identify and act against known sources of disinformation; and in particular those coming from agencies in Russia.

“The fact that he has continually declined to give evidence, not just to my committee, but now to an unprecedented international grand committee, makes him look like he’s got something to hide.”

“We will not let the matter rest there, and are not reassured in any way by the corporate puff piece that passes off as Facebook’s letter back to us,” Collins adds. “The fact that the University of Michigan believes that Facebook’s ‘Iffy Quotient’ scores have recently improved means nothing to the victims of Facebook data breaches.

“We will continue with our planning for the international grand committee on 27th November, and expect to announce shortly the names of additional representatives who will be joining us and our plans for the hearing.”

Where’s the accountability, Facebook?

Facebook has yet again declined an invitation for its founder and CEO Mark Zuckerberg to answer international politicians’ questions about how disinformation spreads on his platform and undermines democratic processes.

But policymakers aren’t giving up — and have upped the ante by issuing a fresh invitation signed by representatives from another three national parliaments. So the call for global accountability is getting louder.

Now representatives from a full five parliaments have signed up to an international grand committee calling for answers from Zuckerberg, with Argentina, Australia and Ireland joining the UK and Canada to try to pile political pressure on Facebook.

The UK’s Digital, Culture, Media and Sport (DCMS) committee has been asking for Facebook’s CEO to attend its multi-month enquiry for the best part of this year, without success…

The twist in its last request was that it came not just from the DCMS inquiry into online disinformation but also from the Canadian Standing Committee on Access to Information, Privacy and Ethics.

This year policymakers on both sides of the Atlantic have been digging down the rabbit hole of online disinformation — before and since the Cambridge Analytica story erupted into a major global scandal — announcing last week they will form an ‘international grand committee’ to further their enquiries.

The two committees will convene for a joint hearing in the UK parliament on November 27 — and they want Zuckerberg to join them to answer questions related to the “platform’s malign use in world affairs and democratic process”, as they put it in their invitation letter.

Facebook has previously despatched a number of less senior representatives to talk to policymakers probing the damage caused by disinformation — including its CTO, Mike Schroepfer, who went before the DCMS committee in April.

But both Schroepfer and Zuckerberg have admitted the accountability buck stops with Facebook’s CEO.

The company’s nine-month-old ‘Privacy Principles’ also makes the following claim [emphasis ours]:

We are accountable

In addition to comprehensive privacy reviews, we put products through rigorous data security testing. We also meet with regulators, legislators and privacy experts around the world to get input on our data practices and policies.

The increasingly pressing question, though, is to whom is Facebook actually accountable?

Zuckerberg went personally to the US House and Senate to face policymakers’ questions in April. He also attended a meeting of the EU parliament’s Conference of Presidents in May.

But the rest of the world continues being palmed off with minions. Despite some major, major harms.

Facebook’s 2BN+ user platform does not stop at the US border. And Zuckerberg himself has conceded the company probably wouldn’t be profitable without its international business.

Yet so far only the supranational EU parliament has managed to secure a public meeting with Facebook’s CEO. And MEPs there had to resort to heckling Zuckerberg to try to get answers to their actual questions.

“Facebook say that they remain “committed to working with our committees to provide any additional relevant information” that we require. Yet they offer no means of doing this,” tweeted DCMS chair Damian Collins today, reissuing the invitation for Zuckerberg. “The call for accountability is growing, with representatives from 5 parliaments now meeting on the 27th.”

The letter to Facebook’s CEO notes that the five nations represent 170 million Facebook users.

“We call on you once again to take up your responsibility to Facebook users, and speak in person to their elected representatives,” it adds.

The UK’s information commissioner said yesterday that Facebook needs to overhaul its business model, giving evidence to parliament on the “unprecedented” data investigation her office has been running which was triggered by the Cambridge Analytica scandal. She also urged policymakers to strengthen the rules on the use of people’s data for digital campaigning.

Last month the European parliament also called for Facebook to let in external auditors in the wake of Cambridge Analytica, to ensure users’ data is being properly protected — yet another invitation Facebook has declined.

Meanwhile an independent report assessing the company’s human rights impact in Myanmar — which Facebook commissioned but chose to release yesterday on the eve of the US midterms when most domestic eyeballs would be elsewhere — agreed with the UN’s damning assessment that Facebook did not do enough to prevent its platform from being used to incite ethnic violence.

The report also said Facebook is still not doing enough in Myanmar.

Zuckerberg gets joint summons from UK and Canadian parliaments

Two separate parliamentary committees, in the UK and Canada, have issued an unprecedented international joint summons for Facebook’s CEO Mark Zuckerberg to appear before them.

The committees are investigating the impact of online disinformation on democratic processes and want Zuckerberg to answer questions related to the Cambridge Analytica-Facebook user data misuse scandal, which both have been probing this year.

More broadly, they are also seeking greater detail about Facebook’s digital policies and information governance practices — not least, in light of fresh data breaches — as they continue to investigate the democratic impacts and economic incentives related to the spread of online disinformation via social media platforms.

In a letter sent to the Facebook founder today, the chairs of the UK’s Digital, Culture, Media and Sport (DCMS) committee and the Canadian Standing Committee on Access to Information, Privacy and Ethics (SCAIPE), Damian Collins and Bob Zimmer respectively, write that they intend to hold a “special joint parliamentary hearing at the Westminster Parliament”, on November 27 — to form an “‘international grand committee’ on disinformation and fake news”.

“This will be led by ourselves but a number of other parliaments are likely to be represented,” they continue. “No such joint hearing has ever been held. Given your self-declared objective to “fix” Facebook, and to prevent the platform’s malign use in world affairs and democratic process, we would like to give you the chance to appear at this hearing.”

Both committees say they will be issuing their final reports into online disinformation by the end of December.

The DCMS committee has already put out a preliminary report this summer, following a number of hearings with company representatives and data experts, in which it called for urgent action from government to combat online disinformation and defend democracy — including suggesting it look at a levy on social media platforms to fund educational programs in digital literacy.

Although the UK government has so far declined to seize on the bulk of the committee’s recommendations — apparently preferring a ‘wait and gather evidence’ (and/or ‘kick a politically charged issue into the long grass’) approach.

Meanwhile, Canada’s interest in the democratic damage caused by so-called ‘fake news’ has been sharpened by the fact that AIQ — the data company linked to Cambridge Analytica as one of its data handlers and system developers, and described by CA whistleblower Chris Wylie as essentially a division of his former employer — is located on its soil.

The SCAIPE committee has already held multiple, excoriating sessions interrogating executives from AIQ, which have been watched with close interest by at least some lawmakers across the Atlantic…

At the same time the DCMS committee has tried and failed repeatedly to get Facebook’s CEO before it during the course of its multi-month inquiry into online disinformation. Instead Facebook despatched a number of less senior staffers, culminating with its CTO — Mike Schroepfer — who spent around five hours being roasted by visibly irate committee members. And whose answers left it still unsatisfied.

Yet as political concern about election interference has stepped up steeply this year, Zuckerberg did attend sessions in the US Senate and House in April — to face (but not necessarily answer) policymakers’ questions.

He also appeared before a meeting of the EU parliament’s council of presidents — where he was heckled for dodging MEPs’ specific concerns.

But the UK parliament has been consistently snubbed. At the last, the DCMS committee resorted to saying it would issue Zuckerberg with a formal summons the next time he stepped on UK soil (and of course he hasn’t).

They’re now trying a different tack — in the form of a grand coalition of international lawmakers. From two — and possibly more — countries.

While the chairs of the UK and Canadian committees say they understand Zuckerberg cannot make himself available “to all parliaments” they argue Facebook’s users in other countries “need a line of accountability to your organisation — directly, via yourself”, adding: “We would have thought that this responsibility is something that you would want to take up. We both plan to issue final reports on this issue by the end of this December, 2018. The hearing of your evidence is now overdue, and urgent.”

“We call on you to take up this historic opportunity to tell parliamentarians from both sides of the Atlantic and beyond about the measures Facebook is taking to halt the spread of disinformation on your platform, and to protect user data,” they also write.

So far though, where non-domestic lawmakers are concerned, it’s only been elected representatives of the European Union’s 28 Member States who have proved to have enough collective political clout and pulling power to secure a little facetime with Zuckerberg.

So another Facebook snub seems the most likely response to the latest summons.

“We’ve received the committee’s letter and will respond to Mr Collins by his deadline,” a Facebook spokesperson told us when asked whether it would be despatching Zuckerberg this time.

The committee has given Facebook until November 7 to reply.

Perhaps the company will send its new global policy chief, Nick Clegg — who would at least be an all-too familiar face to Westminster lawmakers, having previously served as the UK’s deputy PM.

Even if Collins et al’s latest gambit still doesn’t net them Zuckerberg, the international coalition approach the two committees are now taking is interesting, given the challenges for many governments of regulating global platforms like Facebook whose user bases can scale bigger than some entire nations.

If the committees were to recruit lawmakers from additional countries to their joint hearing — Myanmar, for example, where Facebook’s platform has been accused of accelerating ethnic violence — such an invitation might be rather harder for Zuckerberg to ignore.

After all, Facebook does claim: “We are accountable.” And Zuckerberg is its CEO. (Though it does not state who exactly Facebook/Zuckerberg feels accountable to.)

While forming a joint international committee is a new tactic, UK and Canadian lawmakers and regulatory bodies have been working together for many months now — as part of their respective inquiries and investigations, and as they’ve sought to unpick complex data trails and understand transnational corporate structures.

One thing is increasingly clear when looking at the tangled web where politics and social media collide (with mass opinion manipulation the intended outcome): The interconnected, cross-border nature of the Internet, when meshed with well-funded digital political campaigning — and indeed buckets of personal data — is now placing huge strain on traditional legal structures at the nation-state level.

National election laws reliant on regulating things like campaign spending and joint working, as the UK’s laws are supposed to, simply won’t work unless you can actually follow the money and genuinely map the relationships.

And where use of personal data for online political ad-targeting is concerned, ethics must be front and center — as the UK’s data watchdog has warned.

Fake news ‘threat to democracy’ report gets back-burner response from UK gov’t

The UK government has rejected a parliamentary committee’s call for a levy on social media firms to fund digital literacy lessons to combat the impact of disinformation online.

The recommendation of a levy on social media platforms was made by the Digital, Culture, Media and Sport committee three months ago, in a preliminary report following a multi-month investigation into the impact of so-called ‘fake news’ on democratic processes.

The committee has also suggested the terms ‘misinformation’ and ‘disinformation’ be used instead, to better pin down exact types of problematic inauthentic content — and on that at least the government agrees. But just not on very much else. At least not yet.

Among around 50 policy suggestions in the interim report — which the committee put out early precisely in order to call for “urgent action” to ‘defend democracy’ — it urged the government to put forward proposals for an education levy on social media.

But in its response, released by the committee today, the government writes that it is “continuing to build the evidence base on a social media levy to inform our approach in this area”.

“We are aware that companies and charities are undertaking a wide range of work to tackle online harms and would want to ensure we do not negatively impact existing work,” it adds, suggesting it’s most keen not to be accused of making a tricky problem worse.

Earlier this year the government did announce plans to set up a dedicated national security unit to combat state-led disinformation campaigns, with the unit expected to monitor social media platforms to support faster debunking of online fakes — by being able to react more quickly to co-ordinated interference efforts by foreign states.

But going a step further and requiring social media platforms themselves to pay a levy to fund domestic education programs — to arm citizens with critical thinking capabilities so people can more intelligently parse content being algorithmically pushed at them — is not, apparently, forming part of the government’s current thinking.

Though it is not taking the idea of some form of future social media tax off the table entirely, as it continues seeking ways to make big tech pay a fairer share of earnings into the public purse, also noting in its response: “We will be considering any levy in the context of existing work being led by HM Treasury in relation to corporate tax and the digital economy.”

As a whole, the government’s response to the DCMS committee’s laundry list of policy recommendations around the democratic risks of online disinformation can be summed up in a word as ‘cautious’ — with only three of the report’s forty-two recommendations being accepted outright, as the committee tells it, and four fully rejected.

Most of the rest are being filed under ‘come back later — we’re still looking into it’.

So if you take the view that ‘fake news’ online has already had a tangible and worrying impact on democratic debate the government’s response will come across as underwhelming and lacking in critical urgency. (Though it’s hardly alone on that front.)

The committee has reacted with disappointment — with chair Damian Collins dubbing the government response “disappointing and a missed opportunity”, and also accusing ministers of hiding behind ‘ongoing investigations’ to avoid commenting on the committee’s call that the UK’s National Crime Agency urgently carry out its own investigation into “allegations involving a number of companies”.

Earlier this month Collins also called for the Met Police to explain why they had not opened an investigation into Brexit-related campaign spending breaches.

It has also this month emerged that the force will not examine claims of Russian meddling in the referendum.

Meanwhile the political circus and business uncertainty triggered by the Brexit vote goes on.

Holding pattern

The bulk of the government’s response to the DCMS interim report entails flagging a number of existing and/or ongoing consultations and reviews — such as the ‘Protecting the Debate: Intimidation, Influence and Information’ consultation, which it launched this summer.

But by saying it’s continuing to gather evidence on a number of fronts the government is also saying it does not feel it’s necessary to rush through any regulatory responses to technology-accelerated, socially divisive/politically sensitive viral nonsense — claiming also that it hasn’t seen any evidence that malicious misinformation has been able to skew genuine democratic debate on the domestic front.

It’ll be music to Facebook’s ears given the awkward scrutiny the company has faced from lawmakers at home and, indeed, elsewhere in Europe — in the wake of a major data misuse scandal with a deeply political angle.

The government also points multiple times to a forthcoming oversight body which is in the process of being established — aka the Centre for Data Ethics and Innovation — saying it expects this to grapple with a number of the issues of concern raised by the committee, such as ad transparency and targeting; and to work towards agreeing best practices in areas such as “targeting, fairness, transparency and liability around the use of algorithms and data-driven technologies”.

Identifying “potential new regulations” is another stated role for the future body. Though given it’s not yet actively grappling with any of these issues, the UK’s democratically concerned citizens are simply being told to wait.

“The government recognises that as technological advancements are made, and the use of data and AI becomes more complex, our existing governance frameworks may need to be strengthened and updated. That is why we are setting up the Centre,” the government writes, still apparently questioning whether legislative updates are needed — this in a response to the committee’s call, informed by its close questioning of tech firms and data experts, for an oversight body to be able to audit “non-financial” aspects of technology companies (including security mechanisms and algorithms) to “ensure they are operating responsibly”.

“As set out in the recent consultation on the Centre, we expect it to look closely at issues around the use of algorithms, such as fairness, transparency, and targeting,” the government continues, noting that details of the body’s initial work program will be published in the fall — when it says it will also put out its response to the aforementioned consultation.

It does not specify when the ethics body will be in any kind of position to hit this shifting ground running. So again there’s zero sense the government intends to act at a pace commensurate with the fast-changing technologies in question.

Then, where the committee’s recommendations touch on the work of existing UK oversight bodies, such as the Competition and Markets Authority, the ICO data watchdog, the Electoral Commission and the National Crime Agency, the government dodges specific concerns by suggesting it’s not appropriate for it to comment “on independent bodies or ongoing investigations”.

Also notable: It continues to reject entirely the idea that Russian-backed disinformation campaigns have had any impact on domestic democratic processes at all — despite public remarks by prime minister Theresa May last year generally attacking Putin for weaponizing disinformation for election interference purposes.

Instead it writes:

We want to reiterate, however, that the Government has not seen evidence of successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. But we are not being complacent and the Government is actively engaging with partners to develop robust policies to tackle this issue.

Its response on this point also makes no mention of the extensive use of social media platforms to run political ads targeting the 2016 Brexit referendum.

Nor does it make any note of the historic lack of transparency of such ad platforms. Which means that it’s simply not possible to determine where all the ad money came from to fund digital campaigning on domestic issues — with Facebook only just launching a public repository of who is paying for political ads and badging them as such in the UK, for example.

The elephant in the room is of course that ‘lack of evidence’ is not necessarily evidence of a lack of success, especially when it’s so hard to extract data from opaque adtech platforms in the first place.

Moreover, just this week fresh concerns have been raised about how platforms like Facebook are still enabling dark ads to target political messages at citizens — without it being transparently clear who is actually behind and paying for such campaigns…

In turn triggering calls from opposition MPs for updates to UK election law…

Yet the government, busily embroiled as it still is with trying to deliver some kind of Brexit outcome, is seemingly unconcerned by all this unregulated, background ongoing political advertising.

It also directly brushes off the committee’s call for it to state how many investigations are currently being carried out into Russian interference in UK politics, saying only that it has taken steps to ensure there is a “coordinated structure across all relevant UK authorities to defend against hostile foreign interference in British politics, whether from Russia or any other State”, before reiterating: “There has, however, been no evidence to date of any successful foreign interference.”

This summer the Electoral Commission found that the official Vote Leave campaign in the UK’s in/out EU referendum had broken campaign spending rules — with social media platforms being repurposed as the unregulated playing field where election law could be diddled at such scale. That much is clear.

The DCMS committee had backed the Commission’s call for digital imprint requirements for electronic campaigns to level the playing field between digital and print ads.

However the government has failed to back even that pretty uncontroversial call, merely pointing again to a public consultation (which ends today) on proposed changes to electoral law. So it’s yet more wait and see.

The committee is also disappointed about the lack of government response to its call for the Commission to establish a code for advertising through social media during election periods; and its recommendation that “Facebook and other platforms take responsibility for the way their platforms are used” — noting also the government made “no response to Facebook’s failure to respond adequately to the Committee’s inquiry and Mark Zuckerberg’s reluctance to appear as a witness”. (A reluctance that really enraged the committee.)

In a statement on the government’s response, committee chair Damian Collins writes: “The government’s response to our interim report on disinformation and ‘fake news’ is disappointing and a missed opportunity. It uses other ongoing investigations to further delay desperately needed announcements on the ongoing issues of harmful and misleading content being spread through social media.

“We need to see a more coordinated approach across government to combat campaigns of disinformation being organised by Russian agencies seeking to disrupt and undermine our democracy. The government’s response gives us no real indication of what action is being taken on this important issue.”

Collins finds one slender crumb of comfort, though: that the government might have some appetite to rule big tech.

After the committee called for the government to “demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity”, the government writes that it “has made it clear to Facebook, and other social media companies, that they must do more to remove illegal and harmful content”, also noting that its forthcoming Online Harms White Paper will include “a range of policies to tackle harmful content”.

“We welcome though the strong words from the Government in its demand for action by Facebook to tackle the hate speech that has contributed to the ethnic cleansing of the Rohingya in Burma,” notes Collins, adding: “We will be looking for the government to make progress on these and other areas in response to our final report which will be published in December.

“We will also be raising these issues with the Secretary of State for DCMS, Jeremy Wright, when he gives evidence to the Committee on Wednesday this week.”

(Wright being the new minister in charge of the UK’s digital brief, after Matt Hancock moved over to health.)

We’ve reached out to Facebook for comment on the government’s call for a more robust approach to illegal hate speech.

Last week the company announced it had hired former UK deputy prime minister, Nick Clegg, to be its new head of global policy and comms — apparently signalling a willingness to pay a bit more attention to European regulators.

Fake news ‘threat to democracy’ report gets back-burner response from UK gov’t

The UK government has rejected a parliamentary committee’s call for a levy on social media firms to fund digital literacy lessons to combat the impact of disinformation online. The recommendation of a levy on social media platforms was made by the Digital, Culture, Media and Sport committee three months ago, in a preliminary report following […]

The UK government has rejected a parliamentary committee’s call for a levy on social media firms to fund digital literacy lessons to combat the impact of disinformation online.

The recommendation of a levy on social media platforms was made by the Digital, Culture, Media and Sport committee three months ago, in a preliminary report following a multi-month investigation into the impact of so-called ‘fake news’ on democratic processes.

Though it has suggested the terms ‘misinformation’ and ‘disinformation’ be used instead, to better pin down exact types of problematic inauthentic content — and on that at least the government agrees. But just not on very much else. At least not yet.

Among around 50 policy suggestions in the interim report — which the committee put out quickly exactly to call for “urgent action” to ‘defend democracy’ — it urged the government to put forward proposals for an education levy on social media.

But in its response, released by the committee today, the government writes that it is “continuing to build the evidence base on a social media levy to inform our approach in this area”.

“We are aware that companies and charities are undertaking a wide range of work to tackle online harms and would want to ensure we do not negatively impact existing work,” it adds, suggesting it’s most keen not to be accused of making a tricky problem worse.

Earlier this year the government did announce plans to set up a dedicated national security unit to combat state-led disinformation campaigns, with the unit expected to monitor social media platforms to support faster debunking of online fakes — by being able to react more quickly to co-ordinated interference efforts by foreign states.

But going a step further and requiring social media platforms themselves to pay a levy to fund domestic education programs, arming citizens with the critical thinking skills to more intelligently parse content being algorithmically pushed at them, is not, apparently, part of the government’s current thinking.

It is not taking the idea of some form of future social media tax off the table entirely, though, as it continues to seek ways to make big tech pay a fairer share of earnings into the public purse, noting in its response: “We will be considering any levy in the context of existing work being led by HM Treasury in relation to corporate tax and the digital economy.”

As a whole, the government’s response to the DCMS committee’s laundry list of policy recommendations around the democratic risks of online disinformation can be summed up in a word: ‘cautious’. By the committee’s tally, only three of the report’s forty-two recommendations were accepted outright, and four were fully rejected.

Most of the rest are being filed under ‘come back later — we’re still looking into it’.

So if you take the view that ‘fake news’ online has already had a tangible and worrying impact on democratic debate, the government’s response will come across as underwhelming and lacking in urgency. (Though it’s hardly alone on that front.)

The committee has reacted with disappointment — with chair Damian Collins dubbing the government response “disappointing and a missed opportunity”, and also accusing ministers of hiding behind ‘ongoing investigations’ to avoid commenting on the committee’s call that the UK’s National Crime Agency urgently carry out its own investigation into “allegations involving a number of companies”.

Earlier this month Collins also called for the Met Police to explain why they had not opened an investigation into Brexit-related campaign spending breaches.

It has also this month emerged that the force will not examine claims of Russian meddling in the referendum.

Meanwhile the political circus and business uncertainty triggered by the Brexit vote goes on.

Holding pattern

The bulk of the government’s response to the DCMS interim report entails flagging a number of existing and/or ongoing consultations and reviews, such as the ‘Protecting the Debate: Intimidation, Influence and Information’ consultation, which it launched this summer.

But by saying it is continuing to gather evidence on a number of fronts, the government is also saying it does not feel it necessary to rush through any regulatory responses to technology-accelerated, socially divisive and politically sensitive viral nonsense, claiming also that it has not seen any evidence that malicious misinformation has been able to skew genuine democratic debate on the domestic front.

It’ll be music to Facebook’s ears given the awkward scrutiny the company has faced from lawmakers at home and, indeed, elsewhere in Europe — in the wake of a major data misuse scandal with a deeply political angle.

The government also points multiple times to a forthcoming oversight body which is in the process of being established — aka the Centre for Data Ethics and Innovation — saying it expects this to grapple with a number of the issues of concern raised by the committee, such as ad transparency and targeting; and to work towards agreeing best practices in areas such as “targeting, fairness, transparency and liability around the use of algorithms and data-driven technologies”.

Identifying “potential new regulations” is another stated role for the future body. But given it is not yet actively grappling with any of these issues, the UK’s democratically concerned citizens are simply being told to wait.

“The government recognises that as technological advancements are made, and the use of data and AI becomes more complex, our existing governance frameworks may need to be strengthened and updated. That is why we are setting up the Centre,” the government writes, still apparently questioning whether legislative updates are needed. This comes in response to the committee’s call, informed by its close questioning of tech firms and data experts, for an oversight body able to audit “non-financial” aspects of technology companies (including security mechanisms and algorithms) to “ensure they are operating responsibly”.

“As set out in the recent consultation on the Centre, we expect it to look closely at issues around the use of algorithms, such as fairness, transparency, and targeting,” the government continues, noting that details of the body’s initial work program will be published in the fall — when it says it will also put out its response to the aforementioned consultation.

It does not specify when the ethics body will be in any position to hit this shifting ground running. So again there is zero sense the government intends to act at a pace commensurate with the fast-changing technologies in question.

Then, where the committee’s recommendations touch on the work of existing UK oversight bodies, such as the Competition and Markets Authority, the ICO data watchdog, the Electoral Commission and the National Crime Agency, the government dodges specific concerns by suggesting it’s not appropriate for it to comment “on independent bodies or ongoing investigations”.

Also notable: it continues to reject entirely the idea that Russian-backed disinformation campaigns have had any impact on domestic democratic processes, despite public remarks by prime minister Theresa May last year attacking Putin for weaponizing disinformation for election interference purposes.

Instead it writes:

We want to reiterate, however, that the Government has not seen evidence of successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. But we are not being complacent and the Government is actively engaging with partners to develop robust policies to tackle this issue.

Its response on this point also makes no reference to the extensive use of social media platforms to run political ads targeting the 2016 Brexit referendum.

Nor does it make any note of the historic lack of transparency of such ad platforms, which means it is simply not possible to determine where all the ad money came from to fund digital campaigning on domestic issues. Facebook, for example, has only just launched a public repository in the UK showing who is paying for political ads and badging them as such.

The elephant in the room is of course that ‘lack of evidence’ is not necessarily evidence of a lack of success, especially when it’s so hard to extract data from opaque adtech platforms in the first place.

Moreover, just this week fresh concerns have been raised about how platforms like Facebook are still enabling dark ads to target political messages at citizens — without it being transparently clear who is actually behind and paying for such campaigns…

In turn triggering calls from opposition MPs for updates to UK election law…

Yet the government, busily embroiled as it still is in trying to deliver some kind of Brexit outcome, is seemingly unconcerned by all this unregulated political advertising going on in the background.

It also directly brushes off the committee’s call for it to state how many investigations are currently being carried out into Russian interference in UK politics, saying only that it has taken steps to ensure there is a “coordinated structure across all relevant UK authorities to defend against hostile foreign interference in British politics, whether from Russia or any other State”, before reiterating: “There has, however, been no evidence to date of any successful foreign interference.”

This summer the Electoral Commission found that the official Vote Leave campaign in the UK’s in/out EU referendum had broken campaign spending rules, with social media platforms repurposed as the unregulated playing field where election law could be diddled at scale. That much is clear.

The DCMS committee had backed the Commission’s call for digital imprint requirements for electronic campaigns to level the playing field between digital and print ads.

However the government has failed to back even that pretty uncontroversial call, merely pointing again to a public consultation (which ends today) on proposed changes to electoral law. So it’s yet more wait and see.

The committee is also disappointed about the lack of government response to its call for the Commission to establish a code for advertising through social media during election periods, and to its recommendation that “Facebook and other platforms take responsibility for the way their platforms are used”, noting also that the government made “no response to Facebook’s failure to respond adequately to the Committee’s inquiry and Mark Zuckerberg’s reluctance to appear as a witness”. (A reluctance that really enraged the committee.)

In a statement on the government’s response, committee chair Damian Collins writes: “The government’s response to our interim report on disinformation and ‘fake news’ is disappointing and a missed opportunity. It uses other ongoing investigations to further delay desperately needed announcements on the ongoing issues of harmful and misleading content being spread through social media.

“We need to see a more coordinated approach across government to combat campaigns of disinformation being organised by Russian agencies seeking to disrupt and undermine our democracy. The government’s response gives us no real indication of what action is being taken on this important issue.”

Collins finds one slender crumb of comfort, though, in signs that the government might have some appetite to rein in big tech.

After the committee called for the government to “demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity”, the government writes that it “has made it clear to Facebook, and other social media companies, that they must do more to remove illegal and harmful content”, noting also that its forthcoming Online Harms White Paper will include “a range of policies to tackle harmful content”.

“We welcome though the strong words from the Government in its demand for action by Facebook to tackle the hate speech that has contributed to the ethnic cleansing of the Rohingya in Burma,” notes Collins, adding: “We will be looking for the government to make progress on these and other areas in response to our final report which will be published in December.

“We will also be raising these issues with the Secretary of State for DCMS, Jeremy Wright, when he gives evidence to the Committee on Wednesday this week.”

(Wright being the new minister in charge of the UK’s digital brief, after Matt Hancock moved over to health.)

We’ve reached out to Facebook for comment on the government’s call for a more robust approach to illegal hate speech.

Last week the company announced it had hired former UK deputy prime minister, Nick Clegg, to be its new head of global policy and comms — apparently signalling a willingness to pay a bit more attention to European regulators.

Call for social media adtech to be probed by UK competition watchdog

A British Conservative politician, who has called repeatedly for Mark Zuckerberg to come to parliament to answer questions about how Facebook fences fake news — only to be repeatedly rebuffed — has made a public call for the UK’s competition regulator to look into social media giants’ adtech operations. Damian Collins, the chair of the […]

A British Conservative politician, who has called repeatedly for Mark Zuckerberg to come to parliament to answer questions about how Facebook fences fake news — only to be repeatedly rebuffed — has made a public call for the UK’s competition regulator to look into social media giants’ adtech operations.

Damian Collins, the chair of the DCMS committee which has spent months this year asking questions about how disinformation spreads online — culminating in a report, this summer, recommending the government impose a levy on social media to defend democracy — made the suggestion in a tweet that references a news article reporting on a U.S. class action lawsuit against Facebook.

Advertisers in the US lawsuit allege Facebook knowingly inflated video viewing stats and thus misled them into spending more money on its ad platform than they otherwise would have.

But Facebook disputes the allegations, saying the lawsuit is “without merit”. It has also filed a motion to dismiss the claims of ad fraud.

It did, however, ’fess up two years ago to a ‘miscalculation’ around average video viewing times, saying it had mistakenly excluded everyone who dropped off in the first three seconds of a video when calculating averages, thereby bumping the viewing averages up.
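To make the mechanics of that metric error concrete, here is a minimal, hypothetical sketch in Python. The numbers are invented for illustration and the exact formula Facebook used may have differed; the point is simply that dropping early drop-offs from the calculation pushes the reported average up.

    # Hypothetical watch times in seconds, one entry per person who started the video.
    watch_times = [1, 2, 2, 45, 60, 90]

    # Average over everyone who started the video.
    true_avg = sum(watch_times) / len(watch_times)

    # Average that discounts anyone who dropped off before the 3-second mark,
    # the kind of exclusion described above.
    long_views = [t for t in watch_times if t >= 3]
    inflated_avg = sum(long_views) / len(long_views)

    print(f"average over all viewers:        {true_avg:.1f}s")      # ~33.3s
    print(f"average excluding <3s drop-offs: {inflated_avg:.1f}s")  # 65.0s

In this toy example the reported figure roughly doubles; the real-world discrepancy would depend on how many viewers bail out early.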

At about the same time, it also said it had discovered some other ad-related bugs and errors in its system that had led to the wrong numbers being reported across four products, including Instant Articles, video and Page Insights.

The advertisers in the class action lawsuit — which was filed back in 2016 — had originally claimed Facebook engaged in unfair business practices. After receiving tens of thousands of documents in relation to the case they amended their complaint to accuse the company of fraud, CBS reports.

In its statement denying the suit’s claims, Facebook also said: “Suggestions that we in any way tried to hide this issue from our partners are false. We told our customers about the error when we discovered it — and updated our help center to explain the issue.” 

The company declined to comment on Collins’ remarks about adtech industry practices today.

A spokeswoman for the UK’s Competition and Markets Authority (CMA) also declined to comment when asked whether it has any concerns related to practices in the adtech sector.

Given market sensitivity to regulatory action, it is normal for the CMA not to want to stoke speculation around a particular company.

For the same reason it would not normally discuss any complaints it’s received until the point of actually launching any investigation.

However this is not the first time the CMA has been urged by concerned politicians to investigate the adtech sector.

This fall another UK committee, the Lords Select Committee on Communications, directly asked the body to investigate digital advertising.

And earlier this month the CMA’s CEO, Andrea Coscelli, told the committee it is indeed considering doing so, if it can carve out the resources, saying he was worried about “potential gaps” in the regulatory framework around competition and consumer issues.

“A month ago, this Committee asked us to look at digital advertising. That is something we are actively considering, subject to Brexit in the next few weeks, because it has a big resource implication for us,” said Coscelli on October 9. “It is certainly something where we are interested in getting involved. If we did, we would work closely with Ofcom and give serious thought to the regulatory framework in that context.”

The CMA has also generally been ramping up its activity on the digital market front, recently spinning up a new data unit and appointing a chief data and digital insights officer, Stefan Hunt, hired in from the Financial Conduct Authority, to help it “develop and deliver an effective data and digital insight strategy… to better understand the impact that data, machine learning and other algorithms have on markets and people”.

So it sounds like a case of ‘watch this regulatory space’ for more action at the very least.

Elsewhere in Europe competition regulators have also been paying closer attention to the adtech industry in recent years — examining a variety of practices by adtech giants, Facebook and Google, and coming away with a range of antitrust-related concerns.

In preliminary findings at the end of last year, for example, Germany’s Federal Cartel Office accused Facebook of using its size to strong-arm users into handing over data.

Earlier this year, meanwhile, the French Competition Authority suggested it was planning to investigate Facebook and Google’s dominance of the adtech market, publishing a report in which it identified a raft of problematic behaviors and pointed out that the two companies act as both publishers and technical intermediaries for advertisers, thereby gaining a competitive advantage.

Italian regulators have also been probing competition concerns related to big data for more than a year.

As we’ve reported before, the European Commission is also actively eyeing digital platforms’ market power — and looking to reshape competition policy to take account of how tech giants are able to draw on network effects and leverage their position from one market to another.

And when you’re talking about platform power, you are also — in the current era — talking about adtech.

There is no doubt that closer scrutiny of the digital advertising sector is coming, and with that brighter spotlight will come tighter accountability screws applied to its practices.

Privacy reviews of adtech platforms have already raised plenty of ethical questions, in addition to flagging actual violations of the law.

This summer the UK’s data protection watchdog also called for an ethical pause of the use of social media ads for political purposes, writing that: “It is important that there is greater and genuine transparency about the use of such techniques to ensure that people have control over their own data and that the law is upheld.”

So while it remains to be seen what any competition investigations of the adtech sector will conclude, political momentum is building to increase transparency and ensure accountability — which makes regulation more likely.