Seized cache of Facebook docs raises competition and consent questions

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained via legal discovery by a startup that’s suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the Digital, Culture, Media and Sport (DCMS) committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of the key issues as the committee sees them after reviewing the documents, drawing attention to six in particular.

Here is his summary of the key issues:

  1. White Lists: Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.
  2. Value of friends data: It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.
  3. Reciprocity: Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  4. Android: Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  5. Onavo: Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  6. Targeting competitor apps: The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Albeit the timing of Facebook’s policy shift announcement hardly looks coincidental — given Collins said last week the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX [new user experience], you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent’, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

In one email dated November 15, 2013, Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection, writing: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook, which the documents could stir up afresh, relates to its repeated claim — including under questioning from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.

Europe dials up pressure on tech giants over election security

The European Union has announced a package of measures intended to step up efforts and pressure on tech giants to combat democracy-denting disinformation ahead of the EU parliament elections next May.

The European Commission Action Plan, which was presented at a press briefing earlier today, has four areas of focus: 1) Improving detection of disinformation; 2) Greater co-ordination across EU Member States, including by sharing alerts about threats; 3) Increased pressure on online platforms, including to increase transparency around political ads and purge fake accounts; and 4) Raising awareness and critical thinking among EU citizens.

The Commission says 67% of EU citizens are worried about their personal data being used for political targeting, and 80% want improved transparency around how much political parties spend to run campaigns on social media.

And it warned today that it wants to see rapid action from online platforms to deliver on pledges they’ve already made to fight fake news and election interference.

The EC’s plan follows a voluntary Code of Practice launched two months ago, which signed up tech giants including Facebook, Google and Twitter, along with some ad industry players, to some fairly fuzzy commitments to combat the spread of so-called ‘fake news’.

They also agreed to hike transparency around political advertising. But efforts so far remain piecemeal, with — for example — no EU-wide roll out of Facebook’s political ads disclosure system.

Facebook has only launched political ad identification checks plus an archive library of ads in the US and the UK so far, leaving the rest of the world to rely on the more limited ‘view ads’ functionality that it has rolled out globally.

The EC said it will be stepping up its monitoring of platforms’ efforts to combat election interference — with the new plan including “continuous” monitoring.

This will take the form of a Commission progress report in January, followed by monthly reports thereafter (measured against what it slated as “very specific targets”), to ensure signatories are actually purging and disincentivizing bad actors and inauthentic content from their platforms, not just saying they’re going to.

As we reported in September the Code of Practice looked to be a pretty dilute first effort. But ongoing progress reports could at least help concentrate minds — coupled with the ongoing threat of EU-wide legislation if platforms fail to effectively self-regulate.

Digital economy and society commissioner Mariya Gabriel said the EC would have “measurable and visible results very soon”, warning platforms: “We need greater transparency, greater responsibility both on the content, as well as the political approach.”

Security union commissioner, Julian King, came in even harder on tech firms — warning that the EC wants to see “real progress” from here on in.

“We need to see the Internet platforms step up and make some real progress on their commitments. This is stuff that we believe the platforms can and need to do now,” he said, accusing them of “excuses” and “foot-dragging”.

“The risks are real. We need to see urgent improvement in how adverts are placed,” he continued. “Greater transparency around sponsored content. Fake accounts rapidly and effectively identified and deleted.”

King pointed out Facebook admits that between 3% and 4% of its entire user-base is fake.

“That is somewhere between 60M and 90M fake accounts,” he continued. “And some of those accounts are the most active accounts. A recent study found that 80% of the Twitter accounts that spread disinformation during the 2016 US election are still active today — publishing more than a million tweets a day. So we’ve got to get serious about this stuff.”

Twitter declined to comment on today’s developments but a spokesperson told us its “number one priority is improving the health of the public conversation”.

“Tackling co-ordinated disinformation campaigns is a key component of this. Disinformation is a complex, societal issue which merits a societal response,” Twitter’s statement said. “For our part, we are already working with our industry partners, Governments, academics and a range of civil society actors to develop collaborative solutions that have a meaningful impact for citizens. For example, Twitter recently announced a global partnership with UNESCO on media and information literacy to help equip citizens with the skills they need to critically analyse content they are engaging with online.”

We’ve also reached out to Facebook and Google for comment on the Commission plan.

King went on to press for “clearer rules around bots”, saying he would personally favor a ban on political content being “disseminated by machines”.

The Code of Practice does include a commitment to address both fake accounts and online bots, and “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”. And Twitter has previously said it’s considering labelling bots; albeit with the caveat “as far as we can detect them”.

But action is still lacking.

“We need rapid corrections, which are given the same prominence and circulation as the original fake news. We need more effective promotion of alternative narratives. And we need to see overall greater clarity around how the algorithms are working,” King continued, banging the drum for algorithmic accountability.

“All of this should be subject to independent oversight and audit,” he added, suggesting the self-regulation leash here will be a very short one.

He said the Commission will make a “comprehensive assessment” of how the Code is working next year, warning: “If the necessary progress is not made we will not hesitate to reconsider our options — including, eventually, regulation.”

“We need to be honest about the risks, we need to be ready to act. We can’t afford an Internet that is the wild west where anything goes, so we won’t allow it,” he concluded.

Commissioner Vera Jourova also attended the briefing and used her time at the podium to press platforms to “immediately guarantee the transparency of political advertising”.

“This is a quick fix that is necessary and urgent,” she said. “It includes properly checking and clearly indicating who is behind online advertisement and who paid for it.”

In Spain, regional elections took place in Andalusia on Sunday and — as noted above — while Facebook has launched a political ad authentication process and ad archive library in the US and the UK, the company confirmed to us that no such system was up and running in Spain in time for that vote.

In the vote in Andalusia a tiny far right party, Vox, broke pollsters’ predictions to take twelve seats in the regional parliament — a first for the far right since the country’s return to democracy after the death of the dictator Francisco Franco in 1975.

Zooming in on election security risks, Jourova warned that “large-scale organized disinformation campaigns” have become “extremely efficient and spread with the speed of light” online. She also warned that non-transparent ads “will be massively used to influence opinions” in the run up to the EU elections.

Hence the pressing need for a transparency guarantee.

“When we allow the machines to massively influence free decisions of democracy I think that we have appeared in a bad science fiction,” she added. “The electoral campaign should be the competition of ideas, not the competition of dirty money, dirty methods, and hidden advertising where the people are not informed and don’t have a clue that they are influenced by some hidden powers.”

Jourova urged Member States to update their election laws so existing requirements on traditional media to observe a pre-election period also apply online.

“We all have roles to play, not only Member States, also social media platforms, but also traditional political parties. [They] need to make public the information on their expenditure for online activities as well as information on any targeting criteria used,” she concluded.

A report by the UK’s DCMS committee, which has been running an enquiry into online disinformation for the best part of this year, made similar recommendations in its preliminary report this summer.

Though the committee also went further — calling for a levy on social media to defend democracy. Albeit the UK government did not leap to act on its recommendations.

Also speaking at today’s presser, EC VP, Andrus Ansip, warned of the ongoing disinformation threat from Russia but said the EU does not intend to respond to the threat from propaganda outlets like RT, Sputnik and IRA troll farms by creating its own pro-EU propaganda machine.

Rather he said the plan is to focus efforts on accelerating collaboration and knowledge-sharing to improve detection and indeed debunking of disinformation campaigns.

“We need to work together and co-ordinate our efforts — in a European way, protecting our freedoms,” he said, adding that the plan sets out “how to fight back against the relentless propaganda and information weaponizing used against our democracies”.

Under the action plan, the budget of the European External Action Service (EEAS) — which bills itself as the EU’s diplomatic service — will more than double next year, to €5M, with the additional funds intended for strategic comms to “address disinformation and raise awareness about its adverse impact”, including beefing up headcount.

“This will help them to use new tools and technologies to fight disinformation,” Ansip suggested.

Another new measure announced today is a dedicated Rapid Alert System which the EC says will facilitate “the sharing of data and assessments of disinformation campaigns and to provide alerts on disinformation threats in real time”, with knowledge-sharing flowing between EU institutions and Member States.

The EC also says it will boost resources for national multidisciplinary teams of independent fact-checkers and researchers to detect and expose disinformation campaigns across social networks — working towards establishing a European network of fact-checkers.

“Their work is absolutely vital in order to combat disinformation,” said Gabriel, adding: “This is very much in line with our principles of pluralism of the media and freedom of expression.”

Investments will also go towards supporting media education and critical awareness, with Gabriel noting that the Commission will run a European media education week, next March, to draw attention to the issue and gather ideas.

She said the overarching aim is to “give our citizens a whole array of tools that they can use to make a free choice”.

“It’s high time we give greater visibility to this problem because we face this on a day to day basis. We want to provide solutions — so we really need a bottom up approach,” she added. “It’s not up to the Commission to say what sort of initiatives should be adopted; we need to give stakeholders and citizens their possibility to share best practices.”

Union’s human rights challenge to Deliveroo dismissed by UK High Court

A UK union that has been fighting to win collective bargaining rights for gig economy riders who provide delivery services via Deliveroo’s platform has had its claim for a judicial review of an earlier blocking decision dismissed by the High Court today.

Six months ago the IWGB Union was granted permission to challenge Deliveroo’s opposition to collective bargaining for couriers on human rights grounds.

The union had already lost a challenge to Deliveroo’s employment classification for couriers last year, when the Central Arbitration Committee (CAC) ruled that Deliveroo riders could not be considered workers because they had a genuine right to find a substitute to do their job for them.

The union disputes that finding but so far the courts have accepted Deliveroo’s assertion that riders are independent contractors — an employment classification that does not support forming a collective bargaining unit.

Even so, the union sought to pursue a case for collective bargaining on one ground related to Article 11 of the European Convention on Human Rights, which protects freedom of assembly and association.

But the High Court has now dismissed its argument, blocking its claim for a judicial review.

Writing in today’s judgement, Mr Justice Supperstone concludes: “I do not consider that, on the findings made by the CAC, the Riders have the right for which the Union contends under Article 11(1). Neither domestic nor Strasbourg case law supports this contention. Article 11(1) is not engaged in this case.”

Commenting in a statement, IWGB general secretary Dr Jason Moyer-Lee said: “Today’s judgement is a terrible one, not just in terms of what it means for low paid Deliveroo riders, but also in terms of understanding the European Convention on Human Rights. Deliveroo riders should be entitled to basic worker rights as well as to the ability to be represented by trade unions to negotiate pay and terms and conditions.”

The union has vowed to appeal the decision.

Deliveroo, meanwhile, described the ruling as a “victory for riders”. It also argues that the judgement is consistent with previous decisions reached across Europe — including in France and the Netherlands.

“We are pleased that today’s judgment upholds the earlier decisions of the High Court and the CAC that Deliveroo riders are self-employed, providing them the flexibility they want,” said Dan Warne, UK MD, in a statement. “In addition to emphatically confirming this under UK national law, the Court also carefully examined the question under European law and concluded riders are self-employed.

“This is a victory for riders who have consistently told us the flexibility to choose when and where they work, which comes with self-employment, is their number one reason for riding with Deliveroo. We will continue to seek to offer riders more security and make the case that Government should end the trade off in Britain between flexibility and security.”

Despite not having collective bargaining rights, in recent years UK gig economy workers have carried out a number of wildcat strikes — often related to changes to pricing policies.

Two years ago Deliveroo couriers in the UK staged a number of protests after the company trialed a new pricing structure.

Meanwhile, in recent months, UberEats couriers in a number of UK cities have protested over pay.

UK Uber drivers have also organized to protest pay and conditions this year.

The UK government revealed a package of labor market reforms early this year that it said were intended to bolster workers’ rights, including for those in the gig economy.

Although it also announced it would be carrying out a number of consultations — leaving the full details of the reform tbc.

Google ‘incognito’ search results still vary from person to person, DDG study finds

A study of Google search results by anti-tracking rival DuckDuckGo has suggested that escaping the so-called ‘filter bubble’ of personalized online searches is a perniciously hard problem for the put-upon Internet consumer who just wants to carve out a little unbiased space online, free from the suggestive taint of algorithmic fingers.

DDG reckons it’s not possible even for logged out users of Google search, who are also browsing in Incognito mode, to prevent their online activity from being used by Google to program — and thus shape — the results they see.

DDG says it found significant variation in Google search results, with most of the participants in the study seeing results that were unique to them — and some seeing links others simply did not.

Results within news and video infoboxes also varied significantly, it found.

It also says it found very little difference between results served in private browsing mode (logged out) and normal mode, suggesting incognito browsing does little to escape Google’s personalization.

“It’s simply not possible to use Google search and avoid its filter bubble,” it concludes.

Google has responded by counter-claiming that DuckDuckGo’s research is “flawed”.

Degrees of personalization

DuckDuckGo says it carried out the research to test recent claims by Google to have tweaked its algorithms to reduce personalization.

A CNBC report in September, drawing on access provided by Google that let the reporter sit in on an internal meeting and speak to employees on its algorithm team, suggested that Mountain View is now using only very little personalization to generate search results.

“A query a user comes with usually has so much context that the opportunity for personalization is just very limited,” Google fellow Pandu Nayak, who leads the search ranking team, told CNBC this fall.

On the surface, that would represent a radical reprogramming of Google’s search modus operandi — given the company made “Personalized Search” the default for even logged out users all the way back in 2009.

Announcing the expansion of the feature at the time, Google explained it would ‘customize’ search results for these logged out users via an ‘anonymous cookie’:

This addition enables us to customize search results for you based upon 180 days of search activity linked to an anonymous cookie in your browser. It’s completely separate from your Google Account and Web History (which are only available to signed-in users). You’ll know when we customize results because a “View customizations” link will appear on the top right of the search results page. Clicking the link will let you see how we’ve customized your results and also let you turn off this type of customization.

A couple of years after Google threw the Personalized Search switch, Eli Pariser published his now famous book describing the filter bubble problem. Since then online personalization’s bad press has only grown.

In recent years concern has especially spiked over the horizon-reducing impact of big tech’s subjective funnels on democratic processes, with algorithms carefully engineered to keep serving users more of the same stuff now being widely accused of entrenching partisan opinions, rather than helping broaden people’s horizons.

Especially so where political (and politically charged) topics are concerned. And, well, at the extreme end, algorithmic filter bubbles stand accused of breaking democracy itself — by creating highly effective distribution channels for individually targeted propaganda.

Although there have also been some counterclaims floating around academic circles in recent years that imply the echo chamber impact is itself overblown. (Albeit sometimes emanating from institutions that also take funding from tech giants like Google.)

As ever, where the operational opacity of commercial algorithms is concerned, the truth can be a very difficult animal to dig out.

Of course DDG has its own self-interested iron in the fire here — suggesting, as it is, that “Google is influencing what you click” — given it offers an anti-tracking alternative to the eponymous Google search.

But that does not merit an instant dismissal of a finding of major variation in even supposedly ‘incognito’ Google search results.

DDG has also made the data from the study downloadable — and the code it used to analyze the data open source — allowing others to look and draw their own conclusions.

It carried out a similar study in 2012, after the earlier US presidential election — and claimed then to have found that Google’s search had inserted tens of millions more links for Obama than for Romney in the run-up to the vote.

It says it wanted to revisit the state of Google search results now, in the wake of the 2016 presidential election that installed Trump in the White House — to see if it could find evidence to back up Google’s claims to have ‘de-personalized’ search.

For the latest study DDG asked 87 volunteers in the US to search for the politically charged topics of “gun control”, “immigration”, and “vaccinations” (in that order) at 9pm ET on Sunday, June 24, 2018 — initially searching in private browsing mode and logged out of Google, and then again without using Incognito mode.

You can read its full write-up of the study results here.

The results ended up being based on 76 users as those searching on mobile were excluded to control for significant variation in the number of displayed infoboxes.

Here’s the topline of what DDG found:

Private browsing mode (and logged out):

  • “gun control”: 62 variations with 52/76 participants (68%) seeing unique results.
  • “immigration”: 57 variations with 43/76 participants (57%) seeing unique results.
  • “vaccinations”: 73 variations with 70/76 participants (92%) seeing unique results.

‘Normal’ mode:

  • “gun control”: 58 variations with 45/76 participants (59%) seeing unique results.
  • “immigration”: 59 variations with 48/76 participants (63%) seeing unique results.
  • “vaccinations”: 73 variations with 70/76 participants (92%) seeing unique results.

DDG’s contention is that truly ‘unbiased’ search results should produce largely the same results.

Yet, by contrast, the search results its volunteers got served were — in the majority — unique. (Ranging from 57% at the low end to a full 92% at the upper end.)

“With no filter bubble, one would expect to see very little variation of search result pages — nearly everyone would see the same single set of results,” it writes. “Instead, most people saw results unique to them. We also found about the same variation in private browsing mode and logged out of Google vs. in normal mode.”
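To make that variation metric concrete, here is a minimal sketch in Python of how such counts can be computed, assuming (purely for illustration) that each participant’s results page is recorded as an ordered list of links. This is not DDG’s published analysis code, which, as noted above, it has open-sourced.

```python
from collections import Counter

def variation_stats(serps):
    """serps: dict mapping participant id -> ordered list of result links.

    Returns (number of distinct result-page variants, number of
    participants whose exact page was seen by no one else).
    """
    # Treat each ordered list of links as one 'variant' of the results page.
    variants = Counter(tuple(links) for links in serps.values())
    unique = sum(1 for links in serps.values() if variants[tuple(links)] == 1)
    return len(variants), unique

# Toy example: three participants, two of whom see identical pages.
serps = {
    "p1": ["a.com", "b.com", "c.com"],
    "p2": ["a.com", "b.com", "c.com"],
    "p3": ["a.com", "c.com", "b.com"],  # same links, different order
}
print(variation_stats(serps))  # -> (2, 1)
```

Note that on this definition the same links served in a different order count as a distinct variant, which matters because, as discussed below, clicks skew heavily to the top result.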

“We often hear of confusion that private browsing mode enables anonymity on the web, but this finding demonstrates that Google tailors search results regardless of browsing mode. People should not be lulled into a false sense of security that so-called “incognito” mode makes them anonymous,” DDG adds.

Google initially declined to provide a statement responding to the study, telling us instead that several factors can contribute to variations in search results — flagging time and location differences among them.

It even suggested results could vary depending on the data center a user query was connected with — potentially introducing some crawler-based micro-lag.

Google also claimed it does not personalize the results of logged out users browsing in Incognito mode based on their signed-in search history.

However the company admitted it uses contextual signals to rank results even for logged out users (as that 2009 blog post described) — such as when trying to clarify an ambiguous query.

In which case it said a recent search might be used for disambiguation purposes. (Although it also described this type of contextualization in search as extremely limited, saying it would not account for dramatically different results.)

But with so much variation evident in the DDG volunteer data, there seems little question that Google’s approach very often results in individualized — and sometimes highly individualized — search results.

Some Google users were even served more (or fewer) unique domains than others.

Lots of questions naturally flow from this.

Such as: Does Google applying a little ‘ranking contextualization’ sound like an adequately ‘de-personalized’ approach — if the name of the game is popping the filter bubble?

Does it make the served results even marginally less clickable, biased and/or influential?

Or indeed any less ‘rank’ from a privacy perspective… ?

You tell me.

Even the same bunch of links served up in a slightly different configuration has the potential to be majorly significant since the top search link always gets a disproportionate chunk of clicks. (DDG says the no.1 link gets circa 40%.)

And if the topics being Google-searched are especially politically charged even small variations in search results could — at least in theory — contribute to some major democratic impacts.

There is much to chew on.

DDG says it controlled for time- and location-based variation in the served search results by having all participants in the study carry out the search from the US and do so at the very same time.

While it says it controlled for the inclusion of local links (i.e. to cancel out any localization-based variation) by bundling such results under a localdomain.com placeholder (and a ‘Local Source’ placeholder for infoboxes).
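That control can also be illustrated with a small sketch; the domain list and the substring matching below are assumptions for the example, not DDG’s actual classification logic:

```python
# Hypothetical set of domains treated as 'local' sources for a participant;
# DDG's real classification rules are not reproduced here.
LOCAL_DOMAINS = {"denverpost.com", "sfchronicle.com"}

def normalize(links):
    """Replace any local link with a fixed placeholder so that
    localization alone doesn't register as result variation."""
    return [
        "localdomain.com" if any(d in link for d in LOCAL_DOMAINS) else link
        for link in links
    ]

print(normalize(["cnn.com/story", "denverpost.com/story"]))
# -> ['cnn.com/story', 'localdomain.com']
```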

Yet even taking steps to control for space-time based variations it still found the majority of Google search results to be unique to the individual.

“These editorialized results are informed by the personal information Google has on you (like your search, browsing, and purchase history), and puts you in a bubble based on what Google’s algorithms think you’re most likely to click on,” it argues.

Google would counter-argue that’s ‘contextualizing’, not editorializing.

And that any ‘slight variation’ in results is a natural property of the dynamic nature of its Internet-crawling search response business.

Albeit, as noted above, DDG found some volunteers did not get served certain links (when others did), which sounds rather more significant than ‘slight difference’.

In the statement Google later sent us it describes DDG’s attempts to control for time and location differences as ineffective — and the study as a whole as “flawed” — asserting:

This study’s methodology and conclusions are flawed since they are based on the assumption that any difference in search results are based on personalization. That is simply not true. In fact, there are a number of factors that can lead to slight differences, including time and location, which this study doesn’t appear to have controlled for effectively.

One thing is crystal clear: Google is — and always has been — making decisions that affect what people see.

This capacity is undoubtedly influential, given the majority marketshare captured by Google search. (And the major role Google still plays in shaping what Internet users are exposed to.)

That’s clear even without knowing every detail of how personalized and/or customized these individual Google search results were.

Google’s programming formula remains locked up in a proprietary algorithm box — so we can’t easily (and independently) unpick that.

And this unfortunate ‘techno-opacity’ habit offers convenient cover for all sorts of claim and counter-claim — which can’t really now be detached from the filter bubble problem.

Unless and until we can know exactly how the algorithms work to properly track and quantify impacts.

Also true: Algorithmic accountability is a topic of increasing public and political concern.

Lastly, ‘trust us’ isn’t the great brand mantra for Google it once was.

So the devil may yet get (manually) unchained from all these fuzzy details.

Oath agrees to pay $5M to settle charges it violated children’s privacy

TechCrunch’s Verizon-owned parent, Oath, an ad tech division made from the merging of AOL and Yahoo, has agreed to pay around $5 million to settle charges that it violated a federal children’s privacy law.

The penalty is said to be the largest ever issued under COPPA.

The New York Times reported the story yesterday, saying the settlement will be announced by the New York attorney general’s office today.

At the time of writing the AG’s office could not be reached for comment.

We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which it writes: “We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online.”

The spokesman also neither confirmed nor disputed the contents of the NYT report.

According to the newspaper, which cites the as-yet unpublished settlement documents, AOL, via its ad exchange, helped place adverts on hundreds of websites that it knew were targeted at children under 13 — such as Roblox.com and Sweetyhigh.com.

The ads were placed using children’s personal data, including cookies and geolocation, which the attorney general’s office said violated the Children’s Online Privacy Protection Act (COPPA) of 1998.

The NYT quotes attorney general, Barbara D. Underwood, describing AOL’s actions as “flagrantly” in violation of COPPA.

The $5M fine for Oath comes at a time when scrutiny is being dialled up on online privacy and ad tech generally, and around kids’ data specifically — with rising concern about how children are being tracked and ‘datafied’ online.

Earlier this year, a coalition of child advocacy, consumer and privacy groups in the US filed a complaint with the FTC asking it to investigate Google-owned YouTube over COPPA violations — arguing that while the site’s terms claim it’s aimed at children older than 13 content on YouTube is clearly targeting younger children, including by hosting cartoon videos, nursery rhymes, and toy ads.

COPPA requires that companies provide direct notice to parents and obtain verifiable parental consent before collecting information online from children under 13.

Consent must also be sought for using or disclosing personal data from children. Or indeed for targeting kids with adverts linked to what they do online.

Personal data under COPPA includes persistent identifiers (such as cookies) and geolocation information, as well as data such as real names or screen names.

In the case of Oath, the NYT reports that even though AOL’s policies technically prohibited the use of its display ad exchange to auction ad space on kids’ websites, the company did so anyway, according to settlement documents covering the ad tech firm’s practices between October 2015 and February 2017.

According to these documents, an account manager for AOL in New York repeatedly — and erroneously — told a client, Playwire Media (which represents children’s websites such as Roblox.com), that AOL’s ad exchange could be used to sell ad space while complying with COPPA.

Playwire then used the exchange to place more than a billion ads on space that should have been covered by COPPA, the newspaper adds.

The paper also reports that AOL (via Advertising.com) bought ad space on websites flagged as COPPA-covered from other ad exchanges.

It says Oath has since introduced technology to identify when ad space is deemed to be covered by COPPA and ‘adjust its practices’ accordingly — again citing the settlement documents.

As part of the settlement the ad tech division of Verizon has agreed to create a COPPA compliance program, to be overseen by a dedicated executive or officer; and to provide annual training on COPPA compliance to account managers and other employees who work with ads on kids’ websites.

Oath also agreed to destroy personal information it has collected from children.

It’s not clear whether the censured practices ended in February 2017 or continued until more recently. We asked Oath for clarification but it did not respond to the question.

It’s also not clear whether AOL was also tracking and targeting adverts at children in the EU. If Oath was doing so but stopped before May 25 this year, it should avoid the possibility of any penalty under Europe’s tough new privacy framework, GDPR, which came into force on that date — beefing up protection around children’s data by setting an age threshold of between 13 and 16 years old for children to be able to consent to their own data being processed.

GDPR also steeply hikes penalties for privacy violations (up to a maximum of 4% of global annual turnover).

Prior to the regulation a European data protection directive was in force across the bloc but it’s GDPR that has strengthened protections in this area with the new provision on children’s data.

‘Google You Owe Us’ claimants aren’t giving up on UK Safari workaround suit

Lawyers behind a UK class-action style compensation litigation against Google for privacy violations have filed an appeal against a recent High Court ruling blocking the proceeding.

In October Mr Justice Warby ruled the case could not proceed on legal grounds, finding the claimants had not demonstrated a basis for bringing a compensation claim.

The case relates to the so-called ‘Safari workaround’ Google used between 2011 and 2012 to override iPhone privacy settings and track users without consent.

The civil legal action — whose claimants refer to themselves as ‘Google You Owe Us’ — was filed last year, as a representative action, by one named iPhone user: Richard Lloyd, former director of the consumer group Which?, who is seeking to represent millions of UK users whose Safari settings the complaint alleges Google similarly ignored.

Lawyers for the claimants argued that sensitive personal data such as iPhone users’ political affiliation, sexual orientation, financial situation and more had been gathered by Google and used for targeted advertising without their consent.

Google You Owe Us proposed the sum of £750 per claimant for the company’s improper use of people’s data — which could result in a bill of up to £3BN (based on the suit’s intent to represent ~4.4 million UK iPhone users).

However UK law requires claimants demonstrate they suffered damage as a result of violation of the relevant data protection rules.

And in his October ruling Justice Warby found that the “bare facts pleaded in this case” were not “individualised” — hence he saw no case for damages.

He also ruled against the case proceeding on another legal point, related to defining a class for the case — finding “the essential requirements for a representative action are absent” because he said individuals in the group do not have the “same interest” in the claim.

Lodging its appeal today in the Court of Appeal, Google You Owe Us described the High Court judgement as disappointing, and said it highlights the barriers that remain for consumers seeking to use collective actions as a route to redress in England and Wales.

In the US, meanwhile, Google settled with the FTC over a similar cookie tracking issue back in 2012 — agreeing to pay $22.5M in that instance.

Countering Justice Warby’s earlier suggestion that affected class members in the UK case did not care about their data being taken without permission, Google You Owe Us said, on the contrary, affected class members have continued to show their support for the case on Facebook — noting that more than 20,000 have signed up for case updates.

For the appeal, the legal team will argue that the High Court judgment was incorrect in stating the class had not suffered damage within the meaning of the UK’s Data Protection Act, and that the class had not all suffered in the same way as a result of the data breach.

Commenting in a statement, Lloyd said:

Google’s business model is based on using personal data to target adverts to consumers and they must ask permission before using this data. The court accepted that people did not give Google permission to use their data in this case, yet slammed the door shut on holding Google to account.

By appealing this decision, we want to give affected consumers the opportunity to get the compensation they are owed and show that collective actions offer a clear route to justice for data protection claims.

We’ve reached out to Google for comment.

DeepMind claims early progress in AI-based predictive protein modelling

Google-owned AI specialist DeepMind has claimed a “significant milestone” in being able to demonstrate the usefulness of artificial intelligence to help with the complex task of predicting 3D structures of proteins based solely on their genetic sequence.

Understanding protein structures is important in disease diagnosis and treatment, and could improve scientists’ understanding of the human body — as well as potentially helping to support protein design and bioengineering.

Writing in a blog post about the project to use AI to predict how proteins fold — now two years in — it writes: “The 3D models of proteins that AlphaFold [DeepMind’s AI] generates are far more accurate than any that have come before — making significant progress on one of the core challenges in biology.”

There are various scientific methods for predicting the native 3D state of protein molecules (i.e. how the protein chain folds to arrive at the native state) from the amino acid sequence encoded in DNA.

But modelling the 3D structure is a highly complex task, given how many permutations there can be on account of protein folding being dependent on factors such as interactions between amino acids.

There’s even a crowdsourced game (Foldit) that tries to leverage human intuition to predict workable protein forms.

DeepMind says its approach rests upon years of prior research in using big data to try to predict protein structures.

Specifically it’s applying deep learning approaches to genomic data.

“Fortunately, the field of genomics is quite rich in data thanks to the rapid reduction in the cost of genetic sequencing. As a result, deep learning approaches to the prediction problem that rely on genomic data have become increasingly popular in the last few years. DeepMind’s work on this problem resulted in AlphaFold, which we submitted to CASP [Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction] this year,” it writes in the blog post.

“We’re proud to be part of what the CASP organisers have called “unprecedented progress in the ability of computational methods to predict protein structure,” placing first in rankings among the teams that entered (our entry is A7D).”

“Our team focused specifically on the hard problem of modelling target shapes from scratch, without using previously solved proteins as templates. We achieved a high degree of accuracy when predicting the physical properties of a protein structure, and then used two distinct methods to construct predictions of full protein structures,” it adds.

DeepMind says the two methods it used relied on deep neural networks trained to predict a protein’s properties from its genetic sequence.

“The properties our networks predict are: (a) the distances between pairs of amino acids and (b) the angles between chemical bonds that connect those amino acids. The first development is an advance on commonly used techniques that estimate whether pairs of amino acids are near each other,” it explains.

“We trained a neural network to predict a separate distribution of distances between every pair of residues in a protein. These probabilities were then combined into a score that estimates how accurate a proposed protein structure is. We also trained a separate neural network that uses all distances in aggregate to estimate how close the proposed structure is to the right answer.”
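The scoring idea is simple enough to sketch. The toy below is not DeepMind’s code; the ‘predicted’ distributions are random stand-ins for a trained network’s output, and a proposed set of 3D coordinates is scored by summing the log-probabilities of the pairwise distances it actually realizes:

```python
import numpy as np

def structure_score(coords, pred, bin_edges):
    """Score a proposed structure against predicted distance distributions.

    coords: (n, 3) array of proposed residue positions.
    pred: (n, n, k) array; pred[i, j] is a probability distribution over
          k distance bins for residue pair (i, j).
    bin_edges: (k + 1,) array of distance-bin boundaries.
    Higher scores mean the realized distances are more probable.
    """
    n, k = len(coords), pred.shape[-1]
    score = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(coords[i] - coords[j])
            b = int(np.clip(np.searchsorted(bin_edges, dist) - 1, 0, k - 1))
            score += np.log(pred[i, j, b] + 1e-9)  # epsilon avoids log(0)
    return score

# Toy usage: 5 residues, 10 distance bins, random 'predictions'.
rng = np.random.default_rng(0)
pred = rng.dirichlet(np.ones(10), size=(5, 5))
coords = rng.normal(size=(5, 3))
print(structure_score(coords, pred, np.linspace(0.0, 5.0, 11)))
```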

It then used new methods to try to construct full protein structures whose properties matched those predictions.

“Our first method built on techniques commonly used in structural biology, and repeatedly replaced pieces of a protein structure with new protein fragments. We trained a generative neural network to invent new fragments, which were used to continually improve the score of the proposed protein structure,” it writes.
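That first construction method can be caricatured as score-guided fragment replacement. In the sketch below, the generative fragment network is replaced (as an assumption) by a random proposal function, and the score is a simple distance-error stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)

def score(coords, target):
    # Toy score: negative squared error against target pairwise distances
    # (standing in for the probability-derived score described above).
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    return -((dist - target) ** 2).sum()

def propose_fragment(length):
    # Stand-in for the generative network that 'invents' new fragments.
    return rng.normal(size=(length, 3))

def fragment_search(coords, target, iters=2000, frag_len=2):
    """Hill-climbing: splice in a proposed fragment; keep it if the score improves."""
    best = score(coords, target)
    for _ in range(iters):
        start = rng.integers(0, len(coords) - frag_len + 1)
        candidate = coords.copy()
        candidate[start:start + frag_len] = propose_fragment(frag_len)
        s = score(candidate, target)
        if s > best:
            coords, best = candidate, s
    return coords, best

# Toy usage: 4 residues, target distances taken from a random 'true' structure.
true_coords = rng.normal(size=(4, 3))
target = np.sqrt(((true_coords[:, None] - true_coords[None]) ** 2).sum(-1) + 1e-12)
coords, best = fragment_search(rng.normal(size=(4, 3)), target)
print(best)  # climbs toward 0 as the proposed structure improves
```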

“The second method optimised scores through gradient descent — a mathematical technique commonly used in machine learning for making small, incremental improvements — which resulted in highly accurate structures. This technique was applied to entire protein chains rather than to pieces that must be folded separately before being assembled, reducing the complexity of the prediction process.”
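The second method can be sketched the same way. Here DeepMind’s learned, distribution-based score is swapped for a plain squared-error potential over pairwise distances (again an assumption, for brevity), and gradient descent runs over the coordinates of the whole chain at once:

```python
import numpy as np

def pairwise_distances(coords):
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(-1) + 1e-12)  # epsilon keeps sqrt differentiable

def loss(coords, target):
    # Squared error between realized and target pairwise distances,
    # standing in for a score derived from predicted distributions.
    return 0.5 * ((pairwise_distances(coords) - target) ** 2).sum()

def grad(coords, target):
    # Analytic gradient of the loss with respect to every coordinate.
    diff = coords[:, None, :] - coords[None, :, :]
    dist = pairwise_distances(coords)
    coeff = (dist - target) / dist
    np.fill_diagonal(coeff, 0.0)
    return 2.0 * (coeff[:, :, None] * diff).sum(axis=1)

# Toy usage: recover a random 4-residue structure from its own distances.
rng = np.random.default_rng(1)
true_coords = rng.normal(size=(4, 3))
target = pairwise_distances(true_coords)

coords = rng.normal(size=(4, 3))
for _ in range(2000):
    coords -= 0.01 * grad(coords, target)  # plain gradient descent step
print(loss(coords, target))  # approaches zero (up to a local minimum)
```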

DeepMind describes the results achieved thus far as “early signs of progress in protein folding” using computational methods — claiming they demonstrate “the utility of AI for scientific discovery”.

Though it also emphasizes it’s still early days for the deep learning approach having any kind of “quantifiable impact”.

“Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” it writes. “With a dedicated team focused on delving into how machine learning can advance the world of science, we’re looking forward to seeing the many ways our technology can make a difference.”

Lime tries to back-pedal on VP’s line on why it hired Definers

Scooter startup Lime has sought to back-pedal on an explanation given by its VP of global expansion late last week, when asked why it had hired the controversial PR firm Definers Public Affairs.

The opposition research firm, which has ties to the Republican Party, has been at the center of a reputation storm for Facebook, after a New York Times report last month suggested it had sought to leverage anti-Semitic smear tactics — by sending journalists a document linking anti-Facebook groups to billionaire George Soros (after he had been critical of Facebook).

Last month it also emerged that other tech firms had engaged Definers — Lime being one of them. And speaking during an on stage interview at TechCrunch Disrupt Berlin last Thursday, Lime’s Caen Contee claimed it had not known Definers would use smear tactics.

Yet, as we reported previously, a Definers employee sent us an email pitch in October in which it wrote suggestively that “Bird’s numbers seem off”.

This pitch did not disclose the PR firm was being paid by Lime.

Asked about this last week Contee claimed not to know anything about Definers’ use of smear tactics, saying Lime had engaged the firm to work on its green and carbon free programs — and to try to understand “what were the levers of opportunity for us to really create the messaging and also to do our own research; understanding the life-cycle; all the pieces that are in a very complex business”.

“As soon as we understood they were doing some of these things we parted ways and finished our program with them,” he also said.

However, following the publication of our article reporting on his comments, a Lime spokesperson emailed with what the subject line billed as a “statement for your latest story”, teeing this up by writing: “Hoping you can update the piece”.

The statement went on to claim that Contee “misspoke” and “was inaccurate in his description of [Definers] work”.

However it did not specify exactly what Contee had said that was incorrect.

A short while later the same Lime spokesperson sent us another version of the statement with updated wording, now entirely removing the reference to Contee.

You can read both statements below.

As you read them, note how the second version of the statement obfuscates the exact source of the claimed inaccuracy, using wording that shifts blame in a way a casual reader might interpret as external and outside the company’s control…

Statement 1:

Our VP of Global Expansion misspoke at TechCrunch Disrupt regarding our relationship with Definers and was inaccurate in his description of their work. As previously reported, we engaged them for a three month contract to assist with compiling media coverage reports, limited public relations and fact checking, and we are no longer working with Definers.

Statement 2:

What was presented at Disrupt regarding our relationship with Definers and the description of their work was inaccurate. As previously reported, we engaged them for a three month contract to assist with compiling media coverage reports, limited public relations and fact checking, and we are no longer working with Definers.

Despite the Lime spokesperson’s hope for a swift update to our report, they did not respond when we asked for clarification on what exactly Contee had said that was “inaccurate”.

A claim of inaccuracy that does not provide any detail of the substance upon which the claim rests smells a lot like spin to us.

Three days later we’re still waiting to hear the substance of Lime’s claim because it has still not provided us with an explanation of exactly what Contee said that was ‘wrong’.

Perhaps Lime was hoping for a silent edit to the original report to provide some camouflaging fuzz atop a controversy of the company’s own making. i.e. that a PR firm it hired tried to smear a rival.

If so, oopsy.

Of course we’ll update this report if Lime does get in touch to provide an explanation of what it was that Contee “misspoke”. Frankly we’re all ears at this point.

DoJ charges Autonomy founder with fraud over $11BN sale to HP

UK entrepreneur turned billionaire investor Mike Lynch has been charged with fraud in the US over the 2011 sale of his enterprise software company.

Lynch sold Autonomy, the big data company he founded back in 1996, to computer giant HP for around $11BN some seven years ago.

But within a year around three-quarters of the value of the business had been written off, with HP accusing Autonomy’s management of accounting misrepresentations and disclosure failures.

Lynch has always rejected the allegations, and after HP sought to sue him in UK courts he countersued in 2015.

Meanwhile the UK’s own Serious Fraud Office dropped an investigation into the Autonomy sale in 2015 — finding “insufficient evidence for a realistic prospect of conviction”.

But now the DoJ has filed charges in a San Francisco court, accusing Lynch and other senior Autonomy executives of making false statements that inflated the value of the company.

They face 14 counts of conspiracy and fraud, according to Reuters — charges that carry a maximum penalty of 20 years in prison.

We’ve reached out to Lynch’s fund, Invoke Capital, for comment on the latest development.

The BBC has obtained a statement from his lawyers, Chris Morvillo of Clifford Chance and Reid Weingarten of Steptoe & Johnson, which describes the indictment as “a travesty of justice”.

The statement also claims Lynch is being made a scapegoat for HP’s failures, framing the allegations as a business dispute over the application of UK accounting standards. 

Two years ago we interviewed Lynch on stage at TechCrunch Disrupt London and he mocked the morass of allegations still swirling around the acquisition as “spin and bullshit”.

Following the latest developments, the BBC reports that Lynch has stepped down as a scientific adviser to the UK government.

“Dr. Lynch has decided to resign his membership of the CST [Council for Science and Technology] with immediate effect. We appreciate the valuable contribution he has made to the CST in recent years,” a government spokesperson told it.

Enterprise AR is an opportunity to “do well by doing good”, says General Catalyst

A founder-investor panel on augmented reality (AR) technology here at TechCrunch Disrupt Berlin suggests growth hopes for the space have regrouped around enterprise use-cases, after the VR consumer hype cycle landed with yet another flop in the proverbial ‘trough of disillusionment’. Matt Miesnieks, CEO of mobile AR startup 6d.ai, conceded the space has generally been on another […]

A founder-investor panel on augmented reality (AR) technology here at TechCrunch Disrupt Berlin suggests growth hopes for the space have regrouped around enterprise use-cases, after the VR consumer hype cycle landed with yet another flop in the proverbial ‘trough of disillusionment’.

Matt Miesnieks, CEO of mobile AR startup 6d.ai, conceded the space has generally been on another downer but argued it’s coming out of its third hype cycle now with fresh b2b opportunities on the horizon.

6d.ai investor General Catalyst’s Niko Bonatsos was also on stage, and both suggested the challenge for AR startups is figuring out how to build for enterprises so the b2b market can carry the mixed reality torch forward.

“From my point of view the fact that Apple, Google, Microsoft, have made such big commitments to the space is very reassuring over the long term,” said Miesnieks. “Similar to the smartphone industry ten years ago we’re just gradually seeing all the different pieces come together. And as those pieces mature we’ll eventually, over the next few years, see it sort of coalesce into an iPhone moment.”

“I’m still really positive,” he continued. “I don’t think anyone should be looking for some sort of big consumer hit product yet but in verticals in enterprise, and in some of the core tech enablers, some of the tool spaces, there’s really big opportunities there.”

Investors shot the arrow over the target where consumer VR/AR is concerned because they’d underestimated how challenging the content piece is, Bonatsos suggested.

“I think what we got wrong is probably the belief that we thought more indie developers would have come into the space and that by now we would probably have, I don’t know, another ten Pokémon-type consumer massive hit applications. This is not happening yet,” he said.

“I thought we’d have a few more games because games always lead the adoption to new technology platforms. But in the enterprise this is very, very exciting.”

“For sure also it’s clear that in order to have the iPhone moment we probably need to have much better hardware capabilities,” he added, suggesting everyone is looking to the likes of Apple to drive that forward in the future. On the plus side he said current sentiment is “much, much, much better than what it was a year ago”.

Discussing potential b2b applications for AR tech, one idea Miesnieks suggested is transportation platforms that want to link a rider to the location of an on-demand and/or autonomous vehicle.

Another area of opportunity he sees is working with hardware companies to add spatial awareness to devices such as smartphones and drones, expanding their capabilities.

More generally they mentioned training for technical teams, field sales and collaborative use-cases as areas with strong potential.

“There are interesting applications in pharma, oil & gas where, with the aid of the technology, you can do very detailed stuff that you couldn’t do before because… you can follow everything on your screen and you can use your hands to do whatever it is you need to be doing,” said Bonatsos. “So that’s really, really exciting.

“These are some of the applications that I’ve seen. But it’s early days. I haven’t seen a lot of products in the space. It’s more like there’s one dev shop working with the chief innovation officer of one specific company that is much more forward thinking and they want to come up with a really early demo.

“Now we’re seeing some early stage tech startups that are trying to attack these problems. The good news is that good dollars is being invested in trying to solve some of these problems — and whoever figures out how to get dollars from the… bigger companies, these are real enterprise businesses to be built. So I’m very excited about that.”

At the same time, the panel delved into some of the complexities and social challenges facing technologists as they try to integrate blended reality into, well, the real deal.

Including raising the spectre of Black Mirror-style dystopia once smartphones can recognize and track moving objects in a scene — and 6d.ai’s tech shows that’s coming.

Miesnieks showed a brief video demo of 3D technology running live on a smartphone that’s able to identify cars and people moving through the scene in real time.

“Our team were able to solve this problem probably a year ahead of where the rest of the world is at. And it’s exciting. If we showed this to anyone who really knows 3D they’d literally jump out of the chair. But… it opens up all of these potentially unintended consequences,” he said.

“We’re wrestling with what might this be used for. Sure it’s going to make [a] Pokémon game more fun. It could also let a blind person walk down the street and have awareness of cars and people and they may not need a cane or something.

“But it could let you like tap and literally have people be removed from your field of view and so you only see the type of people that you want to look at. Which can be dystopian.”
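To make the mechanics concrete: below is a minimal, hypothetical sketch of the kind of per-frame detection loop such a demo builds on, using OpenCV’s off-the-shelf pedestrian detector. It is far cruder than 6d.ai’s live 3D scene understanding, which reconstructs geometry and tracks objects in three dimensions; the point is only to show the frame-in, bounding-boxes-out pattern that real-time object awareness rests on.

```python
# Hypothetical, simplified sketch -- not 6d.ai's technology.
import cv2

# Classical HOG + linear SVM pedestrian detector that ships with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the current frame: returns bounding boxes and scores.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("per-frame detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Swap the detector for a neural network running on-device and add 3D tracking, and you arrive at the class of capability — and the surveillance concerns — the panel was discussing.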

He pointed to issues being faced by the broader technology industry now, around social impacts and areas like privacy, adding: “We’re seeing some of the social impacts of how this stuff can go wrong, even if you assume good intentions.

“These sort of breakthroughs that we’re having are definitely causing us to be aware of the responsibility we have to think a bit more deeply about how this might be used for the things we didn’t expect.”

From the investor point of view Bonatsos said his thesis for enterprise AR has to be similarly sensitive to the world around the tech.

“It’s more about can we find the domain experts, people like Matt, that are going to do well by doing good. Because there are a tonne of different parameters to think about here and have the credibility in the market to make it happen,” he suggested, noting: “It’s much more like traditional enterprise investing.”

“This is a great opportunity to use this new technology to do well by doing good,” Bonatsos continued. “So the responsibility is here from day one to think about privacy, to think about all the fake stuff that we could empower, what do we want to do, what do we want to limit? As well as, as we’re creating this massive, augmented reality, 3D version of the world — like who is going to own it, and share all this wealth? How do we make sure that there’s going to be a whole new ecosystem that everybody can take part of it. It’s very interesting stuff to think about.”

“Even if we do exactly what we think is right, and we assume that we have good intentions, it’s a big grey area in lots of ways and we’re going to make lots of mistakes,” conceded Miesnieks, after discussing some of the steps 6d.ai has taken to try to reduce privacy risks around its technology — such as local processing coupled with anonymizing/obfuscating any data that is taken off the phone.
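For illustration, here is a minimal sketch of that “process locally, anonymize whatever leaves the device” pattern. The field names, salting scheme and rounding threshold are our own hypothetical choices, not 6d.ai’s actual pipeline:

```python
# Hypothetical sketch of anonymizing data before it leaves the device.
# Field names and thresholds are illustrative, not 6d.ai's real schema.
import hashlib
import os

SALT = os.urandom(16)  # per-install random salt; never uploaded

def anonymize(record: dict) -> dict:
    """Strip direct identifiers and coarsen location before upload."""
    return {
        # One-way, salted hash of the device ID so uploads from the same
        # device can be grouped without exposing the ID itself.
        "device": hashlib.sha256(SALT + record["device_id"].encode()).hexdigest(),
        # Round coordinates to ~1km so a home address isn't recoverable.
        "lat": round(record["lat"], 2),
        "lng": round(record["lng"], 2),
        # Keep only derived, non-identifying output of the local processing.
        "detections": record["detections"],
    }

# Example: raw record processed on-device, anonymized copy sent upstream.
upload = anonymize({
    "device_id": "A1B2-C3D4",
    "lat": 52.520008,
    "lng": 13.404954,
    "detections": ["car", "person"],
})
print(upload)
```

The design choice is the interesting part: the raw camera frames never leave the phone at all, and only coarse, derived outputs do.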

“When [mistakes] happen — not if, when — all that we’re going to be able to rely on is our values as a company and the trust that we’ve built with the community by saying these are our values and then actually living up to them. So people can trust us to live up to those values. And that whole domain of startups figuring out values, communicating values and looking at this sort of abstract ‘soft’ layer — I think startups as an industry have done a really bad job of that.

“Even big companies. There’s only a handful that you could say… are pretty clear on their values. But for AR and this emerging tech domain it’s going to be, ultimately, the core that people trust us.”

Bonatsos also pointed to rising political risk as a major headwind for startups in this space — noting how China’s government has decided to regulate the gaming market because of social impacts.

“That’s unbelievable. This is where we’re heading with the technology world right now. Because we’ve truly made it. We’ve become mainstream. We’re the incumbents. Anything we build has huge, huge intended and unintended consequences,” he said.

“Having a government that regulates how many games can be built or how many games can be released — like that’s incredible. No company had to think of that before as a risk. But when people are spending so many hours and so much money on the tech products they are using every day, this is the [inevitable] next step.”