Yes, Facebook is using your 2FA phone number to target you with ads

Facebook has confirmed it does in fact use phone numbers that users provided it for security purposes to also target them with ads.

Specifically, a phone number handed over for two-factor authentication (2FA) — a security technique that adds a second layer of authentication to help keep accounts secure.
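
For context on the mechanics: phone-based 2FA typically works by texting a short-lived one-time code to the number on file and checking it at login. Here's a minimal, purely illustrative sketch of that pattern — the names and flow are hypothetical, not Facebook's implementation:

```python
# Minimal sketch of SMS-style 2FA code issuance and verification.
# Purely illustrative, with hypothetical names — not Facebook's implementation.
import secrets
import time

CODE_TTL_SECONDS = 300  # one-time codes expire after five minutes
_pending = {}  # phone_number -> (code, expiry timestamp)

def issue_code(phone_number: str) -> str:
    """Generate a six-digit one-time code and record it against the number."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone_number] = (code, time.time() + CODE_TTL_SECONDS)
    return code  # a real system would hand this to an SMS gateway, not return it

def verify_code(phone_number: str, submitted: str) -> bool:
    """Accept a code only once, and only before it expires."""
    entry = _pending.pop(phone_number, None)
    if entry is None:
        return False
    code, expiry = entry
    return time.time() < expiry and secrets.compare_digest(code, submitted)
```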

Facebook’s confession follows a story Gizmodo ran yesterday, related to research carried out by academics at two U.S. universities, who ran a study in which they say they were able to demonstrate that the company uses pieces of personal information that individuals did not explicitly provide to it to, nonetheless, target them with ads.

While it’s been — if not clear, then at least evident — for a number of years that Facebook uses the contact details of individuals who never personally provided their information for ad targeting purposes (harvesting people’s personal data by other means, such as other users’ mobile phone contact books, which the Facebook app uploads), the revelation that numbers provided to Facebook by users in good faith, for the purpose of 2FA, are also, in its view, fair game for ads has not been so explicitly ‘fessed up to before.

Some months ago Facebook did say that the spamming of users with Facebook notifications to the number they provided for 2FA was a bug. “The last thing we want is for people to avoid helpful security features because they fear they will receive unrelated notifications,” Facebook then-CSO Alex Stamos wrote in a blog post at the time.

Apparently not thinking to mention the rather pertinent additional side-detail that it’s nonetheless happy to repurpose the same security feature for ad targeting.

Because $$$s, presumably.

We asked Facebook to confirm this is indeed what it’s doing — to make doubly, doubly sure. Because, srsly wtaf. And it sent us a statement confirming that it repurposes digits handed to it by people wanting to secure their accounts to target them with marketing.

Here’s the statement, attributed to a Facebook spokesperson: “We use the information people provide to offer a better, more personalized experience on Facebook, including ads. We are clear about how we use the information we collect, including the contact information that people upload or add to their own accounts. You can manage and delete the contact information you’ve uploaded at any time.”

A spokesman also told us that users can opt out of this ad-based repurposing of their security digits by not using phone number-based 2FA. (Albeit, the company only added the ability to do 2FA without a phone number back in May, so anyone who signed up before then was all outta luck.)

On the ‘shadow profiles’ front — aka Facebook maintaining profiles of non-users based on the data it has been able to scrape about them from users and other data sources — the company has also been less than transparent.

Founder Mark Zuckerberg feigned confusion when questioned about the practice by US lawmakers earlier this year — claiming it only gathers data on non-users for “security purposes”.

Well it seems Facebook is also using the (valid) security concerns of actual users to extend its ability to target individuals with ads — by using numbers provided for 2FA to also carry out ad targeting.

Safe to say criticism of the company has been swift and sharp.

Soon Facebook will also be using behind-the-scenes tech means to target ads at WhatsApp users — despite also providing a robust encrypted security wrapper around their actual messages.

Stamos — now Facebook’s ex-CSO — has also defended its actions on that front.

Facebook’s ex-CSO, Alex Stamos, defends its decision to inject ads in WhatsApp

Alex Stamos, Facebook’s former chief security officer, who left the company this summer to take up a role in academia, has made a contribution to what’s sometimes couched as a debate about how to monetize (and thus sustain) commercial end-to-end encrypted messaging platforms in order that the privacy benefits they otherwise offer can be as widely spread as possible.

Stamos made the comments via Twitter, where he said he was indirectly responding to the fallout from a Forbes interview with WhatsApp co-founder Brian Acton — in which Acton hit out at his former employer for being greedy in its approach to generating revenue off of the famously anti-ads messaging platform.

Both WhatsApp founders’ exits from Facebook have been blamed on disagreements over monetization. (Jan Koum left some months after Acton.)

In the interview, Acton said he suggested Facebook management apply a simple business model atop WhatsApp, such as metered messaging for all users after a set number of free messages. But that management pushed back — with Facebook COO Sheryl Sandberg telling him they needed a monetization method that generates greater revenue “scale”.

And while Stamos has avoided making critical remarks about Acton (unlike some current Facebook staffers), he clearly wants to lend his weight to the notion that some kind of trade-off is necessary in order for end-to-end encryption to be commercially viable (and thus for the greater good (of messaging privacy) to prevail); and therefore his tacit support to Facebook and its approach to making money off of a robustly encrypted platform.

Stamos’ own departure from the fb mothership was hardly on such acrimonious terms as Acton’s, though he has had his own disagreements with the leadership team — as set out in a memo he sent earlier this year that was obtained by BuzzFeed. So his support for Facebook combining e2e and ads perhaps counts for something, though it isn’t really surprising given the seat he occupied at the company for several years, and his always fierce defence of WhatsApp encryption.

(Another characteristic concern that also surfaces in Stamos’ Twitter thread is the need to keep the technology legal, in the face of government attempts to backdoor encryption, which he says will require “accepting the inevitable downsides of giving people unfettered communications”.)

This summer Facebook confirmed that, from next year, ads will be injected into WhatsApp statuses (aka the app’s Stories clone). So it is indeed bringing ads to the famously anti-ads messaging platform.

For several years the company has also been moving towards positioning WhatsApp as a business messaging platform to connect companies with potential customers — and it says it plans to meter those messages, also from next year.

So there are two strands to its revenue-generating playbook atop WhatsApp’s e2e encrypted messaging platform. Both have knock-on impacts on privacy, given that Facebook targets ads and marketing content by profiling users via the personal data it harvests on them.

This means that while WhatsApp’s e2e encryption means Facebook literally cannot read WhatsApp users’ messages, it is ‘circumventing’ the technology (for ad-targeting purposes) by linking accounts across different services it owns — using people’s digital identities across its product portfolio (and beyond) as a sort of ‘trojan horse’ to negate the messaging privacy it affords them on WhatsApp.

Facebook is using different technical methods (including the very low-tech method of phone number matching) to link WhatsApp user and Facebook accounts. Once it’s been able to match a Facebook user to a WhatsApp account it can then connect what’s very likely to be a well fleshed out Facebook profile with a WhatsApp account that nonetheless contains messages it can’t read. So it’s both respecting and eroding user privacy.
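
Conceptually, phone number matching is the simplest of those methods: normalize the numbers each service holds into a canonical form, then join accounts on the result. The sketch below illustrates the idea; the normalization is deliberately naive (a real system would do full E.164 parsing, via something like libphonenumber), and none of this is Facebook's actual code:

```python
# Illustrative sketch of account linking via phone number matching.
# Not Facebook's code; the normalization is deliberately simplified.
import re

def normalize(number: str, default_country_code: str = "1") -> str:
    """Reduce a phone number to digits and prepend a country code if missing."""
    digits = re.sub(r"\D", "", number)
    if not number.strip().startswith("+") and len(digits) == 10:
        digits = default_country_code + digits
    return "+" + digits

def link_accounts(facebook_users: dict, whatsapp_users: dict) -> list:
    """Join two account tables on normalized phone number."""
    fb_by_phone = {normalize(p): uid for uid, p in facebook_users.items()}
    links = []
    for wa_uid, phone in whatsapp_users.items():
        fb_uid = fb_by_phone.get(normalize(phone))
        if fb_uid is not None:
            links.append((fb_uid, wa_uid))
    return links

# Example: the same person under two formats of the same number.
fb = {"fb_123": "(415) 555-0134"}
wa = {"wa_987": "+14155550134"}
print(link_accounts(fb, wa))  # [('fb_123', 'wa_987')]
```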

This approach means Facebook can carry out its ad targeting activities across both messaging platforms (as it will from next year). And do so without having to literally read messages being sent by WhatsApp users.

As trade-offs go, it’s clearly a big one — and one that’s got Facebook into regulatory trouble in Europe.

It is also, at least in Stamos’ view, a trade-off that’s worth it for the ‘greater good’ of message content remaining strongly encrypted and therefore unreadable — even if Facebook now knows pretty much everything about the sender, and can access any unencrypted messages they sent using its other social products.

In his Twitter thread Stamos argues that “if we want that right to be extended to people around the world, that means that E2E encryption needs to be deployed inside of multi-billion user platforms”, which he says means: “We need to find a sustainable business model for professionally-run E2E encrypted communication platforms.”

On the sustainable business model front he argues that two models “currently fit the bill” — either Apple’s iMessage or Facebook-owned WhatsApp. Though he doesn’t go into any detail on why he believes only those two are sustainable.

He does say he’s discounting the Acton-backed alternative, Signal, which now operates via a not-for-profit (the Signal Foundation) — suggesting that rival messaging app is “unlikely to hit 1B users”.

In passing he also throws it out there that Signal is “subsidized, indirectly, by FB ads” — i.e. because Facebook pays a licensing fee for use of the underlying Signal Protocol used to power WhatsApp’s e2e encryption. (So his slightly shade-throwing subtext is that privacy purists are still benefiting from a Facebook sugardaddy.)

Then he gets to the meat of his argument in defence of Facebook-owned (and monetized) WhatsApp — pointing out that Apple’s sustainable business model does not reach every mobile user, given its hardware is priced at a premium. Whereas WhatsApp running on a cheap Android handset ($50, or perhaps even $30 in future) can.

Other encrypted messaging apps can also of course run on Android but presumably Stamos would argue they’re not professionally run.

“I think it is easy to underestimate how radical WhatsApp’s decision to deploy E2E was,” he writes. “Acton and Koum, with Zuck’s blessing, jumped off a bridge with the goal of building a monetization parachute on the way down. FB has a lot of money, so it was a very tall bridge, but it is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue.

“This could come from directly charging for the service, it could come from advertising, it could come from a WeChat-like services play. The first is very hard across countries, the latter two are complicated by E2E.”

“I can’t speak to the various options that have been floated around, or the arguments between WA and FB, but those of us who care about privacy shouldn’t see WhatsApp monetization as something evil,” he adds. “In fact, we should want WA to demonstrate that E2E and revenue are compatible. That’s the only way E2E will become a sustainable feature of massive, non-niche technology platforms.”

Stamos is certainly right that Apple’s iMessage cannot reach every mobile user, given the premium cost of Apple hardware.

Though he elides the important role that secondhand Apple devices play in reducing the barrier to entry to Apple’s pro-privacy technology — a role Apple is actively encouraging via support for older devices, and via its own services business expansion, which extends its model so that support for older versions of iOS (and thus secondhand iPhones) is also commercially sustainable.

Stamos’ contention that robust encryption is only possible via multi-billion-user platforms essentially boils down to a usability argument — a suggestion that mainstream app users will simply not seek encryption out unless it’s plated up for them in a way where they don’t even notice it’s there.

The follow-on conclusion is that only a well-resourced giant like Facebook can maintain and serve such tech up to the masses.

There’s certainly substance in that point. But the wider question is whether the privacy trade-offs entailed by Facebook’s monetization of WhatsApp — linking Facebook and WhatsApp accounts and thereby also looping in the various less-than-transparent data-harvesting methods it uses to gather intelligence on web users generally — substantially erode the value of the e2e encryption that is now being bundled with Facebook’s ad-targeting surveillance, and so used as a selling aid for otherwise privacy-eroding practices.

Yes WhatsApp users’ messages will remain private, thanks to Facebook funding the necessary e2e encryption. But the price users are having to pay is very likely still their personal privacy.

And at that point the argument really becomes one about how much profit a commercial entity should be able to extract off of a product that’s being marketed as securely encrypted and thus ‘pro-privacy’. How much revenue “scale” is reasonable or unreasonable in that scenario?

Other business models are possible, which was Acton’s point. But likely less profitable. And therein lies the rub where Facebook is concerned.

How much money should any company be required to leave on the table, as Acton did when he left Facebook without the rest of his unvested shares, in order to be able to monetize a technology that’s bound up so tightly with notions of privacy?

Acton wanted Facebook to agree to make as much money as it could without users having to pay it with their privacy. But Facebook’s management team said no. That’s why he’s calling them greedy.

Stamos doesn’t engage with that more nuanced point. He just writes: “It is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue” — thereby collapsing the revenue argument into an all or nothing binary without explaining why it has to be that way.

Facebook pilots new political campaign security tools — just 50 days before Election Day

Facebook has rolled out a “pilot” program of new security tools for political campaigns — just weeks before millions of Americans go to the polls for the midterm elections.

The social networking giant said it’s targeting campaigns that “may be particularly vulnerable to targeting by hackers and foreign adversaries.”

Once campaigns are enrolled, Facebook said it’ll help them adopt stronger security protections, “like two-factor authentication and monitor for potential hacking threats,” Nathaniel Gleicher, Facebook’s head of cybersecurity policy, wrote in a Monday blog post.

Facebook’s chief Mark Zuckerberg has admitted that the company “didn’t do enough” in the 2016 presidential election to prevent meddling and the spread of misinformation, yet has still taken a lashing from lawmakers for failing to step up ahead of the midterms.

A former Obama campaign official told TechCrunch that the offering was important — but late.

“Fifty days is an eternity in campaign time,” said Harper Reed, who served as President Obama’s chief technology officer during the 2012 re-election campaign. “At this point, if [a campaign] has made gross security problems, they’ve already made them.”

But he questioned if now equipping campaigns with security tools will “actually solve the problem, or if it just solves Facebook’s PR problem.”

Facebook — like other tech giants — has been under the microscope in recent years after the social networking giant failed to prevent foreign meddling in the 2016 presidential election, in which adversaries — typically Russia — used the platform to spread disinformation.

The company’s done more to crack down on foreign interference campaigns after facing rebuke from lawmakers.

But ahead of the midterms, even the company’s former chief security officer was critical of Facebook. In an interview at Disrupt SF, Alex Stamos said that critical steps to protect the midterms hadn’t been taken in time.

“If there’s no foreign interference during the midterms, it’s not because we did a great job. It’s because our adversaries decided to [show] a little forbearance, which is unfortunate,” said Stamos.

Facebook, for its part, said its latest rollout of security tools “might be expanded to future elections and other users” beyond the midterms.

“Hacking is a part of elections,” said Reed. But with just two months to go before voters go to the polls, campaigns “have to just keep doing what they’re doing,” he said.

Former Facebook security chief says creating election chaos is still easy

As someone who’s had a years-long front-row seat to Russia’s efforts to influence US politics, former Facebook Chief Security Officer Alex Stamos has a pretty solid read on what we can expect from the 2018 midterms. Stamos left the company last month to work on cybersecurity education at Stanford.

“If there’s no foreign interference during the midterms, it’s not because we did a great job,” Stamos said in an interview with TechCrunch at Disrupt SF on Thursday. “It’s because our adversaries decided to [show] a little forbearance, which is unfortunate.”

As Stamos sees it, there is an alternate reality in which the US electorate would be better off heading into its next major nationwide voting day — but the critical steps needed to get there haven’t been taken.

“As a society, we have not responded to the 2016 election in the way that would’ve been necessary to have a more trustworthy midterms,” he said. “There have been positive changes, but overall security of campaigns [is] not that much better, and the actual election infrastructure isn’t much better.”

Stamos believes it’s important to remember that foreign adversaries can’t dictate the outcome of an election with any kind of guarantee. What they can do — and what he calls his “big fear” — is mess everything up in a way that calls the entire system into question.

“In most cases, throwing an election one way or another is going to be very difficult for a foreign adversary but throwing any election into chaos is totally doable right now,” he said. “That’s where we haven’t moved forwards.”

Stamos gave examples such as attacks on voter registration sites that lose voter data, or denial-of-service attacks on election day.

“With a disinformation campaign at the same time, you can make it so that you have half the country that thinks the election was thrown,” he said.

To a foreign adversary seeking to undermine US democracy, creating that kind of doubt isn’t very technically difficult. Even with no votes changed and no voting systems breached, a little doubt goes a very long way toward accomplishing the same goals as a more sophisticated hacking campaign.

Stamos cites new ad funding disclosures as one substantive change that will help make US democracy healthier, but says more steps need to be taken.

“Russian interference or not, we do not want a future where campaigns and candidates are cutting up the electorate into smaller and smaller pieces — so I think ad transparency is the first step there,” he said.

In some cases, those efforts will require a major shift in the way both the US government and private social media companies have conducted themselves. For one, as he wrote in Lawfare, the US needs “an independent, defense-only cybersecurity agency with no intelligence, military or law enforcement responsibility” rather than a patchwork of agencies each partially responsible for cybersecurity defense.

The news may not be great for 2018, but a strong dose of realism now will amplify the clarion call to do better before 2020.

Former Facebook security chief Alex Stamos: Being a CSO can be a ‘crappy job’

Alex Stamos has been at the security helm of some of the world’s most powerful companies for the past half-decade and is widely regarded as one of the smartest people working in the security space.

Now, just a month into his new gig as an academic, he can look back at his time with a dose of brutal honesty.

“It’s kinda a crappy job to be a chief security officer,” said Stamos, Facebook’s former security chief, in an interview with TechCrunch at Disrupt SF on Thursday.

“It’s like being a [chief financial officer] before accounting was invented,” he said.

“When you decide to take on the [chief security officer] title, you decide that you’re going to run the risk of having decisions made above you or issues created by tens of thousands of people making decisions that will be stapled to your resume,” he said.

Stamos recently joined Stanford University after three years as Facebook’s security chief. Before then, he was Yahoo’s chief information security officer, departing that company after less than a year, reportedly in conflict with then-Yahoo chief executive Marissa Mayer over the company’s complicity with a secret government surveillance program.

To many, his name is synonymous with the fierce defense of user security and rights, but he was at the helm when both of his former employers were hit by security scandals — Yahoo with a three-billion-user data breach, and Facebook with the Cambridge Analytica voter profiling incident. Though both were inherited, he said he wasn’t going to “shirk” the blame.

“I was the CSO when all this stuff happened — it was my responsibility,” he said.

“I also hope I was able to make things better,” he said. “If you’re making individual decisions that you believe are ethical and moral that are pushing the ball in the right direction, in the end if things are imperfect, you have to live with yourself and continue to do good things.”

He said most companies have to navigate not just security, but also privacy and the misuse of their products.

Stamos admits that while he came from a “traditional CSO” background, he quickly learned that the vast majority of harm caused by technology “does not have any interesting technical component.”

Speaking to disinformation, child abuse and harassment, he said that it’s the “technically correct use of the things we build that cause harm.”

He said that the industry needs to vastly expand how companies deal with issues that encompass but don’t fall within the strict realm of cybersecurity. “There’s not really a field around it,” he said, speaking to the need to redefine “cybersecurity” to also include issues of trust, safety and privacy — three things that are important for companies to work to ensure, but that don’t necessarily fit into the traditional security model.

“There’s not a tech company starting up right now that is not going to have to worry about these trust, safety and privacy issues,” he said. “And hopefully we can take some of those lessons and spread them out a bit more.”

“I’ve learned a lot of things from the failures I’ve seen up close and I want other people to learn about them,” he said. That, he said, is one of the things he wants to help teach at Stanford, where he’s likely to stay for some time.

Asked if he would ever go back to a previous role as a chief security officer, “not for quite a long time,” he said.

Outgoing Facebook CSO Alex Stamos will join Disrupt SF to talk cybersecurity

At Disrupt SF 2018, Facebook’s soon-to-be-former chief security officer Alex Stamos will join us to chat about his tenure in the top security role for the world’s biggest social network, how it feels to have weathered some of the biggest security and privacy scandals to ever hit the tech industry and securing U.S. elections in the 2018 midterms and beyond.

Following his last day at Facebook on August 17, Stamos will transition to an academic role at Stanford, starting this September. Since March, Stamos has focused on election security at Facebook as the company tries to rid its massive platform of Russian interference and bolster it against disinformation campaigns aiming to disrupt U.S. politics.

“It is critical that we as an industry live up to our collective responsibility to consider the impact of what we build, and I look forward to continued collaboration and partnership with the security and safety teams at Facebook,” Stamos said of the company he is leaving.

At Stanford, Stamos will take on a full-time role as an adjunct professor with the university’s Freeman Spogli Institute for International Studies and plans to conduct research as well. Stamos previously taught a security class at Stanford and intends to expand on that foundation with a hands-on “hack lab” where students explore real-world hacking techniques and how to defend against them. With the class, open to non-computer science majors, Stamos seeks to expose a broader swath of students to the intricacies of cybersecurity.

Prior to his time at Facebook, Stamos served as the chief information security officer at Yahoo. He left in 2015 for his new security role at Facebook, reportedly after clashes at the beleaguered company over cybersecurity resources and the implementation of measures like end-to-end encryption. In both roles, Stamos navigated the choppy waters of high-profile privacy scandals while trying to chart a more secure path forward.

Facebook loses its chief security officer Alex Stamos

Alex Stamos, Facebook’s chief security officer since 2015, is leaving the company to take a position at Stanford University. The company has been shedding leadership over the last half a year largely owing to fallout from its response, or lack thereof, to the ongoing troubles relating to user data security and election interference on the social network.

Rumors that Stamos was not long for the company spread in March; he was said to have disagreed considerably with the tack Facebook had taken in disclosure and investigation of its role in hosting state-sponsored disinformation seeded by Russian intelligence. To be specific, he is said to have preferred more and better disclosures rather than the slow drip-feed of half-apologies, walkbacks, and admissions we’ve gotten from the company over the last year or so.

He said in March that “despite the rumors, I’m still fully engaged with my work at Facebook,” though he acknowledged that his role now focused on “emerging security risks and working on election security.”

Funnily enough, that is exactly the topic he will be looking into at Stanford, where, as a new adjunct professor, he will be joining a new group called Information Warfare, The New York Times reported.

Leaving because of a major policy disagreement with his employer would not be out of character for Stamos. He reportedly left Yahoo (which of course was absorbed into Aol to form TechCrunch’s parent company, Oath) because of the company’s choice to allow U.S. intelligence access to certain user data. One may imagine a similar gulf in understanding between him and others at Facebook, especially on something as powerfully divisive as this election interference story or the Cambridge Analytica troubles.

Stamos is far from the only Facebook official to leave recently; Colin Stretch, chief legal officer, left last month after more than eight years at the company; its similarly long-serving head of policy and comms, Elliot Schrage, left the month before; WhatsApp cofounder Jan Koum left that company in April.

We’ve contacted Facebook and Stamos for comment and will update this post when we hear back.

Facebook really doesn’t want users to go to a fake Unite the Right counter-protest next week

According to COO Sheryl Sandberg, getting ahead of an event called “No Unite the Right 2, DC” is the reason behind Facebook’s decision to disclose new platform behavior that closely resembles previous Russian state-sponsored activity meant to sow political discord in the U.S.

“We’re sharing this today because the connection between these actors and the event planned in Washington next week,” Sandberg said, calling the disclosure “early” and noting that the company still does not have all the facts.

A Facebook Page called “Resisters” created the event, set to take place on August 10, as a protest against Unite the Right 2 — a follow-up event to last year’s deadly rally in Charlottesville, Va. that left peaceful counter-protester Heather Heyer dead.

The Page, which Facebook identified as displaying “coordinated inauthentic behavior,” also worked with the admins from five authentic Facebook Pages to co-host the event and arrange transportation and logistics. Facebook has notified those users of its findings and taken down the event page.

This isn’t the first event coordinated by fake Facebook accounts with the likely intention of further polarizing U.S. voters. In a call today, Facebook noted that the new inauthentic accounts it found had created around 30 events. While the dates for two have yet to pass, “the others have taken place over the past year or so.”

Facebook will not yet formally attribute its new findings to the Russian state-linked Internet Research Agency (IRA). Still, the Resisters Page hosting “No Unite the Right 2, DC” listed a previously identified IRA account as a co-admin for “only seven minutes.”

That link, and whatever else the public doesn’t know at this time, is enough for Senate Intel Committee vice chairman Mark Warner to credit the Russian government with what appears to be an ongoing campaign of political influence.

“Today’s disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity,” Warner said in a statement provided to TechCrunch. “I also expect Facebook, along with other platform companies, will continue to identify Russian troll activity and to work with Congress on updating our laws to better protect our democracy in the future.”

Facebook’s chief security officer, Alex Stamos, maintained that the company “doesn’t think it’s appropriate for Facebook to give public commentary on political motivations of nation states” and called the IRA link “interesting but not determinant.”

Dodged questions from Facebook’s press call on misinformation

Facebook avoided some of the toughest inquiries from reporters yesterday during a conference call about its efforts to fight election interference and fake news. The company did provide additional transparency on important topics by subjecting itself to intense questioning from a gaggle of its most vocal critics, and a few bits of interesting news did emerge:

  • Facebook’s fact-checking partnerships now extend to 17 countries, up from 14 last month
  • Top searches in its new political ads archive include California, Clinton, Elizabeth Warren, Florida, Kavanaugh, North Carolina and Trump; and its API for researchers will open in August
  • To give political advertisers a quicker path through its new verification system, Facebook is considering a preliminary location check that would later expire unless they verify their physical mailing address

Yet deeper questions went unanswered. Will it be transparent about downranking accounts that spread false news? Does it know if the midterm elections are already being attacked? Are politically divisive ads cheaper?

Here’s a selection of the most important snippets from the call, followed by a discussion of how it evaded some critical topics.

Fresh facts and perspectives

On Facebook’s approach of downranking instead of deleting fake news

Tessa Lyons, product manager for the News Feed: “If you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive . . . Just because something is allowed to be on Facebook doesn’t mean it should get distribution . . . We know people don’t want to see false information at the top of their News Feed and we believe we have a responsibility to prevent false information from getting broad distribution. This is why our efforts to fight disinformation are focused on reducing its spread. 

When we take action to reduce the distribution of misinformation in News Feed, what we’re doing is changing the signals and predictions that inform the relevance score for each piece of content. Now, what that means is that information, that content appears lower in everyone’s News Feed who might see it, and so fewer people will actually end up encountering it.

Now, the reason that we strike that balance is because we believe we are working to strike the balance between expression and the safety of our community.

If a piece of content or an account violates our Community Standards, it’s removed; if a Page repeatedly violates those standards, the Page is removed. On the side of misinformation — not Community Standards — if an individual piece of content is rated false, its distribution is reduced; if a Page or domain repeatedly shares false information, the entire distribution of that Page or domain is reduced.”
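
In ranking terms, the demotion Lyons describes amounts to scaling a story's relevance score down rather than removing the story. Here's a toy illustration of that logic, with invented scores and penalty factors — Facebook has not published its actual ranking function:

```python
# Toy model of News Feed demotion: false-rated content keeps circulating,
# but with its ranking score scaled down. All numbers here are invented.
FALSE_RATING_PENALTY = 0.2     # roughly matching the "loses 80% of views" figure
REPEAT_OFFENDER_PENALTY = 0.5  # applied to everything from a flagged page

def relevance_score(base_score: float, rated_false: bool,
                    from_repeat_offender: bool) -> float:
    score = base_score
    if rated_false:
        score *= FALSE_RATING_PENALTY
    if from_repeat_offender:
        score *= REPEAT_OFFENDER_PENALTY
    return score

stories = [
    ("legit news", 0.9, False, False),
    ("false-rated story", 0.9, True, False),
    ("false-rated story from repeat-offender page", 0.9, True, True),
]
# Feed order: identical base scores, but demoted stories sink to the bottom.
for name, *_ in sorted(stories, key=lambda s: relevance_score(*s[1:]),
                       reverse=True):
    print(name)
```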

On how Facebook disrupts misinformation operations targeting elections

Nathaniel Gleicher, head of Cybersecurity Policy: “For each investigation, we identify particular behaviors that are common across threat actors. And then we work with our product and engineering colleagues as well as everyone else on this call to automate detection of these behaviors and even modify our products to make those behaviors much more difficult. If manual investigations are like looking for a needle in a haystack, our automated work is like shrinking that haystack. It reduces the noise in the search environment which directly stops unsophisticated threats. And it also makes it easier for our manual investigators to corner the more sophisticated bad actors. 

In turn, those investigations keep turning up new behaviors, which fuels our automated detection and product innovation. Our goal is to create this virtuous circle where we use manual investigations to disrupt sophisticated threats and continually improve our automation and products based on the insights from those investigations. Look for the needle and shrink the haystack.”
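
Gleicher's "shrink the haystack" workflow is, in essence, a cheap rule-based prefilter sitting in front of expensive human review. A hypothetical sketch of the pattern — the behavioral signals here are invented illustrations, not Facebook's real detection features:

```python
# Hypothetical prefilter in the spirit of "shrink the haystack": cheap
# automated rules cut the pool that human investigators must examine.
# The signals below are invented illustrations, not Facebook's real features.
SUSPICIOUS_RULES = [
    lambda a: a["account_age_days"] < 30 and a["posts_per_day"] > 50,
    lambda a: a["admin_of_pages"] > 5 and a["friends"] < 10,
    lambda a: a["ip_country"] != a["claimed_country"],
]

def prefilter(accounts):
    """Return only accounts that trip at least one behavioral rule."""
    return [a for a in accounts if any(rule(a) for rule in SUSPICIOUS_RULES)]

accounts = [
    {"id": 1, "account_age_days": 5, "posts_per_day": 80, "admin_of_pages": 0,
     "friends": 200, "ip_country": "US", "claimed_country": "US"},
    {"id": 2, "account_age_days": 900, "posts_per_day": 2, "admin_of_pages": 1,
     "friends": 350, "ip_country": "US", "claimed_country": "US"},
]
for account in prefilter(accounts):
    print("escalate to manual review:", account["id"])  # account 1 only
```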

On reactions to political ads labeling, improving the labeling process and the ads archive

Rob Leathern, product manager for Ads: “On the revenue question, the political ads aren’t a large part of our business from a revenue perspective, but we do think it’s very important to be giving people tools so they can understand how these ads are being used. 

I do think we have definitely seen some folks have some indigestion about the process of getting authorized. We obviously think it’s an important trade-off and it’s the right trade-off to make. We’re definitely exploring ways to reduce the time for them from starting the authorization process to being able to place an ad. We’re considering a preliminary location check that might expire after a certain amount of time, which would then become permanent once they verify their physical mailing address and receive the letter that we send to them.

We’re actively exploring ways to streamline the authorization process and are clarifying our policy by providing examples on what ad copy would require authorization and a label and what would not.

We also plan to add more information to the Info and Ads tab for Pages. Today you can see when the Page was created, previous Page names, but over time we hope to add more context for people there in addition to the ads that that Page may have run as well.”

Dodged questions

On transparency about downranking accounts

Facebook has been repeatedly asked to clarify the lines it draws around content moderation. It’s arrived at a controversial policy where content is allowed even if it spreads fake news, gets downranked in News Feed if fact checkers verify the information is false and gets deleted if it incites violence or harasses other users. Repeat offenders in the second two categories can get their whole profile, Page or Group downranked or deleted.

But that surfaces secondary questions about how transparent it is about these decisions and their impacts on the reach of false news. Hannah Kuchler of The Financial Times and Sheera Frenkel of The New York Times pushed Facebook on this topic. Specifically, the latter asked, “I was wondering if you have any intention going forward to be transparent about who is going — who is down-ranked and are you keeping track of the effect that down-ranking a Page or a person in the News Feed has and do you have those kinds of internal metrics? And then is that also something that you’ll eventually make public?”

Facebook has said that if a post is fact-checked as false, it’s downranked and loses 80 percent of its future views through News Feed. But that ignores the fact that it can take three days for fact checkers to get to some fake news stories, so they’ve likely already received the majority of their distribution. It’s yet to explain how a false rating from fact checkers reduces the story’s total views before and after the decision, or what the ongoing reach reduction is for accounts that are downranked as a whole for repeatedly sharing false-rated news.
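
The arithmetic behind that objection is easy to make concrete. With made-up but plausible daily view counts that front-load a story's reach, an 80 percent demotion arriving after day three barely dents the lifetime total:

```python
# Back-of-envelope: how much does an 80% demotion applied after a three-day
# fact-checking lag actually cut a story's lifetime views?
# The daily view counts below are invented for illustration.
daily_views = [100_000, 60_000, 30_000, 15_000, 8_000, 4_000, 2_000]
FACT_CHECK_DAY = 3  # the false rating lands after day 3
DEMOTION = 0.2      # post-demotion views are 20% of what they would have been

undemoted = sum(daily_views)
demoted = (sum(daily_views[:FACT_CHECK_DAY])
           + sum(v * DEMOTION for v in daily_views[FACT_CHECK_DAY:]))
print(f"lifetime views without demotion: {undemoted:,}")     # 219,000
print(f"lifetime views with demotion:    {int(demoted):,}")  # 195,800
print(f"actual lifetime reduction:       {1 - demoted / undemoted:.0%}")  # 11%
```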

Lyons only answered regarding what happens to individual posts rather than providing the requested information about the impact on downranked accounts:

Lyons: “If you’re asking specifically will we be transparent about the impact of fact-checking on demotions, we are already transparent about the rating that fact-checkers provide . . . In terms of how we notify Pages when they share information that’s false, any time any Page or individual shares a link that has been rated false by fact-checkers, if we already have a false rating we warn them before they share, and if we get a false rating after they share, we send them a notification. We are constantly transparent, particularly with Page admins, but also with anybody who shares information about the way in which fact-checkers have evaluated their content.”

On whether politically divisive ads are cheaper and more effective

A persistent question about Facebook’s ads auction is whether it gives preference to inflammatory political ads over neutral ones. The auction system is designed to prioritize more engaging ads, because they’re less likely than boring ads to push users off the social network and thereby reduce future ad views. The concern is that Facebook may be incentivizing political candidates — and bad actors trying to interfere with elections — to polarize society, by making ads that stoke divisions more cost-efficient.
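
The mechanics behind that concern are standard auction math: ad platforms typically rank bids weighted by predicted engagement, so at an identical bid a more provocative ad wins the slot and pays less per impression. A simplified sketch of that generic textbook mechanism — not Facebook's disclosed algorithm:

```python
# Simplified engagement-weighted ad auction: rank = bid * predicted engagement,
# and the winner pays just enough to outrank the runner-up (second-price style).
# Generic textbook mechanics for illustration, not Facebook's actual system.

def run_auction(ads):
    ranked = sorted(ads, key=lambda ad: ad["bid"] * ad["engagement"],
                    reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # Minimum bid at which the winner would still beat the runner-up's rank.
    price = (runner_up["bid"] * runner_up["engagement"]) / winner["engagement"]
    return winner["name"], round(price, 2)

ads = [
    {"name": "incendiary ad", "bid": 5.00, "engagement": 0.08},
    {"name": "neutral ad",    "bid": 5.00, "engagement": 0.02},
]
print(run_auction(ads))  # ('incendiary ad', 1.25) — same bid, lower effective CPM
```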

Deepa Seetharaman of The Wall Street Journal surfaced this on the call, saying, “I’m talking to a lot of campaign strategists coming up to the 2018 election. One theme that I continuously hear is that the more incendiary ads do better, but the effective CPMs on those particular ads are lower than, I guess, neutral or more positive messaging. Is that a dynamic that you guys are comfortable with? And is there anything that you’re doing to kind of change the kind of ads that succeeds through the Facebook ad auction system?”

Facebook’s Leathern used a similar defense Facebook has relied on to challenge questions about whether Donald Trump got cheaper ad rates during the 2016 election, claiming it was too hard to assess that given all the factors that go into determining ad prices and reach. Meanwhile, he ignored whether, regardless of the data, Facebook wanted to make changes to ensure divisive ads didn’t get preference.

Leathern: “Look, I think that it’s difficult to take a very specific slice of a single ad and use it to draw a broad inference which is one of the reasons why we think it’s important in the spirit of the transparency here to continue to offer additional transparency and give academics, journalists, experts, the ability to analyze this data across a whole bunch of ads. That’s why we’re launching the API and we’re going to be starting to test it next month. We do believe it’s important to give people the ability to take a look at this data more broadly. That, I think, is the key here — the transparency and understanding of this when seen broadly will give us a fuller picture of what is going on.”

On whether there’s evidence of midterm election interference

Facebook failed to adequately protect the 2016 U.S. presidential election from Russian interference. Since then it’s taken a lot of steps to try to safeguard its social network, from hiring more moderators to political advertiser verification systems to artificial intelligence for fighting fake news and the fake accounts that share it.

Internal debates about approaches to the issue and a reorganization of Facebook’s security teams contributed to Facebook CSO Alex Stamos’ decision to leave the company next month. Yesterday, BuzzFeed’s Ryan Mac and Charlie Warzel published an internal memo by Stamos from March urging Facebook to change. “We need to build a user experience that conveys honesty and respect, not one optimized to get people to click yes to giving us more access . . . We need to listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world.” And today, Facebook’s Chief Legal Officer Colin Stretch announced his departure.

Facebook’s efforts to stop interference aren’t likely to have completely deterred those seeking to sway or discredit our elections, though. Evidence of Facebook-based attacks on the midterms could fuel calls for government regulation, investments in counter-cyberwarfare and Robert Mueller’s investigation into Russia’s role.

David McCabe of Axios and Cecilia Kang of The New York Times pushed Facebook to be clear about whether it had already found evidence of interference into the midterms. But Facebook’s Gleicher refused to specify. While it’s reasonable that he didn’t want to jeopardize Facebook or Mueller’s investigation, it’s something that Facebook should at least ask the government if it can disclose.

Gleicher: “When we find things and as we find things — and we expect that we will — we’re going to notify law enforcement and we’re going to notify the public where we can . . . And one of the things we have to be really careful with here is that as we think about how we answer these questions, we need to be careful that we aren’t compromising investigations that we might be running or investigations the government might be running.”

The answers we need

So Facebook, what’s the impact of a false rating from fact checkers on a story’s total views before and after it’s checked? Will you reveal when whole accounts are downranked and what the impact is on their future reach? Do politically incendiary ads that further polarize society cost less and perform better than politically neutral ads, and, if so, will Facebook do anything to change that? And does Facebook already have evidence that the Russians or anyone else are interfering with the U.S. midterm elections?

We’ll see if any of the analysts who get to ask questions on today’s Facebook earnings call will step up.