Outgoing Facebook CSO Alex Stamos will join Disrupt SF to talk cybersecurity

At Disrupt SF 2018, Facebook’s soon-to-be-former chief security officer Alex Stamos will join us to chat about his tenure in the top security role at the world’s biggest social network, what it was like to weather some of the biggest security and privacy scandals ever to hit the tech industry, and how to secure U.S. elections in the 2018 midterms and beyond.

Following his last day at Facebook on August 17, Stamos will transition to an academic role at Stanford, starting this September. Since March, Stamos has focused on election security at Facebook as the company tries to rid its massive platform of Russian interference and bolster it against disinformation campaigns aiming to disrupt U.S. politics.

“It is critical that we as an industry live up to our collective responsibility to consider the impact of what we build, and I look forward to continued collaboration and partnership with the security and safety teams at Facebook,” Stamos said of the company he is leaving.

At Stanford, Stamos will take on a full-time role as an adjunct professor with the university’s Freeman Spogli Institute for International Studies and plans to conduct research as well. Stamos previously taught a security class at Stanford and intends to expand on that foundation with a hands-on “hack lab” where students explore real-world hacking techniques and how to defend against them. With the class, open to non-computer science majors, Stamos seeks to expose a broader swath of students to the intricacies of cybersecurity.

Prior to his time at Facebook, Stamos served as the chief information security officer at Yahoo. He left in 2015 for the Facebook role, reportedly after clashes at the beleaguered company over cybersecurity resources and the implementation of measures like end-to-end encryption. In both roles, Stamos navigated the choppy waters of high-profile privacy scandals while trying to chart a more secure path forward.

Facebook loses its chief security officer Alex Stamos

Alex Stamos, Facebook’s chief security officer since 2015, is leaving the company to take a position at Stanford University. The company has been shedding leadership over the last half a year largely owing to fallout from its response, or lack thereof, to the ongoing troubles relating to user data security and election interference on the social network.

Rumors that Stamos was not long for the company spread in March; he was said to have disagreed considerably with the tack Facebook had taken in disclosure and investigation of its role in hosting state-sponsored disinformation seeded by Russian intelligence. To be specific, he is said to have preferred more and better disclosures rather than the slow drip-feed of half-apologies, walkbacks, and admissions we’ve gotten from the company over the last year or so.

He said in March that “despite the rumors, I’m still fully engaged with my work at Facebook,” though he acknowledged that his role now focused on “emerging security risks and working on election security.”

Funnily enough, that is exactly the topic he will be looking into at Stanford as a new adjunct professor, where he will be joining a new group called Information Warfare, the New York Times reported.

Leaving because of a major policy disagreement with his employer would not be out of character for Stamos. He reportedly left Yahoo (which of course was absorbed into Aol to form TechCrunch’s parent company, Oath) because of the company’s choice to allow U.S. intelligence access to certain user data. One may imagine a similar gulf in understanding between him and others at Facebook, especially on something as powerfully divisive as this election interference story or the Cambridge Analytica troubles.

Stamos is far from the only Facebook official to leave recently; chief legal officer Colin Stretch announced his departure last month after more than eight years at the company; its similarly long-serving head of policy and comms, Elliot Schrage, left the month before; and WhatsApp co-founder Jan Koum departed in April.

We’ve contacted Facebook and Stamos for comment and will update this post when we hear back.

Facebook really doesn’t want users to go to a fake Unite the Right counter-protest next week

According to COO Sheryl Sandberg, getting ahead of an event called “No Unite the Right 2, DC” is the reason behind Facebook’s decision to disclose new platform behavior that closely resembles previous Russian state-sponsored activity meant to sow political discord in the U.S.

“We’re sharing this today because [of] the connection between these actors and the event planned in Washington next week,” Sandberg said, calling the disclosure “early” and noting that the company still does not have all the facts.

A Facebook Page called “Resisters” created the event, set to take place on August 10, as a protest against Unite the Right 2 — a follow-up to last year’s rally in Charlottesville, Va., where peaceful counter-protester Heather Heyer was killed.

The Page, which Facebook identified as displaying “coordinated inauthentic behavior,” also worked with admins of five authentic Facebook Pages to co-host the event and arrange transportation and logistics. Facebook has notified those users of its findings and taken down the event page.

This isn’t the first event coordinated by fake Facebook accounts with the likely intention of further polarizing U.S. voters. In a call today, Facebook noted that the new inauthentic accounts it found had created around 30 events. While the dates for two have yet to pass, “the others have taken place over the past year or so.”

Facebook will not yet formally attribute its new findings to the Russian state-linked Internet Research Agency (IRA). Still, the Resisters Page hosting “No Unite the Right 2, DC” listed a previously identified IRA account as a co-admin for “only seven minutes.”

That link, and whatever else the public doesn’t know at this time, is enough for Senate Intelligence Committee Vice Chairman Mark Warner to credit the Russian government with what appears to be an ongoing campaign of political influence.

“Today’s disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity,” Warner said in a statement provided to TechCrunch. “I also expect Facebook, along with other platform companies, will continue to identify Russian troll activity and to work with Congress on updating our laws to better protect our democracy in the future.”

Facebook’s chief security officer, Alex Stamos, maintained that the company “doesn’t think it’s appropriate for Facebook to give public commentary on political motivations of nation states” and called the IRA link “interesting but not determinant.”

Dodged questions from Facebook’s press call on misinformation

Facebook avoided some of the toughest inquiries from reporters yesterday during a conference call about its efforts to fight election interference and fake news. The company did provide additional transparency on important topics by subjecting itself to intense questioning from a gaggle of its most vocal critics, and a few bits of interesting news did emerge:

  • Facebook’s fact-checking partnerships now extend to 17 countries, up from 14 last month
  • Top searches in its new political ads archive include California, Clinton, Elizabeth Warren, Florida, Kavanaugh, North Carolina and Trump; and its API for researchers will open in August
  • To give political advertisers a quicker path through its new verification system, Facebook is considering a preliminary location check that would later expire unless they verify their physical mailing address

Yet deeper questions went unanswered. Will it be transparent about downranking accounts that spread false news? Does it know if the midterm elections are already being attacked? Are politically divisive ads cheaper?

Facebook CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on the protection of user data, April 11, 2018. (Photo by Tom Williams/CQ Roll Call; via Flickr CC Sean P. Anderson)

Here’s a selection of the most important snippets from the call, followed by a discussion of how it evaded some critical topics.

Fresh facts and perspectives

On Facebook’s approach of downranking instead of deleting fake news

Tessa Lyons, product manager for the News Feed: “If you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive . . . Just because something is allowed to be on Facebook doesn’t mean it should get distribution . . . We know people don’t want to see false information at the top of their News Feed and we believe we have a responsibility to prevent false information from getting broad distribution. This is why our efforts to fight disinformation are focused on reducing its spread. 

“When we take action to reduce the distribution of misinformation in News Feed, what we’re doing is changing the signals and predictions that inform the relevance score for each piece of content. Now, what that means is that information, that content appears lower in everyone’s News Feed who might see it, and so fewer people will actually end up encountering it.

Image: Bryce Durbin/TechCrunch

Now, the reason that we strike that balance is because we believe we are working to strike the balance between expression and the safety of our community.

If a piece of content or an account violates our Community Standards, it’s removed; if a Page repeatedly violates those standards, the Page is removed. On the side of misinformation — not Community Standards — if an individual piece of content is rated false, its distribution is reduced; if a Page or domain repeatedly shares false information, the entire distribution of that Page or domain is reduced.”
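To make that remove-versus-reduce distinction concrete, here is a minimal sketch of the logic Lyons describes. It is purely illustrative: the class and field names are invented, the page-level factor is an assumption, and only the story-level figure tracks Facebook’s public statement (discussed below) that a false rating cuts roughly 80 percent of a story’s future News Feed views.

```python
from dataclasses import dataclass

# Facebook has said a false rating costs a story ~80% of its future News Feed
# views; the page-level factor below is an invented placeholder.
STORY_DEMOTION = 0.2   # false-rated story keeps ~20% of predicted reach
PAGE_DEMOTION = 0.5    # assumed penalty for repeat-offender Pages

@dataclass
class Post:
    relevance_score: float
    violates_standards: bool = False   # Community Standards violation
    rated_false: bool = False          # fact-checker verdict

@dataclass
class Page:
    repeat_offender: bool = False      # repeatedly shared false-rated news

def moderate(post: Post, page: Page) -> str:
    """Remove hard violations; merely reduce distribution of misinformation."""
    if post.violates_standards:
        return "removed"                          # violating content comes down
    if post.rated_false:
        post.relevance_score *= STORY_DEMOTION    # demote the individual story
    if page.repeat_offender:
        post.relevance_score *= PAGE_DEMOTION     # demote the whole Page's reach
    return "ranked"  # stays up, but appears lower in News Feed
```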

On how Facebook disrupts misinformation operations targeting elections

Nathaniel Gleicher, head of Cybersecurity Policy: “For each investigation, we identify particular behaviors that are common across threat actors. And then we work with our product and engineering colleagues as well as everyone else on this call to automate detection of these behaviors and even modify our products to make those behaviors much more difficult. If manual investigations are like looking for a needle in a haystack, our automated work is like shrinking that haystack. It reduces the noise in the search environment which directly stops unsophisticated threats. And it also makes it easier for our manual investigators to corner the more sophisticated bad actors. 

In turn, those investigations keep turning up new behaviors, which fuels our automated detection and product innovation. Our goal is to create this virtuous circle where we use manual investigations to disrupt sophisticated threats and continually improve our automation and products based on the insights from those investigations. Look for the needle and shrink the haystack.”
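Gleicher’s needle-and-haystack framing maps onto a simple two-stage pipeline: manual investigations yield behavioral signatures, and an automated pass filters the account pool down to candidates matching them. A hedged sketch, with invented signals standing in for whatever signatures Facebook’s investigators actually extract:

```python
from typing import Callable

Account = dict  # e.g. {"account_age_days": 3, "events_created": 5, ...}

# Invented stand-ins for signatures distilled from manual investigations.
SIGNATURES: list[Callable[[Account], bool]] = [
    lambda a: a["account_age_days"] < 7 and a["events_created"] > 3,
    lambda a: a["shared_admins_with_known_bad"] > 0,
]

def shrink_haystack(accounts: list[Account]) -> list[Account]:
    """Automated pass: keep only accounts matching a known-bad behavior,
    leaving a far smaller set for manual investigators to dig into."""
    return [a for a in accounts if any(sig(a) for sig in SIGNATURES)]

# New behaviors found during manual review get appended to SIGNATURES,
# closing Gleicher's "virtuous circle."
```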

TechCrunch/Bryce Durbin

On reactions to political ads labeling, improving the labeling process and the ads archive

Rob Leathern, product manager for Ads: “On the revenue question, the political ads aren’t a large part of our business from a revenue perspective, but we do think it’s very important to be giving people tools so they can understand how these ads are being used. 

“I do think we have definitely seen some folks have some indigestion about the process of getting authorized. We obviously think it’s an important trade-off and it’s the right trade-off to make. We’re definitely exploring ways to reduce the time for them from starting the authorization process to being able to place an ad. We’re considering a preliminary location check that might expire after a certain amount of time, which would then become permanent once they verify their physical mailing address and receive the letter that we send to them.

We’re actively exploring ways to streamline the authorization process and are clarifying our policy by providing examples on what ad copy would require authorization and a label and what would not.

We also plan to add more information to the Info and Ads tab for Pages. Today you can see when the Page was created, previous Page names, but over time we hope to add more context for people there in addition to the ads that that Page may have run as well.”

Dodged questions

On transparency about downranking accounts

Facebook has been repeatedly asked to clarify the lines it draws around content moderation. It has arrived at a controversial policy: content is allowed even if it spreads fake news, gets downranked in News Feed if fact-checkers rate it false, and gets deleted if it incites violence or harasses other users. Repeat offenders in the latter two categories can have their whole profile, Page or Group downranked or deleted.

But that surfaces secondary questions about how transparent it is about these decisions and their impacts on the reach of false news. Hannah Kuchler of The Financial Times and Sheera Frenkel of The New York Times pushed Facebook on this topic. Specifically, the latter asked, “I was wondering if you have any intention going forward to be transparent about who is going — who is down-ranked and are you keeping track of the effect that down-ranking a Page or a person in the News Feed has and do you have those kinds of internal metrics? And then is that also something that you’ll eventually make public?”

Facebook has said that if a post is fact-checked as false, it’s downranked and loses 80 percent of its future views through News Feed. But that ignores the fact that it can take three days for fact-checkers to get to some fake news stories, by which point they’ve likely already received the majority of their distribution. Facebook has yet to explain how a false rating from fact-checkers reduces a story’s total views before and after the decision, or what the ongoing reach reduction is for accounts downranked as a whole for repeatedly sharing false-rated news.
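A back-of-envelope calculation shows why that three-day lag matters. Assuming, purely for illustration, that a viral story’s daily views decay geometrically, an 80 percent demotion applied on day three prevents only around a tenth of the story’s lifetime views:

```python
# Illustrative only: the geometric decay of a viral story's daily views
# is an assumption; the 80% demotion figure is Facebook's.

def views_prevented(demotion=0.8, lag_days=3, decay=0.5, horizon=14):
    daily = [decay ** d for d in range(horizon)]         # day 0 = 1.0 "unit"
    total = sum(daily)
    before = sum(daily[:lag_days])                       # served before the rating
    after = sum(v * (1 - demotion) for v in daily[lag_days:])
    return 1 - (before + after) / total                  # fraction never served

print(f"{views_prevented():.0%} of lifetime views prevented")     # ~10%
print(f"{views_prevented(lag_days=0):.0%} if rated immediately")  # 80%
```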

Lyons only answered regarding what happens to individual posts rather than providing the requested information about the impact on downranked accounts:

Lyons: “If you’re asking specifically will we be transparent about the impact of fact-checking on demotions, we are already transparent about the rating that fact-checkers provide . . . In terms of how we notify Pages when they share information that’s false, any time any Page or individual shares a link that has been rated false by fact-checkers, if we already have a false rating we warn them before they share, and if we get a false rating after they share, we send them a notification. We are constantly transparent, particularly with Page admins, but also with anybody who shares information about the way in which fact-checkers have evaluated their content.”

On whether politically divisive ads are cheaper and more effective

A persistent question about Facebook’s ads auction is whether it favors inflammatory political ads over neutral ones. The auction is designed to prioritize more engaging ads, because boring ads are more likely to push users off the social network and thereby reduce future ad views. The concern is that this incentivizes political candidates — and bad actors trying to interfere with elections — to polarize society, because ads that stoke division are more efficient to run.
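To see the mechanics behind that concern, here is a toy engagement-weighted, second-price auction. Facebook’s actual formula isn’t public, so the numbers and the value function (bid times predicted engagement) are assumptions; the point is only that a more engaging — including a more incendiary — ad can win the slot while paying a lower effective CPM:

```python
# Toy model: rank ads by bid * predicted engagement, charge the winner just
# enough to outrank the runner-up (second-price style). Assumed mechanics.

def run_auction(ads: dict[str, tuple[float, float]]):
    """ads maps name -> (bid CPM in $, predicted engagement rate)."""
    ranked = sorted(ads.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    (name, (bid, eng)), (_, (ru_bid, ru_eng)) = ranked[0], ranked[1]
    price = (ru_bid * ru_eng) / eng   # pay just enough to beat the runner-up
    return name, price

winner, cpm = run_auction({
    "incendiary": (5.0, 0.08),   # lower bid, high predicted engagement
    "neutral":    (8.0, 0.03),   # higher bid, modest engagement
})
print(winner, f"wins at effective CPM ${cpm:.2f}")  # incendiary wins at $3.00
```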

Deepa Seetharaman of The Wall Street Journal surfaced this on the call, saying, “I’m talking to a lot of campaign strategists coming up to the 2018 election. One theme that I continuously hear is that the more incendiary ads do better, but the effective CPMs on those particular ads are lower than, I guess, neutral or more positive messaging. Is that a dynamic that you guys are comfortable with? And is there anything that you’re doing to kind of change the kind of ads that succeeds through the Facebook ad auction system?”

Facebook’s Leathern leaned on the same defense Facebook has used to challenge questions about whether Donald Trump got cheaper ad rates during the 2016 election: that all the factors going into ad prices and reach make such an assessment too hard. Meanwhile, he ignored whether, regardless of the data, Facebook wanted to make changes to ensure divisive ads don’t get preference.

Leathern: “Look, I think that it’s difficult to take a very specific slice of a single ad and use it to draw a broad inference which is one of the reasons why we think it’s important in the spirit of the transparency here to continue to offer additional transparency and give academics, journalists, experts, the ability to analyze this data across a whole bunch of ads. That’s why we’re launching the API and we’re going to be starting to test it next month. We do believe it’s important to give people the ability to take a look at this data more broadly. That, I think, is the key here — the transparency and understanding of this when seen broadly will give us a fuller picture of what is going on.”

On whether there’s evidence of midterm election interference

Facebook failed to adequately protect the 2016 U.S. presidential election from Russian interference. Since then it’s taken a lot of steps to try to safeguard its social network, from hiring more moderators to political advertiser verification systems to artificial intelligence for fighting fake news and the fake accounts that share it.

Internal debates about approaches to the issue and a reorganization of Facebook’s security teams contributed to Facebook CSO Alex Stamos’ decision to leave the company next month. Yesterday, BuzzFeed’s Ryan Mac and Charlie Warzel published an internal memo from March in which Stamos urges Facebook to change: “We need to build a user experience that conveys honesty and respect, not one optimized to get people to click yes to giving us more access . . . We need to listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world.” And today, Facebook’s chief legal officer Colin Stretch announced his departure.

Facebook’s efforts to stop interference aren’t likely to have completely deterred those seeking to sway or discredit our elections, though. Evidence of Facebook-based attacks on the midterms could fuel calls for government regulation, investments in counter-cyberwarfare and Robert Mueller’s investigation into Russia’s role.

Facebook co-founder, chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018 — his second day of testimony before Congress after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

David McCabe of Axios and Cecilia Kang of The New York Times pushed Facebook to say clearly whether it had already found evidence of interference in the midterms, but Facebook’s Gleicher refused to specify. While it’s reasonable that he wouldn’t want to jeopardize Facebook’s or Mueller’s investigations, Facebook should at least ask the government whether it can disclose such findings.

Gleicher: “When we find things and as we find things — and we expect that we will — we’re going to notify law enforcement and we’re going to notify the public where we can . . . And one of the things we have to be really careful with here is that as we think about how we answer these questions, we need to be careful that we aren’t compromising investigations that we might be running or investigations the government might be running.”

The answers we need

So Facebook, what’s the impact of a false rating from fact checkers on a story’s total views before and after it’s checked? Will you reveal when whole accounts are downranked and what the impact is on their future reach? Do politically incendiary ads that further polarize society cost less and perform better than politically neutral ads, and, if so, will Facebook do anything to change that? And does Facebook already have evidence that the Russians or anyone else are interfering with the U.S. midterm elections?

We’ll see if any of the analysts who get to ask questions on today’s Facebook earnings call will step up.

Facebook’s chief legal officer to leave this year

Facebook’s chief legal officer Colin Stretch has announced he’ll be out by the end of the year. 

In the inevitable Facebook post explaining why he’s moving on, Stretch writes that after he and his wife made a decision to move back to DC from California “a few years ago… we knew it would be difficult for me to remain in this role indefinitely”.

“As Facebook embraces the broader responsibility Mark [Zuckerberg] has discussed in recent months, I’ve concluded that the company and the Legal team need sustained leadership in Menlo Park,” he adds, saying he’ll stay to the end of the year to help with the transition.

Facebook has had a very awkward two years so far as politically charged scandals go. First revelations about the massive Kremlin-fueled election interference which it totally missed. Then the massive Cambridge Analytica data misuse debacle which Facebook also claims to have totally missed, even though it (still apparently) employs one of the academics whose quiz app was the vehicle used to suck out people’s data.

Since then a bunch of follow-on admissions have flowed from the company, confirming that access to user data on its platform wasn’t as locked down as it has historically liked to claim — a claim it kept making despite masses of evidence to the contrary.

Nor, perhaps, as locked down as the FTC might have expected, given its 2011 privacy settlement with the company. The regulator has now opened a fresh investigation. Meanwhile, Facebook is carrying out a retrospective app audit — a not-so-tacit admission of its abject lack of enforcement of its own developer policy.

And yet, publicly at least, no heads have rolled at Facebook despite all this failure.

Most likely because, as founder Mark Zuckerberg recently told Recode’s Kara Swisher during a podcast interview: “I designed the platform, so if someone’s going to get fired for this, it should be me.”

Of course Zuckerberg isn’t going to fire himself. Not when he doesn’t have to. Given the structure of the company he’s sitting pretty on his CEO throne, no matter how tarnished that crown now is.

Instead of firing himself — let’s not forget his 2016 attempt to dismiss the notion of Facebook-enabled election interference as a “pretty crazy idea” — Zuckerberg once again fired up his multi-year apology tour for privacy and data-related screw-ups, rolling it through 2017 and 2018 as fresh scandals rocked the company’s reputation and raised the specter of regulation to rein in damaging activity the company has spectacularly failed to control.

Though you’d be hard pressed to read any of this scandalabra just by looking at the company’s earnings and stock price. Perhaps because investors view any regulation as likely to cement Facebook’s dominance, rather than upset the apple cart in a way that could allow a younger model to come in and disrupt its grip on consumers’ eyeballs.

Even so, 2018 has seen Zuckerberg, if not literally dragged, then politically compelled to appear in front of US and EU lawmakers — where he faced a barrage of questions, some dumb, others cutting to the heart of the company’s contradictions.

Last year Facebook’s chief legal officer Colin Stretch was also in the Senate, alongside reps from Google and Twitter, fielding awkward questions about Russian election interference and the spread of extremist content on the platform.

There Stretch made an unfortunate slip of the tongue during his introductory remarks — seemingly saying “keeping people unsafe on Facebook is critical to our mission” before quickly correcting himself to stress he’d meant to say “keeping people safe”. As Freudian slips go it’s a doozy.

But it’s certainly not a great time for Facebook to be losing its general counsel. Not with so much ongoing political and legal risk. Although if Zuckerberg isn’t going to go then perhaps other Facebook veterans will feel compelled to leave on his behalf.

With the usual departing platitudes, Stretch writes: “This has not been an easy decision. Companies are made up of people, and the people here are talented, caring, and most of all committed to doing the right thing. Even now, eight-and-a-half years after I started, I often stop myself and ask how I got so lucky to be a part of this.”

“There is never a ‘right time’ for a transition like this, but the team and the company boast incredible talent and will navigate this well,” he adds.

In March it also emerged that Facebook would likely be parting ways with its long-time chief security officer, Alex Stamos, this summer — after The New York Times reported on internal disagreements between the CSO and other execs, saying Stamos had wanted Facebook to be more public about the misuse of its platform by nation states.

This week BuzzFeed News obtained an internal memo sent by Stamos in March, days after he had confirmed his plans to leave the company, in which he writes: “I was the Chief Security Officer during the 2016 election season, and I deserve as much blame (or more) as any other exec at the company.”

He demurs on confirming whether he had actually quit at that point, but does admit to having had “passionate discussions with other execs” — including, seemingly, about Facebook’s approach to sharing public data on Russian disinformation.

“The world has changed from underneath us in many ways. One change has been the thrusting of private tech companies into the struggle between nation-states,” he writes on this. “Traditionally, the standard has been to report malicious activity by adversary nations to US law enforcement. We are moving into a world where the major platforms are going to be expected to provide our findings, attribution and data directly to the public, making us a visible participant in the battle between cyberwarfare titans.”

“This is an uncomfortable transition, and I have not always agreed with the compromises we have struck in the process. That being said, I believe my colleagues have all approached the process in good faith, and together we have sorted through legitimate equities that needed to be weighed,” Stamos adds.

Stamos goes on to implore colleagues to make major changes “to win back the world’s trust” — including rethinking the metrics Facebook fixes itself to as a business; being more adversarial in its thinking when building products and processes; and — in what looks very much like a swipe at the company’s use of dark pattern design in its consent flows — re-engineering how it gathers user data to be more honest and minimize (rather than maximize) data collection.

On that it’s worth noting that privacy by design is a core plank of Europe’s new data protection framework, GDPR — which Stamos is seemingly describing at one point in the memo, without giving it a literal name-check.

“We need to build a user experience that conveys honesty and respect, not one optimized to get people to click yes to giving us more access. We need to intentionally not collect data where possible, and to keep it only as long as we are using it to serve people,” he writes [emphasis his]. “We need to find and stop adversaries who will be copying the playbook they saw in 2016. We need to listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world. We need to deprioritize short-term growth and revenue and to explain to Wall Street why that is ok. We need to be willing to pick sides when there are clear moral or humanitarian issues. And we need to be open, honest and transparent about challenges and what we are doing to fix them.”