Twitter widens its view of bad actors to fight election fiddlers

Twitter has announced more changes to its rules to try to make it harder for people to use its platform to spread politically charged disinformation and thereby erode democratic processes.

In an update on its “elections integrity work” yesterday, the company flagged several new changes to the Twitter Rules which it said are intended to provide “clearer guidance” on behaviors it’s cracking down on.

In the problem area of “spam and fake accounts”, Twitter says it’s responding to feedback that, to date, it’s been too conservative in how it thinks about spammers on its platform, and only taking account of “common spam tactics like selling fake goods”. So it’s expanding its net to try to catch more types of “inauthentic activity” — by taking into account more factors when determining whether an account is fake.

“As platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,” Twitter writes. “We now may remove fake accounts engaged in a variety of emergent, malicious behaviors.”

Some of the factors it says it will now also take into account when making a ‘spammer or not’ judgement (see the illustrative sketch after the list) are:

• Use of stock or stolen avatar photos
• Use of stolen or copied profile bios
• Use of intentionally misleading profile information, including profile location
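
To make that concrete, here’s a rough sketch, in Python, of how heuristic signals like these could feed a simple rule-based score. It’s purely illustrative; the signal names, weights and threshold are our own invention, since Twitter hasn’t published its actual detection logic.

```python
# Illustrative only: a toy rule-based scorer combining the kinds of profile
# signals Twitter describes. The signal names, weights and threshold are all
# hypothetical; Twitter has not published its actual detection logic.
from dataclasses import dataclass

@dataclass
class ProfileSignals:
    avatar_is_stock_or_stolen: bool  # e.g. flagged by reverse image search
    bio_is_copied: bool              # near-duplicate of another account's bio
    location_is_misleading: bool     # stated location contradicts other metadata

def spam_score(signals: ProfileSignals) -> float:
    """Sum weighted signals; higher means more spammer-like."""
    weights = {
        "avatar_is_stock_or_stolen": 0.4,
        "bio_is_copied": 0.3,
        "location_is_misleading": 0.3,
    }
    return sum(w for name, w in weights.items() if getattr(signals, name))

# Several weak signals compound: an account tripping two of the three crosses
# a (hypothetical) review threshold that a single stock photo alone would not.
account = ProfileSignals(True, True, False)
print(spam_score(account) >= 0.5)  # True -> queue for review
```

The gist is that several individually weak signals can compound into a strong one, which is exactly the kind of judgement Twitter says it will now make.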

Kremlin-backed online disinformation agents have been known to use stolen photos for avatars, and to claim their accounts are US-based despite the spambots being operated out of Russia. So it’s pretty clear why Twitter is cracking down on fake profile pics and location claims.

Less clear: Why it took so long for Twitter’s spam detection systems to be able to take account of these suspicious signals. But, well, progress is still progress.

(Intentionally satirical ‘Twitter fakes’ (aka parody accounts) should not be caught in this net, as Twitter has had a longstanding policy of requiring parody and fan accounts to be directly labeled as such in their Twitter bios.)

Pulling the threads of spambots

In another major-sounding policy change, the company says it’s targeting what it dubs “attributed activity” — so that when/if it “reliably” identifies an entity behind a rule-breaking account it can apply the same penalty actions against any additional accounts associated with that entity, regardless of whether the accounts themselves were breaking its rules or not.

This is potentially a very important change, given that spambot operators often create accounts long before they make active malicious use of them, leaving these spammer-in-waiting accounts entirely dormant, or doing something totally innocuous, sometimes for years before they get deployed for an active spam or disinformation operation.

So if Twitter is able to link an active disinformation campaign with spambots lying in wait to carry out the next operation, it could successfully disrupt the long-term planning of election fiddlers. Which would be great news.

Albeit, the devil will be in the detail of how Twitter enforces this new policy — such as how high a bar it’s setting itself with the word “reliably”.

Obviously there’s a risk that, if defined too loosely, Twitter could shut innocent newbs off its platform by incorrectly connecting them to a previously identified bad actor. Which it clearly won’t want to do.

The hope is that behind the scenes Twitter has got better at spotting patterns of behavior it can reliably associate with spammers — and will thus be able to put this new policy to good use.

There’s certainly good external research being done in this area. For example, recent work by Duo Security has yielded an open source methodology for identifying account automation on Twitter.

The team also dug into botnet architectures — and were able to spot a cryptocurrency scam botnet which Twitter had previously been recommending other users follow. So, again hopefully, the company has been taking close note of such research, and better botnet analysis underpins this policy change.
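
For a flavor of what automation detection can involve, here’s a minimal, hypothetical sketch of one classic heuristic (not Duo Security’s actual pipeline): scheduled bots often tweet at near-uniform intervals, whereas humans post in bursts.

```python
# A hypothetical illustration of one class of automation heuristic (not Duo
# Security's actual methodology): humans post in bursts, while scheduled bots
# often tweet at near-uniform intervals.
from statistics import mean, stdev

def looks_scheduled(timestamps, cv_threshold=0.1):
    """Flag accounts whose inter-tweet gaps are suspiciously regular.

    timestamps: tweet creation times in seconds, oldest first.
    cv_threshold: hypothetical cutoff on the coefficient of variation
    (stdev / mean) of the gaps between consecutive tweets.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    return stdev(gaps) / mean(gaps) < cv_threshold

# A bot posting exactly once an hour is flagged; an irregular human is not.
bot = [i * 3600.0 for i in range(24)]
human = [0.0, 410.0, 5000.0, 5100.0, 20000.0, 90000.0]
print(looks_scheduled(bot), looks_scheduled(human))  # True False
```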

There’s also more on this front: “We are expanding our enforcement approach to include accounts that deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules,” Twitter also writes.

This additional element is also notable. It essentially means Twitter has given itself a policy allowing it to act against entire malicious ideologies — i.e. against groups of people trying to spread the same sort of disinformation, not just a single identified bad actor connected to a number of accounts.

To use the example of the fake news peddler behind InfoWars, Alex Jones, who Twitter finally permanently banned last month, Twitter’s new policy suggests any attempts by followers of Jones to create ‘in the style of’ copycat InfoWars accounts on its platform, i.e. to try to indirectly return Jones’ disinformation to Twitter, would — or, well, could — face the same enforcement action it has already meted out to Jones’ own accounts.

Though Twitter does have a reputation for inconsistently applying its own policies. So it remains to be seen how it will, in fact, act.

And how enthusiastic it will be about slapping down disinformation ideologies — given its longstanding position as a free speech champion, and in the face of criticism that it is ‘censoring’ certain viewpoints.

Hacked materials

Another change being announced by Twitter now is a clampdown on the distribution of hacked materials via its platform.

Leaking hacked emails of political officials at key moments during an election cycle has been a key tactic for democracy fiddlers in recent years — such as the leak of emails sent by top officials in the Democratic National Committee during the 2016 US presidential election.

Or the last-minute email leak in France during the presidential election last year.

Twitter notes that its rules already prohibit the distribution of hacked material which contains “private information or trade secrets, or could put people in harm’s way” — but says it’s now expanding “the criteria for when we will take action on accounts which claim responsibility for a hack, which includes threats and public incentives to hack specific people and accounts”.

So it seems, generally, to be broadening its policy to cover a wider support ecosystem around election hackers — or hacking more generally.

Twitter’s platform does frequently host hackers — who use anonymous Twitter accounts to crow about their hacks and/or direct attack threats at other users…

Presumably Twitter will be shutting that kind of hacker activity down in future.

Though it’s unclear what the new policy might mean for a hacktivist group like Anonymous (which is very active on Twitter).

Twitter’s new policy might also have repercussions for Wikileaks — which was directly involved in the spreading of the DNC leaked emails, for example, yet nonetheless has not previously been penalized by Twitter. (And thus remains on its platform so far.)

One also wonders how Twitter might respond to a future tweet from, say, US president Trump encouraging the hacking of a political opponent….

Safe to say, this policy could get pretty murky and tricky for Twitter.

“Commentary about a hack or hacked materials, such as news articles discussing a hack, are generally not considered a violation of this policy,” it also writes, giving itself a bit of wiggle room on how it will apply (or not apply) the policy.

Daily spam decline

In the same blog post, Twitter gives an update on detection and enforcement actions related to its stated mission of improving “conversational health” and information integrity on its platform — including reiterating the action it took against Iran-based disinformation accounts in August.

It also notes that it removed ~50 accounts that had been misrepresenting themselves as members of various state Republican parties that same month and using Twitter to share “media regarding elections and political issues with misleading or incorrect party affiliation information”.

“We continue to partner closely with the RNC, DNC, and state election institutions to improve how we handle these issues,” it adds. 

On the automated detections front — where Twitter announced a fresh squeeze just three months ago — it reports that in the first half of September it challenged an average of 9.4 million accounts per week. (Though it does not specify how many of those challenged accounts turned out to be bona fide spammers, or at least failed to respond to the challenge.)

It also reports a continued decline in the average number of spam-related reports from users — down from an average of ~17,000 daily in May, to ~16,000 daily in September.

This summer it introduced a new registration process for developers requesting access to its APIs — intended to prevent the registration of what it describes as “spammy and low quality apps”.

Now it says it’s suspending, on average, ~30,000 applications per month as a result of efforts “to make it more difficult for these kinds of apps to operate in the first place”.

Elsewhere, Twitter also says it’s working on new proprietary systems to identify and remove “ban evaders at speed and scale”, as part of ongoing efforts to improve “proactive enforcements against common policy violations”.

In the blog, the company flags a number of product changes it has made this year too, including a recent change it announced two weeks ago which brings back the chronological timeline (via a setting users can toggle) — and which it now says it has rolled out.

“We recently updated the timeline personalization setting to allow people to select a strictly reverse-chronological experience, without recommended content and recaps. This ensures you have more control of how you experience what’s happening on our service,” it writes, saying this is also intended to help people “stay informed”.

Though, given that the chronological timeline is still not the default on Twitter, and algorithmically surfaced ‘interesting tweets’ remain what gets most actively pushed at users, it seems unlikely this change will have a major impact on mitigating disinformation campaigns.

Letting those in the know (i.e. those aware they can change the setting) stay better informed is not how election fiddling will be defeated.

US midterm focus

Twitter also says it’s continuing to roll out new features to show more context around accounts — giving the example of the launch of election labels earlier this year, as a beta for candidates in the 2018 U.S. midterm elections. Though it’s clearly got lots of work to do on that front — given all the other elections continuously taking place in the rest of the world.

With an eye on the security of the US midterms as a first focus, Twitter says it will send election candidates a message prompt to ensure they have two-factor authentication enabled on their account to boost security.

“We are offering electoral institutions increased support via an elections-specific support portal, which is designed to ensure we receive and review critical feedback about emerging issues as quickly as possible. We will continue to expand this program ahead of the elections and will provide information about the feedback we receive in the near future,” it adds, again showing that its initial candidate support efforts are US-focused.

On the civic engagement front, Twitter says it is also actively encouraging US-based users to vote and to register to vote, as well as aiming to increase access to relevant voter registration info.

“As part of our civic engagement efforts, we are building conversation around the hashtag #BeAVoter with a custom emoji, sending U.S.-based users a prompt in their home timeline with information on how to register to vote, and drawing attention to these conversations and resources through the top US trend,” it writes. “This trend is being promoted by @TwitterGov, which will create even more access to voter registration information, including election reminders and an absentee ballot FAQ.”

Alex Jones’ Infowars gets banned from Apple’s App Store

Another domino has fallen in Alex Jones’ media empire. Apple announced this week that it has pulled the controversial conspiracy theorist/provocateur from the App Store, banning the Infowars app over violations of its “objectionable content” rules.

Slightly more specifically, the host was determined to have violated the TOS around “defamatory, discriminatory, or mean-spirited content, including references or commentary about religion, race, sexual orientation, gender, national/ethnic origin, or other targeted groups, particularly if the app is likely to humiliate, intimidate, or place a targeted individual or group in harm’s way.”

There’s been a cascade effect with many of the major platforms Jones has used to distribute his video content. Facebook, Google and Spotify have all pulled Infowars content from their respective platforms. This week, Twitter and Periscope banned him after widespread criticism.

Among other controversial comments, Jones has come under fire for suggesting that the Sandy Hook shooting, which resulted in the deaths of 20 elementary school students, was a hoax.

We’ve reached out to Apple for comment.

Alex Jones and Infowars permanently suspended from Twitter and Periscope after new content violations

Twitter has finally put an end to the ongoing controversy over its refusal to completely shut down the accounts of Alex Jones and his online media site Infowars after a number of people complained about abusive content posted by both: it has now banned both from Twitter and its video platform Periscope.

“Today, we permanently suspended @realalexjones and @infowars from Twitter and Periscope,” the Twitter Safety account tweeted moments ago. “We took this action based on new reports of Tweets and videos posted yesterday that violate our abusive behavior policy, in addition to the accounts’ past violations.

“As we continue to increase transparency around our rules and enforcement actions, we wanted to be open about this action given the broad interest in this case. We do not typically comment on enforcement actions we take against individual accounts, for their privacy.

“We will continue to evaluate reports we receive regarding other accounts potentially associated with @realalexjones or @infowars and will take action if content that violates our rules is reported or if other accounts are utilized in an attempt to circumvent their ban.”

The last 24 hours of Jones’ Twitter feed, which you can still see in its cached form on Google, include tweets calling CNN fake news, criticizing Marco Rubio and Bob Woodward, and questioning the authenticity of the anonymous source writing in the New York Times about the turmoil in the Trump White House. This is, in one regard, relatively mild compared to some of what Jones has put out in the past.

But the last 24 hours also saw Twitter CEO Jack Dorsey appear on Capitol Hill, interrogated by the House Energy and Commerce Committee over the company’s alleged “shadow banning” and its general attitude to conservative politics. The company agreed yesterday to a civil rights audit and abuse transparency reports, so this ban might be seen as Twitter finally trying to get ahead of the process, in what has already become a messy and very tough situation for the company.

The company and Dorsey have been roundly criticised in recent weeks by people who believed Twitter was not being strict enough in enforcing its abusive content policies when it came to Jones. While Dorsey had said he was holding off in the name of “free speech,” cynics believed it was more related to a reluctance to alienate Jones’ supporters, who make up a substantial chunk of Twitter’s user base. (And to be fair, the criticism has been going on for years at this point, with many people quitting the platform in protest.)

Instead, Twitter took incremental steps to try to handle the situation, including 7-day read-only bans and longer explanations to justify why it was not doing more.

Twitter was essentially the last holdout among a throng of social media platforms — including Facebook and YouTube — that had stopped allowing Jones and Infowars to peddle what many believed to be not just “fake news”, but outright damaging and dangerous false information.

Justice Department’s threat to social media giants is wrong

Never has it been so clear that the attorneys charged with enforcing the laws of the country have a complete disregard for the very laws they’re meant to enforce.

As executives of Twitter and Facebook took to the floor of the Senate to testify about their companies’ response to international meddling into U.S. elections and addressed the problem of propagandists and polemicists using their platforms to spread misinformation, the legal geniuses at the Justice Department were focused on a free speech debate that isn’t just unprecedented, but also potentially illegal.

The Justice Department convened state attorneys general to confabulate on the “growing concern” that social media companies are stifling expression and hurting competition. What’s really at issue is a conservative canard, a talking point that tries to make the case that private companies have a First Amendment obligation to allow any kind of speech on their platforms.

The simple fact is that they do not. Let me repeat that. They simply do not.

What the government’s lawyers are trying to do is foist their own responsibility to uphold the First Amendment onto private companies that are under no such obligation. Why are these legal eagles so up in arms? The simple answer is the decision made by many platforms to silence voices that violate the expressed policies of the platforms they’re using.

Chief among these is Alex Jones, who has claimed that the Sandy Hook school shooting was a hoax and accused victims of the Parkland school shooting of being crisis actors.

Last month a number of those social media platforms that distributed Jones finally decided that enough was enough.

The decision to boot Jones is their prerogative as private companies. While Jones has the right to shout whatever he wants from a soapbox in free speech alley (or a back alley, or into a tin can) — and while he can’t be prosecuted for anything that he says (no matter how offensive, absurd or insane) — he doesn’t have the right to have his opinions automatically amplified by every social media platform.

Almost all of the big networking platforms have come to that conclusion.

The tech industry’s lobbying body has already issued a statement excoriating the Department of Justice for its ham-handed approach:

[The] U.S. Department of Justice (DOJ) today released a statement saying that it was convening state attorneys general to discuss its concerns that these companies were “hurting competition and intentionally stifling the free exchange of ideas.” Social media platforms have the right to determine what types of legal speech they will permit on their platforms. It is inappropriate for the federal government to use the threat of law enforcement to limit companies from exercising this right. In particular, law enforcement should not threaten social media companies with unwarranted investigations for their efforts to rid their platforms of extremists who incite hate and violence.

While the Justice Department’s approach muddies the waters and makes it more difficult for legitimate criticism and reasoned regulation of the social media platforms to take hold, there are legitimate issues that legislators need to address.

Indeed, many of them were raised in a white paper from Senator Mark Warner, which was released in the dog days of summer.

Or the Justice Department could focus on the issues that Senator Ron Wyden emphasized in the hours after the hearing.

Instead of focusing on privacy or security, attorneys general for the government are waging a Pyrrhic war against censorship that doesn’t exist and ignoring the real cold war for platform security.

Twitter is a Nazi haven for the same reason its CEO claims no bias

“From a simple business perspective and to serve the public, Twitter is incentivized to keep all voices on the platform.” That’s Twitter CEO Jack Dorsey’s argument for why “Twitter does not use political ideology to make any decisions”, according to his prepared statement for his appearance at tomorrow’s hearing with the US House Committee on Energy and Commerce.

But it also validates criticism of why Twitter is reluctant to ban Nazis, hate-mongers, and other trolls that harass people on the service: it makes money off of them.

Twitter has long been known to ignore reports of threats or abuse. It’s common to see people posting screenshots of the messages they get back from Twitter saying that sexist, racist, homophobic, and violent remarks don’t violate its policies. Only when they get enough retweets and media attention do those accounts seem to disappear.

In fact, a Wall Street Journal report claims that Dorsey told a confidante that he’d personally intervened to overrule his staff in order to allow Infowars’ Alex Jones to remain on the app and to reinstate alt-right figure Richard Spencer.

To avoid being labeled overly liberal, which could lead to a flight of conservative users, Twitter has bowed to the abusers and weakly enforced its own rules. And since these trolls can be highly engaged with Twitter, they can rack up lots of ad views. Dorsey’s statement is emblematic of that stance, prioritizing user count, share price, and revenue over safety and civility.

Elsewhere in the statement, Dorsey makes a much stronger, data-based argument for why Twitter isn’t biased against conservatives. He says that Twitter compared tweets by Democrats and Republicans and found that “controlling for the same number of followers, a single Tweet by a Republican will be viewed as many times as a single Tweet by a Democrat, even after all filtering and algorithms have been applied by Twitter.” It’s that fact Dorsey should point to, not the claim that Twitter isn’t biased because its hands are tied by Wall Street.

Dorsey also claims Twitter is making progress by tuning its algorithm to limit the distribution of abuse. He notes that signals that reduce a tweet’s prominence include if the author has “no confirmed email address, simultaneous registration for multiple accounts, accounts that repeatedly Tweet and mention accounts that do not follow them, or behavior that might indicate a coordinated attack”, as well as “how accounts are connected to those that violate our rules and how they interact with each other.” That’s supposedly led to “a 4 percent drop in abuse reports from search and 8 percent fewer abuse reports from conversations”.
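
For illustration, here’s a minimal sketch of how behavioral signals like those Dorsey lists could translate into downranking. The signal names, weights and scoring are invented for the example; Twitter hasn’t published its ranking code.

```python
# Hypothetical sketch of signal-based downranking along the lines Dorsey
# describes. The signal names, weights and scoring below are invented for
# illustration; this is not Twitter's ranking code.
def visibility_multiplier(account: dict) -> float:
    """Return a factor in (0, 1] used to dampen how widely tweets are shown."""
    penalty = 0.0
    if not account.get("email_confirmed", True):
        penalty += 0.2
    if account.get("simultaneous_registrations", 0) > 1:
        penalty += 0.3
    if account.get("unsolicited_mention_rate", 0.0) > 0.5:
        penalty += 0.3  # repeatedly @-mentioning accounts that don't follow back
    if account.get("linked_to_violating_accounts", False):
        penalty += 0.2
    return max(1.0 - penalty, 0.05)  # demote, never fully hide

suspect = {"email_confirmed": False, "unsolicited_mention_rate": 0.9}
print(visibility_multiplier(suspect))  # 0.5 -> shown, but less prominently
```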

But that progress would likely come faster if Twitter were willing to make sacrifices to its bottom line. Facebook pledged to double its security and moderation team from 10,000 to 20,000 members despite the impact that would have on profits. Twitter has yet to make a pledge as direct and quantifiable. Facebook’s COO Sheryl Sandberg will also appear before Congress tomorrow to face tough questions about whether that hiring and its product changes are actually protecting democracy. But at least it’s throwing money at the problem.

Dorsey didn’t say Twitter was “incentivized to keep all civil voices on the platform” or “all voices that abide by our policies” — just “all voices”. But when Twitter lets trolls bully and shout down those they hate, it’s the victims’ voices that are silenced by ‘free speech’. It’s effectively endorsing censorship, not of those with conservative or even extremist views, but of the marginalized who most deserve that voice.

Hopefully during tomorrow’s House hearing, we’ll see members of Congress use Dorsey’s own words to question whether his “simple business perspective” is what’s keeping Twitter such an ugly place to have a conversation.

It’s time for Facebook and Twitter to coordinate efforts on hate speech

Since the election of Donald Trump in 2016, there has been burgeoning awareness of the hate speech on social media platforms like Facebook and Twitter. While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have outright sued the platforms for their actions.

That’s because of a legal distinction between media publications and media platforms that has made solving hate speech online a vexing problem.

Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire minority group.  The Times would likely be sued for publishing hate speech, and the plaintiffs may well be victorious in their case. Yet, if that op-ed were published in a Facebook post, a suit against Facebook would likely fail.

The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what its users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230 – but that may lead to government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate. 

A primer on section 230 

Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren’t for section 230, today’s social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether. 

Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit finding an app user’s misrepresentation of his age not to be the app’s responsibility because of section 230.

Private regulation of hate speech 

Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study on how these policies can be inconsistently applied.  

Jones has for years fabricated conspiracy theories, like the ones claiming that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones’ hate speech has had real life consequences. From the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from the restaurant’s nonexistent basement, his messages have had serious deleterious consequences for many.

Alex Jones and Infowars were finally suspended from ten platforms by our count – with even Twitter falling in line and suspending him for a week after first dithering. But the varying and delayed responses exposed how different platforms handle the same speech.  

Inconsistent application of hate speech rules across platforms, compounded by recent controversies involving the spread of fake news and the contribution of social media to increased polarization, have led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online – especially when fully two-thirds of Americans now report getting at least some of their news from social media.  Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target. 

Should hate speech be regulated? 

But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress’s own wording in section 230. The section enacted in the mid-90s states that online platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”  

Section 230 goes on to declare that it is the “policy of the United States . . . to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet.”  Based on the above, section 230 offers the now infamous liability protection for online platforms.  

From the simple fact that most of what we see on our social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to increased polarization because of the propagation of fake news on social media, one can quickly see how Congress’s words in 1996 read today as a catalogue of inaccurate predictions. Even Ron Wyden, one of the original drafters of section 230, admits today that its drafters never intended the section’s protections to enable an “individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children”.

It would be hard to argue that today’s Congress – having shown little understanding in recent hearings of how social media operates to begin with – is any more qualified at predicting the effects of regulating speech online twenty years from now.   

More importantly, the burden of complying with new regulations will definitely result in a significant barrier to entry for startups and therefore have the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle compliance with increased moderation or pre-vetting of posts that regulations might impose, smaller startups will be at a major disadvantage in keeping up with such a burden.

Last chance before regulation 

The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.  

These platforms also have been on a hiring spree in the last couple of years to ensure that their product policy teams – the ones that draft policies and oversee their enforcement – are more representative of society at large. Facebook proudly announced that its product policy team now includes “a former rape crisis counselor, an academic who has spent her career studying hate organizations . . . and a teacher.” Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.

What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.

The social media giants can also build public trust by helping startups get up to speed on the latest approaches to content moderation. While any industry consortium around coordinating hate speech is certain to be dominated by the largest tech companies, they can ensure that policies are easy to access and widely distributed.

Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized their “Global Internet Forum to Counter Terrorism” – a partnership to curb the threat of terrorist content online. Fighting hate speech is no less laudable a goal.

Self-regulation is an immense privilege. To the extent that big tech companies want to hold onto that privilege, they have a responsibility to coordinate the policies that underpin their regulation of speech and to enable startups and smaller tech companies to get access to these policies and enforcement mechanisms.

Twitter puts Infowars’ Alex Jones in the ‘read-only’ sin bin for 7 days

Twitter has finally taken action against Infowars creator Alex Jones, but it isn’t what you might think.

While Apple, Facebook, Google/YouTube, Spotify and many others have removed Jones and his conspiracy-peddling organization Infowars from their platforms, Twitter has remained unmoved with its claim that Jones hasn’t violated rules on its platform.

That was helped in no small way by the mysterious removal of some tweets last week, but now Jones has been found to have violated Twitter’s rules, as CNET first noted.

Twitter is punishing Jones for a tweet that violates its community standards, but it isn’t locking him out forever. Instead, a spokesperson for the company confirmed that Jones’ account is in “read-only mode” for up to seven days.

That means he will still be able to use the service and look up content via his account, but he’ll be unable to engage with it: no tweets, likes, retweets, comments, etc. He’s also been ordered to delete the offending tweet — more on that below — in order to qualify for a fully functioning account again.

That restoration doesn’t happen immediately, though. Twitter policy states that the read-only sin bin can last for up to seven days “depending on the nature of the violation.” We’re imagining Jones got the full one-week penalty, but we’re waiting on Twitter to confirm that.

The offending tweet in question is a link to a story claiming that President Trump “must take action against web censorship.” It looks like the tweet has already been deleted, but not before Twitter judged that it violated its policy on abuse:

Abuse: You may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.

When you consider the things Infowars and Jones have said or written — 9/11 conspiracies, harassment of Sandy Hook victim families and more — the content in question seems fairly innocuous. Indeed, you could look at President Trump’s tweets and find seemingly more punishable content without much difficulty.

But here we are.

The weirdest part of this Twitter caning is one of the reference points that the company gave to media. These days, it is common for the company to point reporters to specific tweets that it believes encapsulate its position on an issue, or provide additional color in certain situations.

In this case, Twitter pointed us — and presumably other reporters — to this tweet from Infowars’ Paul Joseph Watson:

WTF, Twitter…

Vimeo removed Alex Jones’ account over the weekend

YouTube, Facebook, Spotify, Apple, Pinterest and now Vimeo have removed Infowars content from their services. The video streaming platform is the latest in a growing wave of tech companies pulling videos from embattled right-wing conspiracy theorist Alex Jones.

Jones has been under fire for years over conspiracy-driven output surrounding events like the Sandy Hook shooting and 9/11. In spite of what are largely regarded as fringe views, however, he’s amassed a massive viewership, and even scored an interview with Donald Trump in the lead-up to the 2016 election.

Vimeo suddenly found itself at the center of the ongoing Infowars debate after the show was barred from a number of competing sites. Earlier in the week, it was host to a handful of Jones-produced videos, but that number jumped suddenly when north of 50 more were uploaded to the service on Thursday and Friday.

Vimeo pulled the content over the weekend, citing a Terms of Service violation. The move, which was reported by Business Insider, has since been confirmed by TechCrunch.

“We can confirm that Vimeo removed InfoWars’ account on Sunday, August 12 following the uploading of videos on Thursday and Friday that violated our Terms of Service prohibitions on discriminatory and hateful content. Vimeo has notified the account owner and issued a refund,” a spokesperson told TechCrunch.

Infowars is moving quickly from one platform to the next as more sites remove content over TOS violations. Twitter remains steadfast in its decision not to remove Jones, however, instead leaving it to journalists to debunk his content. Jones has also apparently found some solace in the social ghost town that is Google+.

Some Infowars tweets vanished today, but Twitter didn’t remove them

A handful of tweets and videos that appear to have been cited in the choice to remove Alex Jones from Facebook and YouTube vanished from Twitter on Thursday after being called out in a CNN piece focused on the company’s hypocrisy.

Twitter confirmed to TechCrunch that it did not remove the tweets in question and that someone affiliated with Alex Jones and Infowars or with access to those accounts is behind the removal. The tweets in question spanned the Infowars brand, including accusations that Sandy Hook was staged by crisis actors, slurs against transgender people and a video asserting that Parkland shooting survivor David Hogg is a Nazi.

All of the tweets CNN linked are no longer available, suggesting that Jones might be trying to walk a narrow line on the platform, keeping most of the Infowars content up even as users and reporters surface some of its most objectionable moments. We reached out to Infowars for the reasoning behind taking down the posts and will update this story when we hear more.

On Wednesday, in an internal memo that was later tweeted, Twitter’s VP of trust & safety made the claim that if Jones had posted the same content on Twitter that had resulted in action on other platforms, Twitter would have acted, too.

“… At least some of the content Alex Jones published on other platforms (e.g. Facebook and YouTube) that led them to taking enforcement actions against him would also have violated our policies had he posted it on Twitter,” Twitter’s Del Harvey said. “Had he done so, we would have taken action against him as well.”

On Thursday, CNN called Twitter’s bluff. The news site found that the same content that got Jones and Infowars booted from other platforms “were still live on Twitter as of the time this article was published,” according to CNN.

In spite of the missing tweets, at the time of writing, the accounts of both Infowars and Alex Jones remained online and tweeting. In fact, just 30 minutes ago, Infowars accused former president Obama of a “deep state” scheme to purge Infowars from tech platforms.

Facebook now deletes posts that financially endanger/trick people

It’s not just inciting violence, threats and hate speech that will get Facebook to remove posts by you or your least favorite troll. Endangering someone financially, not just physically, or tricking them to earn a profit are now also strictly prohibited.

Facebook today spelled out its policy with more clarity in hopes of establishing a transparent set of rules it can point to when it enforces its policy in the future. That comes after cloudy rules led to waffling decisions and backlash as it dealt with and finally removed four Pages associated with Infowars conspiracy theorist Alex Jones.

The company started by repeatedly stressing that it is not a government — likely to indicate it does not have to abide by the same First Amendment rules.

“We do not, for example, allow content that could physically or financially endanger people, that intimidates people through hateful language, or that aims to profit by tricking people using Facebook,” its VP of policy Richard Allan wrote in a post today.

Web searches show this is the first time Facebook has used that language regarding financial attacks. We’ve reached out for comment about exactly how new Facebook considers this policy.

This is important because it means Facebook’s policy encompasses threats of ruining someone’s credit, calling for people to burglarize their homes or blocking them from employment. While not physical threats, these can do real-world damage to victims.

Similarly, the position against trickery for profit gives Facebook wide latitude to fight spammers, scammers and shady businesses making false claims about products. The question will be how Facebook enforces this rule. Some would say most advertisements are designed to trick people in order for a business to earn a profit. Facebook is more likely to shut down obvious grifts, where businesses make impossible assertions about how their products can help people, rather than mere exaggerations about their quality or value.

The added clarity offered today highlights the breadth and particularity with which other platforms, notably the wishy-washy Twitter, should lay out their rules about content moderation. While there have long been fears that transparency will allow bad actors to game the system by toeing the line without going over it, the importance of social platforms to democracy necessitates that they operate with guidelines out in the open to deflect calls of biased enforcement.