Twitter announced this afternoon it will begin booting off its service accounts belonging to users who have tried to evade an account suspension. The company says the accounts in question belong to users who were previously suspended from Twitter for abusive behavior, or for trying to evade an earlier suspension. These bad actors have been able to work around Twitter’s attempts to remove them by simply setting up another account, it seems.
The company says the new wave of suspensions will hit this week and will continue in the weeks ahead, as it’s able to identify others who are “attempting to Tweet following an account suspension.”
This week, we are suspending accounts for attempting to evade an account suspension. These accounts were previously suspended for abusive behavior or evading a previous suspension, and are not allowed to continue using Twitter.
Twitter’s announcement on the matter – which came in the form of a tweet – was light on details. We asked the company for more information. It’s unclear, for example, how Twitter was able to identify that the same people had returned to Twitter, how many users will be affected by this new ban, or what impact this will have on Twitter’s currently stagnant user numbers.
Twitter has not responded to our questions.
The company has more recently been focused on aggressively suspending accounts as part of its effort to stem the flow of disinformation, bots, and abuse on its service. The Washington Post, for example, reported last month that Twitter had suspended as many as 70 million accounts in May and June, and was continuing at the same pace in July. The removal of these accounts didn’t affect the company’s user metrics, Twitter’s CFO later clarified.
Even though those suspensions weren’t a factor, Twitter’s user base is shrinking. The company actually lost a million monthly active users in Q2, leaving it with 335 million overall and 68 million in the U.S. In part, Twitter may be struggling to grow its audience because it hasn’t been able to get a handle on the rampant abuse on its platform, and because it makes poor enforcement decisions with regard to its existing policies.
For instance, Twitter is under fire right now for the way it chooses who to suspend, as it’s one of the few remaining platforms that hasn’t taken action against conspiracy theorist Alex Jones.
The Outline even suggested today, apparently only half-jokingly, that we all abandon Twitter and return to Tumblr. (Disclosure: Oath owns Tumblr and TC. I don’t support The Outline’s plan. Twitter should just fix itself, even if that requires new leadership.)
In any event, today’s news isn’t about a change in how Twitter will implement its rules, but rather in how it will enforce the bans it’s already chosen to enact.
In many cases, banned users would simply create a new account using a new email address and then continue to tweet. Twitter’s means of identifying returning users has been fairly simplistic in the past: to make sure banned users didn’t come back, it relied on information like email addresses, phone numbers and IP addresses.
To now go after a whole new batch of banned accounts that have been evading their suspensions, Twitter may be using technology it recently acquired from anti-abuse firm Smyte. At the time of the deal, Twitter praised Smyte’s proactive anti-abuse systems and said it would soon put them to work.
This system may pick up false positives, of course – and that could be why Twitter noted that some accounts could be banned in error in the weeks ahead.
We will continue this work in the coming weeks as we identify others who are attempting to Tweet following an account suspension. If you believe your account has been suspended in error, please let us know. https://t.co/RUWvNoQt2G
As part of Twitter’s attempted crackdown on abusive behavior across its network, the company announced on Friday afternoon a new policy facing those who repeatedly harass, threaten, or otherwise make abusive comments during a Periscope broadcaster’s live stream. According to Twitter, the company will begin to more aggressively enforce its Periscope Community Guidelines by reviewing and suspending accounts of habitual offenders.
The plans were announced via a Periscope blog post and tweet that said everyone should be able to feel safe watching live video.
We’re committed to making sure everyone feels safe watching live video, whether you’re broadcasting or just tuning in. To create safer conversation, we're launching more aggressive enforcement of our guidelines. https://t.co/dQdtnxCfx6
Under Periscope’s existing system, when one viewer reports a comment as “abuse,” “spam,” or selects “other reason,” Periscope’s software randomly selects a few other viewers to take a look and decide whether the comment is abuse, spam, or looks okay. The randomness factor here prevents a person (or persons) from using the reporting feature to shut down conversations. Only if a majority of the randomly selected voters agree the comment is spam or abuse does the commenter get suspended.
However, this suspension only disables the offender’s ability to chat during the broadcast itself – it doesn’t prevent them from continuing to watch other live broadcasts and making further abusive remarks in the comments. Though they would risk another temporary ban by doing so, they can still disrupt the conversation and make the video creator – and their community – feel threatened or otherwise harassed.
Twitter says that accounts that repeatedly get suspended for violating its guidelines will soon be reviewed and may be suspended outright. This enhanced enforcement begins on August 10, and is one of several changes Twitter is making across Periscope and Twitter focused on user safety.
To what extent those changes have been working is questionable. Twitter may have policies in place around online harassment and abuse, but its enforcement has been hit-or-miss. Still, ridding its platform of unwanted accounts – including spam, despite the impact on monthly active user numbers – is something the company must do for its long-term health. The fact that so much hate and abuse is seemingly tolerated or overlooked on Twitter has been an issue for some time, and the problem continues today. And it could be one of the factors in Twitter’s stagnant user growth. After all, who willingly signs up for harassment?
The company is at least attempting to address the problem, most recently by acquiring the anti-abuse technology provider Smyte. Its transition to Twitter didn’t go so well, but the technology it offers the company could help Twitter to address abuse at a greater scale in the future.
For the nearly 20 percent of Americans who experience severe online harassment, there’s a new company launching in the latest batch of Y Combinator called Tall Poppy that’s giving them the tools to fight back.
Co-founded by Leigh Honeywell and Logan Dean, Tall Poppy grew out of the work that Honeywell, a security specialist, had been doing to hunt down trolls in online communities since at least 2008.
That was the year that Honeywell first went after a particularly noxious specimen who spent his time sending death threats to women in various Linux communities. Honeywell cooperated with law enforcement to try and track down the troll and eventually pushed the commenter into hiding after he was visited by investigators.
That early success led Honeywell to assume a not-so-secret identity as a security expert by day for companies like Microsoft, Salesforce, and Slack, and a defender against online harassment when she wasn’t at work.
“It was an accidental thing that I got into this work,” says Honeywell. “It’s sort of an occupational hazard of being an internet feminist.”
Honeywell started working one-on-one with victims of online harassment that would be referred to her directly.
“As people were coming forward with #metoo… I was working with a number of high profile folks to essentially batten down the hatches,” says Honeywell. “It’s been satisfying work helping people get back a sense of safety when they feel like they have lost it.”
As those referrals began to climb (eventually numbering in the low hundreds of cases), Honeywell began to think about ways to systematize her approach so it could reach the widest number of people possible.
“The reason we’re doing it that way is to help scale up,” says Honeywell. “As with everything in computer security it’s an arms race… As you learn to combat abuse the abusive people adopt technologies and learn new tactics and ways to get around it.”
Primarily, Tall Poppy will provide an educational toolkit to help people lock down their own presence and do incident response properly, says Honeywell. The company will work with customers to gain an understanding of how to protect themselves, but also to be aware of the laws in each state that they can use to protect themselves and punish their attackers.
The scope of the problem
Based on research conducted by the Pew Research Center, there are millions of people in the U.S. alone who could benefit from the type of service that Tall Poppy aims to provide.
According to a 2017 study, “nearly one-in-five Americans (18%) have been subjected to particularly severe forms of harassment online, such as physical threats, harassment over a sustained period, sexual harassment or stalking.”
The women and minorities who bear the brunt of these attacks (and, let’s be clear, it is primarily women and minorities) face very real consequences from these virtual assaults.
Take the case of the New York principal who lost her job when an ex-boyfriend sent stolen photographs of her to the New York Post and her boss. In a powerful piece for Jezebel she wrote about the consequences of her harassment.
As a result, city investigators escorted me out of my school pending an investigation. The subsequent investigation quickly showed that I was set up by my abuser. Still, Mayor Bill de Blasio’s administration demoted me from principal to teacher, slashed my pay in half, and sent me to a rubber room, the DOE’s notorious reassignment centers where hundreds of unwanted employees languish until they are fired or forgotten.
In 2016, I took a yearlong medical leave from the DOE to treat extreme post-traumatic stress and anxiety. Since the leave was almost entirely unpaid, I took loans against my pension to get by. I ran out of money in early 2017 and reported back to the department, where I was quickly sent to an administrative trial. There the city tried to terminate me. I was charged with eight counts of misconduct despite the conclusion by all parties that my ex-partner uploaded the photos to the computer and that there was no evidence to back up his salacious story. I was accused of bringing “widespread negative publicity, ridicule and notoriety” to the school system, as well as “failing to safeguard a Department of Education computer” from my abusive ex.
Her story isn’t unique. Victims of online harassment regularly face serious real-world consequences.
According to a 2013 study reported by Science Daily, cyberstalking victims routinely need to take time off from work, or change or quit their job or school. And the stalking costs victims $1,200 on average just to attempt to address the harassment, the study said.
“It’s this widespread problem and the platforms have in many ways dropped the ball on this,” Honeywell says.
Creating Tall Poppy
As Honeywell heard more and more stories of online intimidation and assault, she started laying the groundwork for the service that would eventually become Tall Poppy. Through a mutual friend she reached out to Dean, a talented coder who had been working at Ticketfly before its acquisition by Eventbrite and was looking for a new opportunity.
That was in early 2015. But, afraid that striking out on her own would affect her citizenship status (Honeywell is Canadian), she and Dean waited before making the move to finally start the company.
What ultimately convinced them was the election of Donald Trump.
“After the election I had a heart-to-heart with myself… And I decided that I could move back to Canada, but I wanted to stay and fight,” Honeywell says.
Initially, Honeywell took on a yearlong fellowship with the American Civil Liberties Union to pick up work around privacy and security that had been handled by Chris Soghoian, who had left to take a position in Senator Ron Wyden’s office.
But the idea for Tall Poppy remained, and once Honeywell received her green card, she was “chomping at the bit to start this company.”
A few months in, the company already has businesses signing up for the services and tools it provides to help companies protect their employees.
“People were shocked and horrified that people were trying this,” Honeywell says. “[But] what is the way [harassers] can do the most damage? Sharing them to Facebook is one of the ways where they can do the most damage. It was a worthwhile experiment.”
To underscore how pervasive a problem online harassment is: among the four companies where Tall Poppy is doing or could be doing business, there is already an issue the company is addressing – just a month and a half in.
“It is an important problem to work on,” says Honeywell. “My recurring realization is that the cavalry is not coming.”