Pew: A majority of U.S. teens are bullied online

A majority of U.S. teens have been subject to online abuse, according to a new study from Pew Research Center, out this morning. Specifically, that means they’ve experienced at least one of six types of cyberbullying: name-calling, being the subject of false rumors, receiving explicit images they didn’t ask for, having explicit images of themselves shared without their consent, physical threats, or being constantly asked about their location and activities in a stalker-ish fashion by someone other than a parent.

Of these, name-calling and being the subject of false rumors were the top two categories of abuse, reported by 42% and 32% of teens, respectively.

Pew says that texting and digital messaging have paved the way for these types of interactions, and parents and teens alike are aware of the dangers and concerned.

Parents, in particular, are worried about teens sending and receiving explicit images: 57% say it’s a concern, and a quarter worry about it “a lot.” Parents of girls worry even more – 64% say it’s a concern.

Meanwhile, a large majority of teens – 90% – now believe that online harassment is a problem, and 63% consider it a “major” problem.

Pew also found that girls and boys are harassed online in fairly equal measure, with 60% of girls and 59% of boys reporting having experienced some sort of online abuse. That figure may surprise some. However, it’s important to clarify that this finding is about whether the teen had ever experienced online abuse – not how often or how much.

Not surprisingly, Pew found that girls are more likely than boys to have experienced two or more types of abuse, and 15% of girls have been the target of at least four types of abuse, compared with 6% of boys.

Girls are also more likely to receive explicit images they didn’t ask for: 29% of teen girls reported this had happened to them, versus 20% of boys.

And the gap widens as girls get older: 35% of girls ages 15 to 17 say they’ve received such images, compared with only one in five boys of the same age.

Several factors seem to play no role in how often teens experience abuse, including race, ethnicity, and parents’ educational attainment, Pew noted. But household income does seem to matter: 24% of teens whose household income was less than $30K per year said they had received online threats, compared with only 12% of those whose household income was greater than $75K per year. (Pew’s report doesn’t attempt to explain this finding.)

Beyond that factor, how much abuse teens experience correlates with how much time they spend online.

That is, the more teens go online, the more abuse they report.

Some 45% of teens say they’re online almost constantly, and they are more likely to be harassed: 67% of this group say they’ve been cyberbullied, compared with 53% of those who use the internet several times a day or less. And half of the constantly online teens have been called offensive names, compared with about a third (36%) of those who use the internet less often.

Major tech companies, including Apple, Google, and Facebook, have begun to address the issues around device addiction and screen time with software updates and parental controls.

Apple, in iOS 12, rolled out Screen Time controls that allow Apple device users to measure, monitor and restrict how often they’re on their phones and when, what type of content is blocked, and which apps they can use. For adults, the software can nudge them in the right direction, but parents also have the option of locking down their children’s phones using the same controls. (Of course, savvy kids have already found loopholes to work around this, according to recent reports.)

Google also introduced time management controls in the new version of Android, and offers parental controls around screen time through its Family Link software.

And both Google and Facebook have begun to introduce screen time reminders and settings for addictive apps like YouTube, Facebook and Instagram.

Teens seem to respect parents’ involvement in their digital lives, the report also found.

A majority – 59% – of U.S. teens say their parents are doing a good job of addressing online harassment. However, 79% say elected officials are failing to protect them through legislation, 66% say social media sites are doing a poor job of stamping out abuse, and 58% say teachers are handling it poorly as well.

Many of the top social media sites were largely built by young people – often young men – when they were first founded, and designed almost naively with regard to online abuse. Protections – like muting, filters, blocking, and reporting – were generally introduced reactively, not as proactive controls.

Instagram, for example – one of teens’ most-used apps – only introduced comment filters, blocklists, and comment blocking in 2016, and just four months ago added account muting. The app was launched in October 2010.

Pew’s findings suggest that parents would do well by their kids to use screen time management and control systems – not simply to reduce how often their teenagers are bullied and abused, but also to help teens practice a less compulsive relationship with the web as they grow into adults.

After all, device addiction – and the increased exposure to online abuse that comes with it – is not a plague that only affects teens.

Pew’s full study involved surveys of 743 teens and 1,058 parents living in the U.S., conducted March 7 to April 10, 2018. It counted “teens” as those ages 13 to 17, and “parents of teens” as those who are the parent or guardian of someone in that age range. The full report is here.

Robots can develop prejudices just like humans

In a fascinating study by researchers at Cardiff University and MIT, we learn that robots can develop prejudices when working together. The robots, which ran inside a teamwork simulator, expressed prejudice against other robots not on their team. In short, write the researchers, “groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behavior from one another.”

To test the theory, the researchers ran a simple game in a simulator. In each round, a robot decided whether to donate to others inside or outside its own group, based on each recipient’s reputation and the robot’s own donation strategy, which let the researchers measure each robot’s level of prejudice against outsiders. As the simulation ran, they saw prejudice against outsiders rise over time.
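For a sense of how this kind of simulation works, here is a minimal sketch in Python of a donation game in the same spirit. Everything in it – the payoff values, the copying rule, the mutation step – is an illustrative assumption, not the researchers’ actual model:

```python
import random

# Toy donation game: agents always donate in-group, and donate
# out-group with a per-agent probability. Refusing out-group donations
# is cheaper, so evolution can favor prejudice. All parameters here
# are illustrative assumptions.
N_AGENTS, N_GROUPS, ROUNDS = 100, 2, 500
BENEFIT, COST = 2.0, 1.0  # recipient gains BENEFIT, donor pays COST

class Agent:
    def __init__(self):
        self.group = random.randrange(N_GROUPS)
        # Probability of donating to an agent from another group;
        # lower values mean more prejudice against outsiders.
        self.out_group_rate = random.random()
        self.payoff = 0.0

agents = [Agent() for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    # Each agent considers donating to one randomly chosen recipient.
    for donor in agents:
        recipient = random.choice(agents)
        if recipient is donor:
            continue
        p = 1.0 if recipient.group == donor.group else donor.out_group_rate
        if random.random() < p:
            donor.payoff -= COST
            recipient.payoff += BENEFIT
    # Evolution step: copy the strategy of a better-scoring agent,
    # with a little mutation.
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.out_group_rate = min(1.0, max(0.0,
                model.out_group_rate + random.gauss(0, 0.05)))

avg_prejudice = 1 - sum(a.out_group_rate for a in agents) / N_AGENTS
print(f"average prejudice against outsiders: {avg_prejudice:.2f}")
```

Because refusing out-group donations saves the donor a cost while in-group giving is universal, stingier strategies score higher and get copied – the identify-and-copy dynamic the researchers describe.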

The researchers found the prejudice was easy to grow in the simulator, a fact that should give us pause as we give robots more autonomy.

“Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivised in virtual populations, to the detriment of wider connectivity with others. Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse,” said Cardiff University Professor Roger Whitaker. “It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population.”

Interestingly, prejudice fell when there were “more distinct subpopulations being present within a population,” an important consideration in human prejudice as well.

“With a greater number of subpopulations, alliances of non-prejudicial groups can cooperate without being exploited. This also diminishes their status as a minority, reducing the susceptibility to prejudice taking hold. However, this also requires circumstances where agents have a higher disposition towards interacting outside of their group,” Professor Whitaker said.

Twitter suspends more accounts for “engaging in coordinated manipulation”

Following last week’s suspension of 284 accounts for “engaging in coordinated manipulation,” Twitter announced today that it’s kicked an additional 486 accounts off the platform for the same reason, bringing the total to 770 accounts.

While many of the accounts removed last week appeared to originate from Iran, Twitter said this time that about 100 of the latest batch to be suspended claimed to be in the United States. Many of these were less than a year old and shared “divisive commentary.” These 100 accounts tweeted a total of 867 times and had 1,268 followers between them.

As examples of the “divisive commentary” tweeted, Twitter shared screenshots from several suspended accounts that showed anti-Trump rhetoric, counter to the conservative narrative that the platform unfairly targets Republican accounts.

Twitter also said that the suspended accounts included one advertiser that spent $30 on Twitter ads last year, but added that those ads did not target the U.S. and that the billing address was located outside of Iran.

“As with prior investigations, we are committed to engaging with other companies and relevant law enforcement entities. Our goal is to assist investigations into these activities and where possible, we will provide the public with transparency and context on our efforts,” Twitter said on its Safety account.

After years of accusations that it doesn’t enforce its own policies about bullying, bots and other abuses, Twitter has taken a much harder line on problematic accounts in the past few months. Despite stalling user growth, especially in the United States, Twitter has been aggressively suspending accounts, including ones that were created by users to evade prior suspensions.

Twitter announced a drop of one million monthly users in the second quarter, causing investors to panic even though it posted a $100 million profit. In its earnings call, Twitter said that its efforts don’t impact user numbers because many of the “tens of millions” of removed accounts were too new, or had been inactive for more than a month, and were therefore not counted in active user numbers. The company did admit, however, that its anti-spam measures had caused it to lose three million monthly active users.

Whatever its impact on user numbers, Twitter’s anti-abuse measures may help it save face during a Senate Intelligence Committee hearing on September 5. Executives from Twitter, Facebook and Google are expected to be grilled by Sen. Mark Warner and other politicians about the use of their platforms by other countries to influence U.S. politics.

Twitter is purging accounts that were trying to evade prior suspensions

Twitter announced this afternoon that it will begin booting off its service accounts belonging to users who have tried to evade a prior suspension. The company says the accounts in question belong to users who were previously suspended from Twitter for abusive behavior, or for trying to evade an earlier suspension. These bad actors have been able to work around Twitter’s attempts to remove them by simply setting up new accounts, it seems.

The company says the new wave of suspensions will hit this week and will continue in the weeks ahead, as it’s able to identify others who are “attempting to Tweet following an account suspension.” 

Twitter’s announcement on the matter – which came in the form of a tweet – was light on details. We asked the company for more information. It’s unclear, for example, how Twitter was able to identify that the same people had returned, how many users will be affected by this new ban, or what impact this will have on Twitter’s currently stagnant user numbers.

Twitter has not responded to our questions.

The company has recently been focused on aggressively suspending accounts as part of an effort to stem the flow of disinformation, bots, and abuse on its service. The Washington Post, for example, reported last month that Twitter had suspended as many as 70 million accounts in May and June, and was continuing at the same pace in July. The removal of these accounts didn’t affect the company’s user metrics, Twitter’s CFO later clarified.

Even though those removals weren’t a factor, Twitter’s user base is shrinking. The company lost a million monthly active users in Q2, ending the quarter with 335 million overall and 68 million in the U.S. In part, Twitter may be struggling to grow its audience because it hasn’t been able to get a handle on the rampant abuse on its platform, and because it makes poor enforcement decisions with regard to its existing policies.

For instance, Twitter is under fire right now for the way it chooses who to suspend, as it’s one of the few remaining platforms that hasn’t taken action against conspiracy theorist Alex Jones.

The Outline even suggested today – perhaps half-jokingly – that we all abandon Twitter and return to Tumblr. (Disclosure: Oath owns Tumblr and TC. I don’t support The Outline’s plan. Twitter should just fix itself, even if that requires new leadership.)

In any event, today’s news isn’t about a change in how Twitter will implement its rules, but rather in how it will enforce the bans it’s already chosen to enact.

In many cases, banned users would simply create a new account using a new email address and then continue to tweet. Twitter’s means of identifying returning users has been fairly simplistic in the past: to make sure banned users didn’t come back, it relied on information like the account’s email address, phone number and IP address.
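As a rough illustration of that kind of signal matching – with the field names, sample data, and match threshold invented for the example, since Twitter hasn’t published how its systems work – a check might look something like this:

```python
# Toy sketch: flag a new signup that shares identity signals with a
# previously suspended account. Fields, data, and threshold are all
# hypothetical; Twitter's actual systems are not public.
SUSPENDED = [
    {"email": "banned@example.com", "phone": "+15550100", "ip": "203.0.113.7"},
]

def looks_like_evasion(signup: dict, threshold: int = 2) -> bool:
    """Return True if a signup shares enough signals with a banned account."""
    for banned in SUSPENDED:
        matches = sum(
            signup.get(field) == banned[field]
            for field in ("email", "phone", "ip")
        )
        if matches >= threshold:
            return True
    return False

# A fresh email alone isn't enough to evade: phone and IP still match.
print(looks_like_evasion(
    {"email": "fresh@example.com", "phone": "+15550100", "ip": "203.0.113.7"}
))  # True
```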

To now go after a whole new batch of banned users who have been attempting to avoid their suspensions, Twitter may be using the technology it recently acquired from anti-abuse firm Smyte. At the time of the deal, Twitter praised Smyte’s proactive anti-abuse systems and said it would soon put them to work.

This system may pick up false positives, of course – and that could be why Twitter noted that some accounts could be banned in error in the weeks ahead.

More to come…

Twitter will suspend repeat offenders posting abusive comments on Periscope live streams

As part of Twitter’s attempted crackdown on abusive behavior across its network, the company announced on Friday afternoon a new policy aimed at those who repeatedly harass, threaten, or otherwise make abusive comments during a Periscope broadcaster’s live stream. According to Twitter, the company will begin to more aggressively enforce its Periscope Community Guidelines by reviewing and suspending the accounts of habitual offenders.

The plans were announced via a Periscope blog post and tweet that said everyone should be able to feel safe watching live video.

Currently, Periscope’s comment moderation policy involves group moderation.

That is, when one viewer reports a comment as “abuse” or “spam,” or selects “other reason,” Periscope’s software randomly selects a few other viewers to take a look and decide whether the comment is abuse, spam, or looks okay. The randomness prevents a person (or group) from using the reporting feature to shut down conversations. Only if a majority of the randomly selected voters agree that the comment is spam or abuse does the commenter get suspended.
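A minimal sketch of that jury-style flow might look like the following, in Python. The panel size and vote labels here are assumptions for illustration; Periscope hasn’t published the actual parameters:

```python
import random

# Toy sketch of jury-style comment moderation: one report triggers a
# small random panel of other viewers, and the commenter is muted only
# on a majority vote. Panel size and labels are assumptions.
def ask_viewer(viewer: str, comment: str) -> str:
    # Stand-in for the prompt shown to a selected viewer; each vote
    # comes back as "abuse", "spam", or "ok". Randomized for the demo.
    return random.choice(["abuse", "spam", "ok"])

def report_upheld(viewers: list, reporter: str, comment: str,
                  jury_size: int = 3) -> bool:
    """Return True if a random jury of other viewers upholds the report."""
    pool = [v for v in viewers if v != reporter]
    jury = random.sample(pool, min(jury_size, len(pool)))
    votes = [ask_viewer(v, comment) for v in jury]
    return sum(v in ("abuse", "spam") for v in votes) > len(votes) / 2

viewers = [f"viewer{i}" for i in range(50)]
if report_upheld(viewers, reporter="viewer0", comment="..."):
    print("commenter muted for the rest of this broadcast")
```

Picking the jury at random is the key design choice: a brigade of reporters can’t control who gets to vote, so they can’t weaponize the feature against broadcasters they dislike.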

However, this suspension only disables their ability to chat during the broadcast itself – it doesn’t prevent them from continuing to watch other live broadcasts and making further abusive remarks in the comments. Though they risk another temporary ban by doing so, they can still disrupt the conversation and make the video creator – and their community – feel threatened or otherwise harassed.

Twitter says that accounts that repeatedly get suspended for violating its guidelines will soon be reviewed and suspended outright. This enhanced enforcement begins on August 10, and is one of several changes Twitter is making across Periscope and Twitter focused on user safety.

To what extent those changes have been working is questionable. Twitter may have policies in place around online harassment and abuse, but its enforcement has been hit-or-miss. But ridding its platform of unwanted accounts – including spam, despite the impact to monthly active user numbers – is something the company must do for its long-term health. The fact that so much hate and abuse is seemingly tolerated or overlooked on Twitter has been an issue for some time, and the problem continues today. And it could be one of the factors in Twitter’s stagnant user growth. After all, who willingly signs up for harassment?

The company is at least attempting to address the problem, most recently by acquiring the anti-abuse technology provider Smyte. Its transition to Twitter didn’t go so well, but the technology it offers the company could help Twitter to address abuse at a greater scale in the future.

Tall Poppy aims to make online harassment protection an employee benefit

For the nearly 20 percent of Americans who experience severe online harassment, there’s a new company launching in the latest batch of Y Combinator called Tall Poppy that’s giving them the tools to fight back.

Co-founded by Leigh Honeywell and Logan Dean, Tall Poppy grew out of the work that Honeywell, a security specialist, had been doing to hunt down trolls in online communities since at least 2008.

That was the year that Honeywell first went after a particularly noxious specimen who spent his time sending death threats to women in various Linux communities. Honeywell cooperated with law enforcement to try and track down the troll and eventually pushed the commenter into hiding after he was visited by investigators.

That early success led Honeywell to assume a not-so-secret identity as a security expert by day for companies like Microsoft, Salesforce, and Slack, and a defender against online harassment when she wasn’t at work.

“It was an accidental thing that I got into this work,” says Honeywell. “It’s sort of an occupational hazard of being an internet feminist.”

Honeywell started working one-on-one with victims of online harassment who were referred to her directly.

“As people were coming forward with #metoo… I was working with a number of high profile folks to essentially batten down the hatches,” says Honeywell. “It’s been satisfying work helping people get back a sense of safety when they feel like they have lost it.”

As those referrals began to climb (eventually numbering in the low hundreds of cases), Honeywell began to think about ways to systematize her approach so it could reach the widest number of people possible.

“The reason we’re doing it that way is to help scale up,” says Honeywell. “As with everything in computer security it’s an arms race… As you learn to combat abuse the abusive people adopt technologies and learn new tactics and ways to get around it.”

Primarily, Tall Poppy will provide an educational toolkit to help people lock down their own presence and do incident response properly, says Honeywell. The company will work with customers to gain an understanding of how to protect themselves, but also to be aware of the laws in each state that they can use to protect themselves and punish their attackers.

The scope of the problem

Based on research conducted by the Pew Research Center, there are millions of people in the U.S. alone who could benefit from the type of service that Tall Poppy aims to provide.

According to a 2017 study, “nearly one-in-five Americans (18%) have been subjected to particularly severe forms of harassment online, such as physical threats, harassment over a sustained period, sexual harassment or stalking.”

The women and minorities who bear the brunt of these assaults (and, let’s be clear, it is primarily women and minorities who do) face very real consequences from these virtual attacks.

Take the case of the New York principal who lost her job when an ex-boyfriend sent stolen photographs of her to the New York Post and her boss. In a powerful piece for Jezebel she wrote about the consequences of her harassment.

As a result, city investigators escorted me out of my school pending an investigation. The subsequent investigation quickly showed that I was set up by my abuser. Still, Mayor Bill de Blasio’s administration demoted me from principal to teacher, slashed my pay in half, and sent me to a rubber room, the DOE’s notorious reassignment centers where hundreds of unwanted employees languish until they are fired or forgotten.

In 2016, I took a yearlong medical leave from the DOE to treat extreme post-traumatic stress and anxiety. Since the leave was almost entirely unpaid, I took loans against my pension to get by. I ran out of money in early 2017 and reported back to the department, where I was quickly sent to an administrative trial. There the city tried to terminate me. I was charged with eight counts of misconduct despite the conclusion by all parties that my ex-partner uploaded the photos to the computer and that there was no evidence to back up his salacious story. I was accused of bringing “widespread negative publicity, ridicule and notoriety” to the school system, as well as “failing to safeguard a Department of Education computer” from my abusive ex.

Her story isn’t unique. Victims of online harassment regularly face serious real-world consequences.

According to a 2013 study reported by Science Daily, cyberstalking victims routinely need to take time off from work, or change or quit their jobs or schools. And the stalking costs victims $1,200 on average just to attempt to address the harassment, the study said.

“It’s this widespread problem, and the platforms have in many ways dropped the ball on this,” Honeywell says.

Tall Poppy’s co-founders

Creating Tall Poppy

As Honeywell heard more and more stories of online intimidation and assault, she started laying the groundwork for the service that would eventually become Tall Poppy. Through a mutual friend she reached out to Dean, a talented coder who had been working at Ticketfly before its Eventbrite acquisition and was looking for a new opportunity.

That was in early 2015. But, afraid that striking out on her own would affect her immigration status (Honeywell is Canadian), she and Dean waited before making the move to finally start the company.

What ultimately convinced them was the election of Donald Trump.

“After the election I had a heart-to-heart with myself… And I decided that I could move back to Canada, but I wanted to stay and fight,” Honeywell says.

Initially, Honeywell took on a year-long fellowship with the American Civil Liberties Union to pick up work around privacy and security that had been handled by Chris Soghoian, who had left to take a position with Senator Ron Wyden’s office.

But the idea for Tall Poppy remained, and once Honeywell received her green card, she was “chomping at the bit to start this company.”

A few months in, the company already has businesses signed up for the services and tools it provides to help companies protect their employees.

Some platforms have taken small steps against online harassment. Facebook, for instance, launched an initiative asking people to preemptively upload their nude pictures so that the social network can detect when similar images are being distributed online and contact the user to check whether the distribution is consensual.

Meanwhile, Twitter has made a series of changes to its algorithm to combat online abuse.

“People were shocked and horrified that people were trying this,” Honeywell says. “[But] what is the way [harassers] can do the most damage? Sharing them to Facebook is one of the ways where they can do the most damage. It was a worthwhile experiment.”
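For readers curious about the underlying technique, matching re-uploads of a known image generally relies on perceptual hashing, which compares compact fingerprints rather than the images themselves. Here is a minimal, illustrative average-hash sketch in Python using Pillow – Facebook’s actual system and hashing scheme are not public:

```python
from PIL import Image  # Pillow

# Illustrative average-hash: shrink to 8x8 grayscale, then record which
# pixels are brighter than the mean. Similar images yield hashes that
# differ in only a few bits, even after resizing or recompression.
def average_hash(path: str, size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits  # a 64-bit fingerprint at the default size

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Two uploads likely show the same image if their hashes are close:
# if hamming(average_hash("known.jpg"), average_hash("upload.jpg")) <= 5:
#     ...flag the upload for review
```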

To underscore how pervasive a problem online harassment is: among the four companies where Tall Poppy is doing business or could be doing business in its first month and a half, there is already an issue the company is addressing.

“It is an important problem to work on,” says Honeywell. “My recurring realization is that the cavalry is not coming.”