Openbook is the latest dream of a digital life beyond Facebook

As tech’s social giants wrestle with antisocial demons that appear to be both an emergent property of their platform power, and a consequence of specific leadership and values failures (evident as they publicly fail to enforce even the standards they claim to have), there are still people dreaming of a better way. Of social networking beyond outrage-fuelled adtech giants like Facebook and Twitter.

There have been many such attempts to build a ‘better’ social network, of course. Most have ended in the deadpool. A few are still around, with varying degrees of success/usage (Snapchat, Ello and Mastodon are three that spring to mind). None has usurped Zuckerberg’s throne.

This is principally because Facebook acquired Instagram and WhatsApp. It has also bought and closed down smaller potential future rivals (tbh). So by hogging network power, and the resources that flow from that, Facebook the company continues to dominate the social space. But that doesn’t stop people imagining something better — a platform that could win friends and influence the mainstream by being better ethically and in terms of functionality.

And so meet the latest dreamer with a double-sided social mission: Openbook.

The idea (currently it’s just that: a small self-funded team; a manifesto; a prototype; a nearly spent Kickstarter campaign; and, well, a lot of hopeful ambition) is to build an open source platform that rethinks social networking to make it friendly and customizable, rather than sticky and creepy.

Their vision for protecting privacy as a for-profit platform rests on a business model of honest fees — and an on-platform digital currency — rather than ever-watchful ads and trackers.

There’s nothing exactly new in any of their core ideas. But in the face of massive and flagrant data misuse by platform giants, these ideas increasingly sound like sense. So timing is perhaps the most notable thing here — with Facebook facing greater scrutiny than ever before, and even taking some hits to user growth and to its perceived valuation as a result of ongoing failures of leadership, and a management philosophy that’s been attacked by at least one of its outgoing senior execs as manipulative and ethically out of touch.

The Openbook vision of a better way belongs to Joel Hernández, who has been dreaming it up for a couple of years, brainstorming ideas on the side of other projects, and gathering similarly minded people around him to collectively come up with an alternative social network manifesto — whose primary pledge is a commitment to be honest.

“And then the data scandals started happening and every time they would, they would give me hope. Hope that existing social networks were not a given and immutable thing, that they could be changed, improved, replaced,” he tells TechCrunch.

Rather ironically Hernández says it was overhearing the lunchtime conversation of a group of people sitting near him — complaining about a laundry list of social networking ills; “creepy ads, being spammed with messages and notifications all the time, constantly seeing the same kind of content in their newsfeed” — that gave him the final push to pick up the paper manifesto and have a go at actually building (or, well, trying to fund building… ) an alternative platform. 

At the time of writing Openbook’s Kickstarter crowdfunding campaign has a handful of days to go and is only around a third of the way to reaching its (modest) target of $115k, with just over 1,000 backers chipping in. So the funding challenge is looking tough.

The team behind Openbook includes crypto(graphy) royalty: Phil Zimmermann — aka the father of PGP — is on board as an advisor initially, though billed as its “chief cryptographer”, since that’s what he’d be building for the platform if/when the time came.

Hernández worked with Zimmermann at the Dutch telecom KPN, building security and privacy tools for internal usage — so he called him up and invited him for a coffee to get his thoughts on the idea.

“As soon as I opened the website with the name Openbook, his face lit up like I had never seen before,” says Hernández. “You see, he wanted to use Facebook. He lives far away from his family and Facebook was the way to stay in the loop with his family. But using it would also mean giving away his privacy and therefore accepting defeat on his life-long fight for it, so he never did. He was thrilled at the possibility of an actual alternative.”

On the Kickstarter page there’s a video of Zimmermann explaining the ills of the current landscape of for-profit social platforms, as he views it. “If you go back a century, Coca-Cola had cocaine in it and we were giving it to children,” he says here. “It’s crazy what we were doing a century ago. I think there will come a time, some years in the future, when we’re going to look back on social networks today, and what we were doing to ourselves, the harm we were doing to ourselves with social networks.”

“We need an alternative to the social network revenue model that we have today,” he adds. “The problem with having these deep machine learning neural nets that are monitoring our behaviour and pulling us into deeper and deeper engagement is they already seem to know that nothing drives engagement as much as outrage.

“And this outrage deepens the political divides in our culture, it creates attack vectors against democratic institutions, it undermines our elections, it makes people angry at each other and provides opportunities to divide us. And that’s in addition to the destruction of our privacy by revenue models that are all about exploiting our personal information. So we need some alternative to this.”

Hernández actually pinged TechCrunch’s tips line back in April — soon after the Cambridge Analytica Facebook scandal went global — saying “we’re building the first ever privacy and security first, open-source, social network”.

We’ve heard plenty of similar pitches before, of course. Yet Facebook has continued to harvest global eyeballs by the billions. And even now, after a string of massive data and ethics scandals, it’s all but impossible to imagine users leaving the site en masse. Such is the powerful lock-in of The Social Network effect.

Regulation could present a greater threat to Facebook, though others argue more rules will simply cement its current dominance.

Openbook’s challenger idea is to apply product innovation to try to unstick Zuckerberg. Aka “building functionality that could stand for itself”, as Hernández puts it.

“We openly recognise that privacy will never be enough to get any significant user share from existing social networks,” he says. “That’s why we want to create a more customisable, fun and overall social experience. We won’t follow the footsteps of existing social networks.”

Data portability is an important ingredient to even being able to dream this dream — getting people to switch from a dominant network is hard enough without having to ask them to leave all their stuff behind as well as their friends. Which means that “making the transition process as smooth as possible” is another project focus.

Hernández says they’re building data importers that can parse the archive users are able to request from their existing social networks — to “tell you what’s in there and allow you to select what you want to import into Openbook”.

These sorts of efforts are aided by updated regulations in Europe — which bolster portability requirements on controllers of personal data. “I wouldn’t say it made the project possible but… it provided us with a unique opportunity no other initiative had before,” says Hernández of the EU’s GDPR.

“Whether it will play a significant role in the mass adoption of the network, we can’t tell for sure but it’s simply an opportunity too good to ignore.”

On the product front, he says they have lots of ideas — reeling off a list that includes the likes of “a topic-roulette for chats, embracing Internet challenges as another kind of content, widgets, profile avatars, AR chatrooms…” for starters.

“Some of these might sound silly but the idea is to break the status quo when it comes to the definition of what a social network can do,” he adds.

Asked why he believes other efforts to build ‘ethical’ alternatives to Facebook have failed, he argues it’s usually because they’ve focused on technology rather than product.

“This is still the most predominant [reason for failure],” he suggests. “A project comes up offering a radical new way to do social networking behind the scenes. They focus all their efforts in building the brand new tech needed to do the very basic things a social network can already do. Next thing you know, years have passed. They’re still thousands of miles away from anything similar to the functionality of existing social networks and their core supporters have moved into yet another initiative making the same promises. And the cycle goes on.”

He also reckons disruptive efforts have fizzled out because they were too tightly focused on being just a solution to an existing platform problem and nothing more.

So, in other words, people were trying to build an ‘anti-Facebook’, rather than a distinctly interesting service in its own right. (The latter innovation, you could argue, is how Snap managed to carve out a space for itself in spite of Facebook sitting alongside it — even as Facebook has since sought to crush Snap’s creative market opportunity by cloning its products.)

“This one applies not only to social network initiatives but privacy-friendly products too,” argues Hernández. “The problem with that approach is that the problems they solve or claim to solve are most of the time not mainstream. Such as the lack of privacy.

“While these products might do okay with the people that understand the problems, at the end of the day that’s a very tiny percentage of the market. The solution these products often present to this issue is educating the population about the problems. This process takes too long. And in topics like privacy and security, it’s not easy to educate people. They are topics that require a knowledge level beyond the one required to use the technology and are hard to explain with examples without entering into the conspiracy theorist spectrum.”

So the Openbook team’s philosophy is to shake things up by getting people excited about alternative social networking features and opportunities, with merely the added benefit of not being hostile to privacy or algorithmically chain-linked to stoking fires of human outrage.

The reliance on digital currency for the business model does present another challenge, though, as getting people to buy into this could be tricky. After all, payments equal friction.

To begin with, Hernández says the digital currency component of the platform would be used to let users list secondhand items for sale. Down the line, the vision extends to supporting a community of creators earning a sustainable income — thanks to the same baked-in coin mechanism enabling other users to pay to access content or just appreciate it (via a tip).

So the idea is that creators on Openbook would be able to benefit from the social network effect via direct financial payments derived from the platform (instead of merely ad-based payments, such as are available to YouTube creators) — albeit that assumes reaching the necessary critical usage mass. Which of course is the really, really tough bit.

“Lower cuts than any existing solution, great content creation tools, great administration and overview panels, fine-grained control over the view-ability of their content and more possibilities for making a stable and predictable income such as creating extra rewards for people that accept to donate for a fixed period of time such as five months instead of a month to month basis,” says Hernández, listing some of the ideas they have to stand out from existing creator platforms.

“Once we have such a platform and people start using tips for this purpose (which is not such a strange use of a digital token), we will start expanding on its capabilities,” he adds. (He’s also written the requisite Medium article discussing some other potential use cases for the digital currency portion of the plan.)

At this nascent prototype and still-not-actually-funded stage they haven’t made any firm technical decisions on this front either. They also don’t want to end up accidentally getting into bed with unethical tech.

“Digital currency wise, we’re really concerned about the environmental impact and scalability of the blockchain,” he says — which could risk Openbook contradicting stated green aims in its manifesto and looking hypocritical, given its plan is to plough 30% of its revenues into ‘give-back’ projects, such as environmental and sustainability efforts and also education.

“We want a decentralised currency but we don’t want to rush into decisions without some in-depth research. Currently, we’re going through IOTA’s whitepapers,” he adds.

They do also believe in decentralizing the platform — or at least parts of it — though that would not be their first focus on account of the strategic decision to prioritize product. So they’re not going to win fans from the (other) crypto community. Though that’s hardly a big deal given their target user-base is far more mainstream.

“Initially it will be built on a centralised manner. This will allow us to focus in innovating in regards to the user experience and functionality product rather than coming up with a brand new behind the scenes technology,” he says. “In the future, we’re looking into decentralisation from very specific angles and for different things. Application wise, resiliency and data ownership.”

“A project we’re keeping an eye on and that shares some of our vision on this is Tim Berners-Lee’s MIT Solid project. It’s all about decoupling applications from the data they use,” he adds.

So that’s the dream. And the dream sounds good and right. The problem is finding enough funding and wider support — call it ‘belief equity’ — in a market so denuded of competitive possibility as a result of monopolistic platform power that few can even dream an alternative digital reality is possible.

In early April, Hernández posted a link to a basic website with details of Openbook to a few online privacy and tech communities asking for feedback. The response was predictably discouraging. “Some 90% of the replies were a mix between critiques and plain discouraging responses such as ‘keep dreaming’, ‘it will never happen’, ‘don’t you have anything better to do’,” he says.

(Asked this April by US lawmakers whether he thinks he has a monopoly, Zuckerberg paused and then quipped: “It certainly doesn’t feel like that to me!”)

Still, Hernández stuck with it, working on a prototype and launching the Kickstarter. He’s got that far — and wants to build so much more — but getting enough people to believe that a better, fairer social network is even possible might be the biggest challenge of all. 

For now, though, Hernández doesn’t want to stop dreaming.

“We are committed to make Openbook happen,” he says. “Our back-up plan involves grants and impact investment capital. Nothing will be as good as getting our first version through Kickstarter though. Kickstarter funding translates to absolute freedom for innovation, no strings attached.”

You can check out the Openbook crowdfunding pitch here.

Femtech hardware startup Elvie inks strategic partnership with UK’s NHS

Elvie, a femtech hardware startup whose first product is a sleek smart pelvic floor exerciser, has inked a strategic partnership with the UK’s National Health Service that will make the device available nationwide through the country’s free-at-the-point-of-use healthcare service — so at no direct cost to the patient.

It’s a major win for the startup that was co-founded in 2013 by CEO Tania Boler and Jawbone founder, Alexander Asseily, with the aim of building smart technology that focuses on women’s issues — an overlooked and underserved category in the gadget space.

Boler’s background before starting Elvie (née Chiaro) included working for the U.N. on global sex education curriculums. But her interest in pelvic floor health, and the inspiration for starting Elvie, began after she had a baby herself and found there was more support for women in France than the U.K. when it came to taking care of their bodies after giving birth.

With the NHS partnership, which is the startup’s first national reimbursement partnership (and therefore, as a spokeswoman puts it, has “the potential to be transformative” for the still young company), Elvie is emphasizing the opportunity for its connected tech to help reduce symptoms of urinary incontinence, including those suffered by new mums or in cases of stress-related urinary incontinence.

The Elvie kegel trainer is designed to make pelvic floor exercising fun and easy for women, with real-time feedback delivered via an app that also gamifies the activity, guiding users through exercises intended to strengthen their pelvic floor and thus help reduce urinary incontinence symptoms. The device can also alert users when they are contracting incorrectly.

Elvie cites research suggesting the NHS spends £233M annually on incontinence, claiming also that around a third of women and up to 70% of expectant and new mums currently suffer from urinary incontinence. In 70% of stress urinary incontinence cases, it suggests, symptoms can be reduced or eliminated via pelvic floor muscle training.

And while there’s no absolute need for any device to perform the necessary muscle contractions to strengthen the pelvic floor, the challenge the Elvie Trainer is intended to help with is that it can be difficult for women to know whether they are performing the exercises correctly or effectively.

Elvie cites a 2004 study that suggests around a third of women can’t exercise their pelvic floor correctly with written or verbal instruction alone. Whereas it says that biofeedback devices (generally, rather than the Elvie Trainer specifically) have been proven to increase success rates of pelvic floor training programmes by 10% — which it says other studies have suggested can lower surgery rates by 50% and reduce treatment costs by £424 per patient head within the first year.

“Until now, biofeedback pelvic floor training devices have only been available through the NHS for at-home use on loan from the patient’s hospital, with patient allocation dependent upon demand. Elvie Trainer will be the first at-home biofeedback device available on the NHS for patients to keep, which will support long-term motivation,” it adds.

Commenting in a statement, Clare Pacey, a specialist women’s health physiotherapist at Kings College Hospital, said: “I am delighted that Elvie Trainer is now available via the NHS. Apart from the fact that it is a sleek, discreet and beautiful product, the app is simple to use and immediate visual feedback directly to your phone screen can be extremely rewarding and motivating. It helps to make pelvic floor rehabilitation fun, which is essential in order to be maintained.”

Elvie is not disclosing commercial details of the NHS partnership but a spokeswoman told us the main objective for this strategic partnership is to broaden access to Elvie Trainer, adding: “The wholesale pricing reflects that.”

Discussing the structure of the supply arrangement, she said Elvie is working with Eurosurgical as its delivery partner — a distributor she said has “decades of experience supplying products to the NHS”.

“The approach will vary by Trust, regarding whether a unit is ordered for a particular patient or whether a small stock will be held so a unit may be provided to a patient within the session in which the need is established. This process will be monitored and reviewed to determine the most efficient and economic distribution method for the NHS Supply Chain,” she added.

Apple defends decision not to remove InfoWars’ app

Apple has commented on its decision to continue to allow conspiracy theorist profiteer InfoWars to livestream video podcasts via an app in its App Store, despite removing links to all but one of Alex Jones’ podcasts from its iTunes and podcast apps earlier this week.

At the time Apple said the podcasts had violated its community standards, emphasizing that it “does not tolerate hate speech”, and saying: “We believe in representing a wide range of views, so long as people are respectful to those with differing opinions.”

Yet the InfoWars app allows iOS users to livestream the same content Apple just pulled from iTunes.

In a statement given to BuzzFeed News, Apple explains its decision not to pull the InfoWars app — saying:

We strongly support all points of view being represented on the App Store, as long as the apps are respectful to users with differing opinions, and follow our clear guidelines, ensuring the App Store is a safe marketplace for all. We continue to monitor apps for violations of our guidelines and if we find content that violates our guidelines and is harmful to users we will remove those apps from the store as we have done previously.

Multiple tech platforms have moved to close the door on Jones or limit his reach in recent weeks, including Google, which shuttered his YouTube channel, and Facebook, which removed a series of videos and banned Jones’ personal account for 30 days as well as issuing the InfoWars page with a warning strike. Spotify, Pinterest, LinkedIn, MailChimp and others have also taken action.

Twitter, though, has not banned or otherwise censured Jones — despite InfoWars’ continued presence on its platform threatening CEO Jack Dorsey’s claimed push to improve conversational health there. Snapchat is also merely monitoring Jones’ continued presence on its platform.

In an unsurprising twist, the additional exposure Jones/InfoWars has gained as a result of news coverage of the various platform bans appears to have given his apps some passing uplift…

So Apple’s decision to remove links to Jones’ podcasts yet allow the InfoWars app looks contradictory.

The company is certainly treading a fine line here. But there’s a technical distinction between a link to a podcast in a directory, where podcast makers can freely list their stuff (with the content hosted elsewhere), vs an app in Apple’s App Store, which has gone through Apple’s review process and whose content is hosted by Apple.

When it removed Jones’ podcasts Apple was, in effect, just removing a pointer to the content, not the content itself. The podcasts also represented discrete content — meaning each episode which was being pointed to could be judged against Apple’s community standards. (And one podcast link was not removed, for example, though five were.)

Whereas Jones (mostly) uses the InfoWars app to livestream podcast shows. Meaning the content in the InfoWars app is more ephemeral — making it more difficult for Apple to cross-check against its community standards. The streamer has to be caught in the act, as it were.

Google has also not pulled the InfoWars app from its Play Store despite shuttering Jones’ YouTube channel, and a spokesperson told BuzzFeed: “We carefully review content on our platforms and products for violations of our terms and conditions, or our content policies. If an app or user violates these, we take action.”

That said, both the iOS and Android versions of the app also include ‘articles’ that can be saved by users, so some of the content appears to be less ephemeral.

The iOS listing further claims the app lets users “stay up to date with articles as they’re published from Infowars.com” — which at least suggests some of the content is identical to what’s being spouted on Jones’ own website (where he’s only subject to his own T&Cs).

But in order to avoid falling foul of Apple and Google’s app store guidelines, Jones is likely carefully choosing which articles are funneled into the apps — to avoid breaching app store T&Cs against abuse and hateful conduct, and (most likely also) to hook more eyeballs with softer-ball conspiracy nonsense before, once people are pulled into his orbit, blasting them with his full-bore BS shotgun on his own platform.

Sample articles depicted in screenshots in the App Store listing for the app include one claiming that George Soros is “literally behind Starbucks’ sensitivity training” and another, from the ‘science’ section, pushing junk claims about vision correction — all garbage, but not at the level of anti-truth toxicity Jones has become notorious for on his shows. The Play Store listing flags a different selection of sample articles with a slightly more international flavor — including several on European far right politics, in addition to U.S.-focused political stories about Trump and some outrage about domestic ‘political correctness gone mad’. So the static sample content, at least, isn’t enough to violate any T&Cs.

Still, the livestream component of the apps presents an ongoing problem for Apple and Google — given both have stated that his content elsewhere violates their standards. And it’s not clear how sustainable it will be for them to continue to allow Jones a platform to livestream hate from inside the walls of their commercial app stores.

Beyond that, narrowly judging Jones — a purveyor of weaponized anti-truth (most egregiously his claim that the Sandy Hook Elementary School shooting was a hoax) — by the content he uploads directly to their servers also ignores the wider context (and toxic baggage) around him.

And while no tech companies want their brands to be perceived as toxic to conservative points of view, InfoWars does not represent conservative politics. Jones peddles far right conspiracy theories, whips up hate and spreads junk science in order to generate fear and make money selling supplements. It’s cynical manipulation not conservatism.

Both should revisit their decision. Hateful anti-truth merely damages the marketplace of ideas they claim to want to champion, and chills free speech through violent bullying of minorities and the people it makes into targets and thus victimizes.

Earlier this week 9to5Mac reported that CNN’s Dylan Byers had said the decision to remove links to InfoWars’ podcasts had been made at the top of Apple — after a meeting between CEO Tim Cook and SVP Eddy Cue. Byers reported it was also the execs’ decision not to remove the InfoWars app.

We’ve reached out to Apple to ask whether it will be monitoring InfoWars’ livestreams directly for any violations of its community standards and will update this story with any response.

Study flags poor quality working conditions for remote gig workers

An Oxford University study of remote gig economy work conducted on digital platforms has highlighted poor quality working conditions with implications for employees’ well-being.

The research comes at a time when political scrutiny is increasingly falling on algorithmically controlled platforms and their societal impacts. Policymakers are also paying greater attention to the precarious reality for workers on platforms which advertise their gig marketplaces to new recruits with shiny claims of ‘flexibility’ and ‘autonomy’.

Governments in some regions are also actively reassessing employment law to take account of technology-fueled shifts to work and working patterns. Earlier this year, for instance, the UK government announced a package of labor market reforms — and committed to being responsible for quality of work, not just quantity of jobs, for the first time.

The Oxford University study, entitled Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy, looks at remote gig economy work — tasks like research, translation and programming carried out via platforms such as Freelancer.com and Fiverr — rather than local gig economy platforms, such as food delivery services, where workers must be in proximity to the work (albeit those platforms have attracted their own workforce exploitation critiques).

The researchers note that an estimated 70 million workers worldwide are registered on remote work platforms. Their study methodology involved carrying out face-to-face interviews with just over 100 workers in South East Asia and Sub-Saharan Africa who had been active on one of two unnamed “leading platforms” for at least six months.

They also undertook a remote survey of just over 650 additional gig platform workers, from the same regions, to supplement the interview findings. Participants for the survey portion were recruited via online job ads on the platforms themselves, and had to have completed work through one of the two platforms within the past two months, and to have worked on at least five projects or for five hours in total.


Free to get the job done

The study paints a mixed picture, with — on the one hand — gig workers reporting feeling they can remotely access stimulating and challenging work, and experiencing perceived autonomy and discretion over how they get a job done: A large majority (72%) of respondents said they felt able to choose and change the order in which they undertook online tasks, and 74% said they were able to choose or change their methods of work.

At the same time — and here the negatives pile in — workers on the platforms lack collective bargaining power, so they simultaneously experience intense marketplace competition and algorithmic management pressure, combined with feelings of social isolation (most work from home), and the risk of overwork and exhaustion as a result of a lack of regulations and support systems, as well as their own economic need to get tasks done to earn money.

“Our findings demonstrate evidence that the autonomy of working in the gig economy often comes at the price of long, irregular and anti-social hours, which can lead to sleep deprivation and exhaustion,” said Dr Alex Wood, co-author of the paper, commenting in a statement. “While gig work takes place around the world, employers tend to be from the U.K. and other high-income Western countries, exacerbating the problem for workers in lower-income countries who have to compensate for time differences.

“The competitive nature of online labour platforms leads to high-intensity work, requiring workers to complete as many gigs as possible as quickly as they can and meet the demands of multiple clients no matter how unreasonable.”

The survey results backed the researchers’ interview findings of an oversupply of labour, with 54% of respondents reporting there was not enough work available and just a fifth (20%) disagreeing.

The study also highlights the fearsome power of platforms’ rating and reputation systems as a means of algorithmically controlling remote workers — via the economic threat of loss of future work.

The researchers write:

A far more effective means of control [than non-proximate monitoring mechanisms such as screen monitoring software, which platforms also deployed] was the ‘algorithmic management’ enabled by platform-based rating and reputation systems (Lee et al., 2015; Rosenblat and Stark, 2016). Workers were rated by their clients following the completion of tasks. Workers with the best scores and the most experience tended to receive more work due to clients’ preferences and the platforms’ algorithmic ranking of workers within search results.

This form of control was very effective, as informants stressed the importance of maintaining a high average rating and good accuracy scores. Whereas Uber’s algorithmic management ‘deactivates’ (dismisses) workers with ratings deemed low (Rosenblat and Stark, 2016), online labour platforms, instead, use algorithms to filter work away from those with low ratings, thus making continuing on the platform a less viable means of making a living.

As a result of how platforms are organized, remote gig workers reported that the work could be highly intense, with a majority (54%) of survey respondents saying they had to work at very high speed; 60% working to tight deadlines; and more than a fifth (22%) experiencing pain as a result of their work.

“This is particularly felt by low-skilled workers, who must complete a very high number of gigs in order to make a decent living,” added professor Mark Graham, co-author, in another supporting statement. “As there is an oversupply of low-skill workers and no collective bargaining power, pay remains low. Completing as many jobs as possible is the only way to make a decent living.”

The study also highlights the contradictions inherent in the gig economy’s ‘flexible working’ narrative — with the researchers noting that while algorithms do not formally control where workers work, in reality remote platform workers may have “little real choice but to work from home, and this can lead to a lack of social contact and feelings of social isolation”.

Gig platform workers also run up against the rigid requirements of demanding clients and deadlines in order to get paid for their work — meaning there’s a whip being cracked over them after all. The study found most workers had to work “intense unsocial and irregular hours in order to meet client demand”.

“The autonomy resulting from algorithmic control can lead to overwork, sleep deprivation and exhaustion as a consequence of the weak structural power of workers vis-a-vis clients,” they write. “This weak structural power is an outcome of platform-based rating and ranking systems enabling a form of control which is able to overcome the spatial and temporal barriers that non-proximity places on the effectiveness of direct labour process surveillance and supervision. Online labour platforms thus facilitate clients in connecting with a largely unregulated global oversupply of labour.”

Workers that gained the most in this environment were good at mastering skills independently and navigating platforms’ reputation systems so they could keep winning more work — albeit essentially at other workers’ expense, on account of how the platforms’ algorithms funnel more work towards the best rated (meaning there’s less for the rest).

The study concludes that platform reputations have a ‘symbolic power’ — as “an emerging form of marketplace bargaining power” — and “as a consequence of the algorithmic control inherent to online labour platforms.”

The workers who lacked the individual resources of skills and reputation suffered from low incomes and insecurity.

“Our findings are consistent with remote workers’ experiences across many national contexts,” added Graham. “Hopefully, this research will shed light on potential pitfalls for remote gig workers and help policymakers understand what working in the online gig economy really looks like. While there are benefits to workers such as autonomy and flexibility, there are also serious areas of concern, especially for lower-skill workers.”

Magic Leap One AR headset for devs costs more than 2x the iPhone X

It’s been a long and trip-filled wait but mixed reality headgear maker Magic Leap will finally, finally be shipping its first piece of hardware this summer.

We were still waiting on the price-tag — but it’s just been officially revealed: The developer-focused Magic Leap One ‘creator edition’ headset will set you back at least $2,295.

So a considerable chunk of change — albeit this bit of kit is not intended as a mass market consumer device (although Magic Leap’s founder frothed about it being “at the border of practical for everybody” in an interview with the Verge) but rather an AR headset for developers to create content that could excite future consumers.

A ‘Pro’ version of the kit — with an extra hub cable and some kind of rapid replacement service if the kit breaks — costs an additional $495, according to CNET, while certain (possibly necessary) extras, such as prescription lenses, also cost more. So it’s pushing towards 3x iPhone Xes at that point.

The augmented reality startup, which has raised at least $2.3 billion, according to Crunchbase, attracting a string of high profile investors including Google, Alibaba, Andreessen Horowitz and others, is only offering its first piece of reality bending eyewear to “creators in cities across the contiguous U.S.”.

Potential buyers are asked to input their zip code via its website to check if it will agree to take their money but it adds that “the list is growing daily”.

We tried the TC SF office zip and — unsurprisingly — got an affirmative of delivery there. But any folks in, for example, Hawaii wanting to spend big to space out are out of luck for now…

CNET reports that the headset is only available in six U.S. cities at this stage: Chicago, Los Angeles, Miami, New York, San Francisco (Bay Area), and Seattle — with Magic Leap saying that “many” more will be added in fall.

The company specifies it will “hand deliver” the package to buyers — and “personally get you set up”. So evidently it wants to try to make sure its first flush of expensive hardware doesn’t get sucked down the toilet of dashed developer expectations.

It describes the computing paradigm it’s seeking to shift, i.e. with the help of enthused developers and content creators, as “spatial computing” — but it really needs a whole crowd of technically and creatively minded people to step with it if it’s going to successfully deliver that.

Analysis backs claim drones were used to attack Venezuela’s president

Analysis of open source information carried out by the investigative website Bellingcat suggests drones that had been repurposed as flying bombs were indeed used in an attack on the president of Venezuela at the weekend.

The Venezuelan government claimed three days ago that an attempt had been made to assassinate president Maduro using two drones loaded with explosives. The president was giving a speech, which was being broadcast live on television, when the incident occurred.

Initial video from a state-owned television network showed the reaction of Maduro, those around him and a parade of soldiers at the event to what appeared to be two blasts somewhere off camera. But the footage did not include shots of any drones or explosions.

News organization AP also reported that firefighters at the scene had cast doubt on the drone attack claim — suggesting there had instead been a gas explosion in a nearby flat.

Since then more footage has emerged, including videos purporting to show a drone exploding and a drone tumbling alongside a building.

Bellingcat has carried out an analysis of publicly available information related to the attack — syncing timings of the state broadcast of Maduro’s speech, and using frame-by-frame analysis combined with photos and satellite imagery of Caracas to try to pinpoint the locations of additional footage that has emerged — to determine whether the drone attack claim stands up.

The Venezuelan government has claimed the drones used were DJI Matrice 600s, each carrying approximately 1kg of C4 plastic explosive and, when detonated, capable of causing damage at a radius of around 50 meters.

DJI Matrice 600 drones are a commercial model, normally used for industrial work, with a U.S. price tag of around $5,000 apiece; 1kg of plastic explosive is available commercially (for demolition purposes) at a cost of around $30 — suggesting the attack could have cost little over $10k to carry out.

Bellingcat says its analysis supports the government’s claim that the drone model used was a DJI Matrice 600, noting that the drones involved in the event each had six rotors. It also points to a photo of drone wreckage which appears to show the distinctive silver rotor tip of the model, although it also notes the drones appear to have had their legs removed.

Venezuela’s interior minister, Nestor Reverol, also claimed the government thwarted the attack using “special techniques and [radio] signal inhibitors”, which “disoriented” the drone that detonated closest to the presidential stand — a capability Bellingcat notes the Venezuelan security services are reported to have.

The second drone was said by Reverol to have “lost control” and crashed into a nearby building.

Bellingcat says it is possible to geolocate the video of the falling drone to the same location as the fire in the apartment that firefighters had claimed was caused by a gas canister explosion. It adds that images taken of this location during the fire show a hole in the wall of the apartment in the vicinity of where the drone would have crashed.

“It is a very likely possibility that the downed drone subsequently detonated, creating the hole in the wall of this apartment, igniting a fire, and causing the sound of the second explosion which can be heard in Video 2 [of the state TV broadcast of Maduro’s speech],” it further suggests.

Here’s its conclusion:

From the open sources of information available, it appears that an attack took place using two DBIEDs while Maduro was giving a speech. Both the drones appear visually similar to DJI Matrice 600s, with at least one displaying features that are consistent with this model. These drones appear to have been loaded with explosive and flown towards the parade.

The first drone detonated somewhere above or near the parade, the most likely cause of the casualties announced by the Venezuelan government and pictured on social media. The second drone crashed and exploded approximately 14 seconds later and 400 meters away from the stage, and is the most likely cause of the fire which the Venezuelan firefighters described.

It also considers the claim of attribution by a group on social media calling itself “Soldados de Franelas” (aka ‘T-Shirt Soldiers’ — a reference to protestors wrapping a t-shirt around their head to cover their face and protect their identity). Bellingcat suggests it’s not clear from the group’s Twitter messages that they are “unequivocally claiming responsibility for the event”, owing to the use of passive language, and to a claim that the drones were shot down by government snipers — which it says “does not appear to be supported by the open source information available”.

MailChimp bans Alex Jones for hateful conduct

Another tech platform has closed the door on InfoWars’ Alex Jones . Mail messaging platform MailChimp first confirmed the move in a statement to US media watchdog Media Matters which said the accounts had been closed for “hateful conduct”. A MailChimp spokeswoman also confirmed it to TechCrunch via email. In a statement MailChimp said it had terminated […]

Another tech platform has closed the door on InfoWars’ Alex Jones. Email marketing platform MailChimp first confirmed the move in a statement to US media watchdog Media Matters, which said the accounts had been closed for “hateful conduct”. A MailChimp spokeswoman also confirmed it to TechCrunch via email.

In a statement MailChimp said it had terminated InfoWars’ and Jones’ accounts for ToS violations — adding that while it doesn’t usually comment on individual account closures it was making an exception in this case.

“We don’t allow people to use our platform to disseminate hateful content,” it wrote, adding: “We take our responsibility to our customers and employees seriously. The decision to terminate this account was thoughtfully considered and is in line with our company’s values.”

There has been something of a domino effect among tech companies in recent weeks over what to do about Jones/InfoWars, with Facebook, Apple and Google pulling content or shuttering Jones’ channels over ToS violations. Spotify, YouPorn and even Pinterest have also pulled his content for the same reasons. Twitter, though, has not — saying Jones has not violated its rules.

Jones, a notorious conspiracy theorist, has peddled anti-truths on his own website for nearly two decades, but has raised his profile and gained greater exposure by using the reach of mainstream tech platforms and tools — enabling him to rabble rouse beyond a niche audience.

As well as spreading toxic disinformation on mainstream social networks, including targeting the victims of the Sandy Hook Elementary school shooting by falsely claiming the massacre was an elaborate hoax, Media Matters notes that Jones has regularly encouraged violence — expounding an impending second U.S. civil war narrative in which he discusses killing minorities.

Jones is spinning the recent tech platform bans as a ‘censorship war’ on him, even as hosting companies continue to provide a platform on the Internet for his website — where he continues to peddle his BS for anyone who wants to listen.

Here’s Twitter’s position on Alex Jones (and hate-peddling anti-truthers) — hint: It’s a fudge

The number of tech platforms taking action against Alex Jones, the far right InfoWars conspiracy theorist and hate speech preacher, has been rising in recent weeks — with bans or partial bans including from Google, Apple and Facebook.

However, as we noted earlier, Twitter is not among them. Although it has banned known hate peddlers before.

Jones continues to be allowed a presence on Twitter’s platform — and is using his verified Twitter account to scream about being censored all over the mainstream place, hyperventilating at one point in the past 16 hours that ‘censoring Alex Jones is censoring everyone’ — because, and I quote, “we’re all Alex Jones now”.

(Fact check: No, we’re not… And, Alex, if you’re reading this, we suggest you take heart from the ideas in this Onion article and find a spot in your local park.)

We asked Twitter why it has not banned Jones outright, given that its own rules proscribe abuse and hateful conduct…

Abuse: You may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.

Hateful conduct: You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. Read more about our hateful conduct policy.

Add to that, CEO Jack Dorsey has made it his high-profile mission of late to (try to) improve conversational health on the platform. So it seems fair to wonder how Twitter continuing to enable a peddler of toxic lies and hate is going to achieve that.

While Twitter would not provide a statement about Jones’ continued presence on its platform, a spokesman told us that InfoWars and Jones’ personal account are not in violation of Twitter’s (or Periscope’s) ToS. At least not yet. Though he pointed out it could of course take action in the future — i.e. if it’s made aware of particular tweets that violate its rules.

Twitter’s position therefore appears to be that the content posted by InfoWars to other social media platforms is different to the content Jones posts to Twitter itself — ergo, its (hedgy & fudgy) argument essentially boils down to saying Jones is walking a fine enough line on Twitter itself to avoid a ban, because he hasn’t literally tweeted content that violates the letter of Twitter’s ToS.

(Though he has tweeted stuff like “the censorship of Infowars just vindicates everything we’ve been saying” — and given the hate-filled, violently untruthful things he has been saying all over the Internet, he’s essentially re-packaged all those lies into that single tweet, so… )

To spell out Twitter’s fudge: The fact of Jones being a known conspiracy theorist and widely visible hate preacher is not being factored into its ToS enforcement decisions.

The company says it’s judging the man by his output on Twitter — which means it’s failing to take into account the wider context around Jones’ tweets, i.e. all the lies and hate he peddles elsewhere (and indeed all the insinuating nods and dog whistles he makes to his followers on Twitter) — and by doing so it is in fact enabling the continued spread of hate via the wink-wink-nod-nod back door.

Twitter’s spokesman did not want to engage in a lengthy back and forth conversation, healthy or otherwise, about Jones/InfoWars so it was not possible to get a response from the company on that point.

However it does argue, i.e. in defense of its fudged position, that keeping purveyors of false news on its platform allows for an open, real-time debate which in turn allows for their lies to be challenged and debunked by people who are in their right minds — so, basically, this is the ‘fight bad speech with more speech argument’ that’s so beloved of people already enjoying powerful privilege.

The problem with that argument (actually, there are many) is that it does not factor in the human cost: the people suffering directly because toxic lies impact their lives; the cost to truth itself — to belief in the veracity and authenticity of credible sources of information, which are under sustained and vicious attack by anti-truthers like Jones; the corrosive impact on professional journalism from lies being packaged and peddled under the banner of self-styled ‘truth journalism’ that Jones misappropriates; and the cost to society from hate speech whose very purpose is to rip up the social fabric and take down civic values — and, in the case of Jones’ particular bilious flavor, to further bang the drum of abuse via the medium of toxic disinformation, amplifying and spreading his pollution via the power of untruth to whip up masses of non-critically thinking, conspiracy-prone followers. I could go on. (I have here.)

The amplification effect of social media platforms — combined with cynical tricks used by hate peddlers to game algorithms, such as bots retweeting and liking content to make it seem more popular than it is — makes this stuff a major, major problem.

‘Bad speech’ on such powerful platforms can become not just something to roll your eyes at and laughingly dismiss, but a toxic force that bullies, beats down and drowns out other types of speech — perhaps most especially truthful speech, because falsehood flies (and online it’s got rocket fuel) — and so can have a very deleterious impact on conversational health.

Really, it needs to be handled in a very different way. Which means Twitter’s position on Jones, and hateful anti-truthers in general, looks both flawed and weak.

It’s also now looking increasingly isolated, as other tech platforms are taking action.

Twitter’s spokesman also implied the company is working on tuning its systems to actively surface high quality counter-narratives and rebuttals to toxic BS — such as in replies to known purveyors of fake news like InfoWars.

But while such work is to be applauded, working on a fix also means you don’t actually have a fix yet. Meanwhile the lies you’re not stopping are spreading on your platform — at horrible and high cost to people and society.

It’s hard to see this as a defensible position.

And while Twitter keeps sitting on its fence, Jones’ hate speech and toxic lies, broadcast to millions as a weapon of violent disinformation, have got his video show booted from YouTube (which, after first issuing a strike yesterday, then terminated his page for “violating YouTube’s Community Guidelines”).

The platform had removed ads from his channel back in March — but had not then (as Jones falsely claimed at the time) banned it. That decision took another almost half year for YouTube to arrive at.

Also yesterday, almost all of Jones’ podcasts were pulled by Apple, with the company saying it does not tolerate hate speech. “We believe in representing a wide range of views, so long as people are respectful to those with differing opinions,” it added.

Earlier this month, music streaming service Spotify also removed some of Jones’ podcasts for violating its hate-speech policy.

Even Facebook removed a bunch of Jones’ videos late last month, for violating its community standards — albeit after some dithering, and what looked like a lot of internal confusion.

The social media behemoth also imposed a 30-day ban on Jones’ personal account for posting the videos, and served him a warning notice for the InfoWars Facebook Page he controls.

Facebook later clarified it had banned Jones’ personal profile because he had previously received a warning — whereas the InfoWars Page had not, hence the latter only getting a strike.

There have even been bans from some unlikely quarters: YouPorn just announced action against Jones for a ToS violation — nixing his ability to try to pass off anti-truth hate preaching as a porn alternative on its platform.

Pinterest, too, removed Jones’ ‘hate, lies & supplements’ page after Mashable made enquiries.

So, uh, responses other than Twitter’s (of doing nothing) are clearly possible.

On Twitter, Jones also benefits from being able to distinguish his account from any would-be imitators or satirists, because he has a verified account — denoted on the platform by a blue check mark badge.

We asked Twitter why it hasn’t removed Jones’ blue badge — given that the company has, until relatively recently, been rethinking its verification program. And last year it actively removed blue badges from a number of white supremacists because it was worried it looked like it had been endorsing them. Yet Jones — who spins the gigantic lie of ‘white genocide’ — continues to keep his.

Twitter’s spokesman pointed us to this tweet last month from product lead, Kayvon Beykpour, who wrote that updating the program “isn’t a top priority for us right now”.

Beykpour went on to explain that while Twitter had “paused” public verification last November (because “we wanted to address the issue that verifying the authenticity of an account was being conflated with endorsement”), it subsequently paused its own ‘pause for thought’ on having verified some very toxic individuals, with Beykpour writing in an email to staff in July:

Though the current state of Verification is definitely not ideal (opaque criteria and process, inconsistency in our procedures, external frustration from customers), I don’t believe we have the bandwidth to address this holistically (policy, process, product, and a plan around how & when these fit together) without coming at the cost of our other priorities and distracting the team.

At the same time Beykpour admits in the thread that Twitter has been ‘unpausing’ its pause on verification in some circumstances (“we still verify accounts ad hoc when we think it serves the public conversation & is in line with our policy”); but not, evidently, going so far as to unpause its pause on removing badges from hateful people who gain unjustified authenticity and authority from the perceived endorsement of Twitter verification — such as in ‘ad hoc’ situations where doing so might be terribly, terribly appropriate. Like, uh, this one.

Beykpour wrote that verification would be addressed by Twitter post-election. So it’s presumably sticking with having no policy at all, for now. (“I know this isn’t the most satisfying news, but I wanted to be transparent about our priorities,” he concluded.)

Twitter’s spokesman told us it doesn’t have anything further to share on verification at this point.

Jones’ toxic activity on social media has included spreading the horrendous lie that children who died in the Sandy Hook U.S. school shooting were ‘crisis actors’.

So, for now, a man who lies about the violent death of little children continues to be privileged with a badge on his not-at-all-banned Twitter account.

The parents of a child who died at the school wrote an open letter to Facebook’s founder, Mark Zuckerberg, last month, describing how toxic lies about the school shooting spread via social media had metastasized into violent hate and threats directed at them.

“Our families are in danger as a direct result of the hundreds of thousands of people who see and believe the lies and hate speech, which you have decided should be protected,” wrote Lenny Pozner and Veronique De La Rosa, the parents of Noah, who died on 14 December, 2012, at the age of six.

“What makes the entire situation all the more horrific is that we have had to wage an almost inconceivable battle with Facebook to provide us with the most basic of protections to remove the most offensive and incendiary content.”

Is it time to remove Zuckerberg from (his) office?

A colleague, who shall remain nameless (because privacy is not dead), gave a thumbs down to a recent column in the NYT. The complaint was that the writer had attacked tech companies (mostly but not exclusively Facebook) without offering any solutions for these all-powerful techbro CEOs’ orchestral failures to grasp the messy complexities of humanity at a worldwide scale.

Challenge accepted.

Here’s the thought experiment: Fixing Facebook 

We’ll start with Facebook because, while it’s by no means the only tech company whose platform contains a bottomless cesspit of problems, it is the most used social platform in the West; the de facto global monopoly outside China.

And, well, even Zuckerberg thinks it needs fixing. Or at least that its PR needs fixing — given he made “Fixing Facebook” his ‘personal challenge’ of the year this year — proof, if any more were needed, of his incredible capacity for sounding tone-deaf.

For a little more context on these annual personal challenges, Zuckerberg once previously set himself the challenge of reading a new book every two weeks. So it seems fair to ask: Is Facebook a 26-book sized fix?

If we’re talking in book metaphor terms, the challenge of fixing Facebook seems at least on the scale of the Library of Alexandria, say, given the volume of human content being daily fenced. It may, more likely, be multiple libraries of Alexandria. Just as, if Facebook content were housed in a physical library, the company would require considerably more real estate than the largest library of the ancient world to house its staggeringly-massive-and-expanding-by-the-second human content collection — which also of course forms the foundation of its business.

Zuckerberg himself has implied that his 2018 challenge — to fix the company he founded years before the iPhone arrived to supercharge the smartphone revolution and, down that line, mobilize Facebook’s societal ‘revolution’ — is his toughest yet, and likely to take at least two or three years before it bears fruit, not just the one. So Facebook’s founder is already managing our expectations and he’s barely even started.

In all likelihood, if Facebook were left alone to keep standing ethically aloof, shaping and distributing information at vast scale while simultaneously denying that this amounts to editing — to enjoy another decade of unforgivably bad judgement calls (so, basically, to ‘self-regulate’; or, as the New York Times put it, for Zuckerberg to be educated at societal expense) — then his 2018 personal challenge would become just ‘Chapter One, Volume One’ in a neverending life’s ‘work-in-progress’.

Great for Mark, far less great for humans and democratic societies all over the world.

Frankly, there has to be a better way. So here’s an alternative plan for fixing Facebook — or at least a few big ideas to get policymakers’ juices flowing… Bear in mind this is a thought exercise so we make no suggestions for how to enact the plan — we’re just throwing ideas out there to get folks thinking.

 

Step 1) Goodbye network of networks

Facebook has been allowed to acquire several other social communication networks — most notably photo-focused social network Instagram [1BN monthly active users] and messaging app platform WhatsApp [1.5BN] — so Zuckerberg has not just ONE massively popular social network (Facebook: [2.2BN]) but a saccharine suite of eyeball-harvesting machines.

Last month he revealed his sunless empire casts its shadow across a full 2.5BN individuals if you factor in all his apps — though that reveal was an attempt to distract investors from the stock price car crash conference call that was to follow. But the staggering size of the empire is undeniable.

So the first part of fixing Facebook is really simple: No dominant social network should be allowed to possess (or continue to possess) multiple dominant social networks.

There’s literally no good argument for why this is good for anyone other than (in Facebook’s case) Zuckerberg and Zuckerberg’s shareholders. Which is zero reason not to do something that’s net good for the rest of humanity. On one level it’s just basic math.

Setting aside (for just a second) the tangible damages inflicted upon humans by unregulated social media platforms with zero editorial values and a threadbare minimum of morality which wafts like gauze in the slipstream of supercharged and continuously re-engineered growth and engagement engines that DO NOT FACTOR HUMAN COST into their algorithmic calculations — allowing their masters to preside over suprasocietal revenue stripping mega-platforms — which, to be clear, is our primary concern here — the damage to competition and innovation alone from Zuckerberg owning multiple social networks is both visible and quantifiable.

Just ask Snapchat. Because, well, you can’t ask the social networks that don’t exist because Zuckerberg commands a full flush of attention-harvesting networks. So take a good, long, hard look at all those Stories clones he’s copypasted right across his social network of social networks. Not very innovative is it?

And even if you don’t think mega-platforms cause harm by eroding civic and democratic values (against, well, plenty of evidence to the contrary), if you value creativity, competition and consumer choice it’s equally a no brainer to tend your markets in a way that allows multiple distinct networks to thrive, rather than let one megacorp get so powerful it’s essentially metastasized into a Borg-like entity capable of enslaving and/or destroying any challenger, idea or even value in its path. (And doing all that at the same time as monopolizing its users’ attention.)

We see this too in how Facebook applies its technology in a way that seeks to reshape laws in its business model’s favor. Because while individuals break laws, massively powerful megacorps merely lean their bulk to squash them into a more pleasing shape.

Facebook is not just spending big on lobbying lawmakers (and it sure is doing that), it’s using technology and the brute force of its platform to pound on and roll over the rule of law by deforming foundational tenets of society. Privacy being just one of them.

And it’s not doing this reshaping for the good of humanity. Oh no. While democratic societies have rules to protect the vulnerable and foster competition and choice because they are based on recognizing value in human life, Facebook’s motives are 100% self-interested and profit-driven.

The company wants to rewrite rules globally to further expand its bottom line. Hence its mission to pool all humans into a single monetizable bucket — no matter if people don’t exactly mesh together because people aren’t actually bits of data. If you want to be that reductive make soup, not a “global community”.

So step one to fixing Facebook is simple: Break up Zuckerberg’s empire.

In practical terms that means forcing Facebook to sell Instagram and WhatsApp — at a bare minimum. A single network is necessarily less potent than a network of networks. And it becomes, at least theoretically, possible for Facebook to be at risk from competitive forces.

You would also need to keep a weather eye on social VR, in case Oculus needs to be taken out of Zuckerberg’s hands too. There’s less of an immediate imperative there, certainly. This VR cycle is still as dead as the tone of voice the Facebook founder used to describe the things his avatar was virtually taking in when he indulged in a bit of Puerto Rico disaster tourism for an Oculus product demo last year.

That said, there’s still a strong argument to say that Facebook, the dominant force of the social web and then the social mobile web, should not be allowed to shape and dictate even a nascent potential future disruptor in the same social technology sphere.

Not if you value diversity and creativity — and, well, a lot more besides.

But all these enforced sell-offs would just raise lots more money for Facebook! I hear you cry. That’s not necessarily a bad thing — so long as it gets, shall we say, well spent. The windfall could be used to fund a massive recruitment drive to properly resource Facebook’s business in every market where it operates.

And I do mean MASSIVE. Not the ‘10,000 extra security and moderation staff’ Facebook has said it will hire by the end of this year (raising the headcount it has working on these critical tasks to around 20k in total).

To be anywhere near capable of properly contextualizing content across a platform that’s actively used by 2BN+ humans — and therefore to be able to rapidly and effectively spot and quash malicious manipulation, hateful conduct and so on, and thus responsibly manage and sustain a genuine global ‘community’ — the company would likely need to add hundreds of thousands of content reviewers/moderators. Which would be very expensive indeed.

Yet Facebook paid a cool $19BN for WhatsApp back in 2014 — so an enforced sell-off of its other networks should raise a truck tonne of cash to help fund a vastly larger ‘trust and safety’ personnel bill. (While AI systems and technologies can help with the moderation challenge, Zuckerberg himself has admitted that AI alone won’t scale to the content challenge for “many years” to come — if indeed it can scale at all.)

Unfortunately there’s another problem though. The human labor involved in carrying out content moderation across Facebook’s 2BN+ user mega-platform is ethically horrifying because the people who Facebook contracts for ‘after the fact’ moderation necessarily live neck deep in its cesspit. Their sweating toil is to keep paddling the shit so Facebook’s sewers don’t back up entirely and flood the platform with it.

So, in a truly ideal ‘fixed Facebook’ scenario, there wouldn’t be a need for this kind of dehumanizing, industrialized content review system — which necessitates that eyes be averted and empathy disengaged from any considerations of a traumatized ‘clean up’ workforce.

Much like Thomas More’s Utopia, Zuckerberg’s mega-platform requires an unfortunate underclass of worker doing its dirty work. And just as the existence of slaves in Utopia made it evident that the ‘utopian vision’ being presented was not really all it seemed, the existence of Facebook’s outsourced teams of cheap labor — whose day job is to sit and watch videos of human beheadings, torture, violence etc; or make a microsecond stress-judgement on whether a piece of hate speech is truly hateful enough to be rendered incapable of monetization and pulled from the platform — undermines Zuckerberg’s claim that he’s “building global community”. The cost on both sides of that human experience is awful.

More coined the word ‘utopia’ from the Greek — and its two components suggest an intended translation of ‘no place’. Or perhaps, better yet, it was supposed to be a pun — as Margaret Atwood has suggested — meaning something along the lines of ‘the good place that simply doesn’t exist’. Which might be a good description for Zuckerberg’s “global community”.

So we’ll come back to that.

Because the next step in the plan should help cut the Facebook moderation challenge down to a more manageable size…

 

Step 2) Break up Facebook into lots of market specific Facebooks

Instead of there being just one Facebook (comprised of two core legal entities: Facebook USA and Facebook International, in Ireland), it’s time to break up Facebook’s business into hundreds of market specific Facebooks that can really start to serve their local communities. You could go further still and subdivide at a state, county or community level.

A global social network is an oxymoron. Humans are individuals and humanity is made up of all sorts of peoples, communities and groupings. So to suggest the whole of humanity needs to co-exist on the exact same platform, under the exact same overarching set of ‘community standards’, is — truly — the stuff of megalomaniacs.

To add insult to societal and cultural injury, Facebook — the company that claims it’s doing this (while ignoring the ‘awkward’ fact that what it’s building isn’t functioning equally everywhere, even in its own backyard) — has an executive team that’s almost exclusively white and male, and steeped in a very particular Valley ‘Kool Aid’ techno-utopian mindset that’s wrapped in the U.S. flag and bound to the U.S. constitution.

Which is another way of saying that’s the polar opposite of thinking global.

Facebook released its fifth annual diversity report this year, which revealed it has made little progress on increasing diversity over the past five years. In senior leadership roles, Facebook’s 2018 skew is 70:30 male to female, and a full 69.7% white. (The company as a whole was fully 77% male and 74% white in 2014.)

Facebook’s ongoing lack of diversity is not representative of the U.S. population, let alone reflective of the myriad regions its product reaches around the planet. So the idea that an executive team with such an inexorably narrow, U.S.-focused perspective could meaningfully — let alone helpfully — serve the whole of humanity is a nonsense. And the fact that Zuckerberg is still talking in those terms merely spotlights an abject lack of corporate diversity and global perspective at his company.

If he genuinely believes his own “global community” rhetoric he’s failing even harder than he looks. Most probably, though, it’s just a convenient marketing label to wallpaper the growth strategy that’s delivered for Facebook’s shareholders for years — by the company pushing into and dominating international markets.

Yet, and here’s the rub, it has done all this without making commensurate investments in resourcing its business in those international markets…

This facet of Facebook’s business becomes especially problematic when you consider how the company has been pouring money into subsidizing (or seeking to) Internet access in emerging markets. So it is spending lots and lots of money, just not on keeping people safe.

Initially, Facebook spent money to expand the reach of its platform via its Internet.org ‘Free Basics’ initiative, which was marketed as a ‘humanitarian’, quasi-philanthropic mission to ‘wire the world’ — though plenty of outsiders and some target countries viewed it not as charity but as a self-serving and competition-crushing business development tactic. (Including India — which blocked Free Basics, but not before Facebook had spent millions on ads trying to get locals to lobby the regulator on its behalf.)

More recently it’s been putting money into telecom infrastructure a bit less loudly — presumably hoping a less immediately self-serving approach to investing in infrastructure in target growth markets will avoid another highly politicized controversy.

It’s more wallpapering though: Connectivity investments are a business growth strategy predicated on Facebook removing connectivity barriers that stand in the way of Facebook onboarding more eyeballs.

And given the amounts of money Facebook has been willing to spend to try to lodge its product in the hands of more new Internet users — to the point where, in some markets, Facebook effectively is the Internet — it’s even less forgivable that the company has failed to properly resource its international operations and stop its products from having some truly tragic consequences.

The cost to humanity for Facebook failing to operate with due care is painfully visible and horribly difficult to quantify.

Not that Zuckerberg has let those inconvenient truths stop him from continuing to suggest he’s the man to build a community for the planet. But again that rather implies Facebook’s problems grow out of Facebook’s lack of external perspective.

Aside from the fact that we are all equally human, there is no one homogenous human community that spans the entire world. So when Zuckerberg talks about Facebook’s ‘global community’ he is, in effect, saying nothing — or saying something so nearly meaningless it renders down to platitudinous sludge. (At least unless his desire is indeed a Borg-esque absorption of other cultures — into a ‘resistance is futile’ homogenous ‘Californormification’ of the planet. And we must surely hope it’s not. Although Facebook’s Free Basics has been accused of amounting to digital colonialism.)

Zuckerberg does seem to have quasi-realized the contradiction lurking at the tin heart of his ‘global’ endeavor, though. Which is why he’s talked suggestively about creating a ‘Supreme Court of Facebook‘ — i.e. to try to reboot the pitifully unfit-for-purpose governance structure.

But talk of ‘community-oriented governance’ has neither been firmed up nor formalized into a tangible structural reform plan.

While the notion of a Supreme Court of Facebook, especially, does risk sounding worryingly like Zuckerberg fancies his own personal Star Chamber, the fact he’s even saying this sort of stuff shows he knows Facebook has planet-straddling problems that are far, far too big for its minimalist Libertarian ‘guardrails’ to manage or control. And in turn that suggests the event horizon of scaling Facebook’s business model has been reached.

Aka: Hello $120BN market cap blackhole.

“It’s just not clear to me that us sitting in an office here in California are best placed to always determine what the policies should be for people all around the world,” Zuckerberg said earlier THIS YEAR — 2018! — in what must surely count as one of the tardiest enlightenments of a well educated public person in the Western world, period.

“I’ve been working on and thinking through,” he continued his mental perambulation, “how can you set up a more democratic or community-oriented process that reflects the values of people around the world?”

Well, Mark, here’s an idea to factor into your thinking: Facebook’s problem is Facebook’s massive size.

So why not chop the platform up into market specific operations that are free to make some of their own decisions, and let them develop diverse corporate cultures of their own? Most importantly, empower them to be operationally sensitive to the needs of local communities — and so well placed to responsively serve them.

Imagine the Facebook brand as a sort of loose ‘franchise’, with each little Facebook at liberty to intelligently adapt the menu to local tastes. And each of these ‘content eateries’ taking pride in the interior of its real estate, with dedicated managers who make their presence felt and whose jobs are to ensure great facilities but no violent food fights.

There would also need to be some core principles, of course. A set of democratic and civic values that all the little Facebooks are bound to protect — to push back against attempts by states or concerted external forces seeking to maliciously hijack and derail speech.

But switch around the current reality — a hulkingly massive platform attached to a relatively tiny (in resources terms) business operation — and the slavering jabberwocky that Zuckerberg is now on a personal mission to slay might well cease to exist, as multiple messy human challenges get cut down to a more manageable size. Not every single content judgement call on Facebook needs to scale planet-wide.

Multiple, well resourced market-specific Facebooks staffed locally so they can pro-actively spot problems and manage their communities would not be the same business at all. Facebook would become an even more biodiverse ecosystem — of linked but tonally distinct communities — which could even, in time, diverge a bit on the feature front, via adding non-core extras, based on market specific appetites and tastes.

There would obviously have to be basic core social function interoperability — so that individual users of different Facebooks could still connect and communicate. But beyond a bit of interplay (a sort of ‘Facebook Basics’) why should there be a requirement that everyone’s Facebook experience be exactly the same?

While Facebook talks as if it has a single set of community standards, the reality is fuzzier. For example it applies stricter hate speech rules to content moderation in a market like Germany, which passed a social media hate speech law last year. Those sorts of exceptions aren’t going to go away either; as more lawmakers wake up to the challenges the platform poses, more demands will be made to regulate its content.

So, Zuckerberg, why not step actively into a process of embracing greater localization — in a way that’s sensitive to cultural and societal norms — and use the accrued political capital from that to invest in defending the platform’s core principles?

This approach won’t work in every market, clearly. But allowing for a greater tonality of content — a more risqué French Facebook, say, vs the ‘no-nipples please’ U.S. flavor — coupled with greater sensitivity to market mood and feedback could position Facebook to work with democracies and strengthen civic and cultural values, instead of trying to barge its way along by unilaterally imposing the U.S. constitution on the rest of the planet.

Facebook as it is now, globally scaled but under-resourced, is not in a position to enforce its own community standards. It only does so when or if it receives repeat complaints (and even then it won’t always act on them).

Or when a market has passed legislation enforcing action with a regime of fines (a recent report by a UK parliamentary committee, examining the democratic implications of social media fueled disinformation, notes that one in six of Facebook’s moderators now works in Germany — citing that as “practical evidence that legislation can work”).

So there are very visible cracks both in its claim to be “building global community” and in its claim to have community standards at all, given it doesn’t pro-actively enforce them (in most markets). So why not embrace a full fragmentation of its platform — and let a thousand little blue ships set sail!

And if Facebook really wants one community principle to set as its pole star, one rule to rule them all (and to vanquish its existential jabberwocky), it should swear to put life before data.

Locally tuned, culturally sensitive Facebooks that stand up for democratic values and civic standards could help rework the moderation challenge — removing the need for Facebook to have the equivalent of sweat shops based on outsourcing repeat human exposure to violent and toxic content.

This element is one of the ugliest sides of the social media platform business. But with empowered, smaller businesses operating in closer proximity to the communities being served, Facebook stands a better chance of getting on top of its content problems — getting out of the reactive crisis mode where it’s currently stuck, piled high with problems, and taking up a position in the community intelligence vanguard, where its workers can root out damaging abuse before it goes viral, metastasizes and wreaks wider societal harms.

Proper community management could also, over time, encourage a more positive sharing environment to develop — where posting hateful stuff doesn’t get rewarded with feedback loops. Certainly not algorithmically, as it indeed has been.

As an additional measure, a portion of the financial windfall gained from selling off Facebook’s other social networks could be passed directly to independent trustees appointed to the Chan Zuckerberg Foundation for spending on projects intended to counter the corrosive effects of social media on information veracity and authenticity — such as by funding school age educational programs in critical thinking.

Indeed, UK lawmakers have already called for a social media levy for a similar purpose.

 

Step 3) Open the black boxes

There would still be a Facebook board and a Facebook exec team in a head office in California sitting atop all these community-oriented Facebooks — which, while operationally liberated, would still be making use of its core technology and getting limited corporate steerage. So there would still be a need for regulators to understand what Facebook’s code is doing.

Algorithmic accountability of platform technologies is essential. Regulators need to be able to see the inputs underlying the information hierarchies that these AI engines generate, and compare those against the outputs of that shaping. Which means audits. So opening the commercial black boxes — and the data holdings — to regulatory oversight.

Discrimination is easier to get away with in darkness. But mega-platforms have shielded their commercial IP from public scrutiny, and it’s only when damaging effects have surfaced in the public consciousness that users have got a glimpse of what’s been going on.

Facebook’s defense has been to say it was naive in the face of malicious activity like Russian-backed election meddling. That’s hardly an argument for more obscurity and more darkness. If you lack awareness and perspective, ask for expert help, Mark.

Lawmakers have also accused the company of willfully obstructing good faith attempts at investigating scandals such as Cambridge Analytica data misuse, Kremlin-backed election interference, or how foreign money flowed into its platform seeking to influence the UK’s Brexit referendum result.

Willful obstruction of good faith, democratically minded political interrogation really isn’t a sustainable strategy. Nor an ethically defensible one.

Given the vast, society-deforming size of these platforms, politicians are simply not going to give up and go home. There will have to be standards to ensure these mega-powerful information distribution systems aren’t at risk of being gamed, biased or otherwise misused — and those standards will have to be enforced. And the enforcement must itself be able to be checked and verified. So, yes, more audits.

Mega-platforms have also benefited from self-sustaining feedback loops based on their vast reach and data holdings, allowing them to lock in and double down on a market dominating position by, for example, applying self-learning algorithms trained on their own user data or via A/B testing at vast, vast scale to optimize UX design to maximize engagement and monopolize attention.

User choice in this scenario is radically denuded, and competition increasingly gets pushed back and even locked out, without such easy access to equivalently massive pools of data.

If a mega-platform has optimized the phrasing and positioning of — for example — a consent button by running comparative tests to determine which combination yields the fewest opt-outs, is it fair or right to the user being asked to ‘choose’? Are people being treated with respect? Or, well, like lab rats?
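
To make the mechanics concrete, here’s a toy sketch in Python of the kind of comparative test being described — the variant names and numbers are entirely invented, and this is an illustration of the dynamic, not any platform’s actual code:

```python
# Toy A/B test: ship whichever consent-button variant yields the fewest
# opt-outs. All figures and variant names here are invented for illustration.
variants = {
    "A: decline link shown prominently": {"shown": 10_000, "opted_out": 3_100},
    "B: decline link buried in settings": {"shown": 10_000, "opted_out": 600},
}

# The 'winning' design is simply the one that minimizes the opt-out rate --
# which is exactly the fairness question raised above.
winner = min(variants, key=lambda v: variants[v]["opted_out"] / variants[v]["shown"])
print("variant shipped to everyone:", winner)
```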

Breaking Facebook’s platform into lots of Facebooks could also be an opportunity to rethink its data monopoly. To argue that its central business should not have an absolute right to the data pool generated by each smaller, market specific Facebook.

Part of the regulatory oversight could include a system of accountability over how Facebook’s parent business can and cannot use pooled data holdings.

If Facebook’s executive team had to make an ethics application to a relevant regulatory panel to request and justify access each time the parent business wanted to dip into the global data pool or tap data from a particular regional cluster of Facebooks, how might that change thought processes within the leadership team?

Facebook’s own (now former) CSO, Alex Stamos, identified problems baked into the current executive team’s ‘business as usual’ thinking — writing emphatically in an internal memo earlier this year: “We need to build a user experience that conveys honesty and respect, not one optimized to get people to click yes to giving us more access. We need to intentionally not collect data where possible, and to keep it only as long as we are using it to serve people… We need to be willing to pick sides when there are clear moral or humanitarian issues. And we need to be open, honest and transparent about challenges and what we are doing to fix them.”

It seems unlikely that an application to the relevant regulators asking for ‘Europe-wide data so we can A/B test user consent flows to get more Europeans to switch on facial recognition‘ would pass the ‘life before data’ community standard test.

And it’s well established that the fact of being watched, and knowing it’s happening, has the power to change behavior. After all, Facebook’s platform is a major testament to that.

So it may be more that it’s external guidance — rather than a new internal governance model — which Facebook sorely lacks. Some external watchers to watch its internal watchmen.

 

Step 4) Remove Zuckerberg from (his) office

Public companies are supposed to be answerable to their shareholders. Thanks to the share structure that Mark Zuckerberg put in place at Facebook, Mark Zuckerberg is answerable to no one except himself. And despite Facebook’s years of scandals, he does not appear to have ever felt the urge to sack himself.

When the idea of personal accountability was brought up with him, in a recent podcast interview with Kara Swisher, he had a moment of making a light joke of it — quipping “do you really want me to fire myself right now? For the news?” — before falling back on his line: “I think we should do what’s gonna be right for the community.”

And, you know what, the joke was exactly right: The idea that Zuckerberg would terminate his own position is both laughable and ludicrous. It is a joke.

Which means Facebook’s executive structure is also a joke because there is zero accountability at the highest level — beyond Mark’s personal threshold for shame or empathy — and that’s now a global problem.

Zuckerberg has more power than most of the world’s elected politicians (and arguably some of the world’s political leaders). Yet he can’t be kicked out of his office, nor lose his CEO seat at any ballot box. He’s a Facebook fixture — short of a literal criminal conviction or otherwise reputation terminating incident.

While you could argue that not being answerable to the mercenary whims of shareholder pressure is a good thing because it frees Zuckerberg to raise business transformation needs above returns-focused investor considerations (albeit, let’s see how his nerve holds after that $120BN investor punch) — his record in the CEO’s chair counters any suggestion that he’s a person who makes radical and sweeping changes to Facebook’s modus operandi. On the contrary, he’s shown himself a master of saying ‘oops we did it again!’ and then getting right back to screwing up as usual.

He’s also demonstrated a consistent disbelief that Facebook’s platform creates problems — preferring to couch connecting people as a glorious humanitarian mission from whence life-affirming marriages and children flow. Rather than seeing risks in putting global megaphones in the hands of anyone with an urge to shout.

As recently as November 2016 he was still dismissing the idea that political disinformation spread via Facebook had been in any way impactful on the US presidential election — as a “pretty crazy idea” — yet his own business had staffed divisions dedicated to working with US politicians to get their campaign messages out. It shouldn’t be rocket science to see a contradiction there. But until very recently Zuckerberg apparently couldn’t.

The fact of him also being the original founder of the business does not help in the push for disruptive change to Facebook itself. The best person to fix a radically broken product is unlikely to be the person whose entire adult life has been conjoined to a late night college dorm room idea, spat online — which then ended up spinning up and out into a fortune. And then into a major, major global mess.

The ‘no better person than me to fix it’ line can be countered by pointing to Zuckerberg’s personal history of playing fast and loose with other people’s data (from the “dumb fucks” comment all the way back in his student days to years of deliberate platform choices at Facebook that made people’s information public by default); and by suggesting entrenched challenges would surely benefit from fresh eyes, new thinking and a broader perspective.

Add to that, Zuckerberg has arguably boxed himself in, politically speaking, thanks to a series of disingenuous, misleading and abstruse claims and statements made to lawmakers — limiting his room for manoeuvre or for rethinking his approach; let alone being able to genuinely compromise or make honest platform changes.

His opportunity to be radically honest about Facebook’s problems probably passed years and years back — when he was busy working hard on his personal challenge to wear a tie every day [2009]. Or only eat animals he kills himself [2011].

By 2013’s personal challenge, it’s possible that Zuckerberg had sensed something new in the data stream that was maybe coming down the pipes at him — as he set himself the challenge of expanding his personal horizons (not that he put it that way) by “meeting a new person every day who does not work at Facebook”.

Meeting a new person every day who did work at Facebook would have been far too easy, see.

Is it even possible to think outside the box when your entire adult life has been spent tooling away inside the same one?

 

Step 5) Over to you… 

What are your radical solutions for fixing Facebook? Should Zuckerberg stay or should he go? What do you want lawmakers to do about social media? What kinds of policy interventions might set these mega-platforms on a less fractious path? Or do you believe all this trouble on social media is a storm in a teacup that will blow over if we but screw our courage to the sticking place and wait for everyone to catch up with the cardinal Internet truth that nothing online is what it seems…

Ideas in the comments pls…

Duo Security researchers’ Twitter ‘bot or not’ study unearths crypto botnet

A team of researchers at Duo Security has unearthed a sophisticated botnet operating on Twitter — and being used to spread a cryptocurrency scam.

The botnet was discovered during the course of a wider research project to create and publish a methodology for identifying Twitter account automation — to help support further research into bots and how they operate.

The team used Twitter’s API and some standard data enrichment techniques to create a large data set of 88 million public Twitter accounts, comprising more than half a billion tweets. (Although they say they focused on the last 200 tweets per account for the study.)

They then used classic machine learning methods to train a bot classifier, and later applied other tried and tested data science techniques to map and analyze the structure of botnets they’d uncovered.
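
For an illustration of what those ‘classic machine learning methods’ can look like, below is a minimal sketch in Python using scikit-learn. The feature set, figures and labels are all invented for the example — the Duo researchers derive their own features from account metadata and tweet history:

```python
# Minimal sketch of a 'bot or not' classifier. Features and figures are
# invented; a real pipeline would compute them from account metadata and
# each account's recent tweets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [followers, accounts_followed, tweets_per_day, avg_likes_per_tweet]
X = np.array([
    [12000,  300,   4.2, 18.0],  # plausibly human
    [15,    2900,  96.0,  0.1],  # plausibly a spam bot
    [540,    510,   7.5,  3.2],  # plausibly human
    [8,     4100, 120.0,  0.0],  # plausibly a spam bot
])
y = np.array([0, 1, 0, 1])  # labels from a curated set: 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Score a previously unseen account.
new_account = np.array([[42, 3800, 150.0, 0.0]])
print("bot" if clf.predict(new_account)[0] == 1 else "not")
```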

They’re open sourcing their documentation and data collection system in the hopes that other researchers will pick up the baton and run with it — such as, say, to do a follow up study focused on trying to ID good vs bad automation.

Their focus for their own classifier was on pure-play bots, rather than hybrid accounts which intentionally blend automation with some human interactions to make bots even harder to spot.

They also did not look at sentiment for this study — but were rather fixed on addressing the core question of whether a Twitter account is automated or not.

They say it’s likely a few ‘cyborg’ hybrids crept into their data-set, such as customer service Twitter accounts which operate with a mix of automation and staff attention. But, again, they weren’t concerned specifically with attempting to identify the (even more slippery) bot-human-agent hybrids — such as those, for example, involved in state-backed efforts to fence political disinformation.

The study led them into some interesting analysis of botnet architectures — and their paper includes a case study on the cryptocurrency scam botnet they unearthed (which they say was comprised of at least 15,000 bots “but likely much more”), and which attempts to syphon money from unsuspecting users via malicious “giveaway” links…

‘Attempts’ being the correct tense because, despite reporting the findings of their research to Twitter, they say this crypto scam botnet is still functioning on its platform — by imitating otherwise legitimate Twitter accounts, including news organizations (such as the below example), and on a much smaller scale, hijacking verified accounts…

They even found Twitter recommending users follow other spam bots in the botnet under the “Who to follow” section in the sidebar. Ouch.

A Twitter spokeswoman would not answer our specific questions about its own experience and understanding of bots and botnets on its platform, so it’s not clear why it hasn’t been able to totally vanquish this crypto botnet yet. Although in a statement responding to the research, the company suggests this sort of spammy automation may be automatically detected and hidden by its anti-spam countermeasures (which would not be reflected in the data the Duo researchers had access to via the Twitter API).

Twitter said:

We are aware of this form of manipulation and are proactively implementing a number of detections to prevent these types of accounts from engaging with others in a deceptive manner. Spam and certain forms of automation are against Twitter’s rules. In many cases, spammy content is hidden on Twitter on the basis of automated detections. When spammy content is hidden on Twitter from areas like search and conversations, that may not affect its availability via the API. This means certain types of spam may be visible via Twitter’s API even if it is not visible on Twitter itself. Less than 5% of Twitter accounts are spam-related.

Twitter’s spokeswoman also made the (obvious) point that not all bots and automation are bad — pointing to a recent company blog which reiterates this, with the company highlighting the “delightful and fun experiences” served up by certain bots such as Pentametron, for example, a veteran automated creation which finds rhyming pairs of Tweets written in (accidental) iambic pentameter.

Certainly no one in their right mind would complain about a bot that offers automated homage to Shakespeare’s preferred meter. Even as anyone in their right mind would complain about the ongoing scourge of cryptocurrency scams on Twitter…

One thing is crystal clear: The tricky business of answering the ‘bot or not’ question is important — and increasingly so, given the weaponization of online disinformation. It may become a quest so politicized and imperative that platforms end up needing to display a ‘bot score’ alongside every account (Twitter’s spokeswoman did not respond when we asked if it might consider doing this).

While there are existing research methodologies and techniques for trying to determine Twitter automation, the team at Duo Security say they often felt frustrated by a lack of supporting data around them — and that this was one of the impetuses for carrying out the research.

“In some cases there was an incomplete story,” says data scientist Olabode Anise. “Where they didn’t really show how they got their data that they said that they used. And they maybe started with the conclusion — or most of the research talked about the conclusion and we wanted to give people the ability to take on this research themselves. So that’s why we’re open sourcing all of our methods and the tools. So that people can start from point ‘A’: First gathering the data; training a model; and then finding bots on Twitter’s platform locally.”

“We didn’t do anything fancy or investigative techniques,” he adds. “We were really outlining how we could do this at scale, because we really think we’ve built one of the largest data sets associated with public Twitter accounts.”

Anise says their classifier model was trained on data that formed part of a 2016 piece of research by researchers at the University of Southern California, along with some data from the crypto botnet they uncovered during their own digging in the data set of public tweets they created (because, as he puts it, it’s “a hallmark of automation” — so it turns out cryptocurrency scams are good for something).

In terms of determining the classifier’s accuracy, Anise says the “hard part” is the ongoing lack of data on how many bots are on Twitter’s platform.

You’d imagine (or, well, hope) Twitter knows — or can at least estimate that. But, either way, Twitter isn’t making that data-point public. Which means it’s difficult for researchers to verify the accuracy of their ‘bot or not’ models against public tweet data. Instead they have to cross-check classifiers against (smaller) data sets of labeled bot accounts. Ergo, accurately determining accuracy is another (bot-spotting related) problem.

Anise says their best model was ~98% “in terms of identifying different types of accounts correctly” when measured via a cross-check (i.e. so not checking against the full 88M data set because, as he puts it, “we don’t have a foolproof way of knowing if these accounts are bots or not”).
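
For a sense of what that kind of cross-check can look like, here’s a minimal sketch with stand-in data: rather than scoring against the full 88M-account corpus (whose true labels are unknown), the classifier is cross-validated against a smaller labeled set. The synthetic data and 5-fold setup here are illustrative assumptions, not the team’s actual evaluation:

```python
# Sketch of cross-checking a classifier against a labeled set. The data here
# is synthetic; a real evaluation would use curated bot/human accounts.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in labeled data (replace with features of known bot/human accounts).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```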

Still, the team sounds confident that their approach — using what they dub as “practical data science techniques” — can bear fruit to create a classifier that’s effective at finding Twitter bots.

“Basically what we showed — and this was what we were really trying to get across — is that some simple machine learning approaches, which people who maybe watched a machine learning tutorial could follow, can help identify bots successfully,” he adds.

One more small wrinkle: the bots the model was trained on don’t represent every form of automation on Twitter’s platform. So he concedes that may also impact its accuracy. (Aka: “The model that you build is only going to be as good as the data that you have.” And, well, once again, the people with the best Twitter data all work at Twitter… )

The crypto botnet case study the team have included in their research paper is not just there for attracting attention: It’s intended to demonstrate how, using the tools and techniques they describe, other researchers can also progress from finding initial bots to pulling on threads, discovering and unraveling an entire botnet.

So they’ve put together a sort of ‘how to guide’ for Twitter botnet hunting.

The crypto botnet they analyze for the study, using social network mapping, is described in the paper as having a “unique three-tiered hierarchical structure”.

“Traditionally when Twitter botnets are found they typically follow a very flat structure where every bot in the botnet has the same job. They’re all going to spread a certain type of tweet or a certain type of spam. Usually you don’t see much co-ordination and segmentation in terms of the jobs that they have to do,” explains principal security engineer Jordan Wright.

“This botnet was unique because whenever we started mapping out the social connections between different bots — figuring out who did they follow and who follows them — we were able to enumerate a really clear structure showing bots that are connected in one particular way and an entire other cluster that were connected in a separate way.

“This is important because we see how the bot owners are changing their tactics in terms of how they were organizing these bots over time.”

They also discovered the spam tweets being published by the botnet were each being boosted by other bots in the botnet to amplify the overall spread of the cryptocurrency scam — Wright describes this as a process of “artificial inflation”, and says it works by the botnet owner making new bots whose sole job is to like or, later on, retweet the scammy tweets.

“The goal is to give them an artificial popularity so that if I’m the victim and I’m scrolling through Twitter and I come across these tweets I’m more likely to think that they’re legitimate based on how often they’ve been retweeted or how many times they’ve been liked,” he adds.

“Mapping out these connections between likes, as well as the social network we had already gathered, really gives us a multi-layered botnet — that’s pretty unique, pretty sophisticated and very much organized, where each bot had one, and really only one, job to do to try to help support the larger goal. That was unique to this botnet.”
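
To illustrate the kind of social-network mapping being described, here’s a small sketch using Python’s networkx library. The account names, edges and tiering are all invented for the example — in the real study the graph is built from crawled follower and like/retweet relationships:

```python
# Toy follow/amplification graph for a hypothetical tiered botnet.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    # Publishing bots follow 'hub' accounts...
    ("scam_bot_1", "hub_a"), ("scam_bot_2", "hub_a"), ("scam_bot_3", "hub_b"),
    # ...while amplification bots exist only to boost the publishing bots.
    ("amp_bot_1", "scam_bot_1"), ("amp_bot_2", "scam_bot_1"),
    ("amp_bot_3", "scam_bot_2"),
])

# Accounts followed/boosted by many others stand out as hubs in the hierarchy.
print(sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:3])

# Weakly connected components reveal separately organized clusters of bots.
for cluster in nx.weakly_connected_components(G):
    print(sorted(cluster))
```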

Twitter has been making a bunch of changes recently intended to crack down on inauthentic platform activity which spammers have exploited to try to lend more authenticity and authority to their scams.

Clearly, though, there’s more work for Twitter to do.

“There are very practical reasons why we would consider it sophisticated,” adds Wright of the crypto botnet the team have turned into a case study. “It’s ongoing, it’s evolving and it’s changed its structure over time. And the structure that it has is hierarchical and organized.”

Anise and Wright will be presenting their Twitter botnet research on Wednesday, August 8 at the Black Hat conference.