Social media should have “duty of care” towards kids, UK MPs urge

Social media platforms are being urged to be far more transparent about how their services operate and to make “anonymised high-level data” available to researchers so the technology’s effects on users — and especially on children and teens — can be better understood.

The calls have been made in a report by the UK parliament’s Science and Technology Committee, which has been looking into the impacts of social media and screen use among children — to consider whether such tech is “healthy or harmful”.

“Social media companies must also be far more open and transparent regarding how they operate and particularly how they moderate, review and prioritise content,” it writes.

Concerns have been growing about children’s use of social media and mobile technology for some years now, with plenty of anecdotal evidence and also some studies linking tech use to developmental problems, as well as distressing stories connecting depression and even suicide to social media use.

The committee writes that its dive into the topic was hindered by “the limited quantity and quality of academic evidence available”. But it also asserts: “The absence of good academic evidence is not, in itself, evidence that social media and screens have no effect on young people.”

“We found that the majority of published research did not provide a clear indication of causation, but instead indicated a possible correlation between social media/screens and a particular health effect,” it continues. “There was even less focus in published research on exactly who was at risk and if some groups were potentially more vulnerable than others when using screens and social media.”

The UK government expressed its intention to legislate in this area, announcing a plan last May to “make social media safer” — promising new online safety laws to tackle concerns.

The committee writes that it’s therefore surprised the government has not commissioned “any new, substantive research to help inform its proposals”, and suggests it get on and do so “as a matter of urgency” — with a focus on identifying people at risk of experiencing harm online and on social media; the reasons for the risk factors; and the longer-term consequences of children’s exposure to the tech.

It further suggests the government should consider what legislation is required to improve researchers’ access to this type of data, given that platforms have failed to provide enough access of their own accord.

The committee says it heard evidence of a variety of instances where social media could be “a force for good” but also received testimonies about some of the potential negative impacts of social media on the health and emotional wellbeing of children.

“These ranged from detrimental effects on sleep patterns and body image through to cyberbullying, grooming and ‘sexting’,” it notes. “Generally, social media was not the root cause of the risk but helped to facilitate it, while also providing the opportunity for a large degree of amplification. This was particularly apparent in the case of the abuse of children online, via social media.

“It is imperative that the government leads the way in ensuring that an effective partnership is in place, across civil society, technology companies, law enforcement agencies, the government and non-governmental organisations, aimed at ending child sexual exploitation (CSE) and abuse online.”

The committee suggests the government commission specific research to establish the scale and prevalence of online CSE — pushing it to set an “ambitious target” to halve reported online CSE in two years and “all but eliminate it in four”.

A duty of care

A further recommendation will likely send a shiver down tech giants’ spines, with the committee urging that a “duty of care” principle be enshrined in law to protect social media users under 18 from harm when on social media sites.

Such a duty would up the legal risk stakes considerably for user-generated content platforms which don’t bar children from accessing their services.

The committee suggests the government could achieve that by introducing a statutory code of practice for social media firms, via new primary legislation, to provide “consistency on content reporting practices and moderation mechanisms”.

It also recommends a requirement in law for social media companies to publish detailed Transparency Reports every six months.

It also calls for a 24-hour takedown law for illegal content, saying that platforms should have to review reports of potentially illegal content and take a decision on whether to remove, block or flag it — and relay the decision to the individual or organisation who reported it — within 24 hours.
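
To make the shape of such a law concrete, here is a minimal sketch of the flow it describes: report in, decision out, reporter notified, all inside 24 hours. The types, names and the 'decide' callback are invented for illustration; this is not the committee's text or any platform's real system.

```python
# A minimal, invented sketch of the proposed 24-hour review flow -- not any
# platform's real moderation system or the committee's draft legislation.
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum


class Decision(Enum):
    REMOVE = "remove"
    BLOCK = "block"
    FLAG = "flag"


@dataclass
class Report:
    content_id: str
    reporter: str
    received_at: datetime


REVIEW_WINDOW = timedelta(hours=24)


def handle_report(report: Report, decide) -> Decision:
    """Review a report, act on the content, and relay the decision back to
    whoever reported it, all before the 24-hour window closes."""
    decision = decide(report.content_id)  # human or automated review step
    if datetime.utcnow() - report.received_at > REVIEW_WINDOW:
        raise RuntimeError("24-hour window missed: non-compliant under the proposal")
    notify(report.reporter, decision)
    return decision


def notify(reporter: str, decision: Decision) -> None:
    # Stand-in for messaging the reporting individual or organisation.
    print(f"Informing {reporter}: decision was '{decision.value}'")
```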

Germany already legislated for such a law, back in 2017 — though in that case the focus is on speeding up hate speech takedowns.

In Germany social media platforms can be fined up to €50 million if they fail to comply with NetzDG, as the law is known by its truncated German name. (The EU executive has also been pushing platforms to remove terrorist-related material within an hour of a report, suggesting it too could legislate on this front if they fail to moderate content fast enough.)

The committee suggests the UK’s media and telecoms regulator, Ofcom, would be well placed to oversee how illegal content is handled under any new law.

It also recommends that social media companies use AI to identify and flag to users (or remove as appropriate) content that “may be fake” — pointing to the risk posed by new technologies such as “deep fake videos”.

More robust systems for age verification are also needed, in the committee’s view. It writes that these must go beyond “a simple ‘tick box’ or entering a date of birth”.

Looking beyond platforms, the committee presses the government to take steps to improve children’s digital literacy and resilience, suggesting PSHE (personal, social, health and economic) education should be made mandatory for primary and secondary school pupils — delivering “an age-appropriate understanding of, and resilience towards, the harms and benefits of the digital world”.

Teachers and parents should also not be overlooked, with the committee suggesting training and resources for teachers and awareness and engagement campaigns for parents.

Facebook cracks down on opioid dealers after years of neglect

Facebook’s role in the opioid crisis could become another scandal following yesterday’s release of harrowing new statistics from the Centers for Disease Control and Prevention. It estimated there were nearly 30,000 synthetic opioid overdose deaths in the US in 2017, up from roughly 20,000 the year before. When recreational drugs like Xanax and OxyContin are adulterated with the more powerful synthetic opioid fentanyl, the misdosage can prove fatal. Xanax, OxyContin and other painkillers are often bought online, with dealers promoting themselves on social media, including Facebook.

Hours after the new stats were reported by the New York Times and others, a source spotted that Facebook’s internal search engine stopped returning posts, Pages, and Groups for searches of “OxyContin”, “Xanax”, “Fentanyl”, and other opioids, as well as other drugs like “LSD”. Only videos, often news reports deploring opiate abuse, and user profiles whose names match the searches are now returned. This makes it significantly harder for potential buyers or addicts to connect with dealers through Facebook.

However, some dealers have taken to putting drug titles into their Facebook profile names, allowing accounts like “Fentanyl Kingpin Kilo” to continue showing up in search results. It’s not exactly clear when the search changes occurred.

On some search result pages for queries like “Buy Xanax”, Facebook is now showing a “Can we help?” box that says “If you or someone you know struggles with opioid misuse, we would like to help you find ways to get free and confidential treatment referrals, as well as information about substance use, prevention and recovery.” A “Get support” button opens the site of the Substance Abuse and Mental Health Services Administration, a branch of the U.S. Department of Health and Human Services that provides addiction resources. Facebook had promised back in June that this feature was coming.

Facebook search results for many drug names now only surface people and video news reports, and no longer show posts, Pages, or Groups which often offered access to dealers

When asked, Facebook confirmed that it’s recently made it harder to find content that facilitates the sale of opioids on the social network. Facebook tells me it’s constantly updating its approach to thwart bad actors who look for new ways to bypass its safeguards. The company confirms it’s now removing content violating its drug policies, and that it’s blocked hundreds of terms associated with drug sales from showing results other than links to news about drug abuse awareness. It’s also removed thousands of terms from being suggested as searches in its typeahead.
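
As a rough illustration of how that sort of term blocking can work, here is a sketch; the term list, result types and function names are invented for the purpose, not Facebook's actual code.

```python
# Purely illustrative sketch of blocklist-based search filtering -- the
# term list and result types below are hypothetical, not Facebook's system.
BLOCKED_TERMS = {"oxycontin", "xanax", "fentanyl", "lsd"}  # invented entries


def filter_search_results(query: str, results: list[dict]) -> list[dict]:
    """For blocked queries, return only news videos and user profiles,
    dropping the posts, Pages and Groups where dealers could be found."""
    if query.lower().strip() in BLOCKED_TERMS:
        return [r for r in results if r["type"] in {"video", "profile"}]
    return results


def typeahead_suggestions(prefix: str, candidates: list[str]) -> list[str]:
    """Never propose a blocked term as the user types."""
    prefix = prefix.lower()
    return [c for c in candidates
            if c.lower().startswith(prefix) and c.lower() not in BLOCKED_TERMS]
```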

Prior to recent changes, buyers could easily search for drugs and find posts from dealers with phone numbers to contact

Regarding the “Can we help?” box, Facebook tells me this resource will be available on Instagram in the coming weeks, and it provided this statement:

“We recently launched the “Get Help Feature” in our Facebook search function that directs people looking for help or attempting to purchase illegal substances to the SAMHSA national helpline. When people search for help with opioid misuse or attempt to buy opioids, they will be prompted with content at the top of the search results page that will ask them if they would like help finding free and confidential treatment referrals. This will then direct them to the SAMHSA National Helpline. We’ve partnered with the Substance Abuse & Mental Health Services Administration to identify these search terms and will continue to review and update to ensure we are showing this information at the most relevant times.”

Facebook’s new drug abuse resource feature

The new actions follow Facebook shutting down some hashtags like “#Fentanyl” on Instagram back in April that could let buyers connect with dealers. That only came after activists like Glassbreakers’ Eileen Carey aggressively criticized the company, demanding change. In some cases, when users would report posts from Facebook Groups or Pages as violating its policy prohibiting the sale of regulated goods like drugs, the posts would be removed but Facebook would leave up the Pages. This mirrors some of the problems it’s had with Infowars around determining the threshold of posts inciting violence or harassing other users necessary to trigger a Page or profile suspension or deletion.

Facebook in some cases deleted posts selling drugs but not the Pages or Groups carrying them

Before all these changes, users could find tons of vendors illegally selling opioids through posts, photos, and Pages on Facebook and Instagram. Facebook also introduced a new ads policy last week requiring addiction treatment centers that want to market to potential patients be certified first to ensure they’re not actually dealers preying on addicts.

Much of the recent criticism facing Facebook has focused on it failing to prevent election interference, privacy scandals, and the spread of fake news, plus how hours of browsing its feeds can impact well-being. But its negligence regarding illegal opioid sales has likely contributed to some of the 72,000 drug overdose deaths in America last year. It serves as another example of how Facebook’s fixation on the positive benefits of social networking blinded it to the harsh realities of how its service can be misused.

Last year, Facebook CEO Mark Zuckerberg said that learning of the depths of the opioid crisis was the “biggest surprise” from his listening tour visiting states across the U.S., and that it was “really saddening to see.” The fact that he called this a “surprise” when some of the drugs causing the crisis were changing hands via his website is something Facebook hasn’t fully atoned for, nor done enough to stop. The new changes should be the start of a long road to recovery for Facebook itself.

Facebook now deletes posts that financially endanger/trick people

It’s not just inciting violence, threats and hate speech that will get Facebook to remove posts by you or your least favorite troll. Endangering someone financially, not just physically, or tricking them to earn a profit are now also strictly prohibited.

Facebook today spelled out its policy with more clarity in hopes of establishing a transparent set of rules it can point to when it enforces its policy in the future. That comes after cloudy rules led to waffling decisions and backlash as it dealt with and finally removed four Pages associated with Infowars conspiracy theorist Alex Jones.

The company started by repeatedly stressing that it is not a government — likely to indicate that it is not bound by the First Amendment, whose free speech protections constrain the government rather than private companies.

“We do not, for example, allow content that could physically or financially endanger people, that intimidates people through hateful language, or that aims to profit by tricking people using Facebook,” its VP of policy Richard Allan wrote today.

Web searches show this is the first time Facebook has used that language regarding financial attacks. We’ve reached out for comment about exactly how new Facebook considers this policy.

This is important because it means Facebook’s policy encompasses threats of ruining someone’s credit, calls for people to burglarize their homes, or attempts to block them from employment. While not physical threats, these can do real-world damage to victims.

Similarly, the position against trickery for profit gives Facebook wide latitude to fight spammers, scammers and shady businesses making false claims about products. The question will be how Facebook enforces this rule. Some would say most advertisements are designed to trick people in order for a business to earn a profit. Facebook is more likely to shut down obvious grifts where businesses make impossible assertions about how their products can help people, rather than just exaggerations about their quality or value.

The added clarity offered today highlights the breadth and particularity with which other platforms, notably the wishy-washy Twitter, should lay out their rules about content moderation. While there have long been fears that transparency will allow bad actors to game the system by toeing the line without going over it, the importance of social platforms to democracy necessitates that they operate with guidelines out in the open to deflect calls of biased enforcement.

Snapchat monitors Infowars as Alex Jones promotes “Censorship” gag AR filter

Snapchat has largely escaped scrutiny over fake news and election interference, since its content quickly disappears and its publisher hub Discover is a closed platform. But now the Infowars mess that’s plagued Facebook and YouTube has landed at Snap’s feet, as conspiracy theorist Alex Jones has begun tweeting to promote an augmented reality Snapchat Lens, built by someone in his community, that places a strip of masking tape with the word “censorship” written on it across the user’s mouth, with a “Free Infowars” logo in the corner of the screen. He’s also encouraging his followers to follow Infowars’ official Snapchat page.

The situation highlights the whack-a-mole game of trying to police the fragmented social media space. There always seems to be another platform for those kicked off others for inciting violence, harassing people, or otherwise breaking the rules. A cross-industry committee that helps coordinate enforcement might be necessary to ensure that as someone is booted from one platform, their presences elsewhere are swiftly reviewed and monitored for similar offenses.

“If they can shut me down, they can shut you down,” Jones says at the start of his 42-second video. He cites Facebook, Twitter and Google among those that are getting mobilised by “the Democrats” in aid of defeating opposing candidates in future elections.

(In actual fact, Twitter and related sites like Periscope have, to the consternation of many, not removed Jones’ or Infowars’ accounts from their platforms — and for that matter neither have LinkedIn, Google+, or Instagram. Others like Pinterest and Facebook itself have now gotten behind a wider move to start taking action against accounts like these, to reduce the amount of sensationalised information being spread around in the name of “free speech.” You can see the full list of Infowars’ and Alex Jones’ active and now inactive social accounts here.)

Jones himself doesn’t seem to have a Snapchat account, but Infowars’ website cites the ‘Infowarslive’ handle as its official Snapchat profile, and it’s what Jones is now pointing fans towards. However, from what we understand from sources, the account has been inactive since early this year. Snap, according to these sources, is currently monitoring it to see what it does and whether that content violates community guidelines, which prohibit hate speech and harassment.

In the meantime, say the sources, Snap is also looking into the Lens that Jones is promoting to determine whether it violates Snap’s community guidelines. These guidelines prohibit content that may incite or glorify violence or the use of weapons; that may be considered offensive by a particular group of individuals, or that could foster negative stereotypes, such as slurs or other derogatory language; that promotes dangerous, harmful, or illegal activity, or that encourages Snapping while driving; that contains hashtags or usernames; or that threatens to harm a person, group of people, or property.

The interesting thing with a Lens, however, is that while the Lens itself may be innocuous, how it gets appropriated might be less so, and that’s not something that might get caught as quickly by Snap. Users can unlock the Lens for 24 hours with a link or screenshot of its QR Snapcode. From there they can do whatever they want with it, including reactivating it each day for further use. Lenses are one of the least ephemeral parts of Snapchat, making them a potent vector for persistently spreading a controversial viewpoint, and indeed viewpoints that might well violate those community standards, even if the Lens itself does not.

The insight that’s emerging from the whole Infowars debacle is that problems exist not only with how public figures use social platforms, but with how their audiences interpret or mutate their messages as they get shared, again and again.

Snap itself — as its earnings showed us yesterday — is still a smaller platform compared to some social networks. That’s another reason it may have avoided becoming a tool for information operations by malicious actors like the Russian agents that attacked the 2016 presidential election via Facebook.

But Snapchat is in a vulnerable place right now. Yesterday’s Q2 earnings report revealed that its daily active user count actually shrank from 191 million to 188 million. If it took a hard stance against fake or controversial accounts, either blocking or driving away users, that could further weigh on its growth. Snap is meanwhile starting to see momentum in its ad business, beating expectations with $262.3 million in revenue last quarter. That’s a trend it doesn’t want to mess with.

Now that Jones can’t spread his false news on Facebook and YouTube, he may look increasingly to platforms like Snapchat or his mobile app, which Apple hasn’t removed. And if these platforms allow him to stay, that may light a beacon attracting more questionable content creators.

Is it time to remove Zuckerberg from (his) office?

A colleague, who shall remain nameless (because privacy is not dead), gave a thumbs down to a recent column in the NYT. The complaint was that the writer had attacked tech companies (mostly but not exclusively Facebook) without offering any solutions for these all-powerful techbro CEOs’ orchestral failures to grasp the messy complexities of humanity at a worldwide scale.

Challenge accepted.

Here’s the thought experiment: Fixing Facebook 

We’ll start with Facebook because, while it’s by no means the only tech company whose platform contains a bottomless cesspit of problems, it is the most used social platform in the West; the de facto global monopoly outside China.

And, well, even Zuckerberg thinks it needs fixing. Or at least that its PR needs fixing — given he made “Fixing Facebook” his ‘personal challenge’ for this year — proof, if any more were needed, of his incredible capacity for sounding tone-deaf.

For a little more context on these annual personal challenges, Zuckerberg once previously set himself the challenge of reading a new book every two weeks. So it seems fair to ask: Is Facebook a 26-book sized fix?

If we’re talking in book metaphor terms, the challenge of fixing Facebook seems at least on the scale of the Library of Alexandria, say, given the volume of human content being daily fenced. It may, more likely, be multiple libraries of Alexandria. If Facebook’s content were housed in a physical library, the company would require considerably more real estate than the largest library of the ancient world to house its staggeringly-massive-and-expanding-by-the-second human content collection — which also of course forms the foundation of its business.

Zuckerberg himself has implied that his 2018 challenge — to fix the company he founded years before the iPhone arrived to supercharge the smartphone revolution and, down that line, mobilize Facebook’s societal ‘revolution’ — is his toughest yet, and likely to take at least two or three years before it bears fruit, not just the one. So Facebook’s founder is already managing our expectations and he’s barely even started.

In all likelihood, if Facebook were left alone to keep standing ethically aloof, shaping and distributing information at vast scale while simultaneously denying that it’s editing — to enjoy another decade of unforgivably bad judgement calls (so, basically, to ‘self-regulate’; or, as the New York Times put it, for Zuckerberg to be educated at societal expense) — then his 2018 personal challenge would become just ‘Chapter One, Volume One’ in a neverending life’s ‘work-in-progress’.

Great for Mark, far less great for humans and democratic societies all over the world.

Frankly, there has to be a better way. So here’s an alternative plan for fixing Facebook — or at least a few big ideas to get policymakers’ juices flowing… Bear in mind this is a thought exercise so we make no suggestions for how to enact the plan — we’re just throwing ideas out there to get folks thinking.

 

Step 1) Goodbye network of networks

Facebook has been allowed to acquire several other social communication networks — most notably photo-focused social network Instagram [1BN monthly active users] and messaging app platform WhatsApp [1.5BN] — so Zuckerberg has not just ONE massively popular social network (Facebook: [2.2BN]) but a saccharine suite of eyeball-harvesting machines.

Last month he revealed his sunless empire casts its shadow across a full 2.5BN individuals if you factor in all his apps — albeit, that was an attempt to distract investors from the stock price car crash conference call that was to follow. But the staggering size of the empire is undeniable.

So the first part of fixing Facebook is really simple: No dominant social network should be allowed to possess (or continue to possess) multiple dominant social networks.

There’s literally no good argument for why this is good for anyone other than (in Facebook’s case) Zuckerberg and Zuckerberg’s shareholders. Which is zero reason not to do something that’s net good for the rest of humanity. On one level it’s just basic math.

Setting aside (for just a second) the tangible damages inflicted upon humans by unregulated social media platforms with zero editorial values and a threadbare minimum of morality which wafts like gauze in the slipstream of supercharged and continuously re-engineered growth and engagement engines that DO NOT FACTOR HUMAN COST into their algorithmic calculations — allowing their masters to preside over suprasocietal revenue stripping mega-platforms — which, to be clear, is our primary concern here — the damage to competition and innovation alone from Zuckerberg owning multiple social networks is both visible and quantifiable.

Just ask Snapchat. Because, well, you can’t ask the social networks that don’t exist because Zuckerberg commands a full flush of attention-harvesting networks. So take a good, long, hard look at all those Stories clones he’s copypasted right across his social network of social networks. Not very innovative is it?

And even if you don’t think mega-platforms cause harm by eroding civic and democratic values (against, well, plenty of evidence to the contrary), if you value creativity, competition and consumer choice it’s equally a no brainer to tend your markets in a way that allows multiple distinct networks to thrive, rather than let one megacorp get so powerful it’s essentially metastasized into a Borg-like entity capable of enslaving and/or destroying any challenger, idea or even value in its path. (And doing all that at the same time as monopolizing its users’ attention.)

We see this too in how Facebook applies its technology in a way that seeks to reshape laws in its business model’s favor. Because while individuals break laws, massively powerful megacorps merely lean their bulk to squash them into a more pleasing shape.

Facebook is not just spending big on lobbying lawmakers (and it sure is doing that), it’s using technology and the brute force of its platform to pound on and roll over the rule of law by deforming foundational tenets of society. Privacy being just one of them.

And it’s not doing this reshaping for the good of humanity. Oh no. While democratic societies have rules to protect the vulnerable and foster competition and choice because they are based on recognizing value in human life, Facebook’s motives are 100% self-interested and profit-driven.

The company wants to rewrite rules globally to further expand its bottom line. Hence its mission to pool all humans into a single monetizable bucket — no matter if people don’t exactly mesh together because people aren’t actually bits of data. If you want to be that reductive make soup, not a “global community”.

So step one to fixing Facebook is simple: Break up Zuckerberg’s empire.

In practical terms that means forcing Facebook to sell Instagram and WhatsApp — at a bare minimum. A single network is necessarily less potent than a network of networks. And it becomes, at least theoretically, possible for Facebook to be at risk from competitive forces.

You would also need to keep a weather eye on social VR, in case Oculus needs to be taken out of Zuckerberg’s hands too. There’s less of an immediate imperative there, certainly. This VR cycle is still as dead as the tone of voice the Facebook founder used to describe the things his avatar was virtually taking in when he indulged in a bit of Puerto Rico disaster tourism for an Oculus product demo last year.

That said, there’s still a strong argument to say that Facebook, the dominant force of the social web and then the social mobile web, should not be allowed to shape and dictate even a nascent potential future disruptor in the same social technology sphere.

Not if you value diversity and creativity — and, well, a lot more besides.

But all these enforced sell-offs would just raise lots more money for Facebook! I hear you cry. That’s not necessarily a bad thing — so long as it gets, shall we say, well spent. The windfall could be used to fund a massive recruitment drive to properly resource Facebook’s business in every market where it operates.

And I do mean MASSIVE. Not the ‘10,000 extra security and moderation staff’ Facebook has said it will hire by the end of this year (raising the headcount it has working on these critical tasks to around 20k in total).

To be anywhere near capable of properly contextualizing content across a platform that’s actively used by 2BN+ humans — and therefore to be able to rapidly and effectively spot and quash malicious manipulation, hateful conduct and so on, and thus responsibly manage and sustain a genuine global ‘community’ — the company would likely need to add hundreds of thousands of content reviewers/moderators. Which would be very expensive indeed.

Yet Facebook paid a cool $19BN for WhatsApp back in 2014 — so an enforced sell-off of its other networks should raise a truck tonne of cash to help fund a vastly larger ‘trust and safety’ personnel bill. (While AI systems and technologies can help with the moderation challenge, Zuckerberg himself has admitted that AI alone won’t scale to the content challenge for “many years” to come — if indeed it can scale at all.)

Unfortunately there’s another problem though. The human labor involved in carrying out content moderation across Facebook’s 2BN+ user mega-platform is ethically horrifying because the people who Facebook contracts for ‘after the fact’ moderation necessarily live neck deep in its cesspit. Their sweating toil is to keep paddling the shit so Facebook’s sewers don’t back up entirely and flood the platform with it.

So, in a truly ideal ‘fixed Facebook’ scenario, there wouldn’t be a need for this kind of dehumanizing, industrialized content review system — which necessitates that eyes be averted and empathy disengaged from any considerations of a traumatized ‘clean up’ workforce.

Much like Thomas More’s Utopia, Zuckerberg’s mega-platform requires an unfortunate underclass of worker doing its dirty work. And just as the existence of slaves in Utopia made it evident that the ‘utopian vision’ being presented was not really all it seemed, so the awful cost on both sides of the human experience of Facebook’s outsourced teams of cheap labor — whose day job is to sit and watch videos of human beheadings, torture, violence etc; or make a microsecond stress-judgement on whether a piece of hate speech is truly hateful enough to be rendered incapable of monetization and pulled from the platform — undermines Zuckerberg’s claim that he’s “building global community”.

More coined the word ‘utopia’ from the Greek — and its two components suggest an intended translation of ‘no place’. Or perhaps, better yet, it was supposed to be a pun — as Margaret Atwood has suggested — meaning something along the lines of ‘the good place that simply doesn’t exist’. Which might be a good description for Zuckerberg’s “global community”.

So we’ll come back to that.

Because the next step in the plan should help cut the Facebook moderation challenge down to a more manageable size…

 

Step 2) Break up Facebook into lots of market-specific Facebooks

Instead of there being just one Facebook (comprised of two core legal entities: Facebook USA and Facebook International, in Ireland), it’s time to break up Facebook’s business into hundreds of market-specific Facebooks that can really start to serve their local communities. You could go further still and subdivide at a state, county or community level.

A global social network is an oxymoron. Humans are individuals and humanity is made up of all sorts of peoples, communities and groupings. So to suggest the whole of humanity needs to co-exist on the exact same platform, under the exact same overarching set of ‘community standards’, is — truly — the stuff of megalomaniacs.

To add insult to societal and cultural injury, Facebook — the company that claims it’s doing this (while ignoring the ‘awkward’ fact that what it’s building isn’t functioning equally everywhere, even in its own backyard) — has an executive team that’s almost exclusively white and male, and steeped in a very particular Valley ‘Kool Aid’ techno-utopian mindset that’s wrapped in the U.S. flag and bound to the U.S. constitution.

Which is another way of saying that’s the polar opposite of thinking global.

Facebook released its fifth annual diversity report this year, which revealed it has made little progress in increasing diversity over the past five years. In senior leadership roles, Facebook’s 2018 skew is 70:30 male to female, and a full 69.7% white. In 2014, the company was fully 77% male and 74% white.

Facebook’s ongoing lack of diversity is not representative of the U.S. population, let alone reflective of the myriad regions its product reaches around the planet. So the idea that an executive team with such an inexorably narrow, U.S.-focused perspective could meaningfully — let alone helpfully — serve the whole of humanity is a nonsense. And the fact that Zuckerberg is still talking in those terms merely spotlights an abject lack of corporate diversity and global perspective at his company.

If he genuinely believes his own “global community” rhetoric he’s failing even harder than he looks. Most probably, though, it’s just a convenient marketing label to wallpaper the growth strategy that’s delivered for Facebook’s shareholders for years — by the company pushing into and dominating international markets.

Yet, and here’s the rub, it has done all this without making commensurate investments in resourcing its business in international markets…

This facet of Facebook’s business becomes especially problematic when you consider how the company has been pouring money into subsidizing (or seeking to subsidize) Internet access in emerging markets. So it is spending lots and lots of money, just not on keeping people safe.

Initially, Facebook spent money to expand the reach of its platform via its Internet.org ‘Free Basics’ initiative, which was marketed as a ‘humanitarian’, quasi-philanthropic mission to ‘wire the world’ — though plenty of outsiders and some target countries viewed it not as charity but as a self-serving and competition-crushing business development tactic. (Including India — which blocked Free Basics, but not before Facebook had spent millions on ads trying to get locals to lobby the regulator on its behalf.)

More recently it’s been putting money into telecom infrastructure a bit less loudly — presumably hoping a less immediately self-serving approach to investing in infrastructure in target growth markets will avoid another highly politicized controversy.

It’s more wallpapering though: Connectivity investments are a business growth strategy predicated on Facebook removing connectivity barriers that stand in the way of Facebook onboarding more eyeballs.

And given the amounts of money Facebook has been willing to spend to try to lodge its product in the hands of more new Internet users — to the point where, in some markets, Facebook effectively is the Internet — it’s even less forgivable that the company has failed to properly resource its international operations and stop its products from having some truly tragic consequences.

The cost to humanity for Facebook failing to operate with due care is painfully visible and horribly difficult to quantify.

Not that Zuckerberg has let those inconvenient truths stop him from continuing to suggest he’s the man to build a community for the planet. But again that rather implies Facebook’s problems grow out of Facebook’s lack of external perspective.

Aside from the fact that we are all equally human, there is no one homogenous human community that spans the entire world. So when Zuckerberg talks about Facebook’s ‘global community’ he is, in effect, saying nothing — or saying something so close to meaningless as to render down to a platitudinous sludge. (At least unless his desire is indeed a Borg-esque absorption of other cultures — a ‘resistance is futile’ homogenous ‘Californormification’ of the planet. And we must surely hope it’s not. Although Facebook’s Free Basics has been accused of amounting to digital colonialism.)

Zuckerberg does seem to have quasi-realized the contradiction lurking at the tin heart of his ‘global’ endeavor, though. Which is why he’s talked suggestively about creating a ‘Supreme Court of Facebook‘ — i.e. to try to reboot the pitifully unfit-for-purpose governance structure.

But talk of ‘community-oriented governance’ has neither been firmed up nor formalized into a tangible structural reform plan.

While the notion of a Supreme Court of Facebook, especially, does risk sounding worryingly like Zuckerberg fancies his own personal Star Chamber, the fact he’s even saying this sort of stuff shows he knows Facebook has planet-straddling problems that are far, far too big for its minimalist Libertarian ‘guardrails’ to manage or control. And in turn that suggests the event horizon of scaling Facebook’s business model has been reached.

Aka: Hello $120BN market cap blackhole.

“It’s just not clear to me that us sitting in an office here in California are best placed to always determine what the policies should be for people all around the world,” Zuckerberg said earlier THIS YEAR — 2018! — in what must surely count as one of the tardiest enlightenments of a well educated public person in the Western world, period.

“I’ve been working on and thinking through,” he continued his mental perambulation, “how can you set up a more democratic or community-oriented process that reflects the values of people around the world?”

Well, Mark, here’s an idea to factor into your thinking: Facebook’s problem is Facebook’s massive size.

So why not chop the platform up into market-specific operations that are free to make some of their own decisions, and let them develop diverse corporate cultures of their own? Most importantly, empower them to be operationally sensitive to the needs of local communities — and so well placed to responsively serve them.

Imagine the Facebook brand as a sort of loose ‘franchise’, with each little Facebook at liberty to intelligently adapt the menu to local tastes. And each of these ‘content eateries’ taking pride in the interior of its real estate, with dedicated managers who make their presence felt and whose jobs are to ensure great facilities but no violent food fights.

There would need to be some core principles too, of course. A set of democratic and civic values that all the little Facebooks are bound to protect — to push back against attempts by states or concerted external forces seeking to maliciously hijack and derail speech.

But switch around the current reality — a hulkingly massive platform attached to a relatively tiny (in resources terms) business operation — and the slavering jabberwocky that Zuckerberg is now on a personal mission to slay might well cease to exist, as multiple messy human challenges get cut down to a more manageable size. Not every single content judgement call on Facebook needs to scale planet-wide.

Multiple, well-resourced, market-specific Facebooks staffed locally so they can pro-actively spot problems and manage their communities would not be the same business at all. Facebook would become a more biodiverse ecosystem — of linked but tonally distinct communities — which could even, in time, diverge a bit on the feature front, via adding non-core extras, based on market-specific appetites and tastes.

There would obviously have to be basic core social function interoperability — so that individual users of different Facebooks could still connect and communicate. But beyond a bit of interplay (a sort of ‘Facebook Basics’) why should there be a requirement that everyone’s Facebook experience be exactly the same?

While Facebook talks as if it has a single set of community standards, the reality is fuzzier. For example it applies stricter hate speech rules to content moderation in a market like Germany, which passed a social media hate speech law last year. Those sorts of exceptions aren’t going to go away either; as more lawmakers wake up to the challenges posed by the platform more demands will be made to regulate the content on the platform.

So, Zuckerberg, why not step actively into a process of embracing greater localization — in a way that’s sensitive to cultural and societal norms — and use the accrued political capital from that to invest in defending the platform’s core principles?

This approach won’t work in every market, clearly. But allowing for a greater tonality of content — a more risqué French Facebook, say, vs the ‘no-nipples please’ U.S. flavor — coupled with greater sensitivity to market mood and feedback could position Facebook to work with democracies and strengthen civic and cultural values, instead of trying to barge its way along by unilaterally imposing the U.S. constitution on the rest of the planet.

Facebook as it is now, globally scaled but under-resourced, is not in a position to enforce its own community standards. It only does so when or if it receives repeat complaints (and even then it won’t always act on them).

Or when a market has passed legislation enforcing action with a regime of fines (a recent report by a UK parliamentary committee, examining the democratic implications of social media fueled disinformation, notes that one in six of Facebook’s moderators now works in Germany — citing that as “practical evidence that legislation can work”).

So there are very visible cracks in both its claim to be “building global community” and its claim to have community standards at all, given it doesn’t pro-actively enforce them (in most markets). So why not embrace a full fragmentation of its platform — and let a thousand little blue ships set sail!

And if Facebook really wants one community principle to set as its pole star, one rule to rule them all (and to vanquish its existential jabberwocky), it should swear to put life before data.

Locally tuned, culturally sensitive Facebooks that stand up for democratic values and civic standards could help rework the moderation challenge — removing the need for Facebook to have the equivalent of sweat shops based on outsourcing repeat human exposure to violent and toxic content.

This element is one of the ugliest sides of the social media platform business. But with empowered, smaller businesses operating in closer proximity to the communities being served, Facebook stands a better chance of getting on top of its content problems — getting out of the reactive crisis mode where it’s currently stuck, piled high with problems, and taking up a position in the community intelligence vanguard, where its workers can root out damaging abuse before it gets to go viral, metastasize and wreak wider societal harms.

Proper community management could also, over time, encourage a more positive sharing environment to develop — where posting hateful stuff doesn’t get rewarded with feedback loops. Certainly not algorithmically, as it indeed has been.

As an additional measure, a portion of the financial windfall gained from selling off Facebook’s other social networks could be passed directly to independent trustees appointed to the Chan Zuckerberg Foundation for spending on projects intended to counter the corrosive effects of social media on information veracity and authenticity — such as by funding school age educational programs in critical thinking.

Indeed, UK lawmakers have already called for a social media levy for a similar purpose.

 

Step 3) Open the black boxes

There would still be a Facebook board and a Facebook exec team in a head office in California sitting atop all these community-oriented Facebooks — which, while operationally liberated, would still be making use of its core technology and getting limited corporate steerage. So there would still be a need for regulators to understand what Facebook’s code is doing.

Algorithmic accountability of platform technologies is essential. Regulators need to be able to see the inputs underlying the information hierarchies that these AI engines generate, and compare those against the outputs of that shaping. Which means audits. So opening the commercial black boxes — and the data holdings — to regulatory oversight.

Discrimination is easier to get away with in darkness. But mega-platforms have shielded their commercial IP from public scrutiny, and it’s only when damaging effects have surfaced in the public consciousness that users have got a glimpse of what’s been going on.

Facebook’s defense has been to say it was naive in the face of malicious activity like Russian-backed election meddling. That’s hardly an argument for more obscurity and more darkness. If you lack awareness and perspective, ask for expert help, Mark.

Lawmakers have also accused the company of willfully obstructing good faith attempts at investigating scandals such as Cambridge Analytica data misuse, Kremlin-backed election interference, or how foreign money flowed into its platform seeking to influence the UK’s Brexit referendum result.

Willful obstruction of good faith, democratically minded political interrogation really isn’t a sustainable strategy. Nor an ethically defensible one.

Given the vast, society-deforming size of these platforms, politicians are simply not going to give up and go home. There will have to be standards to ensure these mega-powerful information distribution systems aren’t at risk of being gamed, biased or otherwise misused, and those standards will have to be enforced. And the enforcement must itself be able to be checked and verified. So, yes, more audits.

Mega-platforms have also benefited from self-sustaining feedback loops based on their vast reach and data holdings, allowing them to lock in and double down on a market dominating position by, for example, applying self-learning algorithms trained on their own user data or via A/B testing at vast, vast scale to optimize UX design to maximize engagement and monopolize attention.

User choice in this scenario is radically denuded, and competition increasingly gets pushed back and even locked out, without such easy access to equivalently massive pools of data.

If a mega-platform has optimized the phrasing and positioning of — for example — a consent button by running comparative tests to determine which combination yields the fewest opt-outs, is it fair or right to the user being asked to ‘choose’? Are people being treated with respect? Or, well, like lab rats?
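
To see why that question bites, it helps to note how mechanically simple such an optimization is. The sketch below is invented (hypothetical variant names and helpers, not any platform's real code): show each user one design, count opt-outs, keep the design that loses the fewest.

```python
# An invented, minimal sketch of an opt-out-minimizing consent A/B test.
import hashlib
from collections import defaultdict

VARIANTS = ["banner_top", "banner_bottom", "modal_overlay"]  # hypothetical designs

shown = defaultdict(int)     # impressions per variant
opt_outs = defaultdict(int)  # opt-outs per variant


def assign_variant(user_id: str) -> str:
    """Stable bucketing: the same user always sees the same consent design."""
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return VARIANTS[digest % len(VARIANTS)]


def record_response(user_id: str, opted_out: bool) -> None:
    variant = assign_variant(user_id)
    shown[variant] += 1
    if opted_out:
        opt_outs[variant] += 1


def winning_variant() -> str:
    """The design yielding the fewest opt-outs per impression 'wins'."""
    return min(shown, key=lambda v: opt_outs[v] / shown[v])
```

Nothing in that loop measures whether users understood the choice; it optimizes purely for fewer opt-outs.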

Breaking Facebook’s platform into lots of Facebooks could also be an opportunity to rethink its data monopoly. To argue that its central business should not have an absolute right to the data pool generated by each smaller, market specific Facebook.

Part of the regulatory oversight could include a system of accountability over how Facebook’s parent business can and cannot use pooled data holdings.

If Facebook’s executive team had to make an ethics application to a relevant regulatory panel to request and justify access each time the parent business wanted to dip into the global data pool or tap data from a particular regional cluster of Facebooks, how might that change thought processes within the leadership team?

Facebook’s own (now former) CSO, Alex Stamos, identified problems baked into the current executive team’s ‘business as usual’ thinking — writing emphatically in an internal memo earlier this year: “We need to build a user experience that conveys honesty and respect, not one optimized to get people to click yes to giving us more access. We need to intentionally not collect data where possible, and to keep it only as long as we are using it to serve people… We need to be willing to pick sides when there are clear moral or humanitarian issues. And we need to be open, honest and transparent about challenges and what we are doing to fix them.”

It seems unlikely that an application to the relevant regulators asking for ‘Europe-wide data so we can A/B test user consent flows to get more Europeans to switch on facial recognition‘ would pass the ‘life before data’ community standard test.

And, well, it’s well established that the fact of being watched and knowing it’s happening has the power to change behavior. After all, Facebook’s platform is a major testament to that.

So it may be more that it’s external guidance — rather than a new internal governance model — which Facebook sorely lacks. Some external watchers to watch its internal watchmen.

 

Step 4) Remove Zuckerberg from (his) office

Public companies are supposed to be answerable to their shareholders. Thanks to the share structure that Mark Zuckerberg put in place at Facebook, Mark Zuckerberg is answerable to no one except himself. And despite Facebook’s years of scandals, he does not appear to have ever felt the urge to sack himself.

When the idea of personal accountability was brought up with him, in a recent podcast interview with Kara Swisher, he had a moment of making a light joke of it — quipping “do you really want me to fire myself right now? For the news?” before falling back on his line that: “I think we should do what’s gonna be right for the community.”

And, you know what, the joke was exactly right: The idea that Zuckerberg would terminate his own position is both laughable and ludicrous. It is a joke.

Which means Facebook’s executive structure is also a joke because there is zero accountability at the highest level — beyond Mark’s personal threshold for shame or empathy — and that’s now a global problem.

Zuckerberg has more power than most of the world’s elected politicians (and arguably some of the world’s political leaders). Yet he can’t be kicked out of his office, nor lose his CEO seat at any ballot box. He’s a Facebook fixture — short of a literal criminal conviction or otherwise reputation-terminating incident.

While you could argue that not being answerable to the mercenary whims of shareholder pressure is a good thing because it frees Zuckerberg to raise business transformation needs above returns-focused investor considerations (albeit, let’s see how his nerve holds after that $120BN investor punch) — his record in the CEO’s chair counters any suggestion that he’s a person who makes radical and sweeping changes to Facebook’s modus operandi. On the contrary, he’s shown himself a master of saying ‘oops we did it again!’ and then getting right back to screwing up as usual.

He’s also demonstrated a consistent disbelief that Facebook’s platform creates problems — preferring to couch connecting people as a glorious humanitarian mission from whence life-affirming marriages and children flow. Rather than seeing risks in putting global megaphones in the hands of anyone with an urge to shout.

As recently as November 2016 he was still dismissing the idea that political disinformation spread via Facebook had been in any way impactful on the US presidential election — as a “pretty crazy idea” — yet his own business had staffed divisions dedicated to working with US politicians to get their campaign messages out. It shouldn’t be rocket science to see a contradiction there. But until very recently Zuckerberg apparently couldn’t.

The fact of him also being the original founder of the business does not help in the push for disruptive change to Facebook itself. The best person to fix a radically broken product is unlikely to be the person whose entire adult life has been conjoined to a late night college dorm room idea spat online — which then ended up spinning up and out into a fortune. And then into a major, major global mess.

The ‘no better person than me to fix it’ line can be countered by pointing to Zuckerberg’s personal history of playing fast and loose with other people’s data (from the “dumb fucks” comment all the way back in his student days to years of deliberate platform choices at Facebook that made people’s information public by default); and by suggesting entrenched challenges would surely benefit from fresh eyes, new thinking and a broader perspective.

Add to that, Zuckerberg has arguably boxed himself in, politically speaking, thanks to a series of disingenuous, misleading and abstruse claims and statements made to lawmakers — limiting his room for manoeuvre or for rethinking his approach; let alone being able to genuinely compromise or make honest platform changes.

His opportunity to be radically honest about Facebook’s problems probably passed years and years back — when he was busy working hard on his personal challenge to wear a tie every day [2009]. Or only eat animals he kills himself [2011].

By 2013’s personal challenge, it’s possible that Zuckerberg had sensed something new in the data stream that was maybe coming down the pipes at him — as he set himself the challenge of expanding his personal horizons (not that he put it that way) by “meeting a new person every day who does not work at Facebook”.

Meeting a new person every day who did work at Facebook would have been far too easy, see.

Is it even possible to think outside the box when your entire adult life has been spent tooling away inside the same one?

 

Step 5) Over to you… 

What are your radical solutions for fixing Facebook? Should Zuckerberg stay or should he go? What do you want lawmakers to do about social media? What kinds of policy interventions might set these mega-platforms on a less fractious path? Or do you believe all this trouble on social media is a storm in a teacup that will blow over if we but screw our courage to the sticking place and wait for everyone to catch up with the cardinal Internet truth that nothing online is what it seems…

Ideas in the comments pls…