Instead of archiving a WhatsApp chat, you decide to delete it. What’s the difference? Well, archived chats can be recovered very easily within WhatsApp; recovering deleted messages isn’t as easy. But be assured it is possible.
In this tutorial, we’ll show you how to get deleted WhatsApp messages back, and then teach you how to set up an advanced WhatsApp backup plan so you’ll never lose any of your WhatsApp messages again.
Understanding WhatsApp Chat Backups
In order to recover deleted WhatsApp messages, you need to enable “Chat Backup” in Settings.
Under “Auto Backup”, WhatsApp offers several backup frequencies: Daily, Weekly, Monthly, or Off.
It’s also important to note that WhatsApp will only retain the latest backup file in iOS and the two most recent backup files in Android.
By having daily auto backups, you will be able to easily recover messages soon after they are deleted.
On the other hand, weekly auto backups will allow you to go further back in time and recover WhatsApp messages deleted up to seven days ago, at the expense of losing the chat messages sent since that backup.
Pick the auto backup frequency that best suits your needs.
For a more advanced backup plan for Android devices, please scroll down to the “Advanced WhatsApp Backup Strategy” section.
Restoring WhatsApp Chats
In order to recover WhatsApp messages, you will first need to identify how the messages were removed.
Restore Archived WhatsApp Messages
If a chat was archived just a moment ago, simply give your iPhone a shake and tap “Undo” to restore it.
If the chat was archived a while ago, you can still recover it.
In iOS, scroll up in the chat list until Archived Chats appears. Tap on it, then swipe left on the chat you would like to restore and tap “Unarchive”.
In Android, tap on the “Archived chats” text at the bottom of the chat list. Tap and hold on the chat you would like to recover and then tap on the “Unarchive” icon.
How to Recover Deleted WhatsApp Messages
It’s surprisingly simple to retrieve deleted WhatsApp messages, as long as you act within your selected auto backup frequency. For instance, if your auto backup frequency is set to daily, you can easily recover a deleted chat at any time before the next backup runs and overwrites the copy that still contains it.
Simply uninstall and reinstall WhatsApp. When you reinstall the app, you will be prompted to restore your message history from the backup file. Simply tap “Restore” and everything from the latest backup will be restored.
Done. This method of recovering deleted WhatsApp messages works on both Android and iOS.
Recovering Older Chats
Getting deleted messages back after a backup has run is slightly more difficult. It will depend on your selected auto backup frequency and will only work on Android devices.
As mentioned before, WhatsApp will retain two of the most recent chat backup files. One will be the latest chat backup; the other, depending on your auto backup frequency, could be one day, seven days, or 30 days old. They are located on your Android device’s local storage.
Open your device’s File Explorer, and navigate to WhatsApp > Databases. For more information on how to browse Android folders, please read our beginner’s guide to Android.
Within that folder, you’ll find the two chat backup files: msgstore.db.crypt12 (the most recent backup) and msgstore-YYYY-MM-DD.1.db.crypt12, where “YYYY-MM-DD” is the year, month, and day of the older backup.
To retrieve WhatsApp messages deleted some time between the two backups, simply:
Rename msgstore.db.crypt12 to msgstore-latest.db.crypt12
Rename the msgstore-YYYY-MM-DD.1.db.crypt12 file to msgstore.db.crypt12
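If you’d rather script those two renames (say, from a Termux session or any environment with Python and access to the device storage), a minimal sketch might look like this. The /sdcard path and the file-name conventions are assumptions based on the layout described above; adjust for your device.

```python
from pathlib import Path

# Assumed location of WhatsApp's local backups on Android storage.
DB_DIR = Path("/sdcard/WhatsApp/Databases")

def swap_in_older_backup(db_dir=DB_DIR):
    """Set the latest backup aside and promote the older, dated backup,
    so that WhatsApp restores from the older file on reinstall."""
    latest = db_dir / "msgstore.db.crypt12"
    # Dated backups are named msgstore-YYYY-MM-DD.1.db.crypt12;
    # lexicographic order matches chronological order for this format.
    dated = sorted(db_dir.glob("msgstore-[0-9]*.db.crypt12"))
    if not dated:
        raise FileNotFoundError("no dated backup found to promote")
    older = dated[-1]
    latest.rename(db_dir / "msgstore-latest.db.crypt12")
    older.rename(latest)
    return latest
```

After running it, continue with the uninstall/reinstall steps as described; WhatsApp will then restore from the promoted file.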
After that, uninstall WhatsApp.
Perform this step if you enabled Google Drive backups as well: open your Google Drive app, tap on Backups, and delete the WhatsApp backup file.
Reinstall WhatsApp. Again, you will be prompted to restore from the backup file, which is now the older one you just renamed rather than the most recent backup.
This method will allow you to retrieve deleted WhatsApp messages from the second most recent backup. You may continue to use WhatsApp from that point on. Or you could export the retrieved chat messages and restore WhatsApp back to the latest backup by running through the entire process again.
This time, in your phone’s File Explorer, reverse the process:
Rename the existing msgstore.db.crypt12 to msgstore-YYYY-MM-DD.1.db.crypt12
Rename msgstore-latest.db.crypt12 back to msgstore.db.crypt12
Uninstall and reinstall WhatsApp, restore from backup, and your latest chats will reappear!
Locating Deleted Images, Video, and Audio
It may come as a surprise, but when you delete images and videos from a WhatsApp chat, they aren’t actually erased from your device’s storage. Therefore, recovering WhatsApp images and videos that you have deleted from the chat is fairly straightforward.
Open your device’s File Explorer and navigate to WhatsApp > Media. From there, simply sift through the folders until you find the deleted files you want to recover.
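If sifting through folders by hand gets tedious, the search can be scripted too. This sketch lists the most recently modified files anywhere under the media folder, newest first; the /sdcard path is an assumption, as above.

```python
from pathlib import Path

MEDIA_DIR = Path("/sdcard/WhatsApp/Media")  # assumed Android media path

def recent_media(media_dir=MEDIA_DIR, limit=20):
    """Return the `limit` most recently modified files under the media
    folder, newest first, searching subfolders (Images, Video, Audio...)."""
    files = [p for p in media_dir.rglob("*") if p.is_file()]
    files.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return files[:limit]
```

Recently deleted-from-chat media will usually be near the top of the list, since the files themselves were written when the messages arrived.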
Accidental deletion of messages is a common problem across all messaging platforms—not just WhatsApp. Worse, you could have accidentally deleted some important documents, like Microsoft Office files. Thankfully we can also help you to recover deleted Microsoft Office files.
Advanced WhatsApp Backup Strategy
Because you can easily view and edit WhatsApp chat backup files on Android devices, it’s pretty easy to manipulate them to your advantage.
Although WhatsApp will only keep two of the most recent backup files, you can hack it so you have as many as you want.
All you need to do is rename msgstore-YYYY-MM-DD.1.db.crypt12 to something else, perhaps oct11.db.crypt12. That file is now protected from being overwritten by WhatsApp. At the next backup, WhatsApp will create a new msgstore-YYYY-MM-DD.1.db.crypt12 file, and you’ll have three restore points.
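That rename is trivial to automate as well. A sketch of the idea, deriving the protected name from the backup’s own date stamp; the “saved-” prefix is purely a convention of this example, and the file layout is assumed as above.

```python
from pathlib import Path

def protect_latest_dated_backup(db_dir):
    """Rename the newest dated backup so WhatsApp's two-file rotation
    never overwrites it, turning it into a permanent restore point."""
    dated = sorted(db_dir.glob("msgstore-[0-9]*.db.crypt12"))
    if not dated:
        raise FileNotFoundError("no dated backup to protect")
    newest = dated[-1]
    # e.g. msgstore-2018-10-11.1.db.crypt12 -> saved-2018-10-11.db.crypt12
    stamp = newest.name.split("-", 1)[1].split(".")[0]
    target = db_dir / ("saved-" + stamp + ".db.crypt12")
    newest.rename(target)
    return target
```

Run it after each backup cycle and the protected copies accumulate, one restore point per date.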
Making WhatsApp Work for You
As you can see it’s fairly easy to recover deleted WhatsApp messages. However, it pays to prepare in advance for such an eventuality. Which is where our advanced backup strategy proves its worth.
You can repeat our advanced backup strategy as many times as you want. And without much in the way of effort you will be able to go back further in time to recover deleted WhatsApp messages.
Americans looking to reduce their reliance on products from tech’s most alarmingly megalithic companies might be surprised to learn just how far their reach extends.
Privacy-minded browser company DuckDuckGo conducted a small study to look into that phenomenon and the results were pretty striking.
“… As Facebook usage wanes, messaging apps like WhatsApp are growing in popularity as a ‘more private (and less confrontational) space to communicate,'” DuckDuckGo wrote in the post. “That shift didn’t make much sense to us because both services are owned by the same company, so we tried to find an explanation.”
DuckDuckGo gathered a random sample of 1,297 adult Americans who are “collectively demographically similar to the general population of U.S. adults” (i.e. not just DuckDuckGo diehards) using SurveyMonkey’s audience tools. The survey found that 50.4% of those surveyed who had used WhatsApp in the prior 6 months (247 participants) did not know that the company is owned by Facebook.
Similarly, DuckDuckGo found that 56.4% of those surveyed who had used Waze in the past 6 months (291 participants) had no idea that the navigation app is owned by Google. A similar study conducted back in April found the same phenomenon when it came to Facebook/Instagram and Google/YouTube, though for Instagram the effect was even stronger (wow).
If you’re reading TechCrunch it’s probably almost impossible to imagine that average people aren’t tracing the lines between tech’s biggest companies and the products scooped up or built under their wings. And yet, it is so.
Even as companies like Google and Facebook suffer blowback from privacy crises, it’s clear that they can lean on the products they’ve picked up along the way to chart a path forward. If this survey is any indication, half of U.S. consumers will have no idea that they’ve jumped ship from a big tech product into a lifeboat captained by the very same company they sought to escape.
And for the biggest tech companies, it’s at least one reason that keeping satellite products at arm’s length from their respective motherships is advantageous for maintaining trust — especially while aggressive data sharing happens behind the scenes.
A serious security vulnerability has just been discovered in WhatsApp. It allows attackers to hijack your smartphone via video calls. In this article, we reveal how hackers can exploit this bug and which WhatsApp versions are safe for your iPhone.
WhatsApp video calls can lead to account compromise
Natalie Silvanovich, a Google Project Zero security researcher, discovered a vulnerability in WhatsApp that allowed a hacker to spy on a user’s smartphone while they make a video call, one of the most used functions of the app. According to Silvanovich, the memory corruption vulnerability lies in the implementation of non-WebRTC video calling.
Consumer messaging apps like WhatsApp are not only insanely popular for chatting with friends but have pushed deep into the workplace too, thanks to the speed and convenience they offer. They have even crept into hospitals, as time-strapped doctors reach for a quick and easy way to collaborate over patient cases on the ward.
Yet WhatsApp is not specifically designed with the safe sharing of highly sensitive medical information in mind. This is where Dutch startup Siilo has been carving a niche for itself for the past 2.5 years — via a free-at-the-point-of-use encrypted messaging app that’s intended for medical professionals to securely collaborate on patient care, such as via in-app discussion groups and being able to securely store and share patient notes.
A business goal that could be buoyed by tighter EU regulations around handling personal data, say if hospital managers decide they need to address compliance risks around staff use of consumer messaging apps.
The app’s WhatsApp-style messaging interface will be instantly familiar to any smartphone user. But Siilo bakes in additional features for its target healthcare professional users, such as keeping photos, videos and files sent via the app siloed in an encrypted vault that’s entirely separate from any personal media also stored on the device.
Messages sent via Siilo are also automatically deleted after 30 days unless the user specifies a particular message should be retained. And the app does not make automated back-ups of users’ conversations.
Other doctor-friendly features include the ability to blur images (for patient privacy purposes); augment images with arrows for emphasis; and export threaded conversations to electronic health records.
There’s also mandatory security for accessing the app — with a requirement for either a PIN-code, fingerprint or facial recognition biometric to be used. While a remote wipe functionality to nix any locally stored data is baked into Siilo in the event of a device being lost or stolen.
Like WhatsApp, Siilo also uses end-to-end encryption — though in its case it says this is based on the open-source NaCl library.
It also specifies that user messaging data is stored encrypted on European ISO-27001 certified servers — and deleted “as soon as we can”.
It also says it’s “possible” for its encryption code to be open to review on request.
Another addition is a user vetting layer to manually verify the medical professional users of its app are who they say they are.
Siilo says every user gets vetted. Though not prior to being able to use the messaging functions. But users that have passed verification unlock greater functionality — such as being able to search among other (verified) users to find peers or specialists to expand their professional network. Siilo says verification status is displayed on profiles.
“At Siilo, we coin this phenomenon ‘network medicine’, which is in contrast to the current old-fashioned, siloed medicine,” says CEO and co-founder Joost Bruggeman in a statement. “The goal is to improve patient care overall, and patients have a network of doctors providing input into their treatment.”
While Bruggeman brings the all-important medical background to the startup, another co-founder, Onno Bakker, has been in the mobile messaging game for a long time — having been one of the entrepreneurs behind the veteran web and mobile messaging platform, eBuddy.
A third co-founder, CFO Arvind Rao, tells us Siilo transplanted eBuddy’s messaging dev team — couching this ported in-house expertise as an advantage over some of the smaller rivals also chasing the healthcare messaging opportunity.
It is also of course having to compete technically with the very well-resourced and smoothly operating WhatsApp behemoth.
“Our main competitor is always WhatsApp,” Rao tells TechCrunch. “Obviously there are also other players trying to move in this space. TigerText is the largest in the US. In the UK we come across local players like Hospify and Forward.
“A major difference [is that] we have [a] very experienced in-house dev team… The experience of this team has helped to build a messenger that really can compete in usability with WhatsApp, [and] that is reflected in our rapid adoption and usage numbers.”
“Having worked in the trenches as a surgery resident, I’ve experienced the challenges that healthcare professionals face firsthand,” adds Bruggeman. “With Siilo, we’re connecting all healthcare professionals to make them more efficient, enable them to share patient information securely and continue learning and share their knowledge. The directory of vetted healthcare professionals helps ensure they’re successful teamplayers within a wider healthcare network that takes care of the same patient.”
Siilo launched its app in May 2016 and has since grown to ~100,000 users, with more than 7.5 million messages currently being processed monthly and 6,000+ clinical chat groups active monthly.
“We haven’t come across any other secure messenger for healthcare in Europe with these figures in the App Store/Google Play rankings and therefore believe we are the largest in Europe,” adds Rao. “We have multiple large institutions across Western-Europe where doctors are using Siilo.”
On the security front, as well as flagging the ISO 27001 certification the company has gained, he notes that it obtained “the highest NHS IG Toolkit level 3” — aka the now-replaced system for organizations to self-assess their compliance with the UK’s National Health Service’s information governance processes — claiming “we haven’t seen [that] with any other messaging company”.
Siilo’s toolkit assessment was finalized at the end of February 2018, and is valid for a year — so will be up for re-assessment under the replacement system (which was introduced this April) in Q1 2019. (Rao confirms they will be doing this “new (re-)assessment” at the end of the year.)
As well as being in active use in European hospitals such as St. George’s Hospital, London, and Charité Berlin, Germany, Siilo says its app has had some organic adoption by medical pros further afield — including among smaller home healthcare teams in California, and “entire transplantation teams” from Astana, Kazakhstan.
It also cites British Medical Journal research that found that of the 98.9% of U.K. hospital clinicians who now have smartphones, around a third are using consumer messaging apps in the clinical workplace. Persuading those healthcare workers to ditch WhatsApp at work is Siilo’s mission and challenge.
The team has just announced a €4.5 million (~$5.1M) seed to help it get onto the radar of more doctors. The round is led by EQT Ventures, with participation from existing investors. It says it will be using the funding to scale up its user base across Europe, with a particular focus on the UK and Germany.
Commenting on the funding in a statement, EQT Ventures’ Ashley Lundström, a venture lead and investment advisor at the VC firm, said: “The team was impressed with Siilo’s vision of creating a secure global network of healthcare professionals and the organic traction it has already achieved thanks to the team’s focus on building a product that’s easy to use. The healthcare industry has long been stuck using jurassic technologies and Siilo’s realtime messaging app can significantly improve efficiency and patient care without putting patients’ data at risk.”
While the messaging app itself is free for healthcare professionals to use, Siilo also offers a subscription service to monetize the freemium product.
This service, called Siilo Connect, offers organisations and professional associations what it bills as “extensive management, administration, networking and software integration tools”, or just data-regulation compliance services if they want the basic flavor of the paid tier.
Truecaller, the app that helps screen spam calls and messages, is becoming a chat app as it continues to develop into a social service.
The company announced today that it is introducing a chat feature to its Android and iOS apps, although it is already live for Android beta users.
The move follows Truecaller’s recent foray into payments. That’s a localized push in India — Truecaller’s largest market based on users — based on the acquisition of startup Chillr in June. Beyond adding person-to-person transfers and bill/utility payments through that deal, Truecaller is preparing to allow third-parties to integrate their services into its app. In that context, adding chat makes a lot of sense.
The feature could actually be quite handy for Android users. A Truecaller representative explained to TechCrunch that it will work much like Apple’s iMessage — messages sent between Truecaller users will be handled in the app for free, while messages sent to non-users will go over as SMS, which is supported by the app.
Truecaller also said its move to add messaging will help combat “fake news,” an issue that has plagued WhatsApp in India. The company said it’ll rely on its community to vet and report links via a reporting button, and there are plans to add AI and machine learning to the process.
It is doubtless correct that Truecaller has a strong community — the information used to identify spam SMS and phone numbers inside the app comes from community reporting — but it remains to be seen whether the proposed solution will be much different from what Facebook and WhatsApp have talked up. Truecaller won’t have dedicated fact-checkers either. Its strategy may work within smaller circles, but if the app gains a lot of traction it remains to be seen how it’ll manage the false information problem.
The messaging feature is global, but it promises to make the biggest impact in India, where it highlights how a number of different companies are converging on messaging and payments from very different starting points.
While it is smaller than WhatsApp, Truecaller has more users than Paytm’s 120 million monthlies. The company boasts an impressive 100 million daily users, 60 percent of whom are in India, a representative confirmed. With chat, Truecaller will hope to grow that number further still before it opens its platform to third parties. That could happen before the end of this year or early next year, the company told TechCrunch.
Note: The original version of this article has been updated to provide Truecaller’s India-based user numbers and correct that its payment feature covers bill and utility payments.
At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.
“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.
Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.
But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.
Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.
But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.
The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”
But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.
He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.
What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.
Simply put, Facebook is weaponizing security to shield its erosion of privacy.
Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.
In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.
Though fines aren’t the real point; if Facebook is forced to change its processes, so how it harvests and mines people’s data, that could knock a major, major hole right through its profit-center.
Hence the existential nature of the threat.
The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.
Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.
One target for GDPR complainants is so-called ‘forced consent’ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.
It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.
US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.
Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.
So a lobbying joint-front to try to water down any US privacy clampdown is in full effect. (Though also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws none of the tech giants said they would.)
The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.
Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with — forcing an expensive and time-consuming legal fight.
While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.
In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.
Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.
So the game Facebook is here playing is to use security as a very broad-brush to try to defang legislation that could radically shrink its access to people’s data.
Here’s Zuckerberg making that case to MEPs: “It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.”
At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.
“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.
He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.
“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”
He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).
These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.
But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.
No matter this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.
Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. Yet it must, presumably, hold shadow profiles on non-users there too. Yet was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…
So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse, it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.
What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.
Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.
Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.
“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.
So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.
Safe to say, that’s not going to play at all well in Europe.
Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!
Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seems very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)
In other contexts this would be called spying — or, well, ‘mass surveillance’.
It’s also how Facebook makes money.
And yet when called in front of lawmakers and asked about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily-defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.
WASHINGTON, DC – APRIL 11: Facebook co-founder, Chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill April 11, 2018 in Washington, DC. This is the second day of testimony before Congress by Zuckerberg, 33, after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)
It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.
Except Facebook is a commercial company, not the NSA.
So it’s only fighting to keep being able to carpet-bomb the planet with ads.
Profiting from shadow profiles
Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found that the company takes phone numbers users provide for the specific (security) purpose of enabling two-factor authentication, a technique intended to make it harder for a hacker to take over an account, and uses those numbers to target the same users with ads.
In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.
Any security expert worth their salt will have spent long years encouraging web users to turn on two factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. Because it works against those valiant infosec efforts — so risks eroding users’ security as well as trampling all over their privacy.
It’s just a double whammy of awful, awful behavior.
I spend a lot of time trying to convince people to lock down their social media accounts with 2FA. Boy does this undermine my efforts. https://t.co/tPo4keQkT7
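For context on what 2FA phone numbers are (and aren’t) actually needed for: time-based one-time passwords (TOTP, RFC 6238) require no phone number at all, only a shared secret. Here’s a minimal sketch of the code generation using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant).

    secret_b32 is the base32-encoded shared secret; `at` is a Unix
    timestamp (defaults to now). Note: no phone number is involved.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The point being that an app-based 2FA scheme like this delivers the account-takeover protection without the user handing over a reusable, ad-targetable identifier such as a phone number.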
A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.
In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.
Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.
So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)
In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.
One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…
That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it quietly repurposes 2FA digits for ad targeting.
There’s more, too: Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.
These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.
WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.
On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.
So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.
This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other Facebook profiles they have ever had and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek their target.
No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.
Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.
What’s the best security practice of all? That’s super simple: Not holding data in the first place.
WhatsApp is a fantastic instant messenger, but that doesn’t mean it couldn’t be better. Whether it’s hiding media from snooping eyes on WhatsApp Web or using two WhatsApp accounts on the same phone, a few apps and extensions can make anything possible.
To use any of the extensions, you’ll need to be running Google Chrome or a Chromium-based browser like Opera. And of course, you have to use WhatsApp Web on the computer. Meanwhile, the apps in this list rely on Android. But the lone WhatsApp Messenger bot can be used with any device.
If you have a dual-SIM phone, you probably want a WhatsApp account for each of the numbers. There are cloning apps to use multiple accounts, but WhatsApp doesn’t work on most of them anymore. An easier option is to use WhatsApp Business.
WhatsApp Business is an official app from WhatsApp, which is basically another version of the messenger you are used to. It has a few additional features for businesses, like “quick replies” to send frequently written messages, labels to identify different chats, and so on.
But more than anything else, WhatsApp Business works perfectly with two different numbers. The contact list remains the same on both apps, but you get to decide which SIM you want to reply from by firing up either WhatsApp or WhatsApp Business.
Right now, WhatsApp Business is available only for Android and not on iOS. WhatsApp has said that it will soon be launching an iOS version for the new iPhone XS, XS Max, and XR, which let you use two SIMs.
WhatsAuto (Android): Send Auto-Replies When You’re Busy
Sometimes, you’re driving your car, studying for an exam, or are too busy to reply to incoming messages. You still don’t want to be rude though. WhatsAuto lets you set up auto-replies for any texts you get.
The app is easy to customize. You can choose from preset template auto-replies, or create a custom one. It works with formatting, so you can bold, italicize, or strikethrough any text. You can choose who to send auto-replies to, such as your whole contact list, only some people, or all people with the exception of your favorites. By default, there is an “Auto Reply” header on top of the reply, but you can remove that if you want.
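For reference, the formatting WhatsAuto passes through is WhatsApp’s own inline markup: asterisks for bold, underscores for italic, tildes for strikethrough. A tiny illustrative helper for composing such a reply (the function name is mine, not part of WhatsAuto):

```python
def wa_style(text, bold=False, italic=False, strike=False):
    """Wrap text in WhatsApp's inline formatting markers:
    *bold*, _italic_, ~strikethrough~. Markers can be combined."""
    if bold:
        text = "*" + text + "*"
    if italic:
        text = "_" + text + "_"
    if strike:
        text = "~" + text + "~"
    return text

# e.g. an auto-reply with a bolded header line
reply = wa_style("Auto Reply", bold=True) + "\nDriving right now, will reply later."
```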
WhatsAuto also lets you choose how often to send a message. You can reply to every message that a contact sends, or be less annoying by giving them a five-minute window before sending the auto-reply again to that contact.
While there’s a status saving feature in WhatsAuto, it didn’t work well for me.
WikiBot (Any): Look Up Wikipedia Explanations on WhatsApp
WhatsApp is more than just a chat app. There are some powerful services you can use in WhatsApp, like job alerts, news updates, and so on. One other service you probably didn’t know about is Wikipedia. Send a word to WikiBot, and it will show you the Wikipedia definition.
Here’s how it works. You will need to add WikiBot’s phone number to your contact list first, and ideally save it as WikiBot. Then send a message to the number that says:
You will get a reply acknowledging that you have activated the service, along with instructions on how to unsubscribe if you ever want to.
That’s it, you’re set to use the bot. Send a word or a phrase, and WikiBot will reply with a few lines of definition. It’s a good way to look up the meanings of simple things or find out who a person is, without having to Google them.
Hide Media (Chrome): Hide Photos and Videos on WhatsApp Web
WhatsApp Web auto-loads all the photos and videos anyone sends you. And on the big computer screen, that can be a privacy nightmare as anyone walking by can see what’s on your screen.
Hide Media is a simple extension that auto-hides all images on WhatsApp Web by default. The image is still downloaded, but it is blurred so that you can’t see it. To view the image or video, hover your mouse cursor over it to reveal it. For a video, you can press the play button once it’s revealed.
It’s a simple and efficient app to take back some control over your privacy while using WhatsApp Web. Of course, you should also be using other tips to maintain privacy while using WhatsApp.
WAToolkit (Chrome): Read Message Previews, Change Text Width
WAToolkit is a must-have Chrome extension for anyone who uses WhatsApp Web. It adds two tricks that make WhatsApp Web so much better, along with a few other useful features.
First, WAToolkit fixes the width of chat bubbles. For some reason, WhatsApp doesn’t stretch chat bubbles across the entire screen, leaving much of a wide monitor unused. WAToolkit makes chat bubbles full-width to optimize screen space.
Second, you get a WAToolkit icon in Chrome’s toolbar. When you get a new message, the icon will add a badge for unread messages. Hover over the icon to read all your incoming messages without ever switching to the WhatsApp Web window. It’s not only a time-saver, but it’s also a sneaky WhatsApp Web trick to read messages without them getting the two blue ticks for “Seen”.
The aforementioned icon also turns orange if there’s a connectivity issue with your phone, which happens quite often with WhatsApp Web. WAToolkit also adds always-on desktop notifications, so that even when you close Chrome, you get WhatsApp Web notifications.
These apps and extensions are an excellent way to power up your WhatsApp experience. Right now, there isn’t much love for iOS, but hopefully that will change over time. Still, you don’t always need to rely on add-ons.
In fact, WhatsApp introduces new features all the time, making many of these tools redundant. For example, you can now check which chats are using up the most amount of storage space, or change numbers while still keeping your WhatsApp data intact.
In the two years since Indian social media app ShareChat raised $4 million in funding from Lightspeed Ventures the converging trends of increasing smartphone use, wireless internet connectivity, and cashless banking have combined to create a new social media juggernaut.
Now Lightspeed has confirmed that the company has raised an additional $100 million in financing at roughly a half billion dollar valuation alongside investment partners including India Quotient, Jesmond Holdings, Morningside, SAIF Partners, Shunwei Ventures, Venture Highway and Xiaomi.
The push began in 2009 with the launch of Aadhaar, India’s (recently amended) national biometric recording scheme. Seven years later it took a huge leap forward with the implementation of the nation’s massive demonetization plan and the near-simultaneous rollout of a 4G high speed mobile network across the country.
Since Jio, the telecommunications arm of the giant industrial conglomerate Reliance Group, launched its 4G service in September 2016, adoption rates across the country have skyrocketed.
According to a report from telecommunications analysis firm OpenSignal, Jio’s contribution to networking India has been massive.
During the quarter ending June 2017, total data usage stood at over 4.2 million terabytes, out of which 4G data accounted for 3.9 million TBs, according to TRAI. The growth is most visible when checking the numbers from a year ago, when 4G data usage stood at a mere 8,050 TBs; that’s a 500-fold increase… [And] LTE availability in India is remarkable: users were able to connect to an LTE signal over 84% of the time, a rise of over 10 percentage points from a year earlier. This places India ahead of more established countries in the 4G landscape such as Sweden, Taiwan, Switzerland or the U.K.
Disruptive telco Reliance Jio laid the foundation for India’s phone owners to switch to using mobile data packages (Photo by Arun Sharma/Hindustan Times via Getty Images)
For a startup like ShareChat that means tens of millions of daily active users, according to Mhatre.
Those users are drawn to ShareChat’s broadcast chat feature, which allows users on mobile phones to broadcast conversations and commentary about any topic they wish. “It’s a platform where content that is relevant to you is surfaced to you and you engage with it,” Mhatre says.
The company was founded by three Bangalore-based developers. Farid Ahsan, 26, Ankush Sachdeva, 25, and Bhanu Pratap Singh, also 26 — all graduates of India’s famous IIT Kanpur — had worked up 17 different prototypes for a product before they finally settled on the version that would become ShareChat.
The company’s founders are also taking a page from the popular Chinese app WeChat and hope to turn their broadcast chat service into a platform for micropayments, education, and other types of entertainment.
What started as a niche site for people to communicate in their local dialects could now become the first true domestic social media giant in India.
There are other Chinese parallels to ShareChat’s business that may be informative. Toutiao, the news aggregation service owned by Bytedance, is perhaps the closest in kind to ShareChat at the moment, but even that is only accurate to a point.
China’s infrastructure is still somewhat based on personal computers and landlines, whereas India’s is wholly mobile-first. For Mhatre, it’s the first country to make the leap to a digital economy based entirely on mobile computing.
At Lightspeed, the opportunity that presents is similar to the mid-90s birth of the Internet in the U.S. and the late-2000s technology boom in China that created billions of dollars in value for companies like Alibaba, Baidu, and Tencent.
ShareChat is built to support India’s plethora of local languages, as opposed to English-first services like WhatsApp
What makes this feat even more impressive is that, until two or three years ago, it looked like India wouldn’t live up to the expectations set for it and for emerging-market countries like Russia and Brazil, the trio comprising three-fourths of the BRICs that were supposed to be the foundational building blocks of the 21st-century global economy.
“If you look at China — the GDP in China is $12 to $13 trillion… India is about $2.5 trillion [but] infrastructure got developed there earlier than in India,” Mhatre said. India is at the same inflection point now, where the infrastructure boom is contributing to the development of new business models.
The constraints of that infrastructure have also informed the business ShareChat has built as well. Because while digital penetration rates in the country are high, the download speeds are exceptionally low (due in part to overwhelming demand).
While LTE availability saw a meteoric rise, the same cannot be said of 4G speeds. In our latest State of LTE report, India occupied the lowest spot among the 77 countries we examined, with average download speeds of 6.1 Mbps, over 10 Mbps lower than the global average.
ShareChat’s focus on messaging and sharing data-light images makes it a platform suited to the current strengths and limitations of India’s infrastructure. “You have half a billion people with a high speed internet terminal in their hand and they want to do things with it,” Mhatre said. And ShareChat isn’t just localized in its tech stack. The company is also localized by language.
As the investors at Lightspeed noted in their thoughts on the deal:
The “next billion” users in India speak 22 different languages and are spread out over an area the size of Europe. ShareChat’s founders Ankush, Bhanu and Farid blew us away with their insight into this new user base. Their first brush with this user base came in 2015 when they noticed that sharing of photos, videos, poetry, jokes and even good morning messages was at epidemic levels on WhatsApp. Yet there was no easy one-stop shop for finding this content. ShareChat was born to solve this problem. As they developed the idea, they also saw that this audience hungered for connection and content about their cities and villages of origin. They noticed emergent behavior around users wanting to “look cool” to their friends by finding the best content, solving for loneliness by finding friends in their own language, and even wanting to drive fame and celebrity in their own geographies.
Alex Stamos, Facebook’s former chief security officer, who left the company this summer to take up a role in academia, has made a contribution to what’s sometimes couched as a debate about how to monetize (and thus sustain) commercial end-to-end encrypted messaging platforms in order that the privacy benefits they otherwise offer can be as widely spread as possible.
Stamos made the comments via Twitter, where he said he was indirectly responding to the fallout from a Forbes interview with WhatsApp co-founder Brian Acton — in which Acton hit out at his former employer for being greedy in its approach to generating revenue off of the famously anti-ads messaging platform.
Both WhatsApp founders’ exits from Facebook have been blamed on disagreements over monetization. (Jan Koum left some months after Acton.)
In the interview, Acton said he suggested Facebook management apply a simple business model atop WhatsApp, such as metered messaging for all users after a set number of free messages, but that management pushed back — with Facebook COO Sheryl Sandberg telling him they needed a monetization method that generates greater revenue “scale”.
And while Stamos has avoided making critical remarks about Acton (unlike some current Facebook staffers), he clearly wants to lend his weight to the notion that some kind of trade-off is necessary in order for end-to-end encryption to be commercially viable (and thus for the greater good (of messaging privacy) to prevail); and therefore his tacit support to Facebook and its approach to making money off of a robustly encrypted platform.
Stamos’ own departure from the fb mothership was hardly under such acrimonious terms as Acton’s, though he has had his own disagreements with the leadership team — as set out in a memo he sent earlier this year that was obtained by BuzzFeed. So his support for Facebook combining e2e and ads perhaps counts for something, though isn’t really surprising given the seat he occupied at the company for several years, and his always fierce defence of WhatsApp encryption.
(Another characteristic concern that also surfaces in Stamos’ Twitter thread is the need to keep the technology legal, in the face of government attempts to backdoor encryption, which he says will require “accepting the inevitable downsides of giving people unfettered communications”.)
I don't want to weigh into the personal side of the WhatsApp vs Facebook fight, as there are people I respect on both sides, but I do want to use this as an opportunity to talk about the future of end-to-end encryption. (1/13)
This summer Facebook confirmed that, from next year, ads will be injected into WhatsApp statuses (aka the app’s Stories clone). So it is indeed bringing ads to the famously anti-ads messaging platform.
For several years the company has also been moving towards positioning WhatsApp as a business messaging platform to connect companies with potential customers — and it says it plans to meter those messages, also from next year.
So there are two strands to its revenue generating playbook atop WhatsApp’s e2e encrypted messaging platform. Both have knock-on impacts on privacy, given Facebook targets ads and marketing content by profiling users with harvested personal data.
This means that while WhatsApp’s e2e encryption means Facebook literally cannot read WhatsApp users’ messages, it is ‘circumventing’ the technology (for ad-targeting purposes) by linking accounts across different services it owns — using people’s digital identities across its product portfolio (and beyond) as a sort of ‘trojan horse’ to negate the messaging privacy it affords them on WhatsApp.
Facebook is using different technical methods (including the very low-tech method of phone number matching) to link WhatsApp user and Facebook accounts. Once it’s been able to match a Facebook user to a WhatsApp account it can then connect what’s very likely to be a well fleshed out Facebook profile with a WhatsApp account that nonetheless contains messages it can’t read. So it’s both respecting and eroding user privacy.
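To make that “very low-tech method” concrete, here is a toy sketch of what phone-number matching amounts to: normalize numbers to a canonical form, then join two account tables on that key. This is purely illustrative (the normalization rules and data shapes are my assumptions, not Facebook’s actual pipeline):

```python
import re

def normalize(number, default_cc="1"):
    """Crude E.164-style canonicalization: strip everything but digits,
    and prepend a default country code to bare 10-digit national numbers.
    (Illustrative only; real normalization rules vary per country.)"""
    digits = re.sub(r"\D", "", number)
    if number.strip().startswith("+"):
        return "+" + digits
    if len(digits) == 10:
        return "+" + default_cc + digits
    return "+" + digits

def match_accounts(fb_accounts, wa_accounts):
    """Join two {account: phone} tables on the normalized number."""
    index = {normalize(num): acct for acct, num in fb_accounts.items()}
    return [(index[normalize(num)], wa)
            for wa, num in wa_accounts.items()
            if normalize(num) in index]
```

Once two records share a canonical number, the richer Facebook profile can be attached to the otherwise unreadable WhatsApp account, which is exactly the linkage described above.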
This approach means Facebook can carry out its ad targeting activities across both messaging platforms (as it will from next year). And do so without having to literally read messages being sent by WhatsApp users.
As trade offs go, it’s clearly a big one — and one that’s got Facebook into regulatory trouble in Europe.
It is also, at least in Stamos’ view, a trade off that’s worth it for the ‘greater good’ of message content remaining strongly encrypted and therefore unreadable. Even if Facebook now knows pretty much everything about the sender, and can access any unencrypted messages they sent using its other social products.
In his Twitter thread Stamos argues that “if we want that right to be extended to people around the world, that means that E2E encryption needs to be deployed inside of multi-billion user platforms”, which he says means: “We need to find a sustainable business model for professionally-run E2E encrypted communication platforms.”
On the sustainable business model front he argues that two models “currently fit the bill” — either Apple’s iMessage or Facebook-owned WhatsApp. Though he doesn’t go into any detail on why he believes only those two are sustainable.
He does say he’s discounting the Acton-backed alternative, Signal, which now operates via a not-for-profit (the Signal Foundation) — suggesting that rival messaging app is “unlikely to hit 1B users”.
In passing he also throws it out there that Signal is “subsidized, indirectly, by FB ads” — i.e. because Facebook pays a licensing fee for use of the underlying Signal Protocol used to power WhatsApp’s e2e encryption. (So his slightly shade-throwing subtext is that privacy purists are still benefiting from a Facebook sugardaddy.)
Then he gets to the meat of his argument in defence of Facebook-owned (and monetized) WhatsApp — pointing out that Apple’s sustainable business model does not reach every mobile user, given its hardware is priced at a premium. Whereas WhatsApp running on a cheap Android handset ($50, or perhaps even $30 in future) can.
Other encrypted messaging apps can also of course run on Android but presumably Stamos would argue they’re not professionally run.
“I think it is easy to underestimate how radical WhatsApp’s decision to deploy E2E was,” he writes. “Acton and Koum, with Zuck’s blessing, jumped off a bridge with the goal of building a monetization parachute on the way down. FB has a lot of money, so it was a very tall bridge, but it is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue.
“This could come from directly charging for the service, it could come from advertising, it could come from a WeChat-like services play. The first is very hard across countries, the latter two are complicated by E2E.”
“I can’t speak to the various options that have been floated around, or the arguments between WA and FB, but those of us who care about privacy shouldn’t see WhatsApp monetization as something evil,” he adds. “In fact, we should want WA to demonstrate that E2E and revenue are compatible. That’s the only way E2E will become a sustainable feature of massive, non-niche technology platforms.”
Stamos is certainly right that Apple’s iMessage cannot reach every mobile user, given the premium cost of Apple hardware.
Though he elides the important role that second-hand Apple devices play in reducing the barrier to entry to Apple’s pro-privacy technology, a role Apple is actively encouraging via support for older devices, and via a services business expansion that makes supporting older versions of iOS (and thus secondhand iPhones) commercially sustainable too.
Stamos’ claim that robust encryption is only possible via multi-billion user platforms essentially boils down to a usability argument: mainstream app users will simply not seek encryption out unless it’s plated up for them in a way they don’t even notice it’s there.
The follow-on conclusion is then that only a well-resourced giant like Facebook can maintain and serve this tech up to the masses.
There’s certainly substance in that point. But the wider question is whether the privacy trade-offs entailed by Facebook’s monetization of WhatsApp (linking Facebook and WhatsApp accounts, and thereby looping in the various less-than-transparent data-harvesting methods it uses to gather intelligence on web users generally) substantially erode the value of the e2e encryption now being bundled with Facebook’s ad-targeting surveillance, and so used as a selling aid for otherwise privacy-eroding practices.
Yes WhatsApp users’ messages will remain private, thanks to Facebook funding the necessary e2e encryption. But the price users are having to pay is very likely still their personal privacy.
And at that point the argument really becomes: how much profit should a commercial entity be able to extract from a product that’s marketed as securely encrypted and thus ‘pro-privacy’? How much revenue “scale” is reasonable or unreasonable in that scenario?
Other business models are possible, which was Acton’s point. But likely less profitable. And therein lies the rub where Facebook is concerned.
How much money should any company be required to leave on the table, as Acton did when he left Facebook without the rest of his unvested shares, in order to be able to monetize a technology that’s bound up so tightly with notions of privacy?
Acton wanted Facebook to agree to make as much money as it could without users having to pay it with their privacy. But Facebook’s management team said no. That’s why he’s calling them greedy.
Stamos doesn’t engage with that more nuanced point. He just writes: “It is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue” — thereby collapsing the revenue argument into an all or nothing binary without explaining why it has to be that way.
Who should you sell your startup to? Facebook and the founders of its former acquisitions are making a strong case against getting bought by Mark Zuckerberg and co. After a half-decade of being seen as one of the most respectful and desired acquirers, a series of scandals has destroyed the image of Facebook’s M&A division. That could make it tougher to convince entrepreneurs to sell to Facebook, or force it to pay higher prices and put contractual guarantees of autonomy into the deals.
WhatsApp’s founders left amidst aggressive pushes to monetize. Instagram’s founders left as their independence was threatened. Oculus’ founders were demoted. And over the past few years Facebook has also shut down acquisitions including viral teen Q&A app TBH, fitness tracker Moves, video advertising system LiveRail, voice control developer toolkit Wit.ai, and still-popular mobile app developer platform Parse.
Facebook’s users might not know or care about much of this. But it could be a sticking point the next time Facebook tries to buy out a burgeoning competitor or complementary service.
Broken Promises With WhatsApp
The real trouble started with WhatsApp co-founder Brian Acton’s departure from Facebook a year ago, before he was fully vested from the $22 billion acquisition in 2014. He’d been adamant that Facebook not stick the targeted ads he hated inside WhatsApp, and Zuckerberg agreed not to. Acton even got a clause added to the deal that the co-founders’ remaining stock would vest instantly if Facebook implemented monetization schemes without their consent. Google was also interested in buying WhatsApp, but Facebook’s assurances of independence sealed the deal.
WhatsApp’s other co-founder Jan Koum left Facebook in April following tension about how Facebook would monetize his app and the impact of that on privacy. Acton’s departure saw him leave $850 million on the table. Captivity must have been pretty rough for freedom to be worth that much. Today in an interview with Forbes’s Parmy Olson, he detailed how Facebook got him to promise it wouldn’t integrate WhatsApp’s user data to get the deal approved by EU regulators. Facebook then broke that promise, paid the $122 million fine that amounted to a tiny speed bump for the money-printing corporation, and kept on hacking.
When Acton tried to enact the instant-vesting clause upon his departure, Facebook claimed it was still exploring, not “implementing”, monetization. Acton declined a legal fight and walked away, eventually tweeting “Delete Facebook”. Koum stayed to vest a little longer. But soon after they departed, WhatsApp started charging businesses for slow replies, and it will inject ads into Status, WhatsApp’s Stories-style product, next year. With user growth slowing, users shifting to Stories, and News Feed out of ad space, Facebook’s revenue problem became WhatsApp’s monetization mandate.
The message was that Facebook would eventually break its agreements with acquired founders to prioritize its own needs.
Diminished Autonomy For Instagram
Instagram’s co-founders Kevin Systrom and Mike Krieger announced they were resigning this week, which sources tell TechCrunch was because of mounting tensions with Zuckerberg over product direction. Zuckerberg himself negotiated the 2012 acquisition for $1 billion ($715 million when the deal closed with Facebook’s share price down, but later worth $4 billion as it massively climbed). That price was contingent on Instagram remaining independent in both brand and product roadmap.
Zuckerberg upheld his end of the bargain for five years, and the Instagram co-founders stayed on past their original vesting dates — uncommon in Silicon Valley. Facebook pointed to Instagram’s autonomy when it was trying to secure the WhatsApp acquisition. And with the help of Facebook’s engineering, sales, recruiting, internationalization, and anti-spam teams, Instagram grew into a 1 billion user juggernaut.
But again, Facebook’s growth and financial woes led to a change of heart for Zuckerberg. Facebook’s popularity amongst teens was plummeting while Instagram remained cool. Facebook pushed to show its alerts and links back to the parent company inside of Instagram’s notifications and settings tabs. Meanwhile, it stripped out the Instagram attribution from cross-posted photos and deleted a shortcut to Instagram from the Facebook bookmarks menu.
Zuckerberg then installed a loyalist, his close friend and former News Feed VP Adam Mosseri, as Instagram’s new VP of Product midway through this year. The reorganization also saw Systrom start reporting to Facebook CPO Chris Cox. Previously the Instagram CEO had more direct contact with Zuckerberg despite technically reporting to CTO Mike Schroepfer, and the insertion of a layer of management between them frayed their connection. Six years after being acquired, Facebook started breaking its promises, Instagram felt less autonomous, and the founders exited.
The message again was that Facebook expected to be able to exploit its acquisitions regardless of their previous agreements.
Reduced Visibility For Oculus
Zuckerberg declared Oculus was the next great computing platform when Facebook acquired the virtual reality company in 2014. Adoption ended up slower than many expected, forcing Oculus to fund VR content creators since it’s still an unsustainable business. Oculus has likely been a major cash sink for Facebook it will have to hope pays off later.
But in the meantime the co-founders of Oculus have faded into the background. Brendan Iribe and Nate Mitchell have gone from leading the company to focusing on the nerdiest part of its growing product lineup, as VPs running the PC VR and Rift hardware teams respectively. Former Xiaomi hardware leader Hugo Barra was brought in as VP of VR to oversee Oculus, and he reports to former Facebook VP of Ads Andrew “Boz” Bosworth — a long-time Zuckerberg confidant who TA’d one of his classes at Harvard and who now runs all of Facebook’s hardware efforts.
Oculus’ original visionary inventor Palmer Luckey left Facebook last year following a schism with the company over him funding anti-Hillary Clinton memes and “sh*tposters”. He was pressed to apologize, saying “I am deeply sorry that my actions are negatively impacting the perception of Oculus and its partners.”
Lesser-known co-founder Jack McCauley left Facebook just a year after the acquisition to start his own VR lab. Sadly, Oculus co-founder Andrew Reisse died in 2013 when he was struck by a vehicle in a police chase just two months after the acquisition was announced. The final co-founder Michael Antonov was the Chief Software Architect, but Facebook just confirmed to me he recently left the division to work on artificial intelligence infrastructure at Facebook.
Facebook needs to take action if it wants to reassure prospective acquisitions that it can be a good home for their startups. I think Zuckerberg or Mosseri (likely to be named Instagram’s new leader) should issue a statement saying they understand people’s fears about what will happen to Instagram and WhatsApp, since they’re such important parts of users’ lives, and establishing the core tenets of each product’s identity that they don’t want to change. Again, 15-year-old Instagrammers and WhatsAppers probably won’t care, but potential acquisitions would.
So far, Facebook has only managed to further inflame the founders versus Facebook divide. Today former VP of Messenger and now head of Facebook’s blockchain team David Marcus wrote a scathing note criticizing Acton for his Forbes interview and claiming that Zuckerberg tried to protect WhatsApp’s autonomy. “Call me old fashioned. But I find attacking the people and company that made you a billionaire, and went to an unprecedented extent to shield and accommodate you for years, low-class. It’s actually a whole new standard of low-class” he wrote.
But this was a wasted opportunity for Facebook to discuss all the advantages it brings to its acquisitions. Marcus wrote “As far as I’m concerned, and as a former lifelong entrepreneur and founder, there’s no other large company I’d work at, and no other leader I’d work for”, and noted the opportunity for impact and the relatively long amount of time acquired founders have stayed in the past. Still, it would have been more productive to focus on why it’s where he wants to work, how founders actually get to touch the lives of billions, and how other acquirers like Twitter and Google frequently dissolve the companies they buy and often see their founders leave even sooner.
Acquisitions have protected Facebook from disruption. Now that strategy is in danger if it can’t change this narrative. Lots of zeros on a check might not be enough to convince the next great entrepreneur to sell Facebook their startup if they suspect they or their project will be steamrolled.