China is funding the future of American biotech

Silicon Valley is in the midst of a health craze, and it is being driven by “Eastern” medicine.

It’s been a record year for US medical investing, but investors in Beijing and Shanghai are now increasingly leading the largest deals for US life science and biotech companies. In fact, Chinese venture firms have invested more this year into life science and biotech in the US than they have back home, providing financing for over 300 US-based companies, per Pitchbook. That’s the story at Viela Bio, a Maryland-based company exploring treatments for inflammation and autoimmune diseases, which raised a $250 million Series A led by three Chinese firms.

Chinese capital’s newfound appetite also flows into the mainland. Business is booming for Chinese medical startups, which are also seeing their strongest year of venture investment ever, with over one hundred companies receiving $4 billion in investment.

As Chinese investors continue to shift their strategies towards life science and biotech, China is emphatically positioning itself to be a leader in medical investing with a growing influence on the world’s future major health institutions.

Chinese VCs seek healthy returns

We like to talk about things we can interact with or be entertained by. And so as nine-figure checks flow in and out of China with stunning regularity, we fixate on the internet giants, the gaming leaders or the latest media platform backed by Tencent or Alibaba.

However, if we follow the money, it’s clear that the top venture firms in China have actually been turning their focus towards the country’s deficient health system.

A clear leader in China’s strategy shift has been Sequoia Capital China, one of the country’s most heralded venture firms tied to multiple billion-dollar IPOs just this year.

Historically, Sequoia didn’t have much interest in the medical sector.  Health was one of the firm’s smallest investment categories, and it participated in only three health-related deals from 2015-16, making up just 4% of its total investing activity. 

Recently, however, life sciences have piqued Sequoia’s interest, confirms a spokesperson with the firm. Sequoia dove into six health-related deals in 2017 and has already participated in 14 in 2018 so far. The firm now sits among the most active health investors in China, and the medical sector has become its second-biggest investment area, with life science and biotech companies accounting for nearly 30% of its investing activity in recent years.

Health-related investment data for 2015-18 compiled from Pitchbook, Crunchbase, and SEC Edgar

There’s no shortage of areas in need of transformation within Chinese medical care, and a wide range of strategies are being employed by China’s VCs. While some investors hope to address influenza, others are focused on innovative treatments for hypertension, diabetes and other chronic diseases.

For instance, according to the Chinese Journal of Cancer, in 2015, 36% of the world’s lung cancer diagnoses came from China, yet the country’s cancer survival rate was 17% below the global average. Sequoia has set its sights on tackling China’s high rate of cancer and its low survival rate, with roughly 70% of its deals in the past two years focusing on cancer detection and treatment.

That focus is reflected in deals like the firm’s $90 million Series A investment in Shanghai-based JW Therapeutics, a company developing innovative immunotherapy cancer treatments. The company is a quintessential example of how Chinese VCs are building the country’s next set of health startups using their international footprints and learnings from across the globe.

Founded as a joint-venture offshoot between US-based Juno Therapeutics and China’s WuXi AppTec, JW benefits from Juno’s experience as a top developer of cancer immunotherapy drugs, as well as WuXi’s expertise as one of the world’s leading contract research organizations, covering all aspects of the drug research and development cycle.

Specifically, JW is focused on the next-generation of cell-based immunotherapy cancer treatments using chimeric antigen receptor T-cell (CAR-T) technologies. (Yeah…I know…) For the WebMD warriors and the rest of us with a medical background that stopped at tenth-grade chemistry, CAR-T essentially looks to attack cancer cells by utilizing the body’s own immune system.

Past waves of biotech startups often focused on other immunologic treatments that used genetically-modified antibodies created in animals.  The antibodies would effectively act as “police,” identifying and attaching to “bad guy” targets in order to turn off or quiet down malignant cells.  CAR-T looks instead to modify the body’s native immune cells to attack and kill the bad guys directly.

Chinese VCs are investing in a wide range of innovative life science and biotech startups. (Photo by Eugeneonline via Getty Images)

The international and interdisciplinary pedigree of China’s new medical leaders not only applies to the organizations themselves but also to those running the show.

At the helm of JW sits James Li.  In a past life, the co-founder and CEO headed up China operations for some of the world’s biggest biopharmaceutical companies, including Amgen and Merck.  Li was also once a partner at the brand-name Silicon Valley investor Kleiner Perkins.

JW embodies the benefits that can come from importing insights and expertise, a practice that will come to define the companies leading the medical future as the country’s smartest capital increasingly finds its way overseas.

GV and Founders Fund look to keep the Valley competitive

Despite heavy investment by China’s leading VCs, Silicon Valley is doubling down in the US health sector.  (AFP PHOTO / POOL / JASON LEE)

Innovation in medicine transcends borders. Sickness and death are unfortunately universal, and groundbreaking discoveries in one country can save lives in the rest.

The boom in China’s life science industry has left valuations at home lofty, while cross-border investment and import regulations in China have improved.

As such, Chinese venture firms are now increasingly searching for innovation abroad, looking to capitalize on expanding opportunities in the more mature US medical industry, which offers innovative technologies and advanced processes that can be brought back to the East.

In April, Qiming Venture Partners, another Chinese venture titan, closed a $120 million fund focused on early-stage US healthcare. Qiming has been ramping up its participation in the medical space, investing in 24 companies over the 2017-18 period.

New firms diving into the space haven’t frightened the Bay Area’s notable investors, who have doubled down in the US medical space alongside their Chinese counterparts.

Partner directories for America’s most influential firms are increasingly populated with former doctors and medically-versed VCs who can find the best medical startups and have a growing influence on the flow of venture dollars in the US.

At the top of the list is Krishna Yeshwant, the GV (formerly Google Ventures) general partner leading the firm’s aggressive push into the medical industry.

Krishna Yeshwant (GV) at TechCrunch Disrupt NY 2017

A doctor by trade, Yeshwant has interests that run the gamut of the medical spectrum, and he has led investments in everything from real-time patient care insights to antibody and therapeutic technologies for cancer and neurodegenerative disorders.

Per data from Pitchbook and Crunchbase, Krishna has been GV’s most active partner over the past two years, participating in deals that total over a billion dollars in aggregate funding.

Backed by the efforts of Yeshwant and select others, the medical industry has become one of the most prominent investment areas for Google’s venture capital arm, driving roughly 30% of its investments in 2017 compared to just under 15% in 2015.

GV’s affinity for medical investing has found renewed life, but life science is also part of the firm’s DNA.  Like many brand-name Valley investors, GV founder Bill Maris has long held a passion for health startups.  After leaving GV in 2016, Maris launched his own fund, Section 32, focused specifically on biotech, healthcare and life sciences.

In the same vein, life science and health investing has been part of the lifeblood for some major US funds including Founders Fund, which has consistently dedicated over 25% of its deployed capital to the space since at least 2015.

The tides may be changing, however, as the recent expansion of oversight for the Committee on Foreign Investment in the United States (CFIUS) may severely impact the flow of Chinese capital into areas of the US health sector. 

Under its extended purview, CFIUS will review – and possibly block – any investment or transaction involving a foreign entity related to the production, design or testing of technology that falls under a list of 27 critical industries, including biotech research and development.

The true implications of the expanded rules will depend on how aggressively and how often CFIUS exercises its power.  But a lengthy review process and the threat of regulatory blocks may significantly increase the burden on Chinese investors, effectively shutting off the Chinese money spigot.

Regardless of CFIUS, China’s active presence in the US health markets hasn’t deterred Valley mainstays. And with a severely broken health system at home and an improved investment environment backed by government support, China’s commitment to medical innovation is only getting stronger.

VCs target a disastrous health system

Deficiencies in China’s health sector have historically led to troublesome outcomes.  Now the government is jump-starting investment through supportive policy. (Photo by Alexander Tessmer / EyeEm via Getty Images)

They say successful startups identify real problems that need solving. Marred by inefficiencies, poor results and compounding consumer frustration, China’s health industry has many.

Outside of a wealthy few, citizens are forced to make often lengthy treks to overcrowded and understaffed hospitals in urban centers.  Reception areas exist only in concept, as any open space is quickly filled by hordes of the concerned, sick, and fearful settling in for wait times that can last multiple days. 

If and when patients are finally seen, they are frequently met by overworked or inexperienced medical staff, rushing to get people in and out in hopes of servicing the endless line behind them. 

Historically, when patients were diagnosed, treatment options were limited and ineffective, as import laws and affordability issues made many globally approved drugs unavailable.

As one would assume, poor detection and treatment have led to problematic outcomes. Heart disease, stroke, diabetes and chronic lung disease account for 80% of deaths in China, according to a recent report from the World Bank.

Recurring issues of misconduct, deception and dishonesty have amplified the population’s mounting frustration.

After past cases of widespread sickness caused by improperly handled vaccinations, China’s vaccine crisis reached a breaking point earlier this year.  It was revealed that 250,000 children had been given defective rabies vaccinations, a fact that inspectors had discovered months prior and swept under the rug.

Fracturing public trust around medical treatment has serious, potentially destabilizing effects. And with deficiencies permeating nearly all aspects of China’s health and medical infrastructure, there is a gaping set of opportunities for disruptive change.

In response to these issues, China’s government placed more emphasis on the search for medical innovation by rolling out policies that improve the chances of success for health startups, while reducing costs and risk for investors.

Billions in public investment flooded into the life science sector, and easier approval processes for patents, research grants and generic drugs suddenly made the prospect of building a life science or biotech company in China less daunting.

For Chinese venture capitalists, on top of financial incentives and a higher-growth local medical sector, the loosening of drug import laws opened up opportunities to improve China’s medical system through innovation abroad.

Liquidity has also improved due to swelling global interest in healthcare. Plus, the Hong Kong Stock Exchange recently announced changes to allow the listing of pre-revenue biotech companies.

The changes implemented across China’s major institutions have effectively provided Chinese health investors with a much broader opportunity set, faster-growing companies, quicker paths to liquidity and increased certainty, all at lower cost.

However, while the structural and regulatory changes in China’s healthcare system have led to more medical startups and more growth, they haven’t necessarily driven quality.

US and Western investors haven’t taken the same cross-border approach as their peers in Beijing. From talking with those in the industry, the laxity of the Chinese system, among other concerns, has made many US investors wary of investing in life science companies overseas.

And with the Valley similarly stepping up its focus on startups that sprout from the strong American university system, bubbling valuations have started to raise concern.

But with China dedicating more and more billions across the globe, the country is determined to patch the massive holes in its medical system and establish itself as the next leader in international health innovation.

Khashoggi’s fate shows the flip side of the surveillance state

It’s been over five years since NSA whistleblower Edward Snowden lifted the lid on government mass surveillance programs, revealing, in unprecedented detail, quite how deep the rabbit hole goes thanks to the spread of commercial software and connectivity enabling a bottomless intelligence-gathering philosophy of ‘bag it all’.

Yet technology’s onward march has hardly broken its stride.

Government spying practices are perhaps more scrutinized, as a result of awkward questions about out-of-date legal oversight regimes. Though whether the resulting legislative updates, putting an official stamp of approval on bulk and/or warrantless collection as a state spying tool, have put Snowden’s ethical concerns to bed seems doubtful — albeit, it depends on who you ask.

The UK’s post-Snowden Investigatory Powers Act continues to face legal challenges. And the government has been forced by the courts to unpick some of the powers it helped itself to vis-à-vis people’s data. But bulk collection, as an official modus operandi, has been both avowed and embraced by the state.

In the US, too, lawmakers elected to push aside controversy over a legal loophole that provides intelligence agencies with a means for the warrantless surveillance of American citizens — re-stamping Section 702 of FISA for another six years. So of course they haven’t cared a fig for non-US citizens’ privacy either.

Increasingly powerful state surveillance is seemingly here to stay, with or without adequately robust oversight. And commercial use of strong encryption remains under attack from governments.

But there’s another end to the surveillance telescope. As I wrote five years ago, those who watch us can expect to be — and indeed are being — increasingly closely watched themselves as the lens gets turned on them:

“Just as our digital interactions and online behaviour can be tracked, parsed and analysed for problematic patterns, pertinent keywords and suspicious connections, so too can the behaviour of governments. Technology is a double-edged sword – which means it’s also capable of lifting the lid on the machinery of power-holding institutions like never before.”

We’re now seeing some of the impacts of this surveillance technology cutting both ways.

With attention to detail, good connections (in all senses) and the application of digital forensics, all sorts of discrete data dots can be linked — enabling official narratives to be interrogated and unpicked with technology-fuelled speed.

Witness, for example, how quickly the Kremlin’s official line on the Skripal poisonings unravelled.

After the UK released CCTV last month of the two Russian suspects in the Novichok attack in Salisbury, the speedy counter-claim from Russia, presented most obviously via an ‘interview’ with the two ‘citizens’ conducted by state mouthpiece broadcaster RT, was that the men were just tourists with a special interest in the cultural heritage of the small English town.

Nothing to see here, claimed the Russian state, even though the two unlikely tourists didn’t appear to have done much actual sightseeing on their flying visit to the UK during the tail end of a British winter (unless you count vicarious viewing of Salisbury’s Wikipedia page).

But digital forensics outfit Bellingcat, partnering with investigative journalists at The Insider Russia, quickly found plenty to dig up online, and with the help of data-providing tips. (We can only speculate who those whistleblowers might be.)

Their investigation made use of a leaked database of Russian passport documents; passport scans provided by sources; publicly available online videos and selfies of the suspects; and even visual computing expertise to academically cross-match photos taken 15 years apart — to, within a few weeks, credibly unmask the ‘tourists’ as two decorated GRU agents: Anatoliy Chepiga and Dr Alexander Yevgeniyevich Mishkin.

When an official narrative already lacking credibility is set against an external investigation able to closely show its workings and sources (where possible), and thus demonstrate how reasonably constructed and plausible the counter-narrative is, there’s little doubt where the real authority is being shown to lie.

And who the real liars are.

That the Kremlin lies is hardly news, of course. But when its lies are so painstakingly and publicly unpicked, and its veneer of untruth ripped away, there is undoubtedly reputational damage to the authority of Vladimir Putin.

The sheer depth and availability of data in the digital era supports faster-than-ever evidence-based debunking of official fictions, threatening to erode rogue regimes built on lies by pulling away the curtain that invests their leaders with power in the first place — by implying the scope and range of their capacity and competency is unknowable, and letting other players on the world stage accept such a ‘leader’ at face value.

The truth about power is often far more stupid and sordid than the fiction. So a powerful abuser, with their workings revealed, can be reduced to their baser parts — and shown for the thuggish and brutal operator they really are, as well as proved a liar.

On the stupidity front, in another recent and impressive bit of cross-referencing, Bellingcat was able to turn passport data pertaining to another four GRU agents — whose identities had been made public by Dutch and UK intelligence agencies (after they had been caught trying to hack into the network of the Organisation for the Prohibition of Chemical Weapons) — into a long list of 305 suggestively linked individuals also affiliated with the same GRU military unit, and whose personal data had been sitting in a publicly available automobile registration database… Oops.

There’s no doubt certain governments have wised up to the power of public data and are actively releasing key info into the public domain where it can be pored over by journalists and interested citizen investigators — be that CCTV imagery of suspects or actual passport scans of known agents.

A cynic might call this selective leaking. But while the choice of what to release may well be self-serving, the veracity of the data itself is far harder to dispute. Exactly because it can be cross-referenced with so many other publicly available sources and so made to speak for itself.

Right now, we’re in the midst of another fast-unfolding example of surveillance apparatus and public data standing in the way of dubious state claims — in the case of the disappearance of Washington Post journalist Jamal Khashoggi, who went into the Saudi consulate in Istanbul on October 2 for a pre-arranged appointment to collect papers for his wedding and never came out.

Saudi authorities first tried to claim Khashoggi left the consulate the same day, though did not provide any evidence to back up their claim. And CCTV clearly showed him going in.

Yesterday they finally admitted he was dead — but are now trying to claim he died quarrelling in a fistfight, attempting to spin another after-the-fact narrative to cover up and blame-shift the targeted slaying of a journalist who had written critically about the Saudi regime.

Since Khashoggi went missing, CCTV and publicly available data have also been pulled and compared to identify a group of Saudi men who flew into Istanbul just prior to his appointment at the consulate; were caught on camera outside it; and left Turkey immediately after he had vanished.

Including naming a leading Saudi forensics doctor, Dr Salah Muhammed al-Tubaigy, as being among the party that Turkish government sources also told journalists had been carrying a bone saw in their luggage.

Men in the group have also been linked to Saudi crown prince Mohammed bin Salman, via cross-referencing travel records and social media data.

“In a 2017 video published by the Saudi-owned Al Ekhbariya on YouTube, a man wearing a uniform name tag bearing the same name can be seen standing next to the crown prince. A user with the same name on the Saudi app Menom3ay is listed as a member of the royal guard,” writes the Guardian, joining the dots on another suspected henchman.

A marked element of the Khashoggi case has been the explicit descriptions of his fate leaked to journalists by Turkish government sources, who have said they have recordings of his interrogation, torture and killing inside the building — presumably via bugs either installed in the consulate itself or via intercepts placed on devices held by the individuals inside.

This surveillance material has reportedly been shared with US officials, where it must be shaping the geopolitical response — making it harder for President Trump to do what he really wants to do, and stick like glue to a regional US ally with which he has his own personal financial ties, because the arms of that state have been recorded in the literal act of cutting off the fingers and head of a critical journalist, and then sawing up and disposing of the rest of his body.

Attempts by the Saudis to construct a plausible narrative to explain what happened to Khashoggi when he stepped over its consulate threshold to pick up papers for his forthcoming wedding have failed in the face of all the contrary data.

Meanwhile, the search for a body goes on.

And attempts by the Saudis to shift blame for the heinous act away from the crown prince himself are also being discredited by the weight of data…

And while it remains to be seen what sanctions, if any, the Saudis will face from Trump’s conflicted administration, the crown prince is already being hit where it hurts by the global business community withdrawing in horror from the prospect of being tainted by bloody association.

The idea that a company as reputation-sensitive as Apple would be just fine investing billions more alongside the Saudi regime, in SoftBank’s massive Vision Fund vehicle, seems unlikely, to say the least.

Thanks to technology’s surveillance creep the world has been given a close-up view of how horrifyingly brutal the Saudi regime can be — and through the lens of an individual it can empathize with and understand.

Safe to say, supporting second acts for regimes that cut off fingers and sever heads isn’t something any CEO would want to become famous for.

The power of technology to erode privacy is clearer than ever. Down to the very teeth of the bone saw. But what’s also increasingly clear is that powerful and at times terrible capability can be turned around to debase power itself — when authorities themselves become abusers.

So the flip-side of the surveillance state can be seen in the public airing of the bloody colors of abusive regimes.

Turns out, microscopic details can make all the difference to geopolitics.

RIP Jamal Khashoggi

In State Tectonics, an explosive ending for the future of democracy

An omnipotent data infrastructure and knowledge-sharing tech organization has spread across the planet. Global conspiracies to disseminate propaganda and rig elections are ever present. Algorithms determine what people see as objective truth, and terrorist organizations gird to bring down the monopoly on information.

Malka Older faces a problem few speculative science fiction authors face in their lifetimes: having their work become a blueprint for reality. The author, who began formulating her Centenal Cycle series just a few years ago, now finds that her plots have leapt off the page and have become the daily fodder for cable news programs and Congressional investigations. Her universe is set decades into the future, but history is accelerating, and decades into the future can now mean 2019.

So we arrive at the third and final volume of a trilogy that began as a single work called Infomocracy and has proliferated into Null States and now State Tectonics. Ending a trilogy is rarely easy, but State Tectonics does what Older has always done best with her works, smashing together ideas about the future of politics with a medley of thriller styles to deliver an ample helping of thought-provoking nuance.

Older’s world is built on two simple premises. First, through a project called microdemocracy, the world has been subdivided into 100,000-person governing units known as centenals, and every citizen in the world has the right of migration to choose the government they want. This creates strange artifacts — for instance, in dense areas like New York City, citizens can switch governments from a corporate-backed libertarian paradise to a leftist environmental oasis in the space of a subway stop.

Second, to ensure that citizens can make the best choices for themselves, a global organization called Information (a hybrid Google, United Nations, and BBC) tirelessly works to provide objective information to citizens about politics and the world, verifying claims about everything from election promises to the taste of items on a restaurant menu.

Together, they allow Older to explore a world of information manipulation and electoral strategy while meditating on the meaning of objective truth. Across the trilogy, we follow a crew of Information staffers as they uncover political plots and intrigue around a series of global elections. This structure allows Older to create well-paced thrillers without losing the intellectual spirit of speculative fiction.

While in her last work Null States, the focus was on inequality and lack of access to information, in State Tectonics, Older interrogates the meaning of Information’s monopoly on … information itself. In this microdemocratic world, it is a crime to provide unverified information to people, and yet, Information hardly has infinite knowledge about the world. A shadowy group starts to purvey local information about cities and people outside the normal Information channels, and that raises profound questions — who ultimately “owns” reality? How do we decide what objective truth even is?

In the background of this central question is a trial for an Information staffer accused of the crime of algorithmic bias, of adjusting reality to suit her own ends. Sound familiar?

As a work of speculative fiction — particularly about a subject as complex as the future of democracy — State Tectonics is superlative. Older is striking in her frenetic ability to weave together idea after idea into vignettes that caused this reader to constantly stop and wander in thought. In just this book, we have discussions on the future of politics, mental health, infrastructure finance, transportation, food, nationalism, and identity politics. The dynamic range here is exhilarating.

Unfortunately, that enormous range forces Older to sacrifice depth, not only in the sophistication of some of these topics, which are often only conceived in slight brushstrokes, but also in the characters themselves. After three reasonably hefty books, I still don’t feel as if I truly know the characters I’ve spent so much time with. They are like friends in a transient city such as New York City, people to hang out with on weekends, but not worth a followup once they move on.

More pejoratively, the book feels constantly weighed down by extraneous details that at times can feel more like Wikipedia than assiduous worldbuilding. In this regard, Older has actually matured as a writer from her earlier works, as the detailed digressions are fewer and farther between, but they remain a distraction from her core plot and take time away from the needed work of fleshing out her characters further.

State Tectonics, like its earlier siblings, is the best and worst of fusion cuisine: the brilliant items on the menu can inspire us to think radically beyond our traditional categories and beliefs, but the vast majority of the dishes end up being mishmashes that are ultimately ephemeral and forgotten. The novel is brilliant in discoursing on the future of democracy, and if that is a topic of keen interest, few books will satisfy that urge like this one will.

Smart home makers hoard your data, but won’t say if the police come for it

A decade ago, it was almost inconceivable that nearly every household item could be hooked up to the internet. These days, it’s near impossible to avoid smart home gadgets, and they’re vacuuming up a ton of new data that we’d never normally think about.

Thermostats know the temperature of your house, and smart cameras and sensors know when someone’s walking around your home. Smart assistants know what you’re asking for, and smart doorbells know who’s coming and going. And thanks to the cloud, that data is available to you from anywhere – you can check in on your pets from your phone or make sure your robot vacuum cleaned the house.

Because the data is stored by or accessible to the smart home tech makers, law enforcement and government agencies have increasingly sought out data from the companies to solve crimes.

And device makers won’t say if your smart home gadgets have been used to spy on you.

For years, tech companies have published transparency reports — a semi-regular disclosure of the number of demands or requests a company gets from the government for user data. Google was first in 2010. Other tech companies followed in the wake of Edward Snowden’s revelations that the government had enlisted tech companies’ aid in spying on their users. Even telcos, implicated in wiretapping and turning over Americans’ phone records, began to publish their figures to try to rebuild their reputations.

As the smart home revolution began to thrive, police saw new opportunities to obtain data where they hadn’t before. Police sought Echo data from Amazon to help solve a murder. Fitbit data was used to charge a 90-year-old man with the murder of his stepdaughter. And recently, Nest was compelled to turn over surveillance footage that led to gang members pleading guilty to identity theft.

Yet Nest — a division of Google — is the only major smart home device maker that has published how many data demands it receives.

As first noted by Forbes last week, Nest’s little-known transparency report doesn’t reveal much — only that it’s turned over user data about 300 times since mid-2015 on over 500 Nest users. Nest also said it hasn’t to date received a secret order for user data on national security grounds, such as in cases of investigating terrorism or espionage. Nest’s transparency report is woefully vague compared to some of the more detailed reports by Apple, Google and Microsoft, which break out their data requests by lawful request, by region, and often by the kind of data that the government demands.

As Forbes said, “a smart home is a surveilled home.” But at what scale?

We asked some of the best-known smart home makers on the market if they plan on releasing a transparency report, or disclosing the number of demands they receive for data from their smart home tech.

For the most part, we received fairly dismal responses.

What the big four tech giants said:

Amazon did not respond to requests for comment when asked if it will break out the number of demands it receives for Echo data, but a spokesperson told me last year that while its reports include Echo data, it would not break out those figures.

Facebook said that its transparency report section will include “any requests related to Portal,” its new hardware screen with a camera and a microphone. Although the device is new, a spokesperson did not comment on whether the company will break out the hardware figures separately.

Google pointed us to Nest’s transparency report but did not comment on its own efforts in the hardware space — notably its Google Home products.

And Apple said that there’s no need to break out its smart home figures — such as its HomePod — because there would be nothing to report. The company said user requests made to HomePod are given a random identifier that cannot be tied to a person.

What the smaller but notable smart home players said:

August, a smart lock maker, said it “does not currently have a transparency report and we have never received any National Security Letters or orders for user content or non-content information under the Foreign Intelligence Surveillance Act (FISA),” but did not comment on the number of subpoenas, warrants and court orders it receives. “August does comply with all laws and when faced with a court order or warrant, we always analyze the request before responding,” a spokesperson said.

Roomba maker iRobot said it “has not received any demands from governments for customer data,” but wouldn’t say if it planned to issue a transparency report in the future.

Both Arlo, the former Netgear smart home division, and Signify, formerly Philips Lighting, said that they do not have transparency reports. Arlo didn’t comment on its future plans, and Signify said it has no plans to publish one. 

Ring, a smart doorbell and security device maker, did not answer our questions on why it doesn’t have a transparency report, but said it “will not release user information without a valid and binding legal demand properly served on us” and that Ring “objects to overbroad or otherwise inappropriate demands as a matter of course.” When pressed, a spokesperson said it plans to release a transparency report in the future, but did not say when.

Spokespeople for Honeywell and Canary — both of which have smart home security products — did not comment by our deadline.

And Samsung, a maker of smart sensors, trackers and internet-connected televisions and other appliances, did not respond to a request for comment.

Only Ecobee, a maker of smart switches and sensors, said it plans to publish its first transparency report “at the end of 2018.” A spokesperson confirmed that, “prior to 2018, Ecobee had not been requested nor required to disclose any data to government entities.”

All in all, that paints a fairly dire picture for anyone thinking that when the gadgets in your home aren’t working for you, they could be helping the government.

As helpful and useful as smart home gadgets can be, few fully understand the breadth of data that the devices collect — even when we’re not using them. Your smart TV may not have a camera to spy on you, but it knows what you’ve watched and when — which police used to secure a conviction of a sex offender. Even data from when a murder suspect pushed the button on his home alarm key fob can be enough to help convict someone of murder.

Two years ago, former U.S. director of national intelligence James Clapper said that the government was looking at smart home devices as a new foothold for intelligence agencies to conduct surveillance. And it’s only going to become more common as the number of internet-connected devices grows. Gartner said more than 20 billion devices will be connected to the internet by 2020.

As slim as the chances are that the government is spying on you through the internet-connected camera in your living room or your thermostat, it’s naive to think that it can’t.

But the smart home makers wouldn’t want you to know that. At least, most of them.

Facebook hires former UK Lib Dem leader, Nick Clegg, as global policy chief

Facebook has confirmed it has hired the former leader of the UK’s third largest political party — Nick Clegg of the political middle ground Liberal Democrats — to head up global policy and comms.

The news was reported earlier by the Financial Times.

Facebook confirmed to TechCrunch that Clegg’s title will be VP, global affairs and communications, and that he starts on Monday — and will be moving with his family to California in the New Year.

Former global policy and communications chief Elliot Schrage, who has been in the post for a decade, is staying on as an advisor, according to Facebook. In a post announcing Clegg’s hire, COO Sheryl Sandberg thanked Schrage for his “leadership, tenacity, and wise counsel – in good times and bad”.

Facebook told us that Sandberg and founder Mark Zuckerberg were both deeply involved in the hiring process, beginning discussions with Clegg over the summer — as fallout from the Cambridge Analytica data misuse scandal continued to rain down around it — and emphasizing they have already spent a lot of time with him.

Facebook also made a point of noting that Clegg is the most senior European politician to ever take up a senior executive leadership role in Silicon Valley. 

The hire certainly looks like big tech waking up to the fact it needs a far better relationship with European lawmakers.

In a post on Facebook announcing his new job, Clegg says as much, writing: “Having spoken at length to Mark and Sheryl over the last few months, I have been struck by their recognition that the company is on a journey which brings new responsibilities not only to the users of Facebook’s apps but to society at large. I hope I will be able to play a role in helping to navigate that journey.”

“Facebook, WhatsApp, Messenger, Oculus and Instagram are at the heart of so many people’s everyday lives – but also at the heart of some of the most complex and difficult questions we face as a society: the privacy of the individual; the integrity of our democratic process; the tensions between local cultures and the global internet; the balance between free speech and prohibited content; the power and concerns around artificial intelligence; and the wellbeing of our children,” he adds.

“I believe that Facebook must continue to play a role in finding answers to those questions – not by acting alone in Silicon Valley, but by working with people, organizations, governments and regulators around the world to ensure that technology is a force for good.”

In her note about Clegg’s hire, Sandberg lauds Clegg as “a thoughtful and gifted leader who… understands deeply the responsibilities we have to people who use our service around the world” — before also discussing the big challenges ahead.

“Our company is on a critical journey. The challenges we face are serious and clear and now more than ever we need new perspectives to help us through this time of change. The opportunities are clear too. Every day people use our apps to connect with family and friends and make a difference in their communities. If we can honor the trust they put in us and live up to our responsibilities, we can help more people use technology to do good,” she writes. “That’s what motivates our teams and from all my conversations with Nick, it’s clear that he believes in this as well. His experience and ability to work through complex issues will be invaluable in the years to come.”

One former Facebook policy staffer we spoke to for an insider perspective on Clegg’s hire couched it as a sign Facebook is finally taking Europe seriously — i.e. as a regulatory force with the ability to bring big tech to heel.

“When I started at fb there were two people in a Regus office doing EU policy,” the person told us, speaking on condition of anonymity. “Now they have an army, and they’re still hiring.”

In Europe, the region’s new data protection framework, GDPR, which came into force at the end of May, has put privacy and security at the top of the tech agenda. And more regulations are coming — with the EU’s data protection supervisor warning today that GDPR is not enough.

“The Facebook/Cambridge Analytica revelations are still under investigation in Europe and America, but they are only the tip of the iceberg, a sign of a much wider problem and a symptom of many more problems still unnoticed,” writes Giovanni Buttarelli in a blog post entitled: The urgent case for a new ePrivacy law.

“They didn’t take it seriously and they’re catching up now. I think it also just sends a strong signal that they’re not a U.S. centric company,” the former Facebooker added of the company’s attitude to EU policy, dating the dawning realization that a new approach was needed to around 2016.

Which was also the year that domestic election interference came home to roost for Zuckerberg, after Kremlin meddling in the US presidential elections.

So no more ‘pretty crazy ideas’ from Zuckerberg where politics is concerned — Nick Clegg instead.

For Brits, though, this is actually a pretty crazy idea, given Clegg is the awkwardly familiar face of middle-ground, middling-performance politics.

And, more importantly, the sacrificial lamb of political compromise, after his party got punished for its turn in coalition government with David Cameron’s Brexit triggering Conservatives.

Our ex-Facebooker source said they’d heard rumors linking the former Labour MP, David Miliband, and the Conservatives’ former chancellor, George Osborne, to the global policy position too.

Whatever the truth of those rumors, in the event Facebook went with Clegg’s third way — which of course meshes perfectly with the company’s desire to be a platform for all views; be that conservative, liberal and Holocaust denier too.

In Clegg it will have found a true believer that compromise can trump partisan tribalism.

Though Facebook’s business will probably test the limits of even Clegg’s famous powers of accommodation.

The current state of the Lib Dem political animal — a party with now just a handful of MPs left in the UK parliament — does also hold a cautionary message for Facebook’s mission to be all things to all men.

A target some less Machiavellian types might judge ‘mission impossible’.

Add to that Facebook’s now dire need to win back user trust — i.e. in the wake of a string of data scandals, such as the Cambridge Analytica affair (and indeed ongoing attempts by unknown forces to use its platform for voter manipulation) — and Clegg is rather an odd choice of hire, given he’s the man who led a political party that fatally burnt the trust of its core supporters, who punished it with near political oblivion at the ballot box.

Still, at least Clegg knows how to say sorry in a way that can be turned into a hip and shareable meme…

Take a video tour of Facebook’s election security war room

Beneath an American flag, 20 people packed tight into a beige conference room are Facebook’s, and so too the Internet’s, first line of defence for democracy. This is Facebook’s election security war room. Screens visualize influxes of foreign political content and voter suppression attempts as high-ranking team members from across divisions at Facebook, Instagram, and WhatsApp coordinate rapid responses. The hope is that through face-to-face, real-time collaboration in the war room, Facebook can speed up decision-making to minimize how misinformation influences how people vote.

In this video, TechCrunch takes you inside the war room at Facebook’s Menlo Park headquarters. Bustling with action beneath the glow of the threat dashboards, you see what should have existed two years ago. During the U.S. presidential election, Russian government trolls and profit-driven fake news outlets polluted the social network with polarizing propaganda. Now Facebook hopes to avoid a repeat in the upcoming US midterms as well as elections across the globe. And to win the hearts, minds, and trust of the public, it’s being more transparent about its strategy.

“It’s not something you can scale to solve with just humans. And it’s not something you can solve with just technology either,” says Facebook’s head of cybersecurity Nathaniel Gleicher. “I think artificial intelligence is a critical component of a solution and humans are a critical component of a solution.” The two approaches combine in the war room.

Who’s In The War Room And How They Fight Back

  • Engineers – Facebook’s coders develop the dashboards that monitor political content, hate speech, user reports of potential false news, voter suppression content, and more. They build in alarms that warn the team of anomalies and spikes in the data (an illustrative sketch of such an alarm follows this list), triggering investigation by…
  • Data Scientists – Once a threat is detected and visualized on the threat boards, these team members dig into who’s behind an attack, and the web of accounts executing the misinformation campaign.
  • Operations Specialists – They determine if and how the attacks violate Facebook’s community standards. If a violation is confirmed, they take down the appropriate accounts and content wherever they appear on the network.
  • Threat Intelligence Researchers and Investigators – These seasoned cybersecurity professionals have tons of experience in deciphering the sophisticated tactics used by Facebook’s most powerful adversaries, including state actors. They also help Facebook run war games and drills to practice defense against last-minute election day attacks.
  • Instagram and WhatsApp Leaders – Facebook’s acquisitions must also be protected, so representatives from those teams join the war room to coordinate monitoring and takedowns across the company’s family of apps. Together with Facebook’s high-ups, they dispense info about election protection to Facebook’s 20,000 security staffers.
  • Local Experts – Facebook now starts working to defend an election 1.5 to 2 years ahead of time. To provide maximum context for decisions, local experts from countries with upcoming elections join to bring knowledge of cultural norms and idiosyncrasies.
  • Policy Makers – To keep Facebook’s rules about what’s allowed up to date to bar the latest election interference tactics, legal and policy team members join to turn responses into process.
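As a purely illustrative aside, the kind of alarm described in the Engineers item above boils down to watching a metric and flagging when it jumps far above its recent baseline. The sketch below is a toy example in Python, not Facebook’s actual tooling; the metric, the 48-hour window and the z-score threshold are all hypothetical.

```python
# Toy spike alarm: flag when a monitored hourly count (e.g. user reports of
# suspected false news) jumps well above its recent baseline.
# Purely illustrative -- NOT Facebook's actual system; the metric, window
# size and threshold here are hypothetical.
from collections import deque
from statistics import mean, stdev


def spike_alarm(history: deque, new_value: float, threshold: float = 3.0) -> bool:
    """Return True if new_value looks like an anomalous spike versus recent history."""
    if len(history) >= 10:  # wait for a minimal baseline before alerting
        baseline = mean(history)
        spread = stdev(history) or 1.0  # guard against a perfectly flat series
        if (new_value - baseline) / spread > threshold:
            return True  # don't fold the spike into the baseline
    history.append(new_value)
    return False


# Example: hourly report counts for a hypothetical content category
counts = deque(maxlen=48)  # rolling 48-hour baseline window
for hour, reports in enumerate([12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 90]):
    if spike_alarm(counts, reports):
        print(f"hour {hour}: spike detected ({reports} reports), escalating for investigation")
```

In practice the interesting work happens after the alarm fires, when the data scientists and investigators described above dig into who is behind the spike, but a simple threshold like this shows how a dashboard turns raw counts into a prompt for humans to act.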

Beyond fellow Facebook employees, the team works with external government, security, and tech industry partners. Facebook routinely cooperates with other social networks to pass each other information and synchronize take-downs. Facebook has to get used to this. Following the midterms, it will evaluate whether it needs to constantly operate a war room. But after it was caught by surprise in 2016, Facebook accepts that it can never turn a blind eye again.

Facebook’s director of global politics and government outreach Katie Harbath concludes: “This is our new normal.”

UK health minister sets out tech-first vision for future care provision

The UK’s still fairly new-in-post minister for health, Matt Hancock, quickly made technology one of his stated priorities. And today he’s put more meat on the bones of his thinking, setting out a vision for transforming, root and branch, how the country’s National Health Service operates to accommodate the plugging in of “healthtech” apps and services — to support tech-enabled “preventative, predictive and personalised care”.

How such a major IT upgrade program would be paid for is not clearly set out in the policy document. But the government writes that it is “committed to working with partners” to deliver on its grand vision.

“Our ultimate objective is the provision of better care and improved health outcomes for people in England,” Hancock writes in the ‘future of healthcare’ policy document. “But this cannot be done without a clear focus on improving the technology used by the 1.4 million NHS staff, 1.5 million-strong social care workforce and those many different groups who deliver and plan health and care services for the public.”

The minister is proposing that NHS digital services and IT systems will have to meet “a clear set of open standards” to ensure interoperability and updatability.

Meaning that existing systems that don’t meet the incoming standards will need to be phased out and ripped out over time.

The tech itself that NHS trusts and clinical commissioners can choose to buy will not be imposed upon them from above. Rather the stated intent is to encourage “competition on user experience and better tools for everyone”, says Hancock.

In a statement, the health and social care secretary said: “The tech revolution is coming to the NHS. These robust standards will ensure that every part of the NHS can use the best technology to improve patient safety, reduce delays and speed up appointments.

“A modern technical architecture for the health and care service has huge potential to deliver better services and to unlock our innovations. We want this approach to empower the country’s best innovators — inside and outside the NHS — and we want to hear from staff, experts and suppliers to ensure our standards will deliver the most advanced health and care service in the world.”

The four stated priorities for achieving the planned transformation are infrastructure (principally but not only related to patient records); digital services; innovation; and skills and culture:

“Our technology infrastructure should allow systems to talk to each other safely and securely, using open standards for data and interoperability so people have confidence that their data is up to date and in the right place, and health and care professionals have access to the information they need to provide care,” the document notes.  

The ‘tech for health’ vision — which lacks any kind of timeframe whatsoever — loops in an assortment of tech-fuelled case studies, from applying AI for faster diagnoses (as DeepMind has been trying) to Amazon Alexa skills being used as a memory aid for social care. And envisages, as a future success metric, that “a healthy person can stay healthy and active (using wearables, diet-tracking apps) and can co-ordinate with their GP or other health professional about targeted preventative care”.

The ‘techiness’ of the vision is unsurprising, given Hancock was previously the UK’s digital minister and has made no secret of his love of apps. He even had an app of his own developed to connect with his constituents (aka the eponymous Matt Hancock App) — albeit running into some controversy over problems with the app’s privacy policy.

Hancock has also been a loud advocate for (and a personal user of) London-based digital healthcare startup Babylon Health, whose app initially included an AI diagnostic chatbot, in addition to offering video and text consultations with (human) doctors and specialists.

The company has partnered with the NHS for a triage service, and to offer a digital alternative to a traditional primary care service via an app that offers remote consultations (called GP at Hand).

But the app has also faced criticism from healthcare professionals. The AI chatbot component specifically has been attacked by doctors for offering incorrect and potentially dangerous diagnosis advice to patients. This summer Babylon pulled the AI element out of the app, leaving the bot to serve unintelligent triage advice — such as by suggesting people go straight to A&E even with just a headache. (Thereby, said its critics, piling pressure on already over-stretched NHS hospital services.)

All of which underlines some of the pitfalls of scrambling too quickly to squash innovation and healthcare together.

The demographic cherrypicking that can come inherently bundled with digital healthcare apps which are most likely to appeal to younger users (who have fewer complex health problems) is another key criticism of some of these shiny, modern services — with the argument being they impact non-digital NHS primary care services by saddling the bricks-and-mortar bits with more older, sicker patients to care for while the apps siphon off (and monetize) mostly the well, tech-savvy young.

Hancock’s pro-tech vision for upgrading the UK’s healthcare service doesn’t really engage with that critique of modern tech services having a potentially unequal impact on a free-at-the-point-of-use, taxpayer-funded health service.

Rather, in a section on “inclusion”, the vision document talks about the need to “design for, and with, people with different physical, mental health, social, cultural and learning needs, and for people with low digital literacy or those less able to access technology”. But without saying exactly how that might be achieved, given the overarching thrust being to reconfigure the NHS to be mobile-first, tech-enabled and tech-fuelled. 

“Different people may need different services and some people will never use digital services themselves directly but will benefit from others using digital services and freeing resources to help them,” runs the patter. “We must acknowledge that those with the greatest health needs are also the most at risk of being left behind and build digital services with this in mind, ensuring the highest levels of accessibility wherever possible.”

So the risk is being acknowledged — yet in a manner and context that suggests it’s simultaneously being dismissed, or elbowed out of the way, in the push for technology-enabled progress.

Hancock also appears willing to tolerate some iterative tech missteps — again towards a ‘greater good’ of modernizing the tech used to deliver NHS services so it can be continuously responsive to user needs, via updates and modular plugins, all greased by patient data being made reliably available via the envisaged transformation.

Though there is a bit of a cautionary caveat for healthcare startups like Babylon too. At least if they make actual clinical claims, with the document noting that: “We must be careful to ensure that we follow clinical trials where the new technology is clinical but also to ensure we have appropriate assurance processes that recognise when an innovation can be adopted faster. We must learn to adopt, iterate and continuously improve innovations, and support those who are working this way.”

Another more obvious contradiction is Hancock’s claim that “privacy and security” is one of four guiding principles for the vision (alongside “user need; interoperability and openness; and inclusion”), yet this is rubbing up against active engagement with the idea of sensitive social care data being processed by and hosted by a commercial ecommerce giant like Amazon, for example.

The need for patient trust and buy in gets more than passing mention, though. And there’s a pledge to introduce “a healthtech regulatory sandbox working with the ICO, National Data Guardian, NICE and other regulators” to provide support and an easier entry route for developers wanting to build health apps to sell in to the NHS, with the government also saying it will take other steps to “simplify the landscape for innovators”.

“If data is to be used effectively to support better health and care outcomes, it is essential that the public has trust and confidence in us and can see robust data governance, strong safeguards and strict penalties in place for misuse,” the policy document notes. 

Balancing support for data-based digital innovation, including where data-thirsty technologies like AI are concerned, with respect for the privacy of people’s highly sensitive health data will be a tricky act for the government to steer, though. Perhaps especially given Hancock is so keenly rushing to embrace the market.

“We need to build nationally only those few services that the market can’t provide and that must be done once and for everyone, such as a secure login and granular access to date,” runs the ministerial line. “This may mean some programmes need to be stopped.”

Although he also writes that there is a “huge role” for the NHS, care providers and commissioners to “develop solutions and co-create them with industry”.

Some of our user needs are unique, like carers in a particular geographical location or patients using assistive technologies. Or sometimes we can beat something to market because we know what we need and are motivated to solve the problem first.

“In those circumstances where industry won’t see the economies of scale they need to invest, we must be empowered to build our own digital services, often running on our data and networks. We will do that according to the government’s Digital Service Standard, and within the minimal rules we set for our infrastructure.”

“We also want to reassure those who are currently building products that we have no intention or desire to close off the market – in fact we want exactly the opposite,” the document also notes. “We want to back innovations that can improve our health and care system, wherever they can be found – and we know that some of the best innovations are being driven by clinicians and staff up and down the country.”

Among the commercial entities currently building products targeted at the NHS is Google -owned DeepMind, which got embroiled in a privacy controversy related to a data governance failure by the NHS Trust it worked with to co-develop an app for the early detection of a kidney condition.

DeepMind’s health data ambitions expand beyond building alert apps or even crafting diagnostic AIs to also wanting to build out and own healthcare app delivery infrastructure (aka, a fast healthcare interoperability resource, or FHIR) — which, in the aforementioned project, was bundled into the app contract with the Royal Free NHS Trust, locking the trust into sending data to DeepMind’s servers by prohibiting it from connecting to other FHIR servers. So not at all a model vision of interoperability.

Earlier this year DeepMind’s own independent reviewer panel warned there was a risk of the company gaining excessive monopoly power. And Hancock’s vision for health tech seems to be proposing to outlaw such contractual lock ins. Though it remains to be seen whether the guiding principle will stand up to the inexorable tech industry lobbying.

We will set national open standards for data, interoperability, privacy and confidentiality, real-time data access, cyber security and access rules,” the vision grandly envisages.

Open standards are not an abstract technical goal. They permit interoperability between different regions and systems but they also, crucially, permit a modular approach to IT in the NHS, where tools can be pulled and replaced with better alternatives as vendors develop better products. This, in turn, will help produce market conditions that drive innovation, in an ecosystem where developers and vendors continuously compete on quality to fill each niche, rather than capturing users.”

Responding to Hancock’s health tech plan, Sam Smith, coordinator of patient data privacy advocacy group medConfidential, told us: “There’s not much detail in here. It’s not so much ‘jam tomorrow’, as ‘jam… sometime’ — there’s no timeline, and jam gets pretty rancid after not very long. He says “these are standards”, but they’re just a vision for standards — all the hard work is left to be done.”

On the privacy plus AI front, Smith also picks up on Hancock’s vision including suggestive support for setting up “data trusts to facilitate the ethical sharing of data between organisations”, with the document reiterating the government’s plan to launch a pilot later this year. 

“Hancock says “we are supportive” of stripping the NHS of its role in oversight of commercial exploitation of data. Who is the “we” in that as it should be a cause for widespread concern. If Matt thinks the NHS will never get data right, what does he know that the public don’t?” said Smith on this.

He also points out at previous grand scheme attempts to overhaul NHS IT — most notably the uncompleted NHS National Programme for IT, which in the early 2000s tried and failed to deliver a top-down digitization of the service — taking a decade and sinking billions in the process.

“The widely criticised National Programme for IT also started out with similar lofty vision,” he noted. “This is yet another political piece saying what “good looks like”, but none of the success criteria are about patients getting better care from the NHS. For that, better technology has to be delivered on a ward, and in a GP surgery, and the many other places that the NHS and social care touch. Reforming procurement and standards do matter, and will help, but it helps in the same way a good accountant helps — and that’s not by having a vision of better accounting.”

On the vision’s timeframe, a Department of Health spokesman told us: “Today marks the beginning of a conversation between technology experts across the NHS, regulatory bodies and industry as we refine the standards and consider timeframes and details. The iterated standards document will be published in December once we receive feedback and the mandate will be rolled out gradually.

“We have been clear that we will phase out any system which does not meet these standards, will not procure systems which do not comply and will look to end contracts with suppliers who do not meet the standards.”

UK health minister sets out tech-first vision for future care provision

The UK’s still fairly new in post minister for health, Matt Hancock, quickly made technology one of his stated priorities. And today he’s put more meat on the bones of his thinking, setting out a vision for transforming, root and branch, how the country’s National Health Service operates to accommodate the plugging in of “healthtech” […]

The UK’s still fairly new-in-post minister for health, Matt Hancock, quickly made technology one of his stated priorities. And today he’s put more meat on the bones of his thinking, setting out a vision for transforming, root and branch, how the country’s National Health Service operates to accommodate the plugging in of “healthtech” apps and services — to support tech-enabled “preventative, predictive and personalised care”.

How such a major IT upgrade program would be paid for is not clearly set out in the policy document. But the government writes that it is “committed to working with partners” to deliver on its grand vision.

“Our ultimate objective is the provision of better care and improved health outcomes for people in England,” Hancock writes in the ‘future of healthcare’ policy document. “But this cannot be done without a clear focus on improving the technology used by the 1.4 million NHS staff, 1.5 million-strong social care workforce and those many different groups who deliver and plan health and care services for the public.”

The minister is proposing that NHS digital services and IT systems will have to meet “a clear set of open standards” to ensure interoperability and updatability.

Meaning that existing systems that don’t meet the incoming standards will need to be phased out and ripped out over time.

The tech itself will not be imposed on NHS trusts and clinical commissioners from above; they will be free to choose what to buy. The stated intent, says Hancock, is to encourage “competition on user experience and better tools for everyone”.

In a statement, the health and social care secretary said: “The tech revolution is coming to the NHS. These robust standards will ensure that every part of the NHS can use the best technology to improve patient safety, reduce delays and speed up appointments.

“A modern technical architecture for the health and care service has huge potential to deliver better services and to unlock our innovations. We want this approach to empower the country’s best innovators — inside and outside the NHS — and we want to hear from staff, experts and suppliers to ensure our standards will deliver the most advanced health and care service in the world.”

The four stated priorities for achieving the planned transformation are infrastructure (principally but not only related to patient records); digital services; innovation; and skills and culture:

“Our technology infrastructure should allow systems to talk to each other safely and securely, using open standards for data and interoperability so people have confidence that their data is up to date and in the right place, and health and care professionals have access to the information they need to provide care,” the document notes.  

The ‘tech for health’ vision — which lacks any kind of timeframe whatsoever — loops in an assortment of tech-fuelled case studies, from applying AI for faster diagnoses (as DeepMind has been trying) to Amazon Alexa skills being used as a memory aid for social care. It also envisages, as a future success metric, that “a healthy person can stay healthy and active (using wearables, diet-tracking apps) and can co-ordinate with their GP or other health professional about targeted preventative care”.

The ‘techiness’ of the vision is unsurprising, given Hancock was previously the UK’s digital minister and has made no secret of his love of apps. He even had an app of his own developed to connect with his constituents (the eponymous Matt Hancock App), albeit one that ran into some controversy over problems with its privacy policy.

Hancock has also been a loud advocate for (and a personal user of) London-based digital healthcare startup Babylon Health, whose app initially included an AI diagnostic chatbot, in addition to offering video and text consultations with (human) doctors and specialists.

The company has partnered with the NHS to provide a triage service, and to offer a digital alternative to a traditional primary care service via an app, called GP at Hand, that offers remote consultations.

But the app has also faced criticism from healthcare professionals. The AI chatbot component specifically has been attacked by doctors for offering incorrect and potentially dangerous diagnosis advice to patients. This summer Babylon pulled the AI element out of the app, leaving the bot to serve unintelligent triage advice — such as by suggesting people go straight to A&E even with just a headache. (Thereby, said its critics, piling pressure on already over-stretched NHS hospital services.)

All of which underlines some of the pitfalls of rushing to mash innovation and healthcare together.

Another key criticism of these shiny, modern services is the demographic cherrypicking that can come bundled with digital healthcare apps, which are most likely to appeal to younger users (who have fewer complex health problems). The argument is that they have a knock-on impact on non-digital NHS primary care services, saddling the bricks-and-mortar parts of the system with more older, sicker patients to care for, while the apps siphon off (and monetize) the mostly well, tech-savvy young.

Hancock’s pro-tech vision for upgrading the UK’s healthcare service doesn’t really engage with that critique of modern tech services having a potentially unequal impact on a free-at-the-point-of-use, taxpayer-funded health service.

Rather, in a section on “inclusion”, the vision document talks about the need to “design for, and with, people with different physical, mental health, social, cultural and learning needs, and for people with low digital literacy or those less able to access technology”. But it does not say exactly how that might be achieved, given that the overarching thrust is to reconfigure the NHS to be mobile-first, tech-enabled and tech-fuelled.

“Different people may need different services and some people will never use digital services themselves directly but will benefit from others using digital services and freeing resources to help them,” runs the patter. “We must acknowledge that those with the greatest health needs are also the most at risk of being left behind and build digital services with this in mind, ensuring the highest levels of accessibility wherever possible.”

So the risk is being acknowledged — yet in a manner and context that suggests it’s simultaneously being dismissed, or elbowed out of the way, in the push for technology-enabled progress.

Hancock also appears willing to tolerate some iterative tech missteps — again towards a ‘greater good’ of modernizing the tech used to deliver NHS services so it can be continuously responsive to user needs, via updates and modular plugins, all greased by patient data being made reliably available via the envisaged transformation.

Though there is a note of caution for healthcare startups like Babylon too, at least if they make actual clinical claims, with the document noting: “We must be careful to ensure that we follow clinical trials where the new technology is clinical but also to ensure we have appropriate assurance processes that recognise when an innovation can be adopted faster. We must learn to adopt, iterate and continuously improve innovations, and support those who are working this way.”

Another, more obvious contradiction is that Hancock names “privacy and security” as one of four guiding principles for the vision (alongside “user need; interoperability and openness; and inclusion”), yet that principle rubs up against the document’s active engagement with the idea of sensitive social care data being processed and hosted by a commercial ecommerce giant like Amazon, for example.

The need for patient trust and buy-in gets more than passing mention, though. And there’s a pledge to introduce “a healthtech regulatory sandbox working with the ICO, National Data Guardian, NICE and other regulators” to provide support and an easier entry route for developers wanting to build health apps to sell into the NHS, with the government also saying it will take other steps to “simplify the landscape for innovators”.

“If data is to be used effectively to support better health and care outcomes, it is essential that the public has trust and confidence in us and can see robust data governance, strong safeguards and strict penalties in place for misuse,” the policy document notes. 

Balancing support for data-based digital innovation, including where data-thirsty technologies like AI are concerned, against respect for the privacy of people’s highly sensitive health data will be a tricky act for the government to pull off, though, perhaps especially given how keenly Hancock is rushing to embrace the market.

“We need to build nationally only those few services that the market can’t provide and that must be done once and for everyone, such as a secure login and granular access to data,” runs the ministerial line. “This may mean some programmes need to be stopped.”

Although he also writes that there is a “huge role” for the NHS, care providers and commissioners to “develop solutions and co-create them with industry”.

“Some of our user needs are unique, like carers in a particular geographical location or patients using assistive technologies. Or sometimes we can beat something to market because we know what we need and are motivated to solve the problem first.

“In those circumstances where industry won’t see the economies of scale they need to invest, we must be empowered to build our own digital services, often running on our data and networks. We will do that according to the government’s Digital Service Standard, and within the minimal rules we set for our infrastructure.”

“We also want to reassure those who are currently building products that we have no intention or desire to close off the market – in fact we want exactly the opposite,” the document also notes. “We want to back innovations that can improve our health and care system, wherever they can be found – and we know that some of the best innovations are being driven by clinicians and staff up and down the country.”

Among the commercial entities currently building products targeted at the NHS is Google-owned DeepMind, which got embroiled in a privacy controversy related to a data governance failure by the NHS trust it worked with to co-develop an app for the early detection of a kidney condition.

DeepMind’s health data ambitions extend beyond building alert apps, or even crafting diagnostic AIs, to building out and owning healthcare app delivery infrastructure based on the Fast Healthcare Interoperability Resources (FHIR) standard. In the aforementioned project, that infrastructure was bundled into the app contract with the Royal Free NHS Trust, locking the trust into sending data to DeepMind’s servers by prohibiting it from connecting to other FHIR servers. Not a model vision of interoperability, then.
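
For context, FHIR is an open HL7 specification that defines common healthcare resources (Patient, Observation and so on) plus a RESTful API for reading and writing them, which is what should make conformant servers interchangeable. As a rough, minimal sketch of what a client reading a Patient record over that API looks like (the base URL below is hypothetical, not a real NHS or DeepMind endpoint):

```python
# Minimal illustration of FHIR's RESTful interoperability model.
# The base URL is a placeholder, not a real NHS or DeepMind endpoint;
# any FHIR-conformant server exposes the same resource paths.
import requests

FHIR_BASE = "https://fhir.example.nhs.uk/R4"  # hypothetical server

def fetch_patient(patient_id: str) -> dict:
    """Read a single Patient resource as JSON from a FHIR server."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Because the API is standardised, pointing the client at a different
# conformant vendor's server is a one-line configuration change, which is
# exactly the kind of interoperability a contractual lock-in undercuts.
```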

Earlier this year DeepMind’s own independent reviewer panel warned there was a risk of the company gaining excessive monopoly power. And Hancock’s vision for health tech seems to be proposing to outlaw such contractual lock-ins. Though it remains to be seen whether the guiding principle will stand up to the inexorable tech industry lobbying.

“We will set national open standards for data, interoperability, privacy and confidentiality, real-time data access, cyber security and access rules,” the vision grandly envisages.

“Open standards are not an abstract technical goal. They permit interoperability between different regions and systems but they also, crucially, permit a modular approach to IT in the NHS, where tools can be pulled and replaced with better alternatives as vendors develop better products. This, in turn, will help produce market conditions that drive innovation, in an ecosystem where developers and vendors continuously compete on quality to fill each niche, rather than capturing users.”

Responding to Hancock’s health tech plan, Sam Smith, coordinator of patient data privacy advocacy group medConfidential, told us: “There’s not much detail in here. It’s not so much ‘jam tomorrow’, as ‘jam… sometime’ — there’s no timeline, and jam gets pretty rancid after not very long. He says “these are standards”, but they’re just a vision for standards — all the hard work is left to be done.”

On the privacy plus AI front, Smith also picks up on Hancock’s vision including suggestive support for setting up “data trusts to facilitate the ethical sharing of data between organisations”, with the document reiterating the government’s plan to launch a pilot later this year. 

“Hancock says ‘we are supportive’ of stripping the NHS of its role in oversight of commercial exploitation of data. Who is the ‘we’ in that? It should be a cause for widespread concern. If Matt thinks the NHS will never get data right, what does he know that the public don’t?” said Smith.

He also points to previous grand attempts to overhaul NHS IT — most notably the uncompleted NHS National Programme for IT, which in the early 2000s tried and failed to deliver a top-down digitization of the service, taking a decade and sinking billions in the process.

“The widely criticised National Programme for IT also started out with similar lofty vision,” he noted. “This is yet another political piece saying what “good looks like”, but none of the success criteria are about patients getting better care from the NHS. For that, better technology has to be delivered on a ward, and in a GP surgery, and the many other places that the NHS and social care touch. Reforming procurement and standards do matter, and will help, but it helps in the same way a good accountant helps — and that’s not by having a vision of better accounting.”

On the vision’s timeframe, a Department of Health spokesman told us: “Today marks the beginning of a conversation between technology experts across the NHS, regulatory bodies and industry as we refine the standards and consider timeframes and details. The iterated standards document will be published in December once we receive feedback and the mandate will be rolled out gradually.

“We have been clear that we will phase out any system which does not meet these standards, will not procure systems which do not comply and will look to end contracts with suppliers who do not meet the standards.”

Jeff Bezos is just fine taking the Pentagon’s $10B JEDI cloud contract

Some tech companies might have a problem taking money from the Department of Defense, but Amazon isn’t one of them, as CEO Jeff Bezos made clear today at the Wired25 conference. Just last week, Google pulled out of the running for the Pentagon’s $10 billion, 10-year JEDI cloud contract, but Bezos suggested that he was happy […]

Some tech companies might have a problem taking money from the Department of Defense, but Amazon isn’t one of them, as CEO Jeff Bezos made clear today at the Wired25 conference. Just last week, Google pulled out of the running for the Pentagon’s $10 billion, 10-year JEDI cloud contract, but Bezos suggested that he was happy to take the government’s money.

Bezos has been surprisingly quiet about the contract up until now, but his company has certainly attracted plenty of attention from the other companies competing for the JEDI deal. Just last week IBM filed a formal protest with the Government Accountability Office claiming that the contract was stacked in favor of one vendor. And while IBM didn’t name that vendor directly, the clear implication was that it meant Amazon.

Last summer Oracle filed a protest of its own, likewise complaining that it believed the government had set up the contract to favor Amazon, a charge Pentagon spokesperson Heather Babb denied. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said last month.

While competitors are clearly worried about Amazon, which has a substantial lead in the cloud infrastructure market, the company itself has kept quiet on the deal until now. Bezos framed his company’s support in terms of patriotism and leadership.

“Sometimes one of the jobs of the senior leadership team is to make the right decision, even when it’s unpopular. And if big tech companies are going to turn their back on the US Department of Defense, this country is going to be in trouble,” he said.

“I know everyone is conflicted about the current politics in this country, but this country is a gem,” he added.

While Google tried to frame its decision as taking a principled stand against misuse of technology by the government, Bezos chose another tack, stating that all technology can be used for good or ill. “Technologies are always two-sided. You know there are ways they can be misused as well as used, and this isn’t new,” Bezos told Wired25.

He’s not wrong, of course, but it’s hard not to look at the size of the contract and see it as purely a business decision on his part. Amazon is as hot for that $10 billion contract as any of its competitors. What’s different in this talk is that Bezos made it sound like a purely patriotic decision, rather than an economic one.

The Pentagon’s JEDI contract could have a value of up to $10 billion with a maximum length of 10 years. The contract is framed as a two-year base deal with two three-year options and a final two-year option (2 + 3 + 3 + 2 = 10 years). The DOD can opt out before exercising any of the options.

Bidding for the contract closed last Friday. The DOD is expected to choose the winning vendor next April.

Ahead of midterm elections, Facebook expands ban on posts aimed at voter suppression

Facebook is expanding its ban on false and misleading posts that aim to deter citizens from voting in the upcoming midterm elections. The social media giant is adding two more categories of false information to its existing policy, which it introduced in 2016, in an effort to counter new types of abuse. Facebook already removes verifiably […]

Facebook is expanding its ban on false and misleading posts that aim to deter citizens from voting in the upcoming midterm elections.

The social media giant is adding two more categories of false information to its existing policy, which it introduced in 2016, in an effort to counter new types of abuse.

Facebook already removes verifiably false posts about the dates, times and locations of polling stations. It will now also remove false posts that misrepresent methods of voting (such as claims that people can vote by phone or text message), as well as posts that aim to exclude portions of the population from voting, for example with false claims about eligibility based on a voter’s age.

But other posts that can’t be immediately verified will be sent to the company’s fact checkers for review.

Facebook’s public policy manager Jessica Leinwand said in a blog post announcing the changes that users will also be given a new reporting option to flag false posts.

The expanded policy is part of the company’s ongoing work to counter misleading or maliciously incorrect posts that try to stop voters from casting their ballots, which could alter the outcome of a political race.

The ban comes into effect less than a month before the U.S. midterm elections, and follows heavy criticism from lawmakers that Facebook has not done enough to prevent election meddling and misinformation campaigns on its site. Facebook has largely shied away from banning the spread of deliberately false news and information, including about candidates and other political issues, amid concerns that the platform would be accused of stifling free speech and expression.

But the company didn’t have much room to maneuver after a prominent Democratic senator challenged Facebook’s chief operating officer Sheryl Sandberg during a congressional hearing about how the company planned to prevent content that suppresses votes.

During that hearing, Sandberg admitted the company could have done more to prevent the spread of false news on its platform, but argued that U.S. intelligence agencies could have helped.