Y Combinator invests in non-invasive breast cancer screening bra EVA


According to a report by the American Cancer Society, an estimated 266,120 women will be newly diagnosed with breast cancer in the United States this year and (according to a 2016 estimate) can expect to pay between $60,000 and $134,000 on average for treatment and care. But, after hundreds of thousands of dollars and non-quantifiable emotional stress for them and their families, the American Cancer Society still estimates 40,920 women will lose their battle to the disease this year.

Worldwide, roughly 1.7 million women will be diagnosed with the disease yearly, according to a 2012 estimate by The World Cancer Research Fund International.

While these numbers are stark, they do little to fully capture just how devastating a breast cancer diagnosis is for women and their loved ones. This is a feeling that Higia Technologies’ co-founder and CEO Julián Ríos Cantú is unfortunately very familiar with.

“My mom is a two-time breast cancer survivor,” Cantú told TechCrunch. “The first time she was diagnosed I was eight years old.”

Cantú says that his mother’s second diagnosis was originally missed through standard screenings because her high breast density obscured the tumors from the X-ray. As a result, she lost both of her breasts, but has since fully recovered.

“At that moment I realized that if that was the case for a woman with private insurance and a prevention mindset, then for most women in developing countries, like Mexico where we’re from, the outcome could’ve not been a mastectomy but death,” said Cantú.

Following his mother’s experience, Cantú resolved to develop a way to help women identify breast abnormalities and cancers early, when the likelihood of survival is highest.

To do this, at the age of 18 Cantú designed EVA — a bio-sensing bra insert that uses thermal sensing and artificial intelligence to identify abnormal temperatures in the breast that can correlate to tumor growth. Cantú says that EVA is not only an easy tool for self-screening but also fills in gaps in current screening technology.

Today, women have fairly limited options when it comes to breast cancer screening. They can opt for a breast ultrasound (which has lower specificity than other options) or a breast MRI (which has higher associated costs), but the standard option is a mammogram every one or two years for women 45 and older. This method requires a visit to a doctor, manual manipulation of the breasts by a technologist and exposure to low levels of radiation for an X-ray scan of the breast tissue.

While this method is relatively reliable, there are still crucial shortcomings, Higia Technologies’ medical adviser Richard Kaszynski, M.D., Ph.D., told TechCrunch.

“We need to identify a real-world solution to diagnosing breast cancer earlier,” said Dr. Kaszynski. “It’s always a trade-off when we’re talking about mammography because you have the radiation exposure, discomfort and anxiety in regards to exposing yourself to a third-party.”

Dr. Kaszynski went on to say that these mammograms, performed every year or two, also leave a gap in care in which interval cancers (cancers that begin to take hold between screenings) have time to grow unhindered.

Additionally, Dr. Kaszynski says mammograms are not highly sensitive when it comes to detecting tumors in dense breast tissue, like that of Cantú’s mom. Dense breast tissue, which is more common in younger women and is present in 40 percent of women globally and 80 percent of Asian women, can mask the presence of tumors in the breast from mammograms.

Through its use of non-invasive thermal sensors, EVA can collect thermal data across a variety of breast densities, enabling women of all ages to perform breast examinations more easily and more frequently.

Here’s how it works:

To start, the user inserts the thermal sensing cups (which come in three standard sizes, ranging from A to D) into a sports bra, opens the companion EVA Health App, follows the instructions and waits 60 minutes while the cups collect thermal data. From there, EVA sends the data via Bluetooth to the app, where an AI analyzes the results and provides the user with an evaluation. If EVA believes the user may have an abnormality that puts them at risk, the app will recommend follow-up steps for further screening with a healthcare professional.
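
The screening flow described above can be sketched in code. This is a schematic illustration only: EVA’s actual model, features and thresholds are not public, so every function name and number below is an assumption.

```python
# Schematic sketch of the screening flow described above. EVA's real
# analysis is not public; everything here is illustrative.

def summarize(readings):
    """Reduce an hour of (left, right) temperature samples in Celsius
    to a single feature: the maximum left/right asymmetry."""
    return max(abs(left - right) for left, right in readings)

def evaluate(readings, asymmetry_threshold=1.0):
    """Return a recommendation from paired per-minute thermal samples."""
    if summarize(readings) >= asymmetry_threshold:
        return "follow up with a healthcare professional"
    return "no abnormality detected"

# Simulated hour of per-minute samples: a persistent hot spot on the left
samples = [(34.1, 34.0)] * 55 + [(35.6, 34.1)] * 5
print(evaluate(samples))  # follow up with a healthcare professional
```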

While sacrificing your personal health data to the whims of an AI might seem like a scary (and dangerous, if the device were to be hacked) idea to some, Cantú says Higia Technologies has taken steps to protect its users’ data, including advanced encryption of its server and a HIPAA-compliant privacy infrastructure.

So far, EVA has undergone clinical trials in Mexico, in which the device showed 87.9 percent sensitivity and 81.7 percent specificity. In Mexico, the company has already sold 5,000 devices and plans to begin shipping the first several hundred by October of this year.
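
For context on those trial figures: sensitivity is the share of true cancers a screen flags, and specificity is the share of healthy cases it correctly clears. The counts below are invented purely to illustrate the arithmetic; they are not Higia’s trial data.

```python
# Sensitivity and specificity from a confusion matrix.
# The counts are illustrative, not Higia's trial data.
def sensitivity(true_pos, false_neg):
    """Fraction of actual positives correctly flagged."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of actual negatives correctly cleared."""
    return true_neg / (true_neg + false_pos)

# E.g. flagging 80 of 91 cancers and clearing 85 of 104 healthy cases:
print(round(100 * sensitivity(80, 11), 1))  # 87.9
print(round(100 * specificity(85, 19), 1))  # 81.7
```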

And the momentum for EVA is only increasing. In 2017, Cantú was awarded Mexico’s Presidential Medal for Science and Technology, and so far this year Higia Technologies has won first place in SXSW’s International Pitch Competition, been named one of the “30 Most Promising Businesses of 2018” by Forbes Magazine Mexico and this summer received a $120,000 investment from Y Combinator.

Moving forward, the company is looking to enter the U.S. market and has plans to begin clinical trials with Stanford Medicine X in October 2018 that should run for about a year. Following these trials, Dr. Kaszynski says that Higia Technologies will continue the process of seeking FDA approval to sell the inserts first as a medical device, accessible at a doctor’s office, and then as a device that users can have at home.

The final pricing for the device is still being decided, but Cantú says he wants the product to be as affordable and accessible as possible so it can be the first choice for women in developing countries where preventative cancer screening is desperately needed.

Computer vision researchers build an AI benchmark app for Android phones


A group of computer vision researchers from ETH Zurich want to do their bit to enhance AI development on smartphones. To wit: They’ve created a benchmark system for assessing the performance of several major neural network architectures used for common AI tasks.

They’re hoping it will be useful to other AI researchers but also to chipmakers (by helping them get competitive insights); Android developers (to see how fast their AI models will run on different devices); and, well, to phone nerds — such as by showing whether or not a particular device contains the necessary drivers for AI accelerators. (And, therefore, whether or not they should believe a company’s marketing messages.)

The app, called AI Benchmark, is available for download on Google Play and can run on any device with Android 4.1 or higher — generating a score the researchers describe as a “final verdict” of the device’s AI performance.

AI tasks assessed by the benchmark system include image classification, face recognition, image deblurring, image super-resolution, photo enhancement and segmentation.

They are even testing some algorithms used in autonomous driving systems, though there’s not really any practical purpose for doing that at this point. Not yet anyway. (Looking down the road, the researchers say it’s not clear what hardware platform will be used for autonomous driving — and they suggest it’s “quite possible” mobile processors will, in future, become fast enough to be used for this task. So they’re at least prepped for that possibility.)

The app also includes visualizations of the algorithms’ output to help users assess the results and get a feel for the current state-of-the-art in various AI fields.

The researchers hope their score will become a universally accepted metric, similar to DxOMark, which is used for evaluating camera performance, and all algorithms included in the benchmark are open source. The current ranking of different smartphones and mobile processors is available on the project’s webpage.

The benchmark system and app were around three months in development, says AI researcher and developer Andrey Ignatov.

He explains that the displayed score reflects two main aspects: the SoC’s speed and the available RAM.

“Let’s consider two devices: one with a score of 6000 and one with a score of 200. If some AI algorithm will run on the first device for 5 seconds, then this means that on the second device this will take about 30 times longer, i.e. almost 2.5 minutes. And if we are thinking about applications like face recognition this is not just about the speed, but about the applicability of the approach: Nobody will wait 10 seconds till their phone will be trying to recognize them.

“The same is about memory: The larger is the network/input image — the more RAM is needed to process it. If the phone has small amount of RAM that is e.g. only enough to enhance 0.3MP photo, then this enhancement will be clearly useless, but if it can do the same job for Full HD images — this opens up much wider possibilities. So, basically the higher score — the more complex algorithms can be used / larger images can be processed / it will take less time to do this.”
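
Ignatov’s arithmetic assumes runtime is inversely proportional to the score, which can be checked directly. The helper name below is ours for illustration, not part of the benchmark’s API.

```python
# Rule of thumb from the quote: runtime scales inversely with score.
def estimated_runtime(ref_runtime_s, ref_score, target_score):
    """Estimate a task's runtime on a device with a different AI score,
    assuming runtime is inversely proportional to the score."""
    return ref_runtime_s * ref_score / target_score

# A 5-second task on a device scoring 6000, run on one scoring 200:
seconds = estimated_runtime(5.0, 6000, 200)
print(seconds, seconds / 60)  # 150.0 seconds, i.e. 2.5 minutes
```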

Discussing the idea for the benchmark, Ignatov says the lab is “tightly bound” to both research and industry — so “at some point we became curious about what are the limitations of running the recent AI algorithms on smartphones”.

“Since there was no information about this (currently, all AI algorithms are running remotely on the servers, not on your device, except for some built-in apps integrated in phone’s firmware), we decided to develop our own tool that will clearly show the performance and capabilities of each device,” he adds. 

“We can say that we are quite satisfied with the obtained results — despite all current problems, the industry is clearly moving towards using AI on smartphones, and we also hope that our efforts will help to accelerate this movement and give some useful information for other members participating in this development.”

After building the benchmarking system and collating scores on a bunch of Android devices, Ignatov sums up the current situation of AI on smartphones as “both interesting and absurd”.

For example, the team found that devices running Qualcomm chips weren’t the clear winners they’d imagined based on the company’s promotional materials about the Snapdragon 845’s AI capabilities and 8x performance acceleration.

“It turned out that this acceleration is available only for ‘quantized’ networks that currently cannot be deployed on the phones, thus for ‘normal’ networks you won’t get any acceleration at all,” he says. “The saddest thing is that actually they can theoretically provide acceleration for the latter networks too, but they just haven’t implemented the appropriated drivers yet, and the only possible way to get this acceleration now is to use Snapdragon’s proprietary SDK available for their own processors only. As a result — if you are developing an app that is using AI, you won’t get any acceleration on Snapdragon’s SoCs, unless you are developing it for their processors only.”

Whereas the researchers found that Huawei’s Kirin 970, whose CPU is technically even slower than the Snapdragon 636’s, offered a surprisingly strong performance.

“Their integrated NPU gives almost 10x acceleration for Neural Networks, and thus even the most powerful phone CPUs and GPUs can’t compete with it,” says Ignatov. “Additionally, Huawei P20/P20 Pro are the only smartphones on the market running Android 8.1 that are currently providing AI acceleration, all other phones will get this support only in Android 9 or later.”

It’s not all great news for Huawei phone owners, though, as Ignatov says the NPU doesn’t provide acceleration for ‘quantized’ networks (though he notes the company has promised to add this support by the end of this year); and also it uses its own RAM — which is “quite limited” in size, and therefore you “can’t process large images with it”…

“We would say that if they solve these two issues — most likely nobody will be able to compete with them within the following year(s),” he suggests, though he also emphasizes that this assessment only refers to the one SoC, noting that Huawei’s other processors don’t have the NPU module.

For Samsung processors, the researchers flag up that all the company’s devices are still running Android 8.0, but AI acceleration is only available starting from Android 8.1. Natch.

They also found CPU performance could “vary quite significantly” (by up to 50 percent on the same Samsung device) because of throttling and power-optimization logic, which would then have a knock-on impact on AI performance.

For Mediatek, the researchers found the chipmaker is providing acceleration for both ‘quantized’ and ‘normal’ networks — which means it can reach the performance of “top CPUs”.

But, on the flip side, Ignatov calls out the company’s slogan — that it’s “Leading the Edge-AI Technology Revolution” — dubbing it “nothing more than their dream”, and adding: “Even the aforementioned Samsung’s latest Exynos CPU can slightly outperform it without using any acceleration at all, not to mention Huawei with its Kirin 970’s NPU.”

“In summary: Snapdragon — can theoretically provide good results, but are lacking the drivers; Huawei — quite outstanding results now and most probably in the nearest future; Samsung — no acceleration support now (most likely this will change soon since they are now developing their own AI Chip), but powerful CPUs; Mediatek — good results for mid-range devices, but definitely no breakthrough.”

It’s also worth noting that some of the results were obtained on prototype samples, rather than shipped smartphones, so haven’t yet been included in the benchmark table on the team’s website.

“We will wait till the devices with final firmware will come to the market since some changes might still be introduced,” he adds.

For more on the pros and cons of AI-powered smartphone features check out our article from earlier this year.