Matt Stone is a business development manager at Intel Capital, with a focus on automotive technology and transportation.
A tri-fold automotive renaissance, led by technology, has been playing out over the past few years. Electric vehicles finally have been embraced by mainstream consumers — and elevated by Tesla with long range and luxury quality. Meanwhile, rides on demand from Uber and Lyft have become a global movement that’s liberated personal mobility (and briefly earned China’s Didi the title of most valuable unicorn).
The third dimension, and most significant catalyst of this renaissance, is autonomous driving. Already, more than 15 million vehicles feature fundamental, Level 2 autonomy, thanks to Mobileye, and Waymo is within reach of commercializing fully autonomous, Level 5 driving.
These innovations have earned enough momentum to draw luxury brands Mercedes and Cadillac into the fray. Yet vehicles combining all three aspects of this renaissance remain a niche market; EVs, for instance, last year accounted for only a little over 300,000 of the 18 million cars and trucks produced in North America.
It is China that will push this market into overdrive, guided by the Five Year Plan the country passed in 2016. It mandated 1 million electric vehicles be sold domestically by 2020, and 3 million by 2025. In the past year alone, China made itself the world’s largest EV market by a wide margin: Sales in the country reached 777,000 vehicles, more than doubling North America’s results.
A massive flood of capital is being directed toward this Five Year Plan, both from incumbent automakers and investors funding hundreds of new Chinese automakers. The incumbents — global brands such as GM and Volkswagen, and Chinese leaders such as SAIC and Dongfeng — face tremendous pressure from the army of investors at the gates. Powered by this new funding, as many as a dozen Chinese companies are on the verge of scaling, in the next three to five years, to a level it took Tesla a decade to reach.
Suddenly, every global automaker has a strategy for electrification, high-level autonomy and rides on demand, with products available to consumers now or in the next year. GM, for example, bought Cruise Automation and released Super Cruise on the Cadillac CT6; started mass producing Chevy Bolts and planning more than 20 new EV models in the next five years; invested in Lyft; and created Maven, its own ride-on-demand service, with autonomous rides due to start this year.
Sure, global automakers had already started heading down this path once Tesla proved there was a market. But I’d argue that without pressure from China, these new vehicles wouldn’t be arriving so soon, especially in an affordable offering. When you look at the volume China is able to drive, prices go way down for crucial items like electric vehicle batteries and LIDAR. China’s scale has become the greatest accelerant to mainstream adoption of the next-generation vehicle.
In addition to pushing down the cost curve, the emergence of new Chinese companies eyeing the market (such as NIO, Byton, Faraday Future, Xiaopeng and WM Motors) has provided opportunities for promising new technical talent to emerge. As a result, the barriers to high-level autonomy and electrification may well become more easily surmounted.
Let’s take two major barriers for electric vehicles: The time required to charge them and the general shortage of charging stations. Now EV makers are pushing to establish charging networks: NIO is implementing battery swap stations, while Porsche is adding chargers to all its dealerships. In addition, wireless charging looms on the horizon: Dan Bladen, CEO of our portfolio company Chargifi, believes Apple’s decision to embrace wireless charging in the latest iPhone will pave the way for ubiquitous wireless vehicle charging.
The more people we have attacking these problems, the faster we’ll have mass-market solutions. This is a seismic shift in the automotive industry, and it couldn’t be more exciting.
Even further down the road, the innovation explosion in China’s automotive sector is poised to disrupt the entire U.S. automotive market. Much as we saw Germany, Japan and Korea make major headway versus Detroit in the late 20th Century, I believe it’s just a matter of time until Chinese brands take to American highways.
The promise of artificial intelligence is immense, but the roadmap to achieving those goals still remains unclear. Onstage at TechCrunch Disrupt SF, some of AI’s leading minds shared their thoughts on current competition in the market, how to ensure algorithms don’t perpetuate racism and the future of human-machine interaction.
Here are five takeaways on the state of AI from Disrupt SF 2018:
1. U.S. companies will face many obstacles if they look to China for AI expansion
Sinnovation CEO Kai-Fu Lee (Photo: TechCrunch/Devin Coldewey)
The meteoric rise in China’s focus on AI has been well-documented and has become impossible to ignore these days. With mega companies like Alibaba and Tencent pouring hundreds of millions of dollars into home-grown businesses, American companies are finding less and less room to navigate and expand in China. AI investor and Sinnovation CEO Kai-Fu Lee described China as living in a “parallel universe” to the U.S. when it comes to AI development.
“We should think of it as electricity,” explained Lee, who led Google’s entrance into China. “Thomas Edison and the AI deep learning inventors – who were American – they invented this stuff and then they generously shared it. Now, China, as the largest marketplace with the largest amount of data, is really using AI to find every way to add value to traditional businesses, to internet, to all kinds of spaces.”
“The Chinese entrepreneurial ecosystem is huge so today the most valuable AI companies in computer vision, speech recognition, drones are all Chinese companies.”
2. Bias in AI is a new face on an old problem
SAN FRANCISCO, CA – SEPTEMBER 07: (L-R) UC Berkeley Professor Ken Goldberg, Google AI Research Scientist Timnit Gebru, UCOT Founder and CEO Chris Ategeka, and moderator Devin Coldewey speak onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)
AI promises to increase human productivity and efficiency by taking the grunt work out of many processes. But the data used to train many AI systems often falls victim to the same biases of humans and, if unchecked, can further marginalize communities caught up in systemic issues like income disparity and racism.
“People in lower socio-economic statuses are under more surveillance and go through algorithms more,” said Google AI’s Timnit Gebru. “So if they apply for a job that’s lower status they are likely to go through automated tools. We’re right now in a stage where these algorithms are being used in different places and we’re not even checking if they’re breaking existing laws like the Equal Opportunity Act.”
A potential solution to prevent the spread of toxic algorithms was outlined by UC Berkeley’s Ken Goldberg who cited the concept of ensemble theory, which involves multiple algorithms with various classifiers working together to produce a single result.
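As a rough illustration of the ensemble idea Goldberg describes, several independent classifiers can each cast a vote and the majority label wins, so no single model’s quirks dictate the outcome. The classifiers below are invented toy rules, not anything presented at the panel:

```python
from collections import Counter

def majority_vote(classifiers, sample):
    """Combine several classifiers' predictions into one result.

    Each classifier is a function mapping a sample to a label;
    the ensemble returns the most common label among them.
    """
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three deliberately simple (hypothetical) classifiers that disagree
# on some inputs -- the ensemble smooths over any single one's bias.
clf_a = lambda x: "high_risk" if x["score"] > 0.5 else "low_risk"
clf_b = lambda x: "high_risk" if x["score"] > 0.7 else "low_risk"
clf_c = lambda x: "high_risk" if x["score"] > 0.9 else "low_risk"

print(majority_vote([clf_a, clf_b, clf_c], {"score": 0.8}))  # high_risk
print(majority_vote([clf_a, clf_b, clf_c], {"score": 0.6}))  # low_risk
```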
But how do we know if the solution to inadequate tech is more tech? Goldberg says this is where having individuals from multiple backgrounds, both in and outside the world of AI, is vital to developing just algorithms. “It’s very relevant to think about both machine intelligence and human intelligence,” explained Goldberg. “Having people with different viewpoints is extremely valuable and I think that’s starting to be recognized by people in business… it’s not because of PR, it’s actually because it will give you better decisions if you get people with different cognitive, diverse viewpoints.”
3. The future of autonomous travel will rely on humans and machines working together
Uber CEO Dara Khosrowshahi (Photo: TechCrunch/Devin Coldewey)
Transportation companies often paint a flowery picture of the near future where mobility will become so automated that human intervention will be detrimental to the process.
That’s not the case, according to Uber CEO Dara Khosrowshahi. In an era that’s racing to put humans on the sidelines, Khosrowshahi says the real breakthrough is humans and machines working hand in hand.
“People and computers actually work better than each of them work on a stand-alone basis and we are having the capability of bringing in autonomous technology, third-party technology, Lime, our own product all together to create a hybrid,” said Khosrowshahi.
Khosrowshahi ultimately envisions the future of Uber being made up of engineers monitoring routes that present the least amount of danger for riders and selecting optimal autonomous routes for passengers. The combination of these two systems will be vital in the maturation of autonomous travel, while also keeping passengers safe in the process.
4. There’s no agreed definition of what makes an algorithm “fair”
SAN FRANCISCO, CA – SEPTEMBER 07: Human Rights Data Analysis Group Lead Statistician Kristian Lum speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)
Last July ProPublica released a report highlighting how machine learning can develop its own biases. The investigation examined an AI system used in Fort Lauderdale, Fla., that falsely flagged black defendants as future criminals at a rate twice that of white defendants. These landmark findings set off a wave of conversation about the ingredients needed to build fair algorithms.
One year later AI experts still don’t have the recipe fully developed, but many agree a contextual approach that combines mathematics and an understanding of human subjects in an algorithm is the best path forward.
“Unfortunately there is not a universally agreed upon definition of what fairness looks like,” said Kristian Lum, lead statistician at the Human Rights Data Analysis Group. “How you slice and dice the data can determine whether you ultimately decide the algorithm is unfair.”
Lum goes on to explain that research in the past few years has revolved around exploring the mathematical definition of fairness, but this approach is often incompatible with the moral outlook on AI.
“What makes an algorithm fair is highly contextually dependent, and it’s going to depend so much on the training data that’s going into it,” said Lum. “You’re going to have to understand a lot about the problem, you’re going to have to understand a lot about the data, and even when that happens there will still be disagreements on the mathematical definitions of fairness.”
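Lum’s point can be made concrete with a toy example: the same set of model decisions can satisfy one common fairness definition (equal selection rates across groups) while failing another (equal false positive rates, the measure at the heart of the ProPublica findings). All of the data below is invented:

```python
# Toy decisions from one hypothetical risk model: each row is
# (group, actual_outcome, model_prediction); 1 means flagged high risk.
decisions = [
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 1),
]

def selection_rate(group):
    """Share of a group the model flags (the 'demographic parity' slice)."""
    rows = [r for r in decisions if r[0] == group]
    return sum(pred for _, _, pred in rows) / len(rows)

def false_positive_rate(group):
    """Share of truly low-risk people the model wrongly flags."""
    negatives = [r for r in decisions if r[0] == group and r[1] == 0]
    return sum(pred for _, _, pred in negatives) / len(negatives)

# Sliced one way the model looks fair; sliced another, it does not:
print(selection_rate("A"), selection_rate("B"))            # 0.5 0.5
print(false_positive_rate("A"), false_positive_rate("B"))  # 0.5 0.0
```

Whether this model counts as “fair” depends entirely on which slice you consider decisive, which is exactly the disagreement Lum describes.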
5. AI and Zero Trust are a “marriage made in heaven” and will be key in the evolution of cybersecurity
SAN FRANCISCO, CA – SEPTEMBER 06: (L-R) Duo VP of Security Mike Hanley, Okta Executive Director of Cybersecurity Marc Rogers, and moderator Mike Butcher speak onstage during Day 2 of TechCrunch Disrupt SF 2018 at Moscone Center on September 6, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)
If previous elections have taught us anything it’s that security systems are in dire need of improvement to protect personal data, financial assets and the foundation of democracy itself. Facebook’s ex-chief security officer Alex Stamos shared a grim outlook on the current state of politics and cybersecurity at Disrupt SF, stating the security infrastructure for the upcoming Midterm elections isn’t much better than it was in 2016.
So how effective will AI be in improving these systems? Marc Rogers of Okta and Mike Hanley of Duo Security believe the combination of AI and a security model called Zero Trust, which cuts off all users from accessing a system until they can prove themselves, is the key to developing security systems that actively fight off breaches without the assistance of humans.
“AI and Zero Trust are a marriage made in heaven because the whole idea behind Zero Trust is you design policies that sit inside your network,” said Rogers. “AI is great at doing human decisions much faster than a human ever can and I have great hope that as Zero Trust evolves, we’re going to see AI baked into the new Zero Trust platforms.”
By handing much of the heavy lifting to machines, cybersecurity professionals will also have the opportunity to address another pressing issue: the shortage of qualified security experts available to manage these systems.
“There’s also a substantial labor shortage of qualified security professionals that can actually do the work needed to be done,” said Hanley. “That creates a tremendous opportunity for security vendors to figure out what are those jobs that need to be done, and there are many unsolved challenges in that space. Policy engines are one of the more interesting ones.”
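None of the panelists described a specific implementation, but the deny-by-default principle behind the Zero Trust policy engines Hanley mentions can be sketched in a few lines: a request is rejected unless every explicit policy check passes. The policy checks here are illustrative assumptions, not any vendor’s actual rules:

```python
# Minimal deny-by-default policy engine in the Zero Trust spirit:
# no request is trusted until every required check passes explicitly.

POLICIES = [
    lambda req: req.get("authenticated") is True,
    lambda req: req.get("device_compliant") is True,
    lambda req: req.get("resource") in req.get("entitlements", ()),
]

def authorize(request):
    """Allow only if every policy explicitly passes; the default is deny."""
    return all(policy(request) for policy in POLICIES)

ok = {"authenticated": True, "device_compliant": True,
      "resource": "payroll", "entitlements": ("payroll",)}
stale = {"authenticated": True, "device_compliant": False,
         "resource": "payroll", "entitlements": ("payroll",)}

print(authorize(ok))     # True
print(authorize(stale))  # False
```

The speed argument Rogers makes fits here: evaluating and tuning policies like these per-request is exactly the kind of fast, repetitive decision-making machines handle better than people.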
A future dominated by autonomous vehicles (AVs) is, for many experts, a foregone conclusion. Declarations that the automobile will become the next living room are almost as common — but, they are imprecise. In our inevitable driverless future, the more apt comparison is to the mobile device. As with smartphones, operating systems will go a long way toward determining what autonomous vehicles are and what they could be. For mobile app companies trying to seize on the coming AV opportunity, their future depends on how the OS landscape shapes up.
By most measures, the mobile app economy is still growing, yet the time people spend using their apps is actually starting to dip. A recent study reported that overall app session activity grew only 6 percent in 2017, down from the 11 percent growth it reported in 2016. This trend suggests users are reaching a saturation point in terms of how much time they can devote to apps. The AV industry could reverse that. But just how mobile apps will penetrate this market and who will hold the keys in this new era of mobility is still very much in doubt.
When it comes to a driverless future, multiple factors are now converging. Over the last few years, while app usage showed signs of stagnation, the push for driverless vehicles has only intensified. More cities are live-testing driverless software than ever, and investments in autonomous vehicle technology and software by tech giants like Google and Uber (measured in the billions) are starting to mature. And, after some reluctance, automakers have now embraced this idea of a driverless future. Expectations from all sides point to a “passenger economy” of mobility-as-a-service, which, by some estimates, may be worth as much as $7 trillion by 2050.
For mobile app companies this suggests several interesting questions: Will smart cars, like smartphones before them, be forced to go “exclusive” with a single OS of record (Google, Apple, Microsoft, Amazon/AGL), or will they be able to offer multiple OS/platforms of record based on app maturity or functionality? Or, will automakers simply step in to create their own closed loop operating systems, fragmenting the market completely?
Complicating the picture even further is the potential significance of an OS’s ability to support multiple Digital Assistants of Record (independent of the OS), as we see with Google Assistant now working on iOS. Obviously, voice NLP/U will be even more critical for smart car applications as compared to smart speakers and phones. Even in those established arenas the battle for OS dominance is only just beginning. Opening a new front in driverless vehicles could have a fascinating impact. Either way, the implications for mobile app companies are significant.
Looking at the driverless landscape today, there are several indications as to which direction the OSes in AVs will ultimately go. For example, after some initial inroads developing its own fleet of autonomous vehicles, Google has now focused almost all its efforts on autonomous driving software while striking numerous partnership deals with traditional automakers. Some automakers, however, are moving forward developing their own OSes. Volkswagen, for instance, announced that vw.OS will be introduced in VW brand electric cars from 2020 onward, with an eye toward autonomous driving functions. (VW also plans to launch a fleet of autonomous cars in 2019 to rival Uber.) Tesla, a leader in AV, is building its own unified hardware-software stack. Companies like Udacity, however, are building open-source self-driving car technology. Mobileye and Baidu have a partnership in place to provide software for automobile manufacturers.
Clearly, most smartphone apps would benefit from native integration, but there are several categories beyond music, voice and navigation that require significant hardware investment to natively integrate. Will automakers be interested in the Tesla model? If not, how will smart cars and apps (independent of OS/voice assistant) partner up? Given the hardware requirements necessary to enable native app functionality and optimal user experience, how will this force smart car manufacturers to work more seamlessly with platforms like AGL to ensure competitive advantage and differentiation? And, will this commoditize the OS dominance we see in smartphones today?
It’s clearly still early days and — at least in the near term — multiple OS solutions will likely be employed until preferred solutions rise to the top. Regardless, automakers and tech companies clearly recognize the importance of “connected mobility.” Connectivity and vehicular mobility will very likely replace traditional auto values like speed, comfort and power. The combination of Wi-Fi hotspot and autonomous vehicles (let alone consumer/business choice of on-demand vehicles) will propel instant conversion/personalization of smart car environments to passenger preferences. And, while questions remain around the how and the who in this new era in mobile, it’s not hard to see the why.
The productivity upside extends well beyond personal entertainment and commerce and into the realm of business productivity. Use of integrated displays (screen and heads-up) and voice will enable business multi-tasking: video conferencing, search, messaging, scheduling, travel booking, e-commerce and navigation. First-mover advantage goes to the mobile app companies that first bundle information density, content access and mobility into a single compelling package. An app company that can claim 10 to 15 percent of this market will be a significant player.
For now, investors are throwing lots of money at possible winners in the autonomous automotive race, who, in turn, are beginning to define the shape of the mobile app landscape in a driverless future. In fact, what we’re seeing now looks a lot like the early days of smartphones, with companies like Tesla applying an Apple-esque strategy to the smart car as Apple did to the smartphone. Will these OS/app marketplaces be dominated by a Tesla — or a Google, for that matter — and command a 30 percent revenue share from apps, or will auto manufacturers with proprietary platforms capitalize on this opportunity? Questions like these — alongside the open question of just who the winners and losers in AV will be — make investment and entrepreneurship in the mobile app sector an extremely lucrative but risky gamble.
Volvo unveiled Wednesday its vision for future travel. And it’s an electric autonomous vehicle without a steering wheel or other traditional means of control that would serve multiple purposes for its passengers, and ultimately disrupt the domestic air travel industry.
The 360c concept is just a concept. Meaning, the vehicle shown Wednesday in Sweden won’t be going into production anytime soon, if at all. But as most concepts aim to do, the 360c gives us insight into Volvo’s thinking and hints at where the company is headed.
In short, the 360c concept is a conversation piece. And Volvo wants to talk about how autonomous vehicles, like this one, will be used and how the technology might change societies.
The 360c concept shows four potential uses of autonomous driving vehicles: a sleeping environment, mobile office, living room and entertainment space.
Volvo 360c interior
“The business will change in the coming years and Volvo should lead that change of our industry,” said Volvo Cars president and CEO Håkan Samuelsson. “Autonomous drive will allow us to take the big next step in safety but also open up exciting new business models and allow consumers to spend time in the car doing what they want to do.”
The concept is also supposed to represent what Volvo describes as a “potentially lucrative competitor to short-haul air travel.” Volvo contends that shorter routes, where the distance between origin and destination is around 300 kilometers (186 miles), “are prime candidates for disruption by an alternative mode of travel.”
Volvo 360c interior
Volvo doesn’t say how fleets of 360c vehicles—presuming they were ever built—might affect trains, a present-day mode of travel that often shuttles people short distances between cities.
The company also introduced a proposal for a global standard in how autonomous vehicles can safely communicate with all other road users. Engineers created a system for the 360c made up of external sounds, colors, visuals and movements to communicate the vehicle’s intentions to other road users, a critical feature for self-driving cars when they are eventually deployed en masse on public roads.
Self-driving car startup Nuro is ready to put autonomous vehicles on the road in partnership with Kroger to deliver groceries in Scottsdale, Arizona. This comes a couple of months after Nuro and Kroger announced their partnership to offer same-day deliveries.
This pilot will serve a single Fry’s Food and Drug location in Scottsdale starting today. Customers can shop for groceries and place either same- or next-day delivery orders via the grocer’s website or mobile app. There’s no minimum order but there is a flat delivery fee of $5.95.
“We’re proud to contribute and turn our vision for local commerce into a real, accessible service that residents of Scottsdale can use immediately,” Nuro CEO Dave Ferguson said in a statement. “Our goal is to save people time, while operating safely and learning how we can further improve the experience.”
Nuro’s intent is to use its self-driving technology in the last mile for the delivery of local goods and services. That could be things like groceries, dry cleaning, an item you left at a friend’s house or really anything within city limits that can fit inside one of Nuro’s vehicles. Nuro has two compartments that can fit up to six grocery bags each.
In Scottsdale, however, Nuro will initially use Toyota Prius cars before introducing its custom self-driving vehicles. That’s because the main purpose of this pilot is to learn, and using the Prius self-driving fleet can help to accelerate those learnings, a Nuro spokesperson told TechCrunch.
“The Priuses share many software and hardware systems with the R1 custom vehicle, so while we complete final certification and testing of the R1, the Prius will begin delivering groceries and help us improve the overall service and customer experience,” the spokesperson said.
When it came to going to market, Ferguson previously told me groceries were most exciting to him. And Kroger particularly stood out because of its smart shelf technology and partnership with Ocado around automated fulfillment centers.
“With the pilot, we’re excited about getting more experience interacting with real customers and understanding exactly what they want,” Ferguson told me. “The things they love about it, the things they don’t love as much. As an organization for us, it’s also very valuable for us to have to exercise our operational muscle.”
Throughout the pilot program, Nuro will be looking to see how accurate its estimated delivery times are, how the public reacts to the vehicles and how regular, basic cars interact with self-driving ones.
The bedroom community of Frisco, Texas might seem like an unusual place to find a self-driving vehicle. But here in this city of nearly 175,000 people, there are seven.
And as of Monday, they’re available for the public to use within a specific sector of the city that has a concentration of retail, entertainment venues and office space.
Drive.ai, an autonomous vehicle startup, launched the self-driving on-demand service Monday that will cover a two-mile route. The service will be operated in conjunction with Frisco TMA, a public-private partnership focused on “last-mile” transportation options. People within this geographic zone can hail a ride using a smartphone app.
Even in their small numbers, the modified Nissan NV200s will be hard to miss. The self-driving vehicles are painted a bright orange with two swooping blue lines — with the words “self-driving vehicle” and “Drive.ai” set in white.
The vehicles, which have been given distinctly human names like Anna, Emma, Bob, Fred and Carl, are equipped with LED screens on the hood and rear, and above the front tires, which will display messages as well as the vehicle’s name to pedestrians.
This isn’t a business enterprise just yet. The service, which is considered a pilot project, is free and will be operational for six months. The program will begin with fixed pick-up and drop-off locations around HALL Park and The Star and then will expand into Frisco Station.
Conway Chen, Drive.ai’s vice president of business strategy, emphasized to TechCrunch that this is designed as an on-demand service, and not a shuttle. When the vehicles are not being used they won’t just keep circling the route, which could cause more traffic congestion, Chen said. Instead they will be able to park along the route.
In the weeks since announcing plans to launch in Frisco, Drive.ai has been tweaking the service, its schedule as well as racking up miles on the road and in simulation. The company said it has logged 1 million simulated miles on its Frisco route. In its simulation, Drive.ai replicates scenarios — taken from its driving logs — the vehicles encountered while driving the route, as well as creating its own scenarios.
As Drive.ai explains in a post on Medium: “It’s like a high tech version of SimCity, where we design the world, and can then replay events and modify their components to explore how our technology responds in unique scenarios. This is a good place to start for the more common things that people do on the roads: navigating tricky intersections, right-of-way decisions, and observing the behaviors of cyclists and pedestrians.”
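Drive.ai hasn’t published its simulation internals, but the log-replay approach it describes, taking a recorded scenario and modifying its components to generate new test cases, might be sketched like this (all field names and values here are invented):

```python
import copy

# A recorded scenario from a hypothetical driving log.
logged_scenario = {
    "intersection": "4-way stop",
    "pedestrian_speed_mps": 1.4,
    "cyclist_present": False,
}

def variations(scenario, speeds):
    """Yield copies of a logged scenario with one component modified,
    leaving the original log entry untouched."""
    for speed in speeds:
        variant = copy.deepcopy(scenario)
        variant["pedestrian_speed_mps"] = speed
        yield variant

# Replay the same intersection with slower and faster pedestrians.
cases = list(variations(logged_scenario, [0.8, 1.4, 2.0]))
for case in cases:
    print(case["pedestrian_speed_mps"])
```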
The service, which will operate weekdays from 10 a.m. to 7 p.m., will initially have a safety driver behind the wheel. That person will eventually move to a passenger seat and take on a chaperone role, whose primary responsibility will be to answer questions and make riders comfortable. At some point, Drive.ai will remove the employee from the vehicle completely.
The company also has a remote monitoring feature, called “telechoice,” that allows a human operator to see everything in real-time that the self-driving vehicle can see using HD cameras.
Telechoice is not like the full remote control teleoperation that startup Phantom Auto provides. The telechoice operator can control basic functions like braking, but it cannot take full control of the vehicle or make it accelerate. With Drive.ai’s feature, if “Bob” the self-driving vehicle struggles with a certain situation on the road, the telechoice operator can help it make the right decision.
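Drive.ai hasn’t detailed how telechoice is implemented, but the core design choice, a whitelist of limited assistance commands rather than full remote control, can be sketched as a simple command filter (the command names are invented for illustration):

```python
# Sketch of a command filter in the spirit of Drive.ai's "telechoice":
# a remote operator may assist the vehicle, but commands that would
# amount to full teleoperation, like acceleration, are rejected.

ALLOWED_REMOTE_COMMANDS = {"brake", "hold_position", "confirm_route"}

def filter_remote_command(command):
    """Pass through only the limited commands a remote operator may issue."""
    if command in ALLOWED_REMOTE_COMMANDS:
        return command
    return None  # accelerate, steer, etc. stay under the vehicle's control

print(filter_remote_command("brake"))       # brake
print(filter_remote_command("accelerate"))  # None
```

Filtering at the command level, rather than granting a remote session, is what separates this kind of assistance from the full teleoperation offered by a company like Phantom Auto.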
Uber is putting its autonomous vehicles back on Pittsburgh’s city streets, four months after a fatal accident involving one of its self-driving cars prompted the ride-hailing company to halt testing on public roads. But for now, Uber’s modified self-driving Volvo XC90 vehicles will only be driven manually by humans and under a new set of safety standards that includes real-time monitoring of its test drivers and efforts to beef up simulation.
The sensors, including the light detection and ranging unit known as LiDAR, will be operational on these self-driving vehicles. They won’t be operated in autonomous mode, however. Uber will use these manually operated self-driving vehicles to update its HD maps of Pittsburgh.
This manual-first rollout is a step toward Uber’s ultimate goal to relaunch its autonomous vehicle testing program in Pittsburgh, according to Eric Meyhofer, head of Uber Advanced Technologies Group, who published a post Tuesday on Medium.
Uber halted all of its autonomous vehicle operations March 19, the day after one of its vehicles struck and killed pedestrian Elaine Herzberg in the Phoenix suburb of Tempe. Uber was testing its self-driving vehicles on public roads in Tempe, Ariz., where the accident occurred, as well as in Pittsburgh, San Francisco and Toronto.
In the days and weeks following the fatal accident, it appeared the company’s self-driving vehicle program might end for good. Arizona Governor Doug Ducey, a proponent of autonomous-vehicle technology who invited Uber to the state, suspended the company from testing its self-driving cars following the accident. Last month, Uber let go all 100 of its self-driving car operators in Pittsburgh and San Francisco.
Those drivers affected by the layoffs, most of whom were in Pittsburgh, are being encouraged to apply for Uber’s new mission specialist positions. Uber is holding off on making these positions public until the laid-off drivers have a chance to apply and go through the interview process.
Even now, with the company beefing up its safety protocols and taking a slower approach to autonomous vehicle testing, the program’s future is still uncertain. Another accident would likely derail it for good.
These new safeguards aim to avoid such a scenario. Uber said Tuesday that all its self-driving vehicles, whether they're driven manually or eventually in autonomous mode, will have two Uber employees inside. These "mission specialists" — a new name Uber has given to its test drivers — will have specific jobs. The person behind the wheel will be responsible for safely operating the vehicle, while the second mission specialist will ride shotgun and document events.
Uber is also equipping every self-driving vehicle with a driver monitoring system that will remain active whenever the vehicle is in use. The system will track driver behavior in real time. If it detects inattentiveness, an audio alert will cue the driver. An alert is also sent to a remote monitor, who will take appropriate action once they've assessed the situation, Uber said.
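The two-stage flow described above — an in-cab audio cue plus a parallel notification to a remote monitor — can be sketched roughly as follows. Uber has not published details of its system, so all names and thresholds here are illustrative assumptions, not the actual product's logic.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

EYES_OFF_ROAD_LIMIT_S = 2.0  # assumed inattentiveness threshold, for illustration

@dataclass
class MonitorState:
    audio_alerts: List[float] = field(default_factory=list)        # in-cab cues
    remote_notifications: List[float] = field(default_factory=list)  # sent to remote monitor

def process_gaze_samples(samples: List[Tuple[float, bool]], state: MonitorState) -> MonitorState:
    """samples: list of (timestamp_s, eyes_on_road). When the driver's eyes
    stay off the road past the limit, cue the driver and alert the monitor."""
    off_since = None
    for t, eyes_on_road in samples:
        if eyes_on_road:
            off_since = None
        elif off_since is None:
            off_since = t  # start of an eyes-off-road interval
        elif t - off_since >= EYES_OFF_ROAD_LIMIT_S:
            state.audio_alerts.append(t)          # cue the driver in the cab
            state.remote_notifications.append(t)  # notify the remote monitor
            off_since = None  # reset until the next lapse
    return state
```

The key design point is that the remote notification fires alongside the local cue rather than waiting for the driver to respond, matching the article's description of a monitor who assesses the situation independently.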
The driver monitoring system, which an Uber spokesperson declined to share details about, is an off-the-shelf aftermarket product.
Investigators determined that Rafaela Vasquez, who was operating the Uber self-driving vehicle involved in the fatal crash, looked down 204 times at a phone that was streaming The Voice during a 43-minute test drive that ended when Herzberg was struck and killed, according to a 318-page police report released by the Tempe Police Department.
Based on the data, police reported that Vasquez could have avoided hitting Herzberg if her eyes had been on the road. The case has been submitted to the Maricopa County Attorney's office for review; Vasquez could face charges of vehicular manslaughter.
Uber has always had a policy prohibiting mobile device usage for anyone operating its self-driving vehicles, according to a spokesperson. However, without a proper driver monitoring system or another passenger in the vehicle, it was impossible for Uber to really know if that rule was being followed.
Now, the driver monitoring system can spot the behavior immediately. If the system detects the driver looking at a phone, the remote monitor will immediately call the vehicle back, a spokesperson said, adding that such behavior is grounds for dismissal.
Other safeguards include a defensive and distracted driving course conducted on a test track and a fatigue management program that requires the two mission specialists in each vehicle to periodically switch between driver and data logger roles, according to Uber.
The National Transportation Safety Board is also investigating the accident. A preliminary report by the NTSB found Uber’s modified Volvo XC90’s LiDAR and radar first spotted an object in its path about six seconds before the crash. The self-driving system first classified the pedestrian as an unknown object, then as a vehicle and then as a bicycle. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision, according to the NTSB. But to reduce the potential for “erratic behavior,” Uber had disabled Volvo’s emergency braking system so it didn’t work when the vehicle was under computer control.
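A back-of-the-envelope calculation shows why the 1.3-second window mattered. The NTSB's preliminary report does not appear in full here, so the speed and deceleration below are assumed round numbers for illustration only: at roughly 40 mph, the distance a car covers in 1.3 seconds is about the same as its entire dry-pavement stopping distance at that speed, leaving essentially no margin to avoid impact.

```python
MPH_TO_MS = 0.44704  # miles per hour to meters per second

def stopping_distance_m(speed_mph: float, decel_ms2: float = 7.0) -> float:
    """Distance to brake to a stop at constant deceleration: v^2 / (2a).
    7 m/s^2 is a typical hard-braking figure on dry pavement (assumed)."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * decel_ms2)

def distance_in_time_m(speed_mph: float, seconds: float) -> float:
    """Distance covered at constant speed over the given interval."""
    return speed_mph * MPH_TO_MS * seconds

# At an assumed 40 mph, the car travels ~23 m in 1.3 s, which is roughly
# its full stopping distance at that speed.
```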
Uber said it will keep Volvo’s emergency braking and vehicle collision warning systems enabled while the vehicle is in manual mode. Engineers are examining whether the Volvo’s safety system can work in conjunction with its own self-driving technology while the vehicle is in autonomous mode.
Billions of people—and a growing number of autonomous vehicles—rely on mobile navigation services from Google, Uber, and others to provide real-time driving directions. A new proof-of-concept attack demonstrates how hackers could inconspicuously steer a targeted automobile to the wrong destination or, worse, endanger passengers by sending them down the wrong way of a one-way road.
The attack starts with a $225 piece of hardware that’s planted in or underneath the targeted vehicle that spoofs the radio signals used by civilian GPS services. It then uses algorithms to plot a fake “ghost route” that mimics the turn-by-turn navigation directions contained in the original route. Depending on the hackers’ ultimate motivations, the attack can be used to divert an emergency vehicle or a specific passenger to an unintended location or to follow an unsafe route. The attack works best in urban areas the driver doesn’t know well, and it assumes hackers have a general idea of the vehicle’s intended destination.
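The "ghost route" idea can be sketched in miniature: the attacker searches a road graph for a path whose turn-by-turn sequence matches the victim's real route, so the spoofed GPS positions stay consistent with the navigation prompts the driver hears. The graph representation and routes below are invented for illustration; the researchers' actual algorithms are more sophisticated.

```python
from typing import Dict, List, Tuple

def turn_sequence(route: List[str],
                  turns: Dict[Tuple[str, str], str]) -> Tuple[str, ...]:
    """Turn-by-turn directions along a route, e.g. ('left', 'right').
    `turns` maps (from_segment, to_segment) -> 'left'/'right'/'straight'."""
    return tuple(turns[(a, b)] for a, b in zip(route, route[1:]))

def find_ghost_routes(real_route: List[str],
                      candidate_routes: List[List[str]],
                      turns: Dict[Tuple[str, str], str]) -> List[List[str]]:
    """Return candidate paths whose turn sequence matches the real route.
    Any such path can impersonate the victim's navigation instructions."""
    target = turn_sequence(real_route, turns)
    return [r for r in candidate_routes if turn_sequence(r, turns) == target]
```

A matching ghost route is dangerous precisely because the spoofed positions never contradict the audible directions: the driver hears "turn left in 200 feet" and a left turn really is there, just on the wrong street.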
“Our study demonstrated the initial feasibility of manipulating the road navigation system through targeted GPS spoofing,” the researchers, from Virginia Tech, China’s University of Electronic Science and Technology, and Microsoft Research, wrote in an 18-page paper. “The threat becomes more realistic as car makers are adding autopilot features so that human drivers can be less involved (or completely disengaged).”