Thursday, April 19, 2018

Self-Driving Cars Like Teslas and Waymos Generate a Lot of Driving Data That Is Used to Make Their Self-Driving Safer. But What Will Make People Accept Them as Safe?

How Tesla and Waymo are tackling a major problem for self-driving cars: data

From the article: There’s a race happening right now that stretches from Silicon Valley to Detroit and back: who can make a self-driving car that behaves better than a human driver? It’s a far harder task than it sounded even a few years ago because human drivers know a lot — not just about their cars but about how people behave on the road when they’re behind the wheel. To reach that same kind of understanding, computerized cars need lots of data. And the two companies with the most data right now are Tesla and Waymo.
Both Tesla and Waymo are attempting to collect and process enough data to create a car that can drive itself. And they’re approaching those problems in very different ways. Tesla is taking advantage of the hundreds of thousands of cars it has on the road by collecting real-world data about how those vehicles perform (and how they might perform) with Autopilot, its current semi-autonomous system. Waymo, which started as Google’s self-driving car project, uses powerful computer simulations and feeds what it learns from those into a smaller real-world fleet.
It’s possible — and proponents certainly claim — that self-driving technology would lower the number of yearly deaths in the US that result from car crashes, a staggering 40,000 people. But there’s also a huge financial incentive to apply all this data-driven tech to the road as quickly as possible. Intel believes autonomous vehicles could generate $800 billion per year in revenue in 2030 and $7 trillion per year by 2050. Last summer, Morgan Stanley analyst Adam Jonas said in a note that data might be more valuable to Tesla than something like the Model 3. “There’s only one market big enough to propel the stock’s value to the levels of Elon Musk’s aspirations: that of miles, data and content,” he wrote in June.
Tesla is developing toward autonomy by using customer-owned cars to gather that all-important data. The company has hundreds of thousands of customers, many of whom use Autopilot on streets around the world every day, and Tesla — according to its privacy policy — collects information about how well the feature performs. It’s a familiar strategy for anyone who’s followed another of Elon Musk’s companies, SpaceX: Musk has quietly tested equipment on real rocket launches and even sold some of the company’s test launches.

It’s hard to pin down exactly how many miles of data Tesla’s gotten from Autopilot because the company doesn’t make many public statements about it. In 2016, the then-head of Autopilot told a conference crowd at MIT that Tesla had logged 780 million miles of data, with 100 million of those miles coming while Autopilot was “in at least partial control,” according to IEEE Spectrum. Later that summer, Musk said that Tesla was collecting “just over 3 million miles [of data] per day.” As of last July, though, the total number of fleet miles driven had jumped to 5 billion. As Tesla sells more cars, the amount of data it can collect grows in step with the size of its fleet.
Waymo is constrained by the fact that it is gathering real-world data with a fleet of only about 500 to 600 self-driving Pacifica minivans. Tesla has over 300,000 vehicles on the road around the world, and those cars are navigating far more diverse settings than Waymo’s, which currently operate only in Texas, California, Michigan, Arizona, and Georgia. But what Tesla learns from those real-world miles is limited, because even when Autopilot is engaged, the current version is only semi-autonomous.

This balance will also change. Waymo plans to add “thousands” more Chrysler minivans to its fleet starting at the end of this year. And it recently announced a partnership with Jaguar Land Rover to develop a fully self-driving version of the all-electric I-Pace SUV from the ground up. Waymo says it will add up to 20,000 of these to its fleet in the coming years, and that it will be able to handle a volume of 1 million trips per day once all those cars are on the road.
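For a sense of scale, the figures above imply a rough per-vehicle workload. This is my own back-of-the-envelope arithmetic, assuming all 20,000 I-Paces are in service at once and ignoring the existing Pacifica minivans:

```python
# Rough utilization implied by Waymo's stated targets; illustrative arithmetic only.
trips_per_day = 1_000_000   # stated capacity once the full fleet is on the road
fleet_size = 20_000         # planned I-Pace additions (Pacificas ignored here)

trips_per_vehicle = trips_per_day / fleet_size
print(f"{trips_per_vehicle:.0f} trips per vehicle per day")  # -> 50 trips per vehicle per day
```

Fifty trips per car per day is ambitious but not implausible for a ride-hailing vehicle running around the clock, which suggests the 1-million-trip figure assumes near-continuous operation.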

Not only are these two companies collecting data at different scales, they’re also collecting different data. Waymo’s self-driving minivans use three different types of LIDAR sensors, five radar sensors, and eight cameras. Tesla’s cars are also heavily kitted out: eight cameras, 12 ultrasonic sensors, and one forward-facing radar.
But Tesla doesn’t use LIDAR. LIDAR is a lot like radar, but instead of radio waves, it sends out millions of laser light signals per second and measures how long it takes for them to bounce back. This makes it possible to create a very high-resolution picture of a car’s surroundings, and in all directions, if it’s placed in the right spot (like the top of a car). It maintains this precision even in the dark since the sensors are their own light source. That’s important because cameras are worse in the dark, and radar and ultrasound aren’t as precise.
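The time-of-flight principle described above reduces to simple arithmetic: the pulse travels out and back, so distance is half the speed of light times the round-trip time. A minimal sketch (the function and numbers are my own illustration, not any vendor's code):

```python
# Illustrative LIDAR time-of-flight distance calculation; not vendor code.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting surface: the pulse travels out and back,
    so the one-way distance is half of speed * round-trip time."""
    return C * t_seconds / 2.0

# A pulse returning after ~200 nanoseconds bounced off something ~30 m away.
print(f"{distance_from_round_trip(200e-9):.1f} m")  # -> 30.0 m
```

Doing this for millions of pulses per second, each with a known direction, is what yields the high-resolution 3D picture of the car's surroundings.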
LIDAR can be expensive and bulky, and it also involves moving mechanical parts (for now, at least). Musk recently called the technology a “crutch,” and argued that while it makes things easier in the short term, companies will have to master camera-based systems to keep costs down.
If Tesla can develop autonomous cars without that tech, ARK Invest analyst Tasha Keeney says that would be a huge advantage. “It’s a riskier strategy but it could pay off for them in the end,” she explains. “If Tesla solves [self-driving cars without LIDAR], everyone else is going to be kicking themselves.”
Nidhi Kalra, a senior information scientist at RAND, has co-authored a number of studies about self-driving technology, including one in 2016 that tried to determine how many real-world miles would need to be driven to prove that autonomous cars are safer than humans.
Kalra and co-author Susan M. Paddock came to the conclusion that self-driving cars will need to be driven “hundreds of millions of miles and sometimes hundreds of billions of miles” to make any statistically reliable claims about safety. Because of this, they wrote, companies need to find other ways to demonstrate safety and reliability.
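The scale of that conclusion can be sketched with a standard rare-event argument. This is my back-of-the-envelope reconstruction under stated assumptions, not the study's actual method: treat fatal crashes over n miles as approximately Poisson-distributed, and ask how many failure-free miles would bound the rate below the human benchmark with 95% confidence.

```python
import math

# Back-of-the-envelope version of the "how many miles" question (my assumptions).
# If failures over n miles are ~Poisson(n * rate), observing zero failures supports
# the claim rate < r at confidence c once exp(-n * r) <= 1 - c,
# i.e. n >= -ln(1 - c) / r.

human_fatality_rate = 1.09 / 1e8  # ~1.09 deaths per 100 million US vehicle miles
confidence = 0.95

miles_needed = -math.log(1 - confidence) / human_fatality_rate
print(f"{miles_needed / 1e6:.0f} million failure-free miles")  # -> 275 million
```

And that is the easy case: demonstrating a rate meaningfully *better* than human drivers, or doing so for rarer comparisons, pushes the requirement into the billions of miles, which is why the study argues real-world testing alone cannot settle the question.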
When it comes time for these companies to prove to regulators or customers that they’ve developed fully self-driving tech, the most likely yardstick will be whether the cars are as safe as, or safer than, human drivers. How to define that — the rate of crashes per X miles, injuries per X miles, or even deaths per X miles — is another question.
As Kalra and Paddock point out in their study, this will be hard to prove in real-world terms. But Kalra thinks it can’t be proven by simulation alone — at least not without a more thorough and open understanding of the quality and rate of data being collected. “We’re probably going to see this technology deployed before we have conclusive evidence about how safe it is,” she says. “This is the rub. We can’t prove how safe self-driving cars are until we all decide to use them.”
