Fatal accident with self-driving car raises novel legal questions
https://www.autoblog.com/2018/03/20/uber-self-driving-death-legal-questions/
Who's at risk here? Uber? Volvo? Tech suppliers? All of the above?
Toyota pauses self-driving car testing amid Uber accident probe
https://www.autoblog.com/2018/03/20/toyota-pauses-self-driving-car-testing-amid-uber-accident-probe/
Toyota Research Institute conducts on-road testing in California and Michigan
RegGuheert wrote: InsideEVs has a link to dashcam video of the Uber vehicle right up to the point of impact. I see several things in the video:
1) The person in the vehicle was not paying attention to the road.
2) The headlights appeared to be on low beams, which did not allow the jaywalker to be seen until it was really too late to avoid them. In other words, it seems the vehicle was overdriving the headlights. I wonder if the autonomy modulates the high beams like a human driver would. All that said, there were streetlights in the area and they were behind another vehicle, so high beams really would not have been appropriate.
3) I don't see any indication that the vehicle attempted to avoid the accident by slowing or swerving.
4) I do not see any indication of reflectors in the bicycle's wheels.
5) Even though there were streetlights in the vicinity, the person walking the bicycle was crossing where there was no streetlight. I will point out that the presence of streetlights makes it more difficult to see objects which are in the dark.
I guess I'm not convinced that I could have avoided that collision unless the perception of the cameras is significantly worse than human vision.
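A quick way to sanity-check the "overdriving the headlights" point above is to compare total stopping distance (reaction plus braking) against how far low beams illuminate. Below is a minimal back-of-envelope sketch in Python; the speed, reaction time, deceleration, and low-beam range are all illustrative assumptions, not figures from the crash investigation.

```python
# Back-of-envelope check of "overdriving the headlights": if the total
# stopping distance exceeds the distance the low beams illuminate, an
# obstacle becomes visible too late to stop for. All values below are
# illustrative assumptions, not figures from the crash investigation.

MPH_TO_MPS = 0.44704

speed_mph = 40.0          # assumed travel speed
reaction_time_s = 1.5     # typical human perception-reaction time
decel_mps2 = 7.0          # hard braking on dry pavement (~0.7 g)
low_beam_range_m = 50.0   # rough low-beam illumination distance

v = speed_mph * MPH_TO_MPS                  # speed in m/s
reaction_dist_m = v * reaction_time_s       # distance covered before braking
braking_dist_m = v ** 2 / (2 * decel_mps2)  # v^2 / (2a), basic kinematics
stopping_dist_m = reaction_dist_m + braking_dist_m

print(f"stopping distance: {stopping_dist_m:.0f} m, "
      f"low-beam range: {low_beam_range_m:.0f} m")
# ~50 m vs ~50 m: essentially no margin, so anything first seen at the
# edge of the beam is very hard to avoid.
```

Under these assumptions the stopping distance roughly equals the low-beam range, which is why an obstacle that only becomes visible at the edge of the beam is effectively unavoidable, for a human or a camera-limited system.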
Why we’re aiming for fully self-driving vehicles
As we see more cars with semi-autonomous features on the roads, we’re often asked why we’re aiming for fully
autonomous vehicles. To be honest, we didn’t always have this as our plan.
In the fall of 2012, our software had gotten good enough that we wanted to have people who weren’t on our team
test it, so we could learn how they felt about it and if it’d be useful for them. We found volunteers, all Google
employees, to use our Lexus vehicles on the freeway portion of their commute. They’d have to drive the Lexus to
the freeway and merge on their own, and then they could settle into a single lane and turn on the self-driving
feature. We told them this was early stage technology and that they should pay attention 100% of the time -- they
needed to be ready to take over driving at any moment. They signed forms promising to do this, and they knew
they’d be on camera.
We were surprised by what happened over the ensuing weeks. On the upside, everyone told us that our technology
made their commute less stressful and tiring. One woman told us she suddenly had the energy to exercise and
cook dinner for her family, because she wasn’t exhausted from fighting traffic. One guy originally scoffed at us
because he loved driving his sports car -- but he also enjoyed handing the commute tedium to the car.
But we saw some worrying things too. People didn’t pay attention like they should have. We saw some silly
behavior, including someone who turned around and searched the back seat for his laptop to charge his phone --
while traveling at 65 mph down the freeway! We saw human nature at work: people trust technology very quickly
once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are
encouraged to switch off and relax.
We did spend some time thinking about ways we could build features to address what is often referred to as “The
Handoff Problem” -- keeping drivers engaged enough that they can take control of driving as needed. The industry
knows this is a big challenge, and they’re spending lots of time and effort trying to solve this. One study by the
Virginia Tech Transportation Institute found that drivers required somewhere between five and eight seconds to
safely regain control of a semi-autonomous system. In a NHTSA study published in August 2015, some participants
took up to 17 seconds to respond to alerts and take control of the vehicle -- in that time they'd have covered
more than a quarter of a mile at highway speeds. There’s also the challenge of context -- once you take back
control, do you have enough understanding of what’s going on around the vehicle to make the right decision?
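To make those takeover times concrete as distances, the sketch below converts them at an assumed 65 mph highway speed. The 5-8 second and up-to-17-second figures are from the studies cited above; the speed is an assumption for illustration.

```python
# Distance traveled while a driver regains control after an alert.
# The 5-8 s (VTTI) and up-to-17 s (NHTSA, 2015) takeover times are the
# figures cited above; 65 mph is an assumed highway speed.

MPH_TO_MPS = 0.44704
METERS_PER_MILE = 1609.34

speed_mps = 65.0 * MPH_TO_MPS
for takeover_s in (5, 8, 17):
    meters = speed_mps * takeover_s
    print(f"{takeover_s:>2} s -> {meters:5.0f} m "
          f"({meters / METERS_PER_MILE:.2f} mi)")
# 17 s covers ~494 m, about 0.31 miles -- matching the "more than a
# quarter of a mile" figure in the text.
```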
In the end, our tests led us to our decision to develop vehicles that could drive themselves from point A to B, with
no human intervention. (We were also persuaded by the opportunity to help everyone get around, not just people
who can drive.) Everyone thinks getting a car to drive itself is hard. It is. But we suspect it’s probably just as hard to
get people to pay attention when they're bored or tired and the technology is saying “don't worry, I've got this...for now.”
An Analysis of Driver Inattention Using a Case-Crossover Approach on 100-Car Data: Final Report
MOBILEYE SEES WHAT UBER DIDN’T, CYCLIST DETECTED 1 SECOND BEFORE IMPACT
https://insideevs.com/mobileye-sees-what-uber-didnt-cyclist-detected-1-second-before-impact/
Uber Disabled Volvo SUV's Safety System Before Fatality
https://www.bloomberg.com/news/articles/2018-03-26/uber-disabled-volvo-suv-s-standard-safety-system-before-fatality
Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera.
“We don’t want people to be confused or think it was a failure of the technology that we supply for Volvo, because that’s not the case,” Zach Peterson, a spokesman for Aptiv Plc, said by phone. The Volvo XC90’s standard advanced driver-assistance system “has nothing to do” with the Uber test vehicle’s autonomous driving system, he said.
Aptiv is speaking up for its technology to avoid being tainted by the fatality involving Uber, which may have been following standard practice by disabling other tech as it develops and tests its own autonomous driving system. . . .
Nvidia halts self-driving tests; Uber drops test permit in California
https://www.autoblog.com/2018/03/28/nvidia-autonomous-testing-uber-accident/
Chipmaker CEO on fatal crash: ‘We don't know what happened’
Lidar maker Velodyne is confused by fatal Uber crash
https://www.autoblog.com/2018/03/26/lidar-maker-velodyne-fatal-uber-crash/
‘Our lidar doesn't make the decision to put on the brakes ...’
Velodyne president Marta Thoma Hall told Bloomberg, "We are as baffled as anyone else. Certainly, our lidar is capable of clearly imaging Elaine and her bicycle in this situation. However, our lidar doesn't make the decision to put on the brakes or get out of her way."
The company, which supplies lidar units to a number of tech firms testing autonomous cars, wants to make sure its equipment isn't blamed for the crash. The accident took place around 10 p.m., and in fact, lidar works better at night than during the day because the lasers won't suffer any interference from daylight reflections. . . .
Uber avoids legal battle with family of autonomous vehicle victim
https://www.reuters.com/article/us-autos-selfdriving-uber-settlement/uber-avoids-legal-battle-with-family-of-autonomous-vehicle-victim-idUSKBN1H5092
Our data shows that Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out, and roughly 20,000 times since just the beginning of the year, and there has never been an accident that we know of. There are over 200 successful Autopilot trips per day on this exact stretch of road.
Full statement here:

This is the true problem of autonomy: getting a machine learning system to be 99% correct is relatively easy, but getting it to be 99.9999% correct, which is where it ultimately needs to be, is vastly more difficult. One can see this in the annual machine-vision competitions, where the computer will properly identify something as a dog more than 99% of the time but might occasionally call it a potted plant. Making such mistakes at 70 mph would be highly problematic.
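One way to feel the gap between 99% and 99.9999% is as the mean distance driven between perception errors. The sketch below assumes, purely for illustration, a system making ten classifications per second at 70 mph; neither number is taken from any particular vehicle.

```python
# Mean distance driven between perception errors at a given accuracy.
# Assumes 10 classifications per second at 70 mph; both numbers are
# illustrative, not taken from any specific system.

MPH_TO_MPS = 0.44704
METERS_PER_MILE = 1609.34

speed_mps = 70.0 * MPH_TO_MPS
frames_per_s = 10.0

for accuracy in (0.99, 0.999999):
    errors_per_s = frames_per_s * (1.0 - accuracy)
    miles_between_errors = speed_mps / errors_per_s / METERS_PER_MILE
    print(f"{accuracy * 100:.4f}% accurate -> one error every "
          f"{miles_between_errors:,.1f} miles")
# 99% accurate: one error roughly every 0.2 miles.
# 99.9999% accurate: one error roughly every ~1,900 miles.
```

Under these assumptions, a 99%-accurate system errs every fifth of a mile, while a 99.9999%-accurate one goes on the order of two thousand miles between errors, which is the difference between unusable and plausibly deployable.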