Tesla's autopilot, on the road

CNN picked it up now too


https://www.google.com/amp/s/amp.cnn.com/cnn/2021/08/30/business/telsa-crash-police-car/index.html

I am curious how many more will be allowed before Tesla is told to disable the feature pending investigation.
 
Tesla must deliver Autopilot crash data to federal auto safety watchdog by October 22
https://www.cnbc.com/2021/09/01/tesla-must-deliver-autopilot-crash-data-to-nhtsa-by-october-22.html
 
https://www.consumerreports.org/car-safety/tesla-autopilot-investigation-regulators-seek-automaker-data-a1090896405/

Just adding this here for completeness.

Looks like NHTSA is interested in how autopilot compares with ADAS systems from the other vendors. Wonder if they'd be shocked at how supercruise can be fooled by gag-glasses?
 
Audio captures moments CHP stopped apparently passed-out driver of Tesla on Autopilot in Glendale
https://abc7.com/tesla-autopilot-passed-out-driver-glendale-los-angeles/11028280/
 
Oils4AsphaultOnly said:
https://www.consumerreports.org/car-safety/tesla-autopilot-investigation-regulators-seek-automaker-data-a1090896405/

Just adding this here for completeness.

Looks like NHTSA is interested in how autopilot compares with ADAS systems from the other vendors. Wonder if they'd be shocked at how supercruise can be fooled by gag-glasses?



Presumably some of them may read CR and C&D as well. Of course, many of the points raised in the letter were recommendations made by the NTSB to NHTSA years ago, but at least the NHTSA is finally awakening from their long snooze.

As to the use of gag glasses, that was foreseen by others including me. From an exchange with you in 2019:
The sensors in the steering wheel that register the human touch, though, are easy to cheat, as YouTube videos demonstrate. A well-wedged orange or water bottle can do the trick. Posters in online forums say they have strapped weights onto their wheels and experimented with Ziplock bags and “mini weights.” For a while, drivers even could buy an Autopilot Buddy “nag reduction device,” until the feds sent the company a cease-and-desist letter this summer.

All of which makes the design of similar systems offered by Cadillac and Audi look rather better suited to the task of keeping human eyes on the road, even as the car works the steering wheel, throttle, and brakes. Cadillac’s Super Cruise includes a gumdrop-sized infrared camera on the steering column that monitors the driver’s head position: Look away or down for too long, and the system issues a sharp beep. Audi’s Traffic Jam Pilot does the same with an interior gaze-monitoring camera.

Humans being human, they will presumably find ways to cheat those systems (perhaps borrowing inspiration from Homer Simpson*), but it's clear a system that monitors where a driver is looking is more robust for this purpose than one that can be fooled by citrus.

It’s possible Tesla will give it a shot. The Model 3 comes with an interior camera mounted near the rearview mirror, and though the automaker hasn’t confirmed what it’s for, don’t be surprised if an over-the-air software update suddenly gives those cars the ability to creep on their human overlords. . . .

*If that doesn't work, I'm sure someone will try painting eyes on their eyelids.

https://www.mynissanleaf.com/viewtopic.php?f=12&t=22213&p=557216&hilit=Homer+simpson#p557216


For those not familiar with the reference: https://youtu.be/U6qBnykH0DU
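
As an aside, the practical difference between a torque-based "hands on wheel" check and a camera-based gaze check comes down to what each one can actually observe. Here's a minimal Python sketch of the two checks; the thresholds, field names, and timings are invented for illustration and aren't any manufacturer's actual values:

from dataclasses import dataclass

@dataclass
class DriverState:
    wheel_torque_nm: float        # torque currently sensed at the steering wheel
    eyes_on_road: bool            # output of an interior gaze-tracking camera
    seconds_since_eyes_on: float  # time since the camera last saw eyes on the road

TORQUE_THRESHOLD_NM = 0.3         # hypothetical "hands detected" threshold
GAZE_TIMEOUT_S = 4.0              # hypothetical allowed glance-away time

def torque_based_check(state: DriverState) -> bool:
    """Passes whenever *something* applies torque: a hand, an orange, a wheel weight."""
    return state.wheel_torque_nm >= TORQUE_THRESHOLD_NM

def gaze_based_check(state: DriverState) -> bool:
    """Passes only if the camera has seen the driver's eyes on the road recently."""
    return state.eyes_on_road or state.seconds_since_eyes_on <= GAZE_TIMEOUT_S

# A well-wedged water bottle defeats the first check but not the second:
cheater = DriverState(wheel_torque_nm=0.5, eyes_on_road=False, seconds_since_eyes_on=60.0)
print(torque_based_check(cheater))  # True  -> no nag, even though nobody is watching the road
print(gaze_based_check(cheater))    # False -> escalate warnings / disengage

Painted eyelids notwithstanding, a check that has to see a face looking forward is much harder to spoof than a check that only has to feel a force on the wheel rim.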
 
ABG:
Tesla needs to address 'basic safety issues' before expanding tech, says NTSB chief

Tesla 'has clearly misled numerous people to misuse and abuse technology'

https://www.autoblog.com/2021/09/19/tesla-full-self-driving-autopilot-basic-safety-issues-ntsb/


Before Tesla considers rolling out a further expansion of its semi-autonomous driving technology, the automaker should take a hard look at what it's already released, according to Jennifer Homendy, head of the National Transportation Safety Board. In an interview with the Wall Street Journal, Homendy said, "Basic safety issues have to be addressed before they’re then expanding it to other city streets and other areas."

Homendy further suggested that Tesla "has clearly misled numerous people to misuse and abuse technology." It seems Homendy is referring in part to what she calls "misleading and irresponsible" naming and marketing of the automaker's technology, which includes provisions to automatically steer, accelerate and brake but requires the full attention of a licensed driver who is ready and able to take over if and when the software encounters a situation it can't handle. Tesla calls some of its technologies "Full Self Driving" and "Autopilot," despite the fact that the vehicles are not actually capable of autonomously driving themselves. . . .


Of course, the NTSB and various consumer organizations have been making these same points for the past 5+ years, but maybe the NHTSA will finally mandate more responsible behavior.
 
GRA said:
ABG:
Tesla needs to address 'basic safety issues' before expanding tech, says NTSB chief

Tesla 'has clearly misled numerous people to misuse and abuse technology'

https://www.autoblog.com/2021/09/19/tesla-full-self-driving-autopilot-basic-safety-issues-ntsb/


Before Tesla considers rolling out a further expansion of its semi-autonomous driving technology, the automaker should take a hard look at what it's already released, according to Jennifer Homendy, head of the National Transportation Safety Board. In an interview with the Wall Street Journal, Homendy said, "Basic safety issues have to be addressed before they’re then expanding it to other city streets and other areas."

Homendy further suggested that Tesla "has clearly misled numerous people to misuse and abuse technology." It seems Homendy is referring in part to what she calls "misleading and irresponsible" naming and marketing of the automaker's technology, which includes provisions to automatically steer, accelerate and brake but requires the full attention of a licensed driver who is ready and able to take over if and when the software encounters a situation it can't handle. Tesla calls some of its technologies "Full Self Driving" and "Autopilot," despite the fact that the vehicles are not actually capable of autonomously driving themselves. . . .


Of course, the NTSB and various consumer organizations have been making these same points for the past 5+ years, but maybe the NHTSA will finally mandate more responsible behavior.

Autopilot, even used inappropriately, has saved lives. The most recent incident was a drunk woman who passed out while the car was driving on the highway. She could've taken out other drivers if she had tried to drive home.

And no, there are enough drunk drivers on the road today (and their associated deaths and destruction) to refute the speculation that she wouldn't have tried to drive drunk if it weren't for the availability of autopilot. People being stupid will always exist. Taking the wheels away from them, and doing it as quickly as possible, would be safest for everyone. Try to distinguish stupid users from stupid tech.
 
Oils4AsphaultOnly said:
GRA said:
ABG:
Tesla needs to address 'basic safety issues' before expanding tech, says NTSB chief

Tesla 'has clearly misled numerous people to misuse and abuse technology'

https://www.autoblog.com/2021/09/19/tesla-full-self-driving-autopilot-basic-safety-issues-ntsb/


Before Tesla considers rolling out a further expansion of its semi-autonomous driving technology, the automaker should take a hard look at what it's already released, according to Jennifer Homendy, head of the National Transportation Safety Board. In an interview with the Wall Street Journal, Homendy said, "Basic safety issues have to be addressed before they’re then expanding it to other city streets and other areas."

Homendy further suggested that Tesla "has clearly misled numerous people to misuse and abuse technology." It seems Homendy is referring in part to what she calls "misleading and irresponsible" naming and marketing of the automaker's technology, which includes provisions to automatically steer, accelerate and brake but requires the full attention of a licensed driver who is ready and able to take over if and when the software encounters a situation it can't handle. Tesla calls some of its technologies "Full Self Driving" and "Autopilot," despite the fact that the vehicles are not actually capable of autonomously driving themselves. . . .


Of course, the NTSB and various consumer organizations have been making these same points for the past 5+ years, but maybe the NHTSA will finally mandate more responsible behavior.

Autopilot, even used inappropriately, has saved lives. The most recent incident was a drunk woman who passed out while the car was driving on the highway. She could've taken out other drivers if she had tried to drive home.

And no, there are enough drunk drivers on the road today (and their associated deaths and destruction) to refute the speculation that she wouldn't have tried to drive drunk if it weren't for the availability of autopilot. People being stupid will always exist. Taking the wheels away from them, and doing it as quickly as possible, would be safest for everyone. Try to distinguish stupid users from stupid tech.


Sure, we'll always have drivers behaving as humans; that's entirely foreseeable, which is why it's incumbent on any company trying to design a system to replace them to take human factors into account. That's why Chauffeur (Google's AV division before it was spun off as Waymo) abandoned development of a freeway-only L2 ADAS, similar to but preceding Cadillac's Supercruise. Despite giving each Google employee who was allowed to use the development system two hours of training and telling them to keep their hands on the wheel and pay attention at all times, the monitoring cameras installed to record driver behavior showed that, as soon as the employees got accustomed to the car, they ignored those instructions and proceeded to eat, put on makeup, use their laptops/phones, or sleep: exactly the same behavior we've seen in so many YouTube videos of Tesla drivers using A/P. Chauffeur immediately stopped development of L2 systems, realizing that the only safe approach was a system that didn't require a human to monitor it and be instantly ready to resume control, because any system that did was predicated on humans not behaving like humans.

Tesla doesn't even bother to require or offer any training (not that it would work), it just pretends that humans won't act like humans and then says it's not on Tesla when they do. Here's a relevant quote from the GCR article reporting the same story as above:
Autopilot—and then Navigate on Autopilot—were features branded "beta" at first.

An informal poll we took years ago indicated that not very many Tesla owners were taking the "beta" part all that seriously. With some essentially untrained Tesla drivers going "no hands" in the city, are things any different now?
https://www.greencarreports.com/new...sleading-irresponsible-city-driving-beta-test


It may be Tesla will no longer be able to get away with denying any responsibility on their part; I hope so.
 
GRA said:
Oils4AsphaultOnly said:
GRA said:
ABG:

https://www.autoblog.com/2021/09/19/tesla-full-self-driving-autopilot-basic-safety-issues-ntsb/





Of course, the NTSB and various consumer organizations have been making these same points for the past 5+ years, but maybe the NHTSA will finally mandate more responsible behavior.

Autopilot, even used inappropriately, has saved lives. The most recent incident was a drunk woman who passed out while the car was driving on the highway. She could've taken out other drivers if she had tried to drive home.

And no, there are enough drunk drivers on the road today (and their associated deaths and destruction) to refute the speculation that she wouldn't have tried to drive drunk if it weren't for the availability of autopilot. People being stupid will always exist. Taking the wheels away from them, and doing it as quickly as possible, would be safest for everyone. Try to distinguish stupid users from stupid tech.


Sure, we'll always have drivers behaving as humans; that's entirely foreseeable, which is why it's incumbent on any company trying to design a system to replace them to take human factors into account. That's why Chauffeur (Google's AV division before it was spun off as Waymo) abandoned development of a freeway-only L2 ADAS, similar to but preceding Cadillac's Supercruise. Despite giving each Google employee who was allowed to use the development system two hours of training and telling them to keep their hands on the wheel and pay attention at all times, the monitoring cameras installed to record driver behavior showed that, as soon as the employees got accustomed to the car, they ignored those instructions and proceeded to eat, put on makeup, use their laptops/phones, or sleep: exactly the same behavior we've seen in so many YouTube videos of Tesla drivers using A/P. Chauffeur immediately stopped development of L2 systems, realizing that the only safe approach was a system that didn't require a human to monitor it and be instantly ready to resume control, because any system that did was predicated on humans not behaving like humans.

Tesla doesn't even bother to require or offer any training (not that it would work), it just pretends that humans won't act like humans and then says it's not on Tesla when they do. Here's a relevant quote from the GCR article reporting the same story as above:
Autopilot—and then Navigate on Autopilot—were features branded "beta" at first.

An informal poll we took years ago indicated that not very many Tesla owners were taking the "beta" part all that seriously. With some essentially untrained Tesla drivers going "no hands" in the city, are things any different now?
https://www.greencarreports.com/new...sleading-irresponsible-city-driving-beta-test


It may be Tesla will no longer be able to get away with denying any responsibility on their part; I hope so.

You can't get to level 5 autonomous systems in a reasonable length of time using Google's method. They've been at it for almost a dozen years, and it fails to even deal with unplanned traffic cones. That's like the space shuttle method of designing safety from the top down. Top-down development will never solve complex problems. It's slow, cumbersome, and eventually too complicated to make work (especially as the core developers age out). Rapid iteration (failing fast and fixing fast) is how the job gets done, because 35,000 people die every year while we wait for Google to get it done "right".
 
Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
Autopilot, even used inappropriately, has saved lives. The most recent incident was a drunk woman who passed out while the car was driving on the highway. She could've taken out other drivers if she had tried to drive home.

And no, there are enough drunk drivers on the road today (and their associated deaths and destruction) to refute the speculation that she wouldn't have tried to drive drunk if it weren't for the availability of autopilot. People being stupid will always exist. Taking the wheels away from them, and doing it as quickly as possible, would be safest for everyone. Try to distinguish stupid users from stupid tech.


Sure, we'll always have drivers behaving as humans; that's entirely foreseeable, which is why it's incumbent on any company trying to design a system to replace them to take human factors into account. That's why Chauffeur (Google's AV division before it was spun off as Waymo) abandoned development of a freeway-only L2 ADAS, similar to but preceding Cadillac's Supercruise. Despite giving each Google employee who was allowed to use the development system two hours of training and telling them to keep their hands on the wheel and pay attention at all times, the monitoring cameras installed to record driver behavior showed that, as soon as the employees got accustomed to the car, they ignored those instructions and proceeded to eat, put on makeup, use their laptops/phones, or sleep: exactly the same behavior we've seen in so many YouTube videos of Tesla drivers using A/P. Chauffeur immediately stopped development of L2 systems, realizing that the only safe approach was a system that didn't require a human to monitor it and be instantly ready to resume control, because any system that did was predicated on humans not behaving like humans.

Tesla doesn't even bother to require or offer any training (not that it would work), it just pretends that humans won't act like humans and then says it's not on Tesla when they do. Here's a relevant quote from the GCR article reporting the same story as above:
Autopilot—and then Navigate on Autopilot—were features branded "beta" at first.

An informal poll we took years ago indicated that not very many Tesla owners were taking the "beta" part all that seriously. With some essentially untrained Tesla drivers going "no hands" in the city, are things any different now?
https://www.greencarreports.com/new...sleading-irresponsible-city-driving-beta-test


It may be Tesla will no longer be able to get away with denying any responsibility on their part; I hope so.

You can't get to level 5 autonomous systems in a reasonable length of time using Google's method. They've been at it for almost a dozen years, and it fails to even deal with unplanned traffic cones. That's like the space shuttle method of designing safety from the top down. Top-down development will never solve complex problems. It's slow, cumbersome, and eventually too complicated to make work (especially as the core developers age out). Rapid iteration (failing fast and fixing fast) is how the job gets done, because 35,000 people die every year while we wait for Google to get it done "right".


We don't need L5; good L3 (maybe) and L4 that showed an independently verified statistical decrease would be enough. You mention a drunk driver who might have killed someone if A/P hadn't been driving. That's quite possible. But we know that a non-passenger has been killed by a car using A/P, and others injured, when a normally alert human driver wouldn't have done so. If you want to risk your life on private roads by relying on immature ADAS, be my guest. But you shouldn't be able to make that choice for me on public roads. I'm all for automatic safety systems like AEB as a back-up to humans; they have been shown to reduce crashes in certain situations (though not in common ones like cross-traffic or stopped vehicles in lanes), but they are a backup system, not primary. None of the current ADAS systems has shown the ability to be safer than a normally alert human driver, nor are they likely to.
 
GRA said:
Oils4AsphaultOnly said:
GRA said:
Sure, we'll always have drivers behaving as humans; that's entirely foreseeable, which is why it's incumbent on any company trying to design a system to replace them to take human factors into account. That's why Chauffeur (Google's AV division before it was spun off as Waymo) abandoned development of a freeway-only L2 ADAS, similar to but preceding Cadillac's Supercruise. Despite giving each Google employee who was allowed to use the development system two hours of training and telling them to keep their hands on the wheel and pay attention at all times, the monitoring cameras installed to record driver behavior showed that, as soon as the employees got accustomed to the car, they ignored those instructions and proceeded to eat, put on makeup, use their laptops/phones, or sleep: exactly the same behavior we've seen in so many YouTube videos of Tesla drivers using A/P. Chauffeur immediately stopped development of L2 systems, realizing that the only safe approach was a system that didn't require a human to monitor it and be instantly ready to resume control, because any system that did was predicated on humans not behaving like humans.

Tesla doesn't even bother to require or offer any training (not that it would work), it just pretends that humans won't act like humans and then says it's not on Tesla when they do. Here's a relevant quote from the GCR article reporting the same story as above:
https://www.greencarreports.com/new...sleading-irresponsible-city-driving-beta-test


It may be Tesla will no longer be able to get away with denying any responsibility on their part; I hope so.

You can't get to level 5 autonomous systems in a reasonable length of time using Google's method. They've been at it for almost a dozen years, and it fails to even deal with unplanned traffic cones. That's like the space shuttle method of designing safety from the top down. Top-down development will never solve complex problems. It's slow, cumbersome, and eventually too complicated to make work (especially as the core developers age out). Rapid iteration (failing fast and fixing fast) is how the job gets done, because 35,000 people die every year while we wait for Google to get it done "right".


We don't need L5; good L3 (maybe) and L4 that showed an independently verified statistical decrease would be enough. You mention a drunk driver who might have killed someone if A/P hadn't been driving. That's quite possible. But we know that a non-passenger has been killed by a car using A/P, and others injured, when a normally alert human driver wouldn't have done so. If you want to risk your life on private roads by relying on immature ADAS, be my guest. But you shouldn't be able to make that choice for me on public roads. I'm all for automatic safety systems like AEB as a back-up to humans; they have been shown to reduce crashes in certain situations (though not in common ones like cross-traffic or stopped vehicles in lanes), but they are a backup system, not primary. None of the current ADAS systems has shown the ability to be safer than a normally alert human driver, nor are they likely to.

"But we know that a non-passenger has been killed by a car using A/P and others injured, when a normally alert human driver wouldn't have done so. "

And how many non-passengers have been killed by human drivers that were NOT normally alert? 10,000 deaths per year by drunk drivers is the current total.

We have the number of A/P miles driven (well into the billions), and the number of deaths from them (1?). That single non-passenger was killed by a driver that was too tired to drive. That same driver profile (being too tired to drive) has already killed other people without A/P. And A/P has made huge improvements in the years since that accident.
 
Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
You can't get to level 5 autonomous systems in a reasonable length of time using Google's method. They've been at it for almost a dozen years, and it fails to even deal with unplanned traffic cones. That's like the space shuttle method of designing safety from the top down. Top-down development will never solve complex problems. It's slow, cumbersome, and eventually too complicated to make work (especially as the core developers age out). Rapid iteration (failing fast and fixing fast) is how the job gets done, because 35,000 people die every year while we wait for Google to get it done "right".


We don't need L5; good L3 (maybe) and L4 that showed an independently verified statistical decrease would be enough. You mention a drunk driver who might have killed someone if A/P hadn't been driving. That's quite possible. But we know that a non-passenger has been killed by a car using A/P, and others injured, when a normally alert human driver wouldn't have done so. If you want to risk your life on private roads by relying on immature ADAS, be my guest. But you shouldn't be able to make that choice for me on public roads. I'm all for automatic safety systems like AEB as a back-up to humans; they have been shown to reduce crashes in certain situations (though not in common ones like cross-traffic or stopped vehicles in lanes), but they are a backup system, not primary. None of the current ADAS systems has shown the ability to be safer than a normally alert human driver, nor are they likely to.

"But we know that a non-passenger has been killed by a car using A/P and others injured, when a normally alert human driver wouldn't have done so. "

And how many non-passengers have been killed by human drivers that were NOT normally alert? 10,000 deaths per year by drunk drivers is the current total.


Quite so, so we don't need to add a system that's no better than an un-alert human driver, and worse than an alert one.



Oils4AsphaultOnly said:
We have the number of A/P miles driven (well into the billions), and the number of deaths from them (1?). That single non-passenger was killed by a driver that was too tired to drive. That same driver profile (being too tired to drive) has already killed other people without A/P. And A/P has made huge improvements in the years since that accident.



At least 4 deaths that we know of, three occupants and one non-occupant, plus various injuries. Then there's the Uber death of Elaine Herzberg, which was a similar case of human lack of attention to monitoring an automated system.

What we have are statistics interpreted by Tesla, and they, like every other company, cherry-pick and interpret the numbers to make their systems look as good as possible. We've had statisticians point out methodological errors behind Tesla's accident rate claims. Here's an example of some of the criticisms: https://www.businessinsider.com/tes...opilot-safety-data-flaws-experts-nhtsa-2021-4

Which is why the NTSB has been nagging NHTSA to require all companies using ADAS and higher techs to provide certain types of data in a standard format to NHTSA, so that it can be independently verified and compared. Absent that,
Abuelsamid says the issues with Tesla's quarterly data all boil down to one old adage: "Tell me what side of the argument you're on, and I'll give you the statistics to prove that you're right."
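
To make the statisticians' objection concrete, the core problem is the denominator: Autopilot miles are overwhelmingly limited-access-highway miles, which have a much lower crash rate per mile than surface streets, so comparing "crashes per A/P mile" against an all-roads average flatters A/P. A quick Python illustration with made-up numbers (none of these are real Tesla or NHTSA figures):

def crashes_per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1_000_000)

# Hypothetical fleet numbers, chosen only to show the effect:
ap_crashes, ap_miles = 250, 3_000_000_000          # crashes/miles with A/P engaged (mostly highway)
all_crashes, all_miles = 12_000, 40_000_000_000    # same fleet, all driving, all road types

ap_rate = crashes_per_million_miles(ap_crashes, ap_miles)        # ~0.083
all_rate = crashes_per_million_miles(all_crashes, all_miles)     # 0.300
print(f"A/P rate:       {ap_rate:.3f} crashes per million miles")
print(f"All-miles rate: {all_rate:.3f} crashes per million miles")

# The naive comparison says A/P is ~3.6x safer. But if highway driving in general
# (human or A/P) runs at, say, a third of the all-roads crash rate:
highway_baseline = all_rate / 3                                  # 0.100
print(f"Like-for-like highway baseline: {highway_baseline:.3f}")
# Most of the apparent advantage disappears once the comparison is apples-to-apples,
# which is exactly why standardized, independently verifiable reporting matters.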
 
ABG:
Tesla Autopilot upgraded to slow down when it detects emergency lights

The feature was added in the midst of an NHTSA investigation

https://www.autoblog.com/2021/09/24/tesla-autopilot-emergency-vehicles-nhtsa-investigation/


Tesla has quietly made a significant improvement to its Autopilot suite of electronic driving aids. As of September 2021, in the midst of a federal investigation, some of the cars equipped with the technology gained the ability to slow down when they detect emergency lights.

Instead of letting its CEO announce the new feature on Twitter, the California-based firm merely updated the online owner's manual for the Model 3 and the Model Y. "If the Model 3/ Model Y detects lights from an emergency vehicle when using Autosteer at night on a high-speed road, the driving speed is automatically reduced and the touchscreen displays a message informing you of the slowdown," the update explains. It adds that the system will also emit a chime to remind motorists that they need to keep both hands on the steering wheel and stay aware.

Autopilot automatically resumes the previously-set cruising speed when the system no longer detects the lights. Tesla adds that drivers should never rely on Autopilot to detect the presence of emergency lights, and that its cars may not detect them in all situations.

On the surface, this is the latest in a series of improvements made to Autopilot — the timing is odd, however. There have been several reports of Tesla cars slamming into parked emergency vehicles while allegedly traveling on Autopilot; 11 crashes have been reported since 2018, leaving at least 17 people injured and killing one. The trend caught the attention of the National Highway Traffic Safety Administration (NHTSA), which opened a probe into the crashes on August 16, 2021. It covers almost every Tesla sold here since the 2014 model year.

The investigation is ongoing, so its results haven't been made public yet, but NHTSA asked Tesla to explain how its Autopilot system detects and reacts to emergency vehicles parked on highways on September 1. Tesla has until October 22 to respond or seek an extension. NHTSA has also asked 12 other automakers to provide information on how their systems react to those situations.

Giving cars the ability to identify emergency vehicles is a big step forward, but the feature leaves a lot of unanswered questions. It can only detect lights "at night on a high-speed road" — what happens during the day, in a dense urban area, or both? Can it detect traffic cones and reflective vests? We'll need to wait until the investigation's results are published to learn more about the issue and the proposed solution.


A useful step forward, albeit far from adequate as a substitute for an alert human driver - it wouldn't have prevented the fatal accident where a Tesla hit two people outside of their stopped car, which wasn't an emergency vehicle.

I also question letting the system automatically resume the set speed once it no longer detects emergency lights, given that such slowdowns lead to gawking and often more collisions. I'd think it would be far safer to disengage cruise and require the now presumably fully-alert driver to re-engage it.
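
For clarity, here's a rough Python sketch of the two behaviors being debated: the auto-resume described in the updated owner's manual versus the disengage-and-require-re-engagement alternative suggested above. The state names and logic are my own simplification, not Tesla's actual implementation:

from enum import Enum, auto

class CruiseState(Enum):
    ENGAGED = auto()
    SLOWED_FOR_EMERGENCY = auto()
    DISENGAGED = auto()

def step_auto_resume(state: CruiseState, lights_detected: bool) -> CruiseState:
    """Behavior per the manual: slow for the lights, then quietly resume the set speed."""
    if lights_detected:
        return CruiseState.SLOWED_FOR_EMERGENCY
    if state is CruiseState.SLOWED_FOR_EMERGENCY:
        return CruiseState.ENGAGED          # resumes automatically once lights are gone
    return state

def step_require_reengage(state: CruiseState, lights_detected: bool,
                          driver_reengaged: bool = False) -> CruiseState:
    """Alternative: after the event, hand control back and make the driver opt back in."""
    if lights_detected:
        return CruiseState.SLOWED_FOR_EMERGENCY
    if state is CruiseState.SLOWED_FOR_EMERGENCY:
        return CruiseState.DISENGAGED       # driver must deliberately re-engage
    if state is CruiseState.DISENGAGED and driver_reengaged:
        return CruiseState.ENGAGED
    return state

The difference is small in code but large in practice: the second version forces a deliberate driver action at exactly the moment attention is most likely to have been pulled toward the scene.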
 
GRA said:
ABG:
Tesla Autopilot upgraded to slow down when it detects emergency lights

The feature was added in the midst of an NHTSA investigation

https://www.autoblog.com/2021/09/24/tesla-autopilot-emergency-vehicles-nhtsa-investigation/


Tesla has quietly made a significant improvement to its Autopilot suite of electronic driving aids. As of September 2021, in the midst of a federal investigation, some of the cars equipped with the technology gained the ability to slow down when they detect emergency lights.

Instead of letting its CEO announce the new feature on Twitter, the California-based firm merely updated the online owner's manual for the Model 3 and the Model Y. "If the Model 3/ Model Y detects lights from an emergency vehicle when using Autosteer at night on a high-speed road, the driving speed is automatically reduced and the touchscreen displays a message informing you of the slowdown," the update explains. It adds that the system will also emit a chime to remind motorists that they need to keep both hands on the steering wheel and stay aware.

Autopilot automatically resumes the previously-set cruising speed when the system no longer detects the lights. Tesla adds that drivers should never rely on Autopilot to detect the presence of emergency lights, and that its cars may not detect them in all situations.

On the surface, this is the latest in a series of improvements made to Autopilot — the timing is odd, however. There have been several reports of Tesla cars slamming into parked emergency vehicles while allegedly traveling on Autopilot; 11 crashes have been reported since 2018, leaving at least 17 people injured and killing one. The trend caught the attention of the National Highway Traffic Safety Administration (NHTSA), which opened a probe into the crashes on August 16, 2021. It covers almost every Tesla sold here since the 2014 model year.

The investigation is ongoing, so its results haven't been made public yet, but NHTSA asked Tesla to explain how its Autopilot system detects and reacts to emergency vehicles parked on highways on September 1. Tesla has until October 22 to respond or seek an extension. NHTSA has also asked 12 other automakers to provide information on how their systems react to those situations.

Giving cars the ability to identify emergency vehicles is a big step forward, but the feature leaves a lot of unanswered questions. It can only detect lights "at night on a high-speed road" — what happens during the day, in a dense urban area, or both? Can it detect traffic cones and reflective vests? We'll need to wait until the investigation's results are published to learn more about the issue and the proposed solution.


A useful step forward, albeit far from adequate as a substitute for an alert human driver - it wouldn't have prevented the fatal accident where a Tesla hit two people outside of their stopped car, which wasn't an emergency vehicle.

I also question letting the system automatically resume the set speed once it no longer detects emergency lights, given that such slowdowns lead to gawking and often more collisions. I'd think it would be far safer to disengage cruise and require the now presumably fully-alert driver to re-engage it.

That's a rather self-serving recommendation. I know that your stance is that all L2 ADAS should be banned. If your recommendation were implemented, drivers wouldn't trust their ADAS not to self-disengage (like Ford's "blue cruise" during a moderate curve on a highway), and so they'd stay constantly vigilant (what you're hoping for). But in reality, they would simply choose not to use ADAS at all, and the drunks and distracted drivers would go ahead and drive without it, keeping the death statistics at the status quo. ZERO IMPROVEMENT. That makes the tech useless.

The Tesla A/P data is at least available. No other manufacturer has data on how often their ADAS is used or auto-disengaged.
 
GRA said:
Oils4AsphaultOnly said:
GRA said:
We don't need L5; good L3 (maybe) and L4 that showed an independently verified statistical decrease would be enough. You mention a drunk driver who might have killed someone if A/P hadn't been driving. That's quite possible. But we know that a non-passenger has been killed by a car using A/P, and others injured, when a normally alert human driver wouldn't have done so. If you want to risk your life on private roads by relying on immature ADAS, be my guest. But you shouldn't be able to make that choice for me on public roads. I'm all for automatic safety systems like AEB as a back-up to humans; they have been shown to reduce crashes in certain situations (though not in common ones like cross-traffic or stopped vehicles in lanes), but they are a backup system, not primary. None of the current ADAS systems has shown the ability to be safer than a normally alert human driver, nor are they likely to.

"But we know that a non-passenger has been killed by a car using A/P and others injured, when a normally alert human driver wouldn't have done so. "

And how many non-passengers have been killed by human drivers that were NOT normally alert? 10,000 deaths per year by drunk drivers is the current total.


Quite so, so we don't need to add a system that's no better than an un-alert human driver, and worse than an alert one.



Oils4AsphaultOnly said:
We have the number of A/P miles driven (well into the billions), and the number of deaths from them (1?). That single non-passenger was killed by a driver that was too tired to drive. That same driver profile (being too tired to drive) has already killed other people without A/P. And A/P has made huge improvements in the years since that accident.



At least 4 deaths that we know of, three occupants and one non-occupant, plus various injuries. Then there's the Uber death of Elaine Herzberg, which was a similar case of human lack of attention to monitoring an automated system.

What we have are statistics interpreted by Tesla, and they, like every other company, cherry-pick and interpret the numbers to make their systems look as good as possible. We've had statisticians point out methodological errors behind Tesla's accident rate claims. Here's an example of some of the criticisms: https://www.businessinsider.com/tes...opilot-safety-data-flaws-experts-nhtsa-2021-4

Which is why the NTSB has been nagging NHTSA to require all companies using ADAS and higher techs to provide certain types of data in a standard format to NHTSA, so that it can be independently verified and compared. Absent that,
Abuelsamid says the issues with Tesla's quarterly data all boil down to one old adage: "Tell me what side of the argument you're on, and I'll give you the statistics to prove that you're right."

"At least 4 deaths that we know of, three occupants and one non-occupant"

All four of them were due to abuse of A/P (watching a movie while driving, or asleep at the wheel): situations that would've resulted in death or serious injury anyway, and potentially impacted more than just the drivers themselves.
 
Oils4AsphaultOnly said:
GRA said:
ABG:
Tesla Autopilot upgraded to slow down when it detects emergency lights

The feature was added in the midst of an NHTSA investigation

https://www.autoblog.com/2021/09/24/tesla-autopilot-emergency-vehicles-nhtsa-investigation/


Tesla has quietly made a significant improvement to its Autopilot suite of electronic driving aids. As of September 2021, in the midst of a federal investigation, some of the cars equipped with the technology gained the ability to slow down when they detect emergency lights.

Instead of letting its CEO announce the new feature on Twitter, the California-based firm merely updated the online owner's manual for the Model 3 and the Model Y. "If the Model 3/ Model Y detects lights from an emergency vehicle when using Autosteer at night on a high-speed road, the driving speed is automatically reduced and the touchscreen displays a message informing you of the slowdown," the update explains. It adds that the system will also emit a chime to remind motorists that they need to keep both hands on the steering wheel and stay aware.

Autopilot automatically resumes the previously-set cruising speed when the system no longer detects the lights. Tesla adds that drivers should never rely on Autopilot to detect the presence of emergency lights, and that its cars may not detect them in all situations.

On the surface, this is the latest in a series of improvements made to Autopilot — the timing is odd, however. There have been several reports of Tesla cars slamming into parked emergency vehicles while allegedly traveling on Autopilot; 11 crashes have been reported since 2018, leaving at least 17 people injured and killing one. The trend caught the attention of the National Highway Traffic Safety Administration (NHTSA), which opened a probe into the crashes on August 16, 2021. It covers almost every Tesla sold here since the 2014 model year.

The investigation is ongoing, so its results haven't been made public yet, but NHTSA asked Tesla to explain how its Autopilot system detects and reacts to emergency vehicles parked on highways on September 1. Tesla has until October 22 to respond or seek an extension. NHTSA has also asked 12 other automakers to provide information on how their systems react to those situations.

Giving cars the ability to identify emergency vehicles is a big step forward, but the feature leaves a lot of unanswered questions. It can only detect lights "at night on a high-speed road" — what happens during the day, in a dense urban area, or both? Can it detect traffic cones and reflective vests? We'll need to wait until the investigation's results are published to learn more about the issue and the proposed solution.


A useful step forward, albeit far from adequate as a substitute for an alert human driver - it wouldn't have prevented the fatal accident where a Tesla hit two people outside of their stopped car, which wasn't an emergency vehicle.

I also question letting the system automatically resume the set speed once it no longer detects emergency lights, given that such slowdowns lead to gawking and often more collisions. I'd think it would be far safer to disengage cruise and require the now presumably fully-alert driver to re-engage it.

That's a rather self-serving recommendation. I know that your stance is that all L2 ADAS should be banned. If your recommendation were implemented, drivers wouldn't trust their ADAS not to self-disengage (like Ford's "blue cruise" during a moderate curve on a highway), and so they'd stay constantly vigilant (what you're hoping for). But in reality, they would simply choose not to use ADAS at all, and the drunks and distracted drivers would go ahead and drive without it, keeping the death statistics at the status quo. ZERO IMPROVEMENT. That makes the tech useless.

The Tesla A/P data is at least available. No other manufacturer has data on how often their ADAS is used or auto-disengaged.


So what you're saying is that my belief that the system should keep drivers constantly vigilant, which is what even Tesla says they must be, expects too much of drivers. Thanks for agreeing with me that such systems are simply inadequate for humans to use safely.
 
Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
"But we know that a non-passenger has been killed by a car using A/P and others injured, when a normally alert human driver wouldn't have done so. "

And how many non-passengers have been killed by human drivers that were NOT normally alert? 10,000 deaths per year by drunk drivers is the current total.


Quite so, so we don't need to add a system that's no better than an un-alert human driver, and worse than an alert one.



Oils4AsphaultOnly said:
We have the number of A/P miles driven (well into the billions), and the number of deaths from them (1?). That single non-passenger was killed by a driver that was too tired to drive. That same driver profile (being too tired to drive) has already killed other people without A/P. And A/P has made huge improvements in the years since that accident.



At least 4 deaths that we know of, three occupants and one non-occupant, plus various injuries. Then there's the Uber death of Elaine Herzberg, which was a similar case of human lack of attention to monitoring an automated system.

What we have are statistics interpreted by Tesla, and they, like every other company, cherry-pick and interpret the numbers to make their systems look as good as possible. We've had statisticians point out methodological errors behind Tesla's accident rate claims. Here's an example of some of the criticisms: https://www.businessinsider.com/tes...opilot-safety-data-flaws-experts-nhtsa-2021-4

Which is why the NTSB has been nagging NHTSA to require all companies using ADAS and higher techs to provide certain types of data in a standard format to NHTSA, so that it can be independently verified and compared. Absent that,
Abuelsamid says the issues with Tesla's quarterly data all boil down to one old adage: "Tell me what side of the argument you're on, and I'll give you the statistics to prove that you're right."

"At least 4 deaths that we know of, three occupants and one non-occupant"

All four of them were due to abuse of A/P (watching a movie while driving, or asleep at the wheel): situations that would've resulted in death or serious injury anyway, and potentially impacted more than just the drivers themselves.


Do you suppose that, if they hadn't been trusting A/P to drive the car but had been driving it themselves, they would almost certainly have avoided these accidents? Alert drivers handle exactly the same situations tens if not hundreds of thousands of times a day; it's the inattentive/impaired drivers who crash into crossing trucks, emergency vehicles, etc.

BTW, the known total's five. I was forgetting the first A/P death, in China IIRR, when the car hit a road-sweeper.

Then there was the non-occupant death, of someone standing outside their car who got hit after the Tesla ignored a light or stop sign (forget which). If a human driver had done that, they'd bear full responsibility for the accident, so why doesn't A/P?


Oh, one last thing. While NHTSA doesn't currently require gathering data, states like California do for L4+, maybe L3 too. As cwerdna has repeatedly pointed out, Tesla has been absent from such reports for years, although I think they may have finally appeared in the last one. Hopefully cwerdna can provide the link to the most recent report.

One thing's for sure, as long as companies get to decide what info they provide, it's GIGO.
 
ABG:
NHTSA asks Tesla why it didn't recall cars for Autopilot update

The investigation into Autopilot started after a series of collisions with parked emergency vehicles

https://www.autoblog.com/2021/10/13/tesla-autopilot-nhtsa-recall/


U.S. safety investigators want to know why Tesla didn't file recall documents when it updated Autopilot software to better identify parked emergency vehicles, escalating a simmering clash between the automaker and regulators.

In a letter to Tesla, the National Highway Traffic Safety Administration told the electric car maker Tuesday that it must recall vehicles if an over-the-internet update mitigates a safety defect.

“Any manufacturer issuing an over-the-air update that mitigates a defect that poses an unreasonable risk to motor vehicle safety is required to timely file an accompanying recall notice to NHTSA,” the agency said in a letter to Eddie Gates, Tesla’s director of field quality.

The agency also ordered Tesla to provide information about its “Full Self-Driving” software that's being tested on public roads with some owners.

The latest clash is another sign of escalating tensions between Tesla and the agency that regulates partially automated driving systems.

In August the agency opened an investigation into Tesla's Autopilot after getting multiple reports of vehicles crashing into emergency vehicles with warning lights flashing that were stopped on highways.

The letter was posted on the NHTSA website early Wednesday. A message was left early Wednesday seeking comment from Tesla, which has disbanded its media relations department. . . .

According to the agency, Tesla did an over-the-internet software update in late September that was intended to improve detection of emergency vehicle lights in low-light conditions. The agency says that Tesla is aware that federal law requires automakers to do a recall if they find out that vehicles or equipment have safety defects.

The agency asked for information about Tesla’s “Emergency Light Detection Update” that was sent to certain vehicles “with the stated purpose of detecting flashing emergency vehicle lights in low light conditions and then responding to said detection with driver alerts and changes to the vehicle speed while Autopilot is engaged.”

The letter asks for a list of events that motivated the software update, as well as what vehicles it was sent to and whether the measures extend to Tesla's entire fleet.

It also asks the Palo Alto, California, company whether it intends to file recall documents. “If not, please furnish Tesla's technical and/or legal basis for declining to do so,” the agency asks.

When automakers find out about a safety defect, they must tell NHTSA within five working days, and they’re required to do recalls. NHTSA monitors the recalls to make sure they cover all affected vehicles and the automakers are making proper efforts to contact all owners.

Tesla has to comply with the request by Nov. 1 or face court action and civil fines of more than $114 million, the agency wrote.

In a separate special order sent to Tesla, NHTSA says that the company may be taking steps to hinder the agency’s access to safety information by requiring drivers who are testing “Full Self-Driving” software to sign non-disclosure agreements.

The order demands that Tesla describe the non-disclosure agreements and how they are signed by Tesla drivers. The company also must say whether Tesla requires owners of vehicles equipped with Autopilot to agree “to any terms that would prevent or discourage vehicle owners from sharing information about or discussing any aspect of Autopilot with any person other than Tesla.”

Responses must be made by a Tesla officer under oath. If Tesla fails to fully comply, the order says the matter could be referred to the Justice Department for court action to force responses. It also threatens more fines of over $114 million. . . .

The new demands from NHTSA signal a tougher regulatory stance under President Joe Biden on automated vehicle safety compared with the previous administrations. The agency had appeared reluctant to regulate the new technology for fear of hampering adoption of the potentially life-saving systems.

The National Transportation Safety Board, which also has investigated some of the Tesla crashes dating to 2016, has recommended that NHTSA and Tesla limit Autopilot’s use to areas where it can safely operate. The NTSB also recommended that NHTSA require Tesla to have a better system to make sure drivers are paying attention. The NTSB has no enforcement powers and can only make recommendations to other federal agencies.
 
There's no NDA for the current crop of FSDBeta testers who requested FSDBeta via "the button".
Guessing the Tesla employees who have been testing over the past years did have some sort of NDA.
 
Oils4AsphaultOnly said:
GRA said:
Some more info, via ABG:
NTSB: Tesla owner started trip in driver's seat before fatal crash

The preliminary report offers no explanation as to the cause

https://www.autoblog.com/2021/05/10/tesla-texas-crash-ntsb-report/


Home security camera footage shows that the owner of a Tesla got into the driver's seat of the car shortly before a deadly crash in suburban Houston, according to a government report Monday.

But the preliminary report on the crash that killed two men doesn't explain the mystery of why authorities found no one behind the wheel of the car, which burst into flames after crashing about 550 feet (170 meters) from the owner's home. Nor does it conclusively say whether Tesla's “Autopilot” partially automated driver-assist system was operating at the time of the crash, although it appears unlikely.

The National Transportation Safety Board said it's still investigating all aspects of the crash. An onboard data storage device in the console, however, was destroyed by fire. A computer that records air bag and seat belt status as well as speed and acceleration was damaged and is being examined at an NTSB lab.

The NTSB said it tested a different Tesla vehicle on the same road, and the Autopilot driver-assist system could not be fully used. Investigators could not get the system's automated steering system to work, but were able to use Traffic Aware Cruise Control.

Autopilot needs both the cruise control and the automatic steering to function. Traffic Aware Cruise Control can keep the car a safe distance from vehicles in front of it, while autosteer keeps it in its own lane.

“The NTSB continues to collect data to analyze the crash dynamics, postmortem toxicology test results, seat belt use, occupant egress and electric vehicle fires,” the agency said in its report. “All aspects of the crash remain under investigation as the NTSB determines the probable cause. . . .”


Curiouser and curiouser. Direct link to report: https://www.ntsb.gov/news/press-releases/Pages/NR20210510.aspx

There's nothing curious about it. The driver lost control and crashed within 550 ft of his driveway. The doors were jammed shut from the crash (which happens - remember Paul Walker?), and the driver tried to flee through the backseat, but didn't make it (that's why he was in the backseat). We've long known that autopilot wasn't a contributing factor and this report was released to settle the FUD.

The constable at the scene should never have claimed that no one was driving the car. He should've stopped at "no one was in the driver's seat" and not claimed anything beyond that as fact.

Hey look! I said months ago that "the driver tried to flee through the backseat, but didn't make it", and now evidence shows that I was spot on!

https://electrek.co/2021/10/21/tesla-autopilot-vindicated-event-data-in-highly-publicized-fatal-crash/

Some of us are just better at drawing logical conclusions. ;-)
 