Tesla's autopilot, on the road

cwerdna said:
GRA said:
I'm a bit pushed for time this week, so I'll have to put off my reply for a while. I'll get to it when I can. In the meantime, you can ponder this while hopefully reconsidering your answer to my question re what constitutes adequate pre-public deployment development testing:
Tesla's Navigate on Autopilot Requires Significant Driver Intervention
CR finds that latest version of Tesla's automatic lane-changing feature is far less competent than a human driver
For those who don't follow along on TMC (I take hiatuses sometimes for months at a time), CR's findings (https://www.consumerreports.org/autonomous-driving/tesla-navigate-on-autopilot-automatic-lane-change-requires-significant-driver-intervention/) aren't really new. There are tons of threads and posts on TMC about how poorly NoA works, in general. It's been going on ever since the feature was rolled out.

And, IIRC, there are definitely some software releases that are worse than others from an AP and NoA POV. Latest isn't always the greatest.

Since GRA won't post this, I thought I'd do you both a favor and post it: https://insideevs.com/news/351110/consumer-reports-tesla-navigate-autopilot/amp/

Turns out, CR actually approves of NoA. They just don't like the version that does the lane-changes for you. Go figure. I'll still make my own judgements based upon the actual use of the product though, instead of hearsay.
 
There might be some confusion on lane changing, so let me interject my experience with changing lanes in my Model 3. My 10-month-old long-range, rear-wheel-drive car has what at the time were called the Enhanced Auto Pilot (EAP) and Full Self-Driving (FSD) options. The former has several features such as Traffic-Aware Cruise Control (TACC), Summon (to go straight into and out of a parking spot), Self Parking (to automatically parallel and perpendicular park), and AutoSteer (to remain in lane). The FSD option is vaporware at this point, but allegedly will include a no-cost hardware upgrade at some point to enable its features. The definition and content of these options have changed since my purchase, so YMMV. The current definition of FSD offers the Navigate On AutoPilot feature, IIRC.

That said, the NoA feature works pretty well for me, and has gotten much, much better since it first appeared in a firmware update sent to my car a few months ago. Today on the latest firmware (received version 2019.16.2 yesterday), the Lane Change feature works as follows:

1- You can leave AutoSteer off and change lanes manually, as with any vehicle, and with or without cruise control in use.
2- AutoSteer can be on and you can trigger a lane change by activating the turn signal in the direction you wish to go. The car does the rest including turning off the turn signal after completing the lane change.
3- AutoSteer can be on and you’re using the confirmed Lane Change. This means the car will identify a Lane Change opportunity, ask you to confirm with the turn signal, and the car does the rest. Note that this and Option 2 are NOT mutually exclusive.
4- AutoSteer can be on and you’ve chosen to use the no-confirmation Lane Change option. This means the car will identify a Lane Change opportunity and if your hands can be sensed by the car as physically steering (applying some torque to the wheel), the car will do the rest including turning on the turn signals to initiate and turning them off when the lane change is complete. Note that this and Option 2 are NOT mutually exclusive.

All of the above work, and work quite well in my case. And note that all are optional...you decide, and you choose, and you can change your mind at any time. For me, I like Option 3 above as it is the most intuitive, again for me. To me, Option 4 is not exactly “no confirmation” as you have to apply a tiny amount of steering torque...another form of confirmation. But it’s there, it works, and it’s nice to have choices.
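To make the four options above concrete, here is a rough Python-style sketch of the lane-change decision logic as I understand it from using the car. This is purely illustrative, not Tesla's actual code, and every name in it (autosteer_on, confirm_mode, torque_detected, and so on) is invented for the example:

# Illustrative sketch only -- not Tesla's code. All names are invented.
def lane_change_behavior(autosteer_on, confirm_mode, driver_signaled,
                         torque_detected, car_sees_opportunity):
    """Return what the car does under each of the four options described above."""
    if not autosteer_on:
        # Option 1: driver steers and signals manually; the car does nothing special
        return "manual lane change by the driver"
    if driver_signaled:
        # Option 2: driver activates the turn signal; the car changes lanes,
        # then cancels the signal when the change is complete
        return "car completes the lane change and turns the signal off"
    if car_sees_opportunity:
        if confirm_mode == "confirm":
            # Option 3: car identifies an opportunity and asks the driver
            # to confirm with the turn signal
            return "car asks for turn-signal confirmation, then does the rest"
        if confirm_mode == "no_confirm" and torque_detected:
            # Option 4: 'no confirmation', but hands-on torque is still required;
            # car signals, changes lanes, and cancels the signal itself
            return "car signals, changes lanes, then turns the signal off"
    return "no lane change"

Even in this toy version you can see why Option 4 isn't really "no confirmation": the torque check is just another form of confirmation.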
 
SalisburySam said:
All of the above work, and work quite well in my case. And note that all are optional...you decide, and you choose, and you can change your mind at any time. For me, I like Option 3 above as it is the most intuitive, again for me. To me, Option 4 is not exactly “no confirmation” as you have to apply a tiny amount of steering torque...another form of confirmation. But it’s there, it works, and it’s nice to have choices.
+1

Excellent description and summary. Had CR not been trying to be negative on Tesla, they might have posted something as informative as your single message here.

Last weekend I took a 500-mile round trip across VT to NY and back. For the first time, I turned on no-confirmation lane changes ("Option 4") and left it on for the whole trip. In the light traffic on I-89 in VT it seemed to do reasonably well. I doubt very much that the current iteration would be tolerable on a highway with more than medium traffic volume.
 
Back from the Holiday weekend, and I hope everyone enjoyed it.

Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
There are at least 2 other reasons why Tesla could have good data and still not be able to share it. Access to the raw data might involve a significant breach of network security protocols. And/or extraction/reproduction of that data is too expensive (includes staff resources) to justify an effort that might not satisfy all the critics.
If other companies are able to do it, then Tesla can (and should) be held to the same standard.
Really? And which other companies have released their driver assistance accident data? We're talking non-FARS here right?
Oh, none of the other ADAS companies currently, because none of them other than Tesla are claiming that they're currently safer than humans. It's the full AV companies that are required to provide their data, and anyone making such safety claims about an ADAS should also be required to do so. Which is just what CR has been calling on them to do, most recently on April 22:
Consumer Reports: Tesla Must Prove Safety Before Claiming “Self-Driving” Ability
https://advocacy.consumerreports.or...-safety-before-claiming-self-driving-ability/

. . . David Friedman, Vice President of Advocacy for Consumer Reports, said, “Technology has the potential to shape future transportation to be safer, less expensive, and more accessible. Yet, safety must always come first. Today’s driver assistance technologies have helped deliver on safety, but the marketplace is full of bold claims about self-driving capabilities that overpromise and underdeliver. For instance, Tesla’s current driver-assist system, ‘Autopilot,’ is no substitute for a human driver. It can’t dependably navigate common road situations on its own, and fails to keep the driver engaged exactly when it is needed most.

“We’ve heard promises of self-driving vehicles being just around the corner from Tesla before. Claims about the company’s driving automation systems and safety are not backed up by the data, and it seems today’s presentations had more to do with investors than consumers’ safety. We agree that Tesla, and every other car company, has a moral imperative to make transportation safer, and all companies should embrace the most important principle: preventing harm and saving lives.

“But instead of treating the public like guinea pigs, Tesla must clearly demonstrate a driving automation system that is substantially safer than what is available today, based on rigorous evidence that is transparently shared with regulators and consumers, and validated by independent third-parties. In the meantime, the company should focus on making sure that proven crash avoidance technologies on Tesla vehicles, such as automatic emergency braking with pedestrian detection, are as effective as possible.”
H'mm, those criticisms sound familiar.

Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
I have no misconceptions that people won't become complacent again, but the trade-off is that during these years, there have been many lives saved. Yes, I know you want the data to prove this (and you know I only have a count of how many videos and arrests of people asleep/drunk at the wheel). I just want to spell out what I'm claiming and how your studies don't refute it. Also, the chilling effect that you're so worried about (but hasn't happened) is why I'm pointing out that A/P is ADAS. People are _correctly_ assigning responsibility to the drivers and not A/P.
If the ADAS is guaranteed to be abused by normal humans, and it is, and the manufacturer doesn't take steps to ensure that it can't be, then responsibility is shared. Seeing as how Teslas know which road they're on and the speed limit, a single line of code could have prevented both Brown's and Brenner's deaths, or at least eliminated A/P's role in those deaths, by preventing A/P from being used in a situation where Tesla knows that AEB doesn't work, which is dealing with cross traffic. Something like this is all it would take:

If ROADTYPE = "Freeway" then A/P = "OK" ELSE A/P = "Not OK"

This is what Cadillac does with Supercruise, so Tesla should take one of the programmers they have spending far more time writing cutesy Easter Eggs and put them to work making a simple change that will save lives.
You know nothing about programming if you think it's really that simple. And you know even less about neural nets if you expect the programmers to interject their code like that. If Cadillac's engineers really wrote code like that, then they'll never get to any level of self-driving. Considering that they at least made it to level 2+, there's still hope for them.
I make no pretense of claiming to be a programmer - the last time I did any programming was when I taught myself BASIC (on a Commodore PET, 8k RAM and program/data storage on a cassette tape) back in the late '70s - but you cannot deny that an instruction such as the above, limiting A/P use to limited-access freeways with no cross traffic, would have prevented both Brown's and Brenner's A/P-enabled deaths, in circumstances where Tesla knew it wouldn't work. Cadillac actually goes further than just limiting Supercruise usage to limited-access freeways (and having an eye-tracking camera to ensure drivers are looking at the road), as they only allow its usage on those portions of freeways for which they have high-def digital LIDAR maps. The latter, on U.S. 101 at the Hwy 85 interchange, would have presumably prevented Huang's death at the hands of A/P.

Here's CR's comparison test of the various ADAS systems: https://www.cadillac.com/content/da...es011519/02-pdfs/Consumer_Reports_Article.pdf
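For what it's worth, the kind of operational-design-domain gate being argued about here (and which Cadillac is described as using for Supercruise) could be sketched very roughly in Python as follows. This is only an illustration of the idea; the map fields and the function name are invented, and a production system would obviously be far more involved:

# Rough illustration of a road-type gate, not anyone's real implementation.
# The fields road_class, has_cross_traffic and hd_map_available are invented.
def assist_allowed(road_segment):
    """Allow the driver-assist system only on mapped, limited-access freeways."""
    if road_segment.get("road_class") != "limited_access_freeway":
        return False  # keep it off surface roads and highways with intersections
    if road_segment.get("has_cross_traffic", True):
        return False  # AEB of that era did not recognize crossing traffic as a threat
    if not road_segment.get("hd_map_available", False):
        return False  # Supercruise additionally requires a high-def map of the segment
    return True

# Example: a mapped Interstate segment would pass; a divided highway with cross
# traffic, like the roads in Brown's and Brenner's crashes, would not.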

Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
So what does the FARS database show? I don't have a desktop available to analyze the compressed 14MB csv file.
Demographics, # of occupants, driver/pax/motorcyclist, non-motorist, conditions, speeds, type of accident etc.

Not what I mean, but I guess you're not able to analyze the data yourself huh? Depending on other people to draw the conclusions? I'll dig into it later and see if at least make/model info is buried in there.
I am not a professional statistician nor do I play one on TV, so of course I'm not going to analyze the data myself. Neither is Elon, but that hasn't stopped him from making claims based on erroneous methodology.
 
Oils4AsphaultOnly said:
cwerdna said:
GRA said:
I'm a bit pushed for time this week, so I'll have to put off my reply for a while. I'll get to it when I can. In the meantime, you can ponder this while hopefully reconsidering your answer to my question re what constitutes adequate pre-public deployment development testing:
For those who don't follow along on TMC (I take hiatuses sometimes for months at a time), CR's findings (https://www.consumerreports.org/autonomous-driving/tesla-navigate-on-autopilot-automatic-lane-change-requires-significant-driver-intervention/) aren't really new. There are tons of threads and posts on TMC about how poorly NoA works, in general. It's been going on ever since the feature was rolled out.

And, IIRC, there are definitely some software releases that are worse than others from an AP and NoA POV. Latest isn't always the greatest.

Since GRA won't post this, I thought I'd do you both a favor and post it: https://insideevs.com/news/351110/consumer-reports-tesla-navigate-autopilot/amp/

Turns out, CR actually approves of NoA. They just don't like the version that does the lane-changes for you. Go figure. I'll still make my own judgements based upon the actual use of the product though, instead of hearsay.
If I'd seen the article (and IEVS' headline had accurately reflected CR's views), I'd have been happy to post it, but as I posted the link to and directly quoted from CR's own release, why bother to filter it through a particular forum? Your claim that CR approves of NoA is without foundation. From an article on GCR dated May 24th:
. . . It's worth noting that when the organization first tested Navigate on Autopilot last November on its Model S, it found the same problems, among others, and Musk later said the company would update the system to return to its original lane after a pass. The update Consumer Reports received didn't seem to include that ability. . . .
https://www2.greencarreports.com/ne...t-drives-itself-poorly-consumer-reports-finds

The article goes on to quote David Friedman about Tesla's pre-public release testing regimen:
Consumer Reports' vice president of advocacy, David Friedman, is the former acting administrator of NHTSA under the Obama administration. He said, "Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
IOW, pretty much what the FAA failed to ensure Boeing did adequately in the case of the 737 Max, the difference being that in an airliner accident people die by the hundreds, while for cars the total per individual accident is much smaller, but the number of accidents is far greater.

For an example of exactly the opposite approach to development testing compared to Tesla, and one which I obviously believe is necessary, see the following article. BTW, in a previous post you stated that there hadn't been any backlash owing to self-driving car accidents. I meant to reply at the time, but got distracted. In fact, as noted below there was a major backlash after the Herzberg death, and those where self-driving vehicles kill non-occupants are the ones that I'm worried will set back the development and deployment of AVs. The general public is far more worried about being put at risk by self-driving cars that they aren't in. Anyone who's riding in one has volunteered to act as a crash-test dummy for the company, so people aren't as concerned about those deaths as they are when an AV kills a non-occupant, potentially themselves:
May 19, 2019, 10:00am
Hand Gestures And Horses: Waymo’s Self-Driving Service Learns To Woo The Public
https://www.forbes.com/sites/alanoh...low-ride-to-self-driving-future/#3e6c74e11124

. . . “We've always had a very conservative approach to making both our users and the public feel safe with this technology and what it is we're doing here," a Waymo spokeswoman said. . . .

The cautious expansion, partly driven by a public that is both intrigued by the prospect of a digital chauffeur and easily spooked by self-driving blunders, contrasts with some of the more ambitious roadmaps projected by Waymo rivals. Tesla CEO Elon Musk claims his electric car company can deliver a self-driving future as early as next year–assuming his new computer and self-driving software, perfected by utilizing camera and sensor data collected in “shadow mode” from hundreds of thousands of Teslas on the road, work as promised. General Motors’ Cruise unit has its own aggressive market plans, and Uber and Lyft are also pursuing their own self-driving car technology as a way to keep the cost of rides low. Getting the cars to navigate on their own is just the start; getting the public comfortable with both driving alongside and in a car without a driver is another big challenge.

Numerous surveys of public sentiment, such as one released in March by AAA, show a high level of worry over driverless vehicles, especially in the wake of a deadly 2018 accident in which a self-driving Uber test vehicle struck and killed a pedestrian in Tempe and fatal crashes involving Tesla drivers who may have overestimated the capabilities of the carmaker’s Autopilot software. A new Capgemini study finds a majority of people surveyed globally are awaiting the technology with “anticipation,” plus “a degree of uncertainty and concern,” says Markus Winkler, Capgemini’s head of automotive research. . . .

MIT’s Reimer thinks it could take decades of refinement until robots truly change how most people get around. For now, winning public acceptance for the technology means finding ways to validate how well it performs in the real world. In other words, a steady ground game is better than a Hail Mary pass for autonomous technology.

“The view for companies like Waymo is `we have to be able to show functional safety. Otherwise, we can't protect our decisions in a court of law, where this will all end up long term,’” he said. “Elon is working mostly on the deep neural net side where a good chunk of it is a black box. Documenting, defending that in court is going to be tough.”
 
jlv said:
SalisburySam said:
All of the above work, and work quite well in my case. And note that all are optional...you decide, and you choose, and you can change your mind at any time. For me, I like Option 3 above as it is the most intuitive, again for me. To me, Option 4 is not exactly “no confirmation” as you have to apply a tiny amount of steering torque...another form of confirmation. But it’s there, it works, and it’s nice to have choices.
+1

Excellent description and summary. Had CR not been trying to be negative on Tesla, they might have posted something as informative as your single message here.
I'm curious about where you get the idea that CR is biased about Tesla. CR has never accepted any ads, buys all the products it tests anonymously at retail, and tests them to the same standards. Were they biased against Tesla when they gave the Model S the highest test rating of any car they'd ever tested? I've seen people claim that CR is biased against this or that vehicle because of the frequency-of-repair ratings and their recommending or not recommending a car based on that, but that shows that the person making the claim is unaware of how those F-O-R ratings are arrived at: they are based solely on customer surveys, and CR just compares the percentages to the norms and applies the appropriate rating, but only if they've got a large enough sample size to be significant. Otherwise, no rating.
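For anyone unfamiliar with how survey-based ratings like that work, here is a minimal sketch of the process described above. The threshold numbers and the minimum sample size are placeholders of my own, not CR's actual cutoffs:

# Sketch of a survey-based frequency-of-repair rating. The cutoffs below are
# placeholders for illustration; CR does not publish its exact thresholds.
MIN_RESPONSES = 100  # placeholder minimum sample size for a rating to be significant

def for_rating(problem_rate, fleet_norm, n_responses):
    """Compare a model's reported problem rate to the fleet norm, if enough data exists."""
    if n_responses < MIN_RESPONSES:
        return "no rating"  # sample too small to be meaningful
    ratio = problem_rate / fleet_norm
    if ratio <= 0.7:
        return "much better than average"
    if ratio <= 0.9:
        return "better than average"
    if ratio <= 1.1:
        return "average"
    if ratio <= 1.3:
        return "worse than average"
    return "much worse than average"

The point is that the rating falls out of the owners' own survey responses; there's no room in that process for an editorial thumb on the scale.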
 
International Business Times:
Tesla Autopilot Safety Issues Continue As EV Slams Into Another Car
https://www.ibtimes.com/tesla-autopilot-safety-issues-continue-ev-slams-another-car-2795153

Stopped car on highway in lane, other car swerved into then out of lane, so a known problem, but one we'll see occur increasingly often as the number of Teslas on the road increases. From that article there was also this, which I hadn't heard about, but which we can expect to see more and more of if Tesla doesn't dial it back:
. . . In fact, Tesla recently agreed on a $13 million settlement with a former employee who was struck by the Model S while working. . . .

A more complete analysis of the accident in Norway is available in the original Forbes article, in which the Tesla owner credits A/P with saving his life (which may or may not be true, as the article's author points out):
May 26, 2019, 11:28am
Tesla On Autopilot Slams Into Stalled Car On Highway, Expect More Of This
https://www.forbes.com/sites/lancee...-on-highway-expect-more-of-this/#29c07bdc4fe5
 
GRA said:
If ROADTYPE = "Freeway" then A/P = "OK" ELSE A/P = "Not OK"

This is what Cadillac does with Supercruise, so Tesla should take one of the programmers they have spending far more time writing cutesy Easter Eggs and put them to work making a simple change that will save lives.

You make a valid point that it's very easy to control when AP fully functions, or what road/traffic/speed-limit and other conditions should limit which AP modes of operation are available. Tesla obviously knows AP's present limitations.
 
GRA said:
Back from the Holiday weekend, and I hope everyone enjoyed it.

Oils4AsphaultOnly said:
GRA said:
If other companies are able to do it, then Tesla can (and should) be held to the same standard.
Really? And which other companies have released their driver assistance accident data? We're talking non-FARS here right?

Oh, none of the other ADAS companies currently, ...

Thank you. I think it's obvious that the point has been made right here.

The rest of your post pertains to the standards for self-driving, and we both know that Tesla never claimed A/P to be self-driving - hyperbolic claims aside.

Your position on coding for edge cases is very much a case of the "knowing just enough to be dangerous" school of thought. That assumption of how things _should_ be done [based on a lack of expertise] can be far more dangerous than what Uber had done. I'll address this point in more detail in response to your other post.

GRA said:
... because none of them other than Tesla are claiming that they're currently safer than humans. It's the full AV companies that are required to provide their data, and anyone making such safety claims about an ADAS should also be required to do so. Which is just what CR has been calling on them to do, most recently on April 22:
Consumer Reports: Tesla Must Prove Safety Before Claiming “Self-Driving” Ability
https://advocacy.consumerreports.or...-safety-before-claiming-self-driving-ability/

. . . David Friedman, Vice President of Advocacy for Consumer Reports, said, “Technology has the potential to shape future transportation to be safer, less expensive, and more accessible. Yet, safety must always come first. Today’s driver assistance technologies have helped deliver on safety, but the marketplace is full of bold claims about self-driving capabilities that overpromise and underdeliver. For instance, Tesla’s current driver-assist system, ‘Autopilot,’ is no substitute for a human driver. It can’t dependably navigate common road situations on its own, and fails to keep the driver engaged exactly when it is needed most.

“We’ve heard promises of self-driving vehicles being just around the corner from Tesla before. Claims about the company’s driving automation systems and safety are not backed up by the data, and it seems today’s presentations had more to do with investors than consumers’ safety. We agree that Tesla, and every other car company, has a moral imperative to make transportation safer, and all companies should embrace the most important principle: preventing harm and saving lives.

“But instead of treating the public like guinea pigs, Tesla must clearly demonstrate a driving automation system that is substantially safer than what is available today, based on rigorous evidence that is transparently shared with regulators and consumers, and validated by independent third-parties. In the meantime, the company should focus on making sure that proven crash avoidance technologies on Tesla vehicles, such as automatic emergency braking with pedestrian detection, are as effective as possible.”
H'mm, those criticisms sound familiar.

Oils4AsphaultOnly said:
GRA said:
If the ADAS is guaranteed to be abused by normal humans, and it is, and the manufacturer doesn't take steps to ensure that it can't be, then responsibility is shared. Seeing as how Teslas know which road they're on and the speed limit, a single line of code could have prevented both Brown's and Brenner's deaths, or at least eliminated A/P's role in those deaths, by preventing A/P from being used in a situation where Tesla knows that AEB doesn't work, which is dealing with cross traffic. Something like this is all it would take:

If ROADTYPE = "Freeway" then A/P = "OK" ELSE A/P = "Not OK"

This is what Cadillac does with Supercruise, so Tesla should take one of the programmers they have spending far more time writing cutesy Easter Eggs and put them to work making a simple change that will save lives.
You know nothing about programming if you think it's really that simple. And you know even less about neural nets if you expect the programmers to interject their code like that. If Cadillac's engineers really wrote code like that, then they'll never get to any level of self-driving. Considering that they at least made it to level 2+, there's still hope for them.
I make no pretense of claiming to be a programmer - the last time I did any programming was when I taught myself BASIC (on a Commodore PET, 8k RAM and program/data storage on a cassette tape) back in the late '70s - but you cannot deny that an instruction such as the above, limiting A/P use to limited-access freeways with no cross traffic, would have prevented both Brown's and Brenner's A/P-enabled deaths, in circumstances where Tesla knew it wouldn't work. Cadillac actually goes further than just limiting Supercruise usage to limited-access freeways (and having an eye-tracking camera to ensure drivers are looking at the road), as they only allow its usage on those portions of freeways for which they have high-def digital LIDAR maps. The latter, on U.S. 101 at the Hwy 85 interchange, would have presumably prevented Huang's death at the hands of A/P.

Here's CR's comparison test of the various ADAS systems: https://www.cadillac.com/content/da...es011519/02-pdfs/Consumer_Reports_Article.pdf

Oils4AsphaultOnly said:
GRA said:
Demographics, # of occupants, driver/pax/motorcyclist, non-motorist, conditions, speeds, type of accident etc.

Not what I mean, but I guess you're not able to analyze the data yourself huh? Depending on other people to draw the conclusions? I'll dig into it later and see if at least make/model info is buried in there.
I am not a professional statistician nor do I play one on TV, so of course I'm not going to analyze the data myself. Neither is Elon, but that hasn't stopped him from making claims based on erroneous methodology.
 
GRA said:
Oils4AsphaultOnly said:
cwerdna said:
For those who don't follow along on TMC (I take hiatuses sometimes for months at a time), CR's findings (https://www.consumerreports.org/autonomous-driving/tesla-navigate-on-autopilot-automatic-lane-change-requires-significant-driver-intervention/) aren't really new. There are tons of threads and posts on TMC about how poorly NoA works, in general. It's been going on ever since the feature was rolled out.

And, IIRC, there are definitely some software releases that are worse than others from an AP and NoA POV. Latest isn't always the greatest.

Since GRA won't post this, I thought I'd do you both a favor and post it: https://insideevs.com/news/351110/consumer-reports-tesla-navigate-autopilot/amp/

Turns out, CR actually approves of NoA. They just don't like the version that does the lane-changes for you. Go figure. I'll still make my own judgements based upon the actual use of the product though, instead of hearsay.
If I'd seen the article (and IEVS' headline had accurately reflected CR's views), I'd have been happy to post it, but as I posted the link to and directly quoted from CR's own release, why bother to filter it through a particular forum? Your claim that CR approves of NoA is without foundation.

You're right. I'll eat crow here. "CR was fine with it otherwise" is not the same as "approving".

GRA said:
From an article on GCR dated May 24th:
. . . It's worth noting that when the organization first tested Navigate on Autopilot last November on its Model S, it found the same problems, among others, and Musk later said the company would update the system to return to its original lane after a pass. The update Consumer Reports received didn't seem to include that ability. . . .
https://www2.greencarreports.com/ne...t-drives-itself-poorly-consumer-reports-finds

The article goes on to quote David Friedman about Tesla's pre-public release testing regimen:
Consumer Reports' vice president of advocacy, David Friedman, is the former acting administrator of NHTSA under the Obama administration. He said, "Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
IOW, pretty much what the FAA failed to ensure Boeing did adequately in the case of the 737 Max, the difference being that in an airliner accident people die by the hundreds, while for cars the total per individual accident is much smaller, but the number of accidents is far greater.

Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".

GRA said:
For an example of exactly the opposite approach to development testing compared to Tesla, and one which I obviously believe is necessary, see the following article. BTW, in a previous post you stated that there hadn't been any backlash owing to self-driving car accidents. I meant to reply at the time, but got distracted. In fact, as noted below there was a major backlash after the Herzberg death, and those where self-driving vehicles kill non-occupants are the ones that I'm worried will set back the development and deployment of AVs. The general public is far more worried about being put at risk by self-driving cars that they aren't in. Anyone who's riding in one has volunteered to act as a crash-test dummy for the company, so people aren't as concerned about those deaths as they are when an AV kills a non-occupant, potentially themselves:
May 19, 2019, 10:00am
Hand Gestures And Horses: Waymo’s Self-Driving Service Learns To Woo The Public
https://www.forbes.com/sites/alanoh...low-ride-to-self-driving-future/#3e6c74e11124

. . . “We've always had a very conservative approach to making both our users and the public feel safe with this technology and what it is we're doing here," a Waymo spokeswoman said. . . .

The cautious expansion, partly driven by a public that is both intrigued by the prospect of a digital chauffeur and easily spooked by self-driving blunders, contrasts with some of the more ambitious roadmaps projected by Waymo rivals. Tesla CEO Elon Musk claims his electric car company can deliver a self-driving future as early as next year–assuming his new computer and self-driving software, perfected by utilizing camera and sensor data collected in “shadow mode” from hundreds of thousands of Teslas on the road, work as promised. General Motors’ Cruise unit has its own aggressive market plans, and Uber and Lyft are also pursuing their own self-driving car technology as a way to keep the cost of rides low. Getting the cars to navigate on their own is just the start; getting the public comfortable with both driving alongside and in a car without a driver is another big challenge.

Numerous surveys of public sentiment, such as one released in March by AAA, show a high level of worry over driverless vehicles, especially in the wake of a deadly 2018 accident in which a self-driving Uber test vehicle struck and killed a pedestrian in Tempe and fatal crashes involving Tesla drivers who may have overestimated the capabilities of the carmaker’s Autopilot software. A new Capgemini study finds a majority of people surveyed globally are awaiting the technology with “anticipation,” plus “a degree of uncertainty and concern,” says Markus Winkler, Capgemini’s head of automotive research. . . .

MIT’s Reimer thinks it could take decades of refinement until robots truly change how most people get around. For now, winning public acceptance for the technology means finding ways to validate how well it performs in the real world. In other words, a steady ground game is better than a Hail Mary pass for autonomous technology.

“The view for companies like Waymo is `we have to be able to show functional safety. Otherwise, we can't protect our decisions in a court of law, where this will all end up long term,’” he said. “Elon is working mostly on the deep neural net side where a good chunk of it is a black box. Documenting, defending that in court is going to be tough.”

Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.

One thing that people still seem to misunderstand, and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but the product is not release-ready yet. Usually at this point, under an Agile development cycle, the product is released in alpha, and bugs are noted and fixed in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).

Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
 
Oils4AsphaultOnly said:
Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
...
You might hate it, but there's no substitute for real-world testing.
The accidents Waymo gets into that are Waymo's fault are few. Road rage, yes.

Problem is, there's no evidence that Tesla has even come close to Waymo's proficiency in the real world. So, if you think the bolded part is true of Waymo, then that's just as true if not even more so for Tesla.
 
GRA said:
International Business Times:
Tesla Autopilot Safety Issues Continue As EV Slams Into Another Car
https://www.ibtimes.com/tesla-autopilot-safety-issues-continue-ev-slams-another-car-2795153

Stopped car on highway in lane, other car swerved into then out of lane, so a known problem, but one we'll see occur increasingly often as the number of Teslas on the road increases. From that article there was also this, which I hadn't heard about, but which we can expect to see more and more of if Tesla doesn't dial it back:
. . . In fact, Tesla recently agreed on a $13 million settlement with a former employee who was struck by the Model S while working. . . .

A more complete analysis of the accident in Norway is available in the original Forbes article, in which the Tesla owner credits A/P with saving his life (which may or may not be true, as the article's author points out):
May 26, 2019, 11:28am
Tesla On Autopilot Slams Into Stalled Car On Highway, Expect More Of This
https://www.forbes.com/sites/lancee...-on-highway-expect-more-of-this/#29c07bdc4fe5

As I've told lorenfb, be careful about the FUD you read.

The $13 million lawsuit had nothing to do with A/P or Tesla, other than that it was a car driven by a Tesla contractor on Tesla's property: https://laist.com/2019/05/15/13_million_settlement_tesla_fremont_factory.php

As for the rate of A/P accidents, my claim about complacency recurring seems to be borne out. It's been a year since the last crash into a stalled vehicle, even though the number of Autopilot-capable Tesla vehicles has doubled.
 
cwerdna said:
Oils4AsphaultOnly said:
Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
...
You might hate it, but there's no substitute for real-world testing.
The accidents Waymo gets into that are Waymo's fault are few. Road rage, yes.

Problem is, there's no evidence that Tesla has even come close to Waymo's proficiency in the real world. So, if you think the bolded part is true of Waymo, then that's just as true if not even more so for Tesla.

That's definitely a possibility. But within a year, we should have more data to judge whether or not that's the case.
 
^^^
BTW, you are aware of the misleading video from Tesla from Oct 2016, right?

https://www.tesla.com/blog/all-tesla-cars-being-produced-now-have-full-self-driving-hardware

Even if you don't agree with the assertions at https://dailykanban.com/2017/02/ca-dmv-report-sheds-new-light-misleading-tesla-autonomous-drive-video/, the delays did happen (https://insideevs.com/news/329266/teslas-new-product-announcement-pushed-back-to-wednesday-needs-refinement-says-musk/) and you can see Tesla's CA autonomous testing disengagements at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disengagement_report_2016 for yourself besides looking at what they reported for all the years at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing. It sure fooled my college classmate. He got the impression that Tesla was much further ahead than they were. We were both computer science majors in college and have worked in software most of our professional careers.

The 2018 reports weren't as nicely formatted for Waymo (for example), so you can look at https://thelastdriverlicenseholder.com/2019/02/13/update-disengagement-reports-2018-final-results/ for rankings.

This is besides all his missed dates of a cross-country autonomous drive. I haven't verified all the dates at https://www.inverse.com/article/55141-elon-musk-autonomous-driving, but they look right. Example below:
Oct. 2016: Musk announces all cars will ship with enough cameras to enable FSD, and that the company will complete a cross country trip “all the way from L.A. to New York” by the end of 2017.
 
cwerdna said:
^^^
BTW, you are aware of the misleading video from Tesla from Oct 2016, right?

https://www.tesla.com/blog/all-tesla-cars-being-produced-now-have-full-self-driving-hardware

Even if you don't agree with the assertions at https://dailykanban.com/2017/02/ca-dmv-report-sheds-new-light-misleading-tesla-autonomous-drive-video/, the delays did happen (https://insideevs.com/news/329266/teslas-new-product-announcement-pushed-back-to-wednesday-needs-refinement-says-musk/) and you can see Tesla's CA autonomous testing disengagements at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disengagement_report_2016 for yourself besides looking at what they reported for all the years at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing. It sure fooled my college classmate. He got the impression that Tesla was much further ahead than they were. We were both computer science majors in college and have worked in software most of our professional careers.

The 2018 reports weren't as nicely formatted for Waymo (for example), so you can look at https://thelastdriverlicenseholder.com/2019/02/13/update-disengagement-reports-2018-final-results/ for rankings.

This is besides all his missed dates of a cross-country autonomous drive. I haven't verified all the dates at https://www.inverse.com/article/55141-elon-musk-autonomous-driving, but they look right. Example below:
Oct. 2016: Musk announces all cars will ship with enough cameras to enable FSD, and that the company will complete a cross country trip “all the way from L.A. to New York” by the end of 2017.

... yeah, I can think of a few good reasons for the delay, but that won't change the fact that Tesla missed their projections by a long shot. I still think Tesla will get to self-driving first, but you're right that history isn't on my side.
 
Oils4AsphaultOnly said:
Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
Minor correction here: you must be invited by Tesla to join the early access program; it isn’t something you can sign up for. Once invited you have the option to decline, but if you accept you will be asked to agree to certain privacy policies and a non-disclosure agreement. At one time, there was thinking that all early purchasers of Full Self-Driving would automatically be invited, but that proved to be erroneous, or else the program filled up very quickly.
 
Oils4AsphaultOnly said:
As I've told lorenfb, be careful about the FUD you read.

In reference to what? And speaking of FUD: https://teslamotorsclub.com/tmc/posts/3683239/

Linking to that as a viable approach, i.e. relying only on a numerical methodology to forecast a market 5+ years out.
 
With regard to Tesla's AP arrogance, i.e. its total reliance on a probabilistic system solution (a neural-network approach): not augmenting the system with LIDAR, even as an interim measure to address AP's deficiencies, is very problematic from a safety standpoint.
 
lorenfb said:
Oils4AsphaultOnly said:
As I've told lorenfb, be careful about the FUD you read.

In reference to what? And speaking of FUD: https://teslamotorsclub.com/tmc/posts/3683239/

Linking to that as a viable approach, i.e. relying only on a numerical methodology to forecast a market 5+ years out.

wrong use of FUD.
wrong thread to post this comment.
wrong argument, even after it was discussed how you've missed the relevance and significance of the post: http://mynissanleaf.com/viewtopic.php?f=10&t=18016&start=3710#p558338

But since you asked "In reference to what?" - I had advised you against listening to the wrong people for your facts back in Jan: https://www.mynissanleaf.com/viewtopic.php?f=10&t=18016&start=3200#p546065
 
Oils4AsphaultOnly said:
GRA said:
Oils4AsphaultOnly said:
Since GRA won't post this, I thought I'd do you both a favor and post it: https://insideevs.com/news/351110/consumer-reports-tesla-navigate-autopilot/amp/

Turns out, CR actually approves of NoA. They just don't like the version that does the lane-changes for you. Go figure. I'll still make my own judgements based upon the actual use of the product though, instead of hearsay.
If I'd seen the article (and IEVS' headline had accurately reflected CR's views), I'd have been happy to post it, but as I posted the link to and directly quoted from CR's own release, why bother to filter it through a particular forum? Your claim that CR approves of NoA is without foundation.
You're right. I'll eat crow here. "CR was fine with it otherwise" is not the same as "approving".

Oils4AsphaultOnly said:
GRA said:
From an article on GCR dated May 24th: https://www2.greencarreports.com/ne...t-drives-itself-poorly-consumer-reports-finds

The article goes on to quote David Friedman about Tesla's pre-public release testing regimen:
IOW, pretty much what the FAA failed to ensure Boeing did adequately in the case of the 737 Max, the difference being that in an airliner accident people die by the hundreds, while for cars the total per individual accident is much smaller, but the number of accidents is far greater.
Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as the system design and testing, as most people are completely unaware of A/P's capabilities and limitations, so the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target; it was that Tesla's AEB system, like all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms), doesn't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?

Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) reflects the same sort of casual attitude toward putting customers at risk that Tesla showed, but Tesla's case is more egregious because they could have made a simple, inexpensive change that would have prevented a recurrence. Instead, along with pointless Easter Eggs, they put their effort into developing NoA, which was inadequately tested prior to initial customer deployment and unquestionably less safe than a human driver in some common situations, and the 'fix' rolled out some months later is just as bad if not worse.

Oils4AsphaultOnly said:
GRA said:
For an example of exactly the opposite approach to development testing compared to Tesla, and one which I obviously believe is necessary, see the following article. BTW, in a previous post you stated that there hadn't been any backlash owing to self-driving car accidents. I meant to reply at the time, but got distracted. In fact, as noted below there was a major backlash after the Herzberg death, and those where self-driving vehicles kill non-occupants are the ones that I'm worried will set back the development and deployment of AVs. The general public is far more worried about being put at risk by self-driving cars that they aren't in. Anyone who's riding in one has volunteered to act as a crash-test dummy for the company, so people aren't as concerned about those deaths as they are when an AV kills a non-occupant, potentially themselves: https://www.forbes.com/sites/alanoh...low-ride-to-self-driving-future/#3e6c74e11124
Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program IIRR) got into its first chargeable accident on a public road seven years after they'd first started testing them there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.

Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and has had 2 more that we know about chargeable to A/P.

Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies with it, and only tightened gov't regs will result.

Oils4AsphaultOnly said:
One thing that people still seem to misunderstand, and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but the product is not release-ready yet. Usually at this point, under an Agile development cycle, the product is released in alpha, and bugs are noted and fixed in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).

Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing, indeed, that's exactly what I, CR and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
 