Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote:
Ummm, no. Boeing screwed up on their UI design and pilot training. The software behaved exactly as it was programmed to do. This is a usability design issue. The only thing they have in common with Tesla's A/P is the word "autopilot".
By the same token, Tesla screwed up with the lack of "pilot training" as well as with the system design and testing; since most people are completely unaware of A/P's capabilities and limitations, the system should be designed to prevent them (to the extent possible) from operating outside its limits. You have far more interest in the subject than most customers, yet you've shown that 3 years after Brown's death you didn't understand that the problem in that accident wasn't the lack of a target; it was that Tesla's AEB system, like all other AEB systems at that time (and at least Tesla's still, as Brenner's accident confirms), doesn't recognize a crossing target as a threat. Being aware of this limitation, Cadillac chose to prevent SuperCruise's use on roads where such occurrences were not only possible but common. Tesla, having chalked up one A/P-enabled customer death in that situation, chose to do nothing despite being able to change A/P to easily avoid the problem, and thus enabled a virtually identical customer death almost 3 years later. In your opinion, which company shows a greater concern for customer and public safety through design?
Boeing's failure to track down the problem in their SPS after the first occurrence (and the FAA's lack of urgency in forcing them to do so) shows the same sort of casual attitude toward putting customers at risk as Tesla showed, but Tesla's case is more egregious because they could have made a simple, inexpensive change that would have prevented a recurrence. Instead, along with pointless Easter Eggs, they put their effort into developing NoA, which was inadequately tested prior to initial customer deployment and unquestionably less safe than a human driver in some common situations, and the 'fix' which was rolled out some months later is just as bad, if not worse.
You're conflating multiple incongruent issues again.
AEB is crash mitigation, not avoidance. All the examples of why AEB didn't brake were in small-overlap type crashes, where the correct maneuver is a steering correction, not emergency braking.
https://www.caranddriver.com/features/a ... explained/
It has nothing to do with threat detection of a crossing vehicle (requires path prediction).
AEB systems can be capable of both crash avoidance and mitigation; avoidance is obviously preferred, mitigation is next best. For instance, CR from last November:
New Study Shows Automatic Braking Significantly Reduces Crashes and Injuries
https://www.consumerreports.org/automot ... ihs-study/
General Motors vehicles with forward collision warning (FCW) and automatic emergency braking (AEB) saw a big drop in police-reported front-to-rear crashes when compared with the same cars without those systems, according to a new report by the Insurance Institute for Highway Safety (IIHS).
Those crashes dropped 43 percent, the IIHS found, and injuries in the same type of crashes fell 64 percent. . . .
These findings were in line with previous findings by the IIHS. In earlier studies involving Acura, Fiat Chrysler, Honda, Mercedes-Benz, Subaru and Volvo vehicles, it found that the combination of FCW and AEB reduced front-to-rear crash rates by 50 percent for all crashes, and 56 percent for the same crashes with injuries.
As to crossing vehicles requiring path prediction, no, that's not necessary, although it's certainly helpful. As I pointed out previously, NHTSA found the issue with current AEBs in that situation is not one of target detection but of classification. Current AEB radar systems are told to ignore braking for large, flat, zero-doppler objects because they may be nothing more than highway signs on overpasses or off to the side on curves (or overpass supports, FTM); a human would recognize what they are and not brake for them, but current AEB systems aren't that smart. The Mobileye EyeQ visual system in use by Tesla and others at the time also relied on a library of objects, and that library didn't contain side views of trucks and trailers (apparently because that was beyond the capabilities of the system at the time).
Oils4AsphaultOnly wrote:A side skirt doesn't present any other permitted corrective action other than emergency braking. So yes, it would've triggered AEB. Your reference video (from when you last brought this up and I failed to address) isn't the same situation.
As pointed out just above and previously, the reason current AEB systems don't work for either crossing or stopped vehicles is the same: a classification rather than a detection issue. The lack of side skirts for detection isn't the problem; teaching the AEB to classify a crossing vehicle as a threat instead of ignoring it as harmless is. Here's the product spec sheet for one such radar (note the vertical FoV, ample to pick up the entire side of a trailer and then some at detection distances):
https://www.bosch-mobility-solutions.co ... -(mrr).pdf
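To make the classification vs. detection point concrete, here's a rough sketch in Python of the kind of filtering rule I'm describing (all names and thresholds are made up for illustration; this is obviously not anyone's actual AEB code). The broadside of a crossing trailer returns a large, near-zero-doppler target, just like an overhead sign, so a rule written to suppress nuisance braking for signs throws the trailer away too:
[code]
from dataclasses import dataclass

@dataclass
class RadarTarget:
    radial_velocity_mps: float  # rate of range change; negative = closing
    cross_section_m2: float     # apparent size of the radar return
    range_m: float

def should_brake_for(target: RadarTarget, ego_speed_mps: float) -> bool:
    # A crossing trailer moves almost perpendicular to the radar beam, so its
    # radial velocity is roughly -ego_speed -- the same signature as a
    # stationary overhead sign or bridge support.
    apparent_ground_speed = target.radial_velocity_mps + ego_speed_mps

    # Legacy classification rule: large, "stationary" returns are assumed to
    # be signs or overpass structure and are not braked for...
    if abs(apparent_ground_speed) < 1.0 and target.cross_section_m2 > 5.0:
        return False  # ...which also silently discards the broadside trailer.

    # Otherwise brake when close (crude stand-in for a real time-to-collision check).
    return target.range_m < 60.0
[/code]
The sensor detects the trailer either way; it's the classification step above that decides it isn't a threat, and that's exactly what needs to change.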
Oils4AsphaultOnly wrote:And just because you think Tesla has a simple fix doesn't make it a reality. GM's SuperCruise requires no high level logic other than, "is this road on my allowed map?", since GM geofences supercruise to ONLY mapped highways. Foul weather and construction zones are also excluded. You can inject human code into that situation, since it's a defined algorithm. You can't define your driving logic through a fixed algorithm if you want a car that can achieve full self-driving. That's why GM's supercruise will never advance past level 3 autonomy (can handle most well-defined traffic situations).
Are you suggesting that Teslas don't have the data to know which road they're on despite the lack of high-def digital mapping, when they can not only map out a route while choosing the type of roads to take and then follow that route, but also know the speed limit of the different sections of that route? That's ridiculous. But let's say that you're right, and A/P is incapable of doing that. Since limiting the system's use to those situations it is capable of dealing with, and preventing its use in those it can't handle, is obviously the safest approach, should any company be required to adopt that approach to minimize the risk to both its customers and the general public? You consider Supercruise to be limited in where it can be used, and it is. To be specific, it's limited to ensure the safest possible performance, and I have no problem at all with that; indeed, I celebrate them for doing so, and wish Tesla acted likewise.
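And to be clear about how little logic the geofencing approach actually requires, here's a rough sketch in Python (the segment IDs and names are made up; this isn't GM's code):
[code]
# Hypothetical whitelist of mapped, divided-highway segments.
APPROVED_SEGMENTS = {"I-80_SEG_1042", "I-5_SEG_0077"}

def may_engage(current_segment: str, weather_ok: bool, construction_zone: bool) -> bool:
    """Allow engagement only on a mapped segment, in good weather, outside construction."""
    if current_segment not in APPROVED_SEGMENTS:
        return False
    if not weather_ok or construction_zone:
        return False
    return True
[/code]
A car that can plan a route by road type and track the speed limit of each section of that route already knows the one input such a check needs: which road segment it's on.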
Oils4AsphaultOnly wrote:The driver versus pilot training analogy isn't even applicable, since sleeping at the wheel isn't a training issue.
Who was talking about sleeping at the wheel? Not I. I was talking about the lack of required initial training and testing in the system's capabilities and limitations, as well as the lack of recurrency training; lacking those, an autonomous system has to be idiot-proofed to a much higher level. We know that pilots, despite being a much more rigorously selected group than car buyers, still make mistakes due to misunderstanding automation system capabilities or through lack of practice, even though they are required to receive instruction and be tested on their knowledge, both initially and recurrently. As none of that is required of car buyers, you have to make it as hard as possible to misuse the system, which certainly includes preventing it from being used in situations outside of its capabilities.
Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote:
Waymo had been developing self-driving for almost a decade, and their car still gets into accidents and causes road rage with other drivers. At the rate they're going, they'll never have a self-driving solution that can work outside of the test area.
Why yes, they do get into accidents, as is inevitable. But let's compare, shall we? Waymo (then still Google's Chauffeur program, IIRC) got into its first chargeable accident on a public road seven years after it first started testing there, and that was a 2 mph fender-bender when a bus driver first started to change lanes and then switched back. No injuries. All of the accidents that have occurred in Arizona have so far been the other party's fault. They haven't had a single fatal at-fault accident, or even one which resulted in serious injuries.
Tesla had its first fatal A/P accident less than 7 months after A/P was introduced to the public. Actually, I think it was less than that, as we didn't know about the one in China at the time (the video I linked to earlier showing the Tesla rear-ending the street sweeper), and Tesla has had 2 more that we know about chargeable to A/P.
Road rage is inevitable as humans interact with AVs that obey all traffic laws, but as that obedience is one of the major reasons AVs will be safer than humans, it's just something that will have to be put up with during the transition as people get used to them. The alternative, as Tesla is doing, is to allow AVs to violate traffic laws, and that's indefensible in court and ultimately in the court of public opinion. As soon as a Tesla or any other AV kills or injures someone while violating a law, whether speeding, passing on the right, or what have you, the company will get hammered both legally and in PR. Hopefully the spillover won't take more responsible companies with it, and the only result will be tightened gov't regs.
Waymo hasn't killed anyone, because it hasn't driven fast enough to do so. At 35mph, any non-pedestrian accidents would be non-fatal. Granted they've tackled the more difficult task of street driving, but their accident stats aren't directly comparable to Tesla's. I only brought them up to highlight the difference in scale of where their systems can be applied.
Who says Waymo has only tested on public roads at slow speeds? I mentioned previously that while they were testing their ADAS systems (in 2012, before abandoning any such system as not being safer than a human), including on freeways, they observed exactly the same human misbehavior that A/P users have exhibited from the moment of its introduction up to the present. That included one employee fast asleep on the freeway. A correction: in my earlier reference I misremembered that the car had been going 65 for half an hour; I checked my source, and it was 60 mph for 27 minutes, which is certainly fast enough to be fatal. They've continued testing on freeways since then, but have only deployed AV systems for public use where speeds are more limited (still with safety drivers, although that essentially serves as elephant repellent), precisely because they consider it necessary to walk before they run. I am wholly in favor of this approach.
Oils4AsphaultOnly wrote:
GRA wrote:
Oils4AsphaultOnly wrote:
One thing that people still seem to misunderstand and I suspect you do too, is the claim that Tesla's FSD will be "feature-complete" by the end of the year. "Feature-complete" is a software development term indicating that the functional capabilities have been programmed in, but it's not release ready yet. Usually at this point in software, when under an Agile development cycle, the product is released in alpha, and bugs are noted and released in the next iteration (usually iterations are released weekly, or even daily). After certain milestones have been reached, it will be considered beta, and after that RC1 (release candidate).
Under this development cycle, you'll see news about FSD being tested on the roads or in people's cars (who have signed up to be part of the early access program). That isn't considered the public availability of FSD! You might hate it, but there's no substitute for real-world testing.
I have no problem whatsoever with real-world testing, indeed, that's exactly what I, CR and every other consumer group calling for better validation testing before release to the general public are demanding, along with independent review etc. Please re-read David Friedman's statement:
"Tesla is showing what not to do on the path toward self-driving cars: release increasingly automated driving systems that aren’t vetted properly. Before selling these systems, automakers should be required to give the public validated evidence of that system’s safety—backed by rigorous simulations, track testing, and the use of safety drivers in real-world conditions."
Funny. I wrote that to mean Tesla's method of iterating improvements and functionality into A/P, then NoA, and eventually FSD. You read it to mean Waymo's method of iterating from one geo-fenced city at a time.
Which just brings us all back to my old point of speed of deployment. Waymo's method would take YEARS (if not decades) to successfully deploy, and during that time, thousands of lives will be lost that could've been saved with a method that reaches FSD faster. At least 3 lives have been saved (all those DUI arrests) due to A/P so far, not counting any unreported ones where the driver made it home without being arrested. Eventually, you'll see things my way, you just don't know it yet.

And that brings me back to the point made by me, CR, and every other safety organization, so I'll repeat it:
[David Friedman, former Acting NHTSA Administrator, now employed by CR] instead of treating the public like guinea pig[s], Tesla must clearly demonstrate a driving automation system that is substantially safer than what is available today, based on rigorous evidence that is transparently shared with regulators and consumers, and validated by independent third-parties. In the meantime, the company should focus on making sure that proven crash avoidance technologies on Tesla vehicles, such as automatic emergency braking with pedestrian detection, are as effective as possible.”
Tesla's claims of increased safety remain unverified. As more and more Teslas are out there and they get into more and more accidents, I imagine the costs of fighting all the A/P lawsuits as well as the resulting big payouts will force them to clean up their act, if regulators don't. Until they (and any other company making such claims) provide that verification, it's so much hot air. As it is, their ADAS system's design is inherently less safe than what currently appears to be the best extant, Supercruise, and needs to be improved to bring it up to something approaching that level. Government regulation mandating minimum acceptable equipment/performance standards is needed in this area, much as it is in aviation, e.g. RNP (Required Navigation Performance) or RVSM (Reduced Vertical Separation Minimum).
Aside from limiting ADAS usage to limited-access freeways until such time as Tesla (or any company) can show that their system is capable of safely expanding beyond them, they need to shorten the hands-off warning time from 24 seconds down to something around Supercruise's 4 seconds (somewhere way uptopic, I said I thought anything over 3 seconds was excessive if you're serious about keeping drivers engaged, and I'd still like to see that). For comparison, Google used a 6-second warning time back in 2012 in their ADAS system, and as we know Tesla essentially didn't have one at all until after the Brown crash, and it remains far too long*. Also, since we know that steering wheel weight/torque sensors can be easily fooled and that people are in fact doing so, adding eye-tracking cameras and the appropriate computer/software, or other equipment which can be shown to be of equal or greater effectiveness in keeping drivers engaged, should be required. Personally, if I thought it was safe and legal I'd be in favor of the "pay attention" warning being given by a small shock to the driver, but that's obviously not going to happen. Naturally, all such systems must collect data and have it publicly accessible so that actual performance and safety benefits can be compared, so as to allow regulations to be improved and safety increased.
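To show how small a change the warning time really is, here's a rough sketch in Python of an escalation scheme where the grace period is just a tunable number (all names and timings are illustrative, not Tesla's or GM's actual logic):
[code]
HANDS_OFF_WARNING_S = 4.0   # SuperCruise-like, vs. the ~24 seconds criticized above
ESCALATE_TO_STOP_S = 12.0   # continued inattention -> disengage and slow the car

def attention_state(seconds_since_attentive: float) -> str:
    """Map time since the last hands-on-wheel or eyes-on-road signal to an action."""
    if seconds_since_attentive > ESCALATE_TO_STOP_S:
        return "disengage_and_slow"
    if seconds_since_attentive > HANDS_OFF_WARNING_S:
        return "audible_warning"
    return "ok"
[/code]
Whether the attention signal comes from a torque sensor or an eye-tracking camera only changes what resets the timer; the 24-second grace period itself is a policy choice, not a hardware limit.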
We've completed yet another argument cycle, so as you gave me the last word last round, you get the last word this one. I'm sure another round will start in the near future.
*One thing: I asked uptopic how it was possible for Brenner to engage A/P and be going 13 mph over the speed limit when A/P was supposed to have been modified to limit its use to no more than 5 mph over the speed limit. I never got an answer. ISTM that there are three possibilities, but this is one question where hands-on knowledge of current A/P is definitely valuable, and I lack that.
Anyway, can A/P be engaged while the car is traveling at a speed well above the speed limit + 5 mph, with the system then gradually slowing to that speed? Given the short time span between engagement and Brenner's crash, that might explain how he was able to engage it and be going that fast at impact.
Or should it not have been possible to engage A/P while traveling so much over A/P's allowed speed (a far safer approach), but for some reason the system failed to work as designed?
Or has Tesla eliminated the speed limit + 5 mph limitation they added after Brown's crash, and I missed it?
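For clarity, here's roughly what the first two possibilities would look like as engagement logic, sketched in Python (purely illustrative; the names, margin, and behavior are my assumptions, not Tesla's actual code):
[code]
OVERSPEED_MARGIN_MPH = 5  # the post-Brown speed-limit margin discussed above

def engage_permissive(current_speed: float, speed_limit: float) -> dict:
    # Possibility 1: engage at any speed, then gradually slow to limit + margin.
    cap = speed_limit + OVERSPEED_MARGIN_MPH
    return {"engaged": True, "target_speed": min(current_speed, cap)}

def engage_strict(current_speed: float, speed_limit: float) -> dict:
    # Possibility 2 (the safer design): refuse to engage while already going
    # more than the margin over the posted limit.
    if current_speed > speed_limit + OVERSPEED_MARGIN_MPH:
        return {"engaged": False, "target_speed": None}
    return {"engaged": True, "target_speed": current_speed}
[/code]
If A/P behaves like the first sketch, the short span between engagement and impact would explain the 13-over speed at the crash; if it's supposed to behave like the second, then either the check failed or the limitation has since been removed.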