Autonomous Vehicles, LEAF and others...

^^^ Just to note, the name of the first non-passenger to be killed by an AV was Elaine Herzberg. She will undoubtedly be the first of many as the tech is developed and deployed, and one of the requirements for deploying AVs on public streets is that we as a society must come to agreement on what a human life is worth, and that we also are willing to accept the occasional death caused by an AV because the overall accident rate goes down. It remains to be seen whether we've yet reached that stage. I look forward to the NTSB report.
 
Via ABG, the inevitable legal questions now have a test case, if this isn't settled quietly:
Fatal accident with self-driving car raises novel legal questions
Who's at risk here? Uber? Volvo? Tech suppliers? All of the above?
https://www.autoblog.com/2018/03/20/uber-self-driving-death-legal-questions/

Also ABG:
Toyota pauses self-driving car testing amid Uber accident probe
Toyota Research Institute conducts on-road testing in California and Michigan
https://www.autoblog.com/2018/03/20/toyota-pauses-self-driving-car-testing-amid-uber-accident-probe/
 
InsideEVs has a link to dashcam video of the Uber vehicle right up to the point of impact. I see several things in the video:

1) The person in the vehicle was not paying attention to the road.
2) The headlights appeared to be on low beams, which did not allow the jaywalker to be seen until it was really too late to avoid them. In other words, it seems the vehicle was overdriving the headlights. I wonder if the autonomy modulates the high beams like a human driver would. All that said, there were streetlights in the area and they were behind another vehicle, so high beams really would not have been appropriate.
3) I don't see any indication that the vehicle attempted to avoid the accident by slowing or swerving.
4) I do not see any indication of reflectors in the bicycle's wheels.
5) Even though there were streetlights in the vicinity, the person walking the bicycle was crossing where there was no streetlight. I will point out that the presence of streetlights makes it more difficult to see objects which are in the dark.

I guess I'm not convinced that I could have avoided that collision unless the perception of the cameras is significantly worse than that of human vision.
 
RegGuheert said:
InsideEVs has a link to dashcam video of the Uber vehicle right up to the point of impact. I see several things in the video: <snip>

I guess I'm not convinced that I could have avoided that collision unless the perception of the cameras is significantly worse than that of human vision.
I've seen both videos as well, and assuming the car's sensors (LIDAR, RADAR and/or FLIR) were working properly*, and there's no indication that they weren't, she should have been detected. I suspect this is a problem not of detection but of classification, which is the most difficult part of AI. She was walking on the opposite side of the bike from the car, and had at least one plastic shopping bag hanging from the bike. It's this sort of thing which makes rules-based AI almost impossible, and machine-learning essential. Humans are really good at pattern recognition even when the pattern varies significantly from the norm; computers are a lot better than they used to be, but still have a ways to go.

In any case, while it might not have made a difference in this instance given the lack of attention by the safety driver, my bike is equipped with reflectors all around as well as flashing head and tail lights (two of the latter, on different programs) at night, plus I wear reflective clothing, just to make me as visible and obviously a non-car as possible. Most experienced night cyclists take similar measures.

There's no question that the pedestrian should have seen the car and avoided it if they'd been operating with the attitude that they're either invisible to drivers, or else that the driver's deepest desire is to kill any pedestrians and bicyclists they do see, unless and until proven otherwise. That attitude has kept me alive for almost 50 years of riding in heavy traffic, with only a couple of relatively minor injuries when the cars partially defeated my defense. But roads have gotten significantly more dangerous to those of us not in cars since the late '90s (cell phones got cheap) and especially since 2007 (the advent of the iPhone), so autonomous cars can't come soon enough.

*Being able to take control after a major system failure that is indicated well in advance of need is the main value of the safety drivers. As the video shows and every bit of research has confirmed, humans will quickly allow themselves to be distracted when an automatic system is operating routinely, so safety drivers serve mainly as a placebo for the public. That's why Google decided to go directly for full autonomy instead of partial autonomy; they had cabin video of their safety drivers, who were fully aware of the system limitations, eating, futzing around in the car looking for something, watching videos etc., instead of paying attention to the road so they could instantly take over if needed. The human brain just isn't wired that way, and the delay during the transition (the 'startle' reflex) makes the necessary quick reactions in an emergency especially problematic. In this case I'd put money on the 'safety driver' having been looking at their smartphone.

Link to a NYT article which includes both forward and in-cabin videos: https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html
 
I mentioned in my immediately preceding post that Google decided to go directly for full autonomy rather than partial autonomy, based on behavior they observed in drivers of their own AVs, which merely confirmed many decades of studies (many of them aviation-related). Those studies had already brought me to the conclusion that I'd wait for at least L4 autonomy before trusting my life to an AV, and that I want nothing to do with either lane-keeping or ACC systems until we've reached that point, owing to the handoff problem. I thought I'd provide Google's own statement of their rationale, from their October 2015 Self-Driving Car Project monthly report, which can be found here: https://static.googleusercontent.co.../selfdrivingcar/files/reports/report-1015.pdf
Why we’re aiming for fully self-driving vehicles

As we see more cars with semi-autonomous features on the roads, we’re often asked why we’re aiming for fully autonomous vehicles. To be honest, we didn’t always have this as our plan.

In the fall of 2012, our software had gotten good enough that we wanted to have people who weren’t on our team test it, so we could learn how they felt about it and if it’d be useful for them. We found volunteers, all Google employees, to use our Lexus vehicles on the freeway portion of their commute. They’d have to drive the Lexus to the freeway and merge on their own, and then they could settle into a single lane and turn on the self-driving feature. We told them this was early stage technology and that they should pay attention 100% of the time -- they needed to be ready to take over driving at any moment. They signed forms promising to do this, and they knew they’d be on camera.

We were surprised by what happened over the ensuing weeks. On the upside, everyone told us that our technology made their commute less stressful and tiring. One woman told us she suddenly had the energy to exercise and cook dinner for her family, because she wasn’t exhausted from fighting traffic. One guy originally scoffed at us because he loved driving his sports car -- but he also enjoyed handing the commute tedium to the car.

But we saw some worrying things too. People didn’t pay attention like they should have. We saw some silly behavior, including someone who turned around and searched the back seat for his laptop to charge his phone -- while travelling 65mph down the freeway! We saw human nature at work: people trust technology very quickly once they see it works. As a result, it’s difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax.

We did spend some time thinking about ways we could build features to address what is often referred to as “The Handoff Problem” -- keeping drivers engaged enough that they can take control of driving as needed. The industry knows this is a big challenge, and they’re spending lots of time and effort trying to solve this. One study by the Virginia Tech Transportation Institute found that drivers required somewhere between five and eight seconds to safely regain control of a semi-autonomous system. In a NHTSA study published in August 2015, some participants took up to 17 seconds to respond to alerts and take control of the vehicle -- in that time they’d have covered more than a quarter of a mile at highway speeds. There’s also the challenge of context -- once you take back control, do you have enough understanding of what’s going on around the vehicle to make the right decision?

In the end, our tests led us to our decision to develop vehicles that could drive themselves from point A to B, with no human intervention. (We were also persuaded by the opportunity to help everyone get around, not just people who can drive.) Everyone thinks getting a car to drive itself is hard. It is. But we suspect it’s probably just as hard to get people to pay attention when they’re bored or tired and the technology is saying “don’t worry, I’ve got this...for now.”
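As a quick sanity check on the takeover-delay figures Google cites, here's a bit of simple arithmetic (my own, not from the report; the 65 mph figure is just my assumption for "highway speeds") showing how far a car travels during a 5-17 second handoff:

[code]
# Distance covered while the "driver" is still taking back control.
# 65 mph is an assumed highway speed; the 5, 8 and 17 second delays
# are the VTTI and NHTSA figures quoted in Google's report above.
MPH_TO_FPS = 5280 / 3600          # feet per second per mph

speed_fps = 65 * MPH_TO_FPS       # ~95.3 ft/s
for delay_s in (5, 8, 17):
    feet = speed_fps * delay_s
    print(f"{delay_s:>2} s -> {feet:,.0f} ft ({feet / 5280:.2f} mi)")
# 17 s works out to roughly 1,620 ft, i.e. "more than a quarter of a mile."
[/code]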
Although it's very detailed, I also recommend this NHTSA report from May 2010, which can be downloaded:
An Analysis of Driver Inattention Using a Case-Crossover Approach on 100-Car Data: Final Report
 
Via IEVS, Mobileye is claiming that they tested their tech using the video of the accident:
MOBILEYE SEES WHAT UBER DIDN’T, CYCLIST DETECTED 1 SECOND BEFORE IMPACT
https://insideevs.com/mobileye-sees-what-uber-didnt-cyclist-detected-1-second-before-impact/

Apparently the Volvos are equipped with cameras and RADAR; LIDAR would probably be too slow to react in time for this.

Linked to the above, via Bloomberg:
Uber Disabled Volvo SUV's Safety System Before Fatality
https://www.bloomberg.com/news/arti...-suv-s-standard-safety-system-before-fatality

Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera.

“We don’t want people to be confused or think it was a failure of the technology that we supply for Volvo, because that’s not the case,” Zach Peterson, a spokesman for Aptiv Plc, said by phone. The Volvo XC90’s standard advanced driver-assistance system “has nothing to do” with the Uber test vehicle’s autonomous driving system, he said.

Aptiv is speaking up for its technology to avoid being tainted by the fatality involving Uber, which may have been following standard practice by disabling other tech as it develops and tests its own autonomous driving system. . . .
 
Via ABG:
Nvidia halts self-driving tests; Uber drops test permit in California
Chipmaker CEO on fatal crash: ‘We don't know what happened’
https://www.autoblog.com/2018/03/28/nvidia-autonomous-testing-uber-accident/

Also ABG, the blame-shifting begins:
Lidar maker Velodyne is confused by fatal Uber crash
‘Our lidar doesn't make the decision to put on the brakes ...’
https://www.autoblog.com/2018/03/26...rend|lidar-maker-confused-by-fatal-uber-crash

Velodyne president Marta Thoma Hall told Bloomberg, "We are as baffled as anyone else. Certainly, our lidar is capable of clearly imaging Elaine and her bicycle in this situation. However, our lidar doesn't make the decision to put on the brakes or get out of her way."

The company, which supplies lidar units to a number of tech firms testing autonomous cars, wants to make sure its equipment isn't blamed for the crash. The accident took place around 10 p.m., and in fact, lidar works better at night than during the day because the lasers won't suffer any interference from daylight reflections. . . .
 
Via Reuters:
Uber avoids legal battle with family of autonomous vehicle victim
https://www.reuters.com/article/us-...ly-of-autonomous-vehicle-victim-idUSKBN1H5092

Smart move to settle so quickly. It gets the story out of the news cycle, and only those of us who are really interested will be looking for the NTSB report, which will likely draw only brief media attention when issued unless it's damning. It's already clear that the car failed what should have been a fairly basic test.
 
As Tesla has now confirmed that A/P was engaged at the time of Walter Huang's fatal barrier crash at the 101/85 interchange, I thought I'd do some simple calcs based on Tesla's own claims and the numbers Tesla has released. Before it was confirmed that A/P was engaged, Tesla said:
Our data shows that Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out, and roughly 20,000 times since just the beginning of the year, and there has never been an accident that we know of. There are over 200 successful Autopilot trips per day on this exact stretch of road.
The bolded claim is no longer operative. Ignoring the claim of Walter Huang's brother that A/P had previously failed to handle this junction 7-10 times and that he'd taken the car in to try and get it fixed (which would reduce the divisor), and only considering the rate without that, we have 1 accident per 85,000 trips at this particular interchange, or a reliability (defined here as 'no accident') of 99.998823529%, a touch under 5 nines. However, as I pointed out earlier (IIRC in either this or the "Tesla's Autopilot, on the road" thread), safety-of-life-critical systems in aviation are generally required to have 6 to 8 nines of reliability, i.e. 99.9999% to 99.999999%. Even Tesla says that six nines is a requirement:
This is the true problem of autonomy: getting a machine learning system to be 99% correct is relatively easy, but getting it to be 99.9999% correct, which is where it ultimately needs to be, is vastly more difficult. One can see this with the annual machine vision competitions, where the computer will properly identify something as a dog more than 99% of the time, but might occasionally call it a potted plant. Making such mistakes at 70 mph would be highly problematic.
Full statement here:
https://www.tesla.com/support/correction-article-first-person-hack-iphone-built-self-driving-car

There's a much higher number of car than airplane trips: U.S. a/c trips 6/14 - 5/15 = 779 million; U.S. car trips 1.1 billion per DAY. While 6 nines reliability or 1 failure in 1 million (reliability defined as above) would in fact improve safety statistically, reducing the U.S. daily accident rate to 1,100 and annual U.S. auto accidents to around 400,000/year (U.S. emergency room treatments alone due to auto accidents currently run about 2.5 million/year, plus 40k fatalities and many more injuries that don't require medical treatment), that may not be enough to get public buy-in given the extremely high number of car trips as well as public resistance to turning their safety over to computers. We may ultimately insist on 7 or 8 nines, and we certainly should if that's achievable. Once AVs have become the majority and actual accidents and deaths have seen major reductions we may accept the remaining ones as routinely and fatalistically as we do human-caused car crashes now, but we've got a long way to go to get there.
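For anyone who wants to check my arithmetic, here's a rough sketch; the 85,000 trips and 1.1 billion daily U.S. car trips are the figures cited above, and the rest is just the definition of "nines":

[code]
import math

# Reliability implied by 1 accident in 85,000 A/P trips at the 101/85 interchange
trips = 85_000
reliability = 1 - 1 / trips
nines = -math.log10(1 - reliability)
print(f"{reliability:.9%} reliable, about {nines:.2f} nines")   # ~4.93 nines

# What a given number of nines implies for ~1.1 billion U.S. car trips per day
daily_trips = 1.1e9
for n in (6, 7, 8):
    failures_per_day = daily_trips * 10**-n
    print(f"{n} nines -> ~{failures_per_day:,.0f} accidents/day, "
          f"~{failures_per_day * 365:,.0f}/year")
[/code]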
 
GRA said:
Edmunds' video comparison test of Tesla's AP2.0 in a Model 3 and Cadillac's Supercruise in a CT6: <snip>

their Editor-in-chief, after first praising the fact that AP was updated to fix the specific problem identified in the first video and that it was done OTA, goes on to say:
But it also reveals the extent to which customers like us are being used as Guinea pigs on a safety critical system. . . Yes, Tesla litters the handbook with caveats. And yes, they describe Autosteer as a beta system. But Autopilot was launched back in October 2014. At what point does it stop being beta?
A good question, but I believe the wrong one. I think the correct question is "When is it acceptable for a manufacturer to use members of the public to test immature, safety-of-life critical systems, with or without their consent?" IMO the correct answer to that question is "Never".
CR (and the NTSB back in September) weigh in with the same criticisms about AP, via IEVS: https://insideevs.com/tesla-told-to-improve-autopilot-release-claimed-worlds-safest-data/
 
ProPilot helps the 2018 Nissan LEAF to achieve a 5-star rating as the first car to undergo Europe's new extended NCAP testing:
NCAP as quoted by InsideEVs said:
The LEAF is the first car to be assessed against Euro NCAP’s improved and extended protocols for 2018. The 2018 protocol sees the introduction of a raft of new tests which address key crash scenarios involving cars, pedestrians and now also the growing number of cyclists.

In these Euro NCAP tests, the LEAF earned a 93 per cent rating for adult safety and an 86 per cent rating for child protection. The safety rating is determined from a series of vehicle tests that, in a simplified way, reflect important real-life accident scenarios that could result in injuries.

New LEAF 5-star rating reflects the advanced driver assistance systems packaged on the car. Technologies such as camera and radar feature extensively to provide benefits such as pedestrian recognition and form the basis of Nissan’s acclaimed ProPILOT system for safer, more confident driving.

[youtube]http://www.youtube.com/watch?v=CVeSCjgACiA[/youtube]

In light of the recent accident in Tempe, I'm wondering if those pedestrian tests may also be conducted at nighttime in future tests.
 
Via ABG:
Uber hires former NTSB chair to advise on safety culture after fatal crash
Fatal accident was likely caused by software issue
https://www.autoblog.com/2018/05/07/uber-hires-ntsb-chair-safety/

. . . Online news outlet The Information reported Monday that Uber has determined the likely cause of the fatal collision was a problem with the software that decides how the car should react to objects it detects. The outlet said the car's sensors detected the pedestrian but the software decided it did not need to react right away.

"We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture," Uber said Monday. "Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon. . . ."
 
As was to be expected, via ABG:
Americans are even more wary of autonomous cars now
AAA study finds people's distrust of self-driving cars is increasing
https://www.autoblog.com/2018/05/22/americans-are-even-more-wary-of-autonomous-cars-now/

In the wake of recent autonomous vehicle crashes, including an Uber fully autonomous test car killing a woman pushing a bike, and a semi-autonomous Tesla Model X hitting a concrete divider, AAA conducted another study of the public's perception of such cars. It's a follow-up to a study conducted at the end of the past two years. . . .

Specifically, 73 percent of those surveyed said they would be too afraid to ride in a fully autonomous vehicle, which is up 10 points over the last survey. Americans also aren't comfortable being a pedestrian around autonomous vehicles either, with 63 percent saying they would feel less safe as a pedestrian or cyclist on the road with self-driving cars.

Another interesting statistic is that while millennials were almost split down the middle in whether they would be too afraid to ride in a fully autonomous car in the first survey, they shifted negatively this time around. Instead of 49 percent being too afraid to ride in a self-driving car, the percentage jumped to 64 percent. In previous studies, younger people surveyed had been more accepting of the technology.
 
Via GCC:
Suning completes testing of autonomous heavy-duty truck Strolling Dragon; SAE Level 4
http://www.greencarcongress.com/2018/05/20180525-suning.html

. . . Strolling Dragon is the largest unmanned truck in Suning Logistics’ automated fleet, featuring SAE Level-4 self-driving capabilities; it is highly automated, and is able to operate without human input within pre-programmed parameters. It is the first self-driving truck developed by a Chinese e-commerce company to pass logistics campus tests and highway-scenario road tests in China.

Using artificial intelligence and deep-learning technologies, in conjunction with sensors, Strolling Dragon can accurately recognize obstacles at a distance of more than 300 meters, even at high speed. In addition, the unmanned truck can make emergency stops, or avoid obstacles at a response rate of 25 ms, allowing for safe autonomous driving even at a speed of 80 km/h (50 mph). . . .

Suning’s plans call for drivers to be assisted rather than replaced, for the foreseeable future. The relatively complex and tiring job of long-distance driving puts drivers in high risk of accidents, while automation helps reduce such risk. With the help of self-driving technology, Suning Logistics can help create a more comfortable, and safer, working environment for the more than 100,000 truck drivers working for the company.
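Out of curiosity, here's a back-of-the-envelope check of those numbers. The 25 ms response and 80 km/h speed are from the article; the human reaction time and braking deceleration are my own assumptions:

[code]
# How far the truck travels before it reacts, and roughly how far it needs to stop.
speed = 80 / 3.6          # 80 km/h in m/s (~22.2 m/s)
decel = 5.0               # assumed braking deceleration for a loaded truck, m/s^2

for label, reaction_s in (("25 ms automated response", 0.025),
                          ("1.5 s typical human reaction", 1.5)):
    travel = speed * reaction_s
    stop = travel + speed**2 / (2 * decel)
    print(f"{label}: {travel:.1f} m before braking, ~{stop:.0f} m total to stop")
# Either way, detecting obstacles at 300+ m leaves a large margin at 80 km/h.
[/code]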
 
I'm still trying to get my head around the demand level for off-road autonomy.

But on the positive side, there should not be much danger of Autopilot-type crashes caused by following the wrong road markings directly into freeway dividers...

Jaguar Land Rover Is Taking Its Autonomous Technology Off-Road

Jaguar Land Rover is investing as heavily as any of its competitors in autonomous technologies. But with its expertise in off-roading, the automaker division is taking its self-driving tech off the beaten path, too.

Called “Cortex,” the project aims to develop Level 4 and 5 automation off-road. That presents a particular challenge, because most autonomous driving systems rely on highly precise digital mapping of public roadways.

With an investment of £3.7 million (or about $4.9 million at current exchange rates), the Cortex project takes advantage of acoustic, video, radar, light detection and distance sensing (LiDAR) data to venture off-road with little or no driver intervention required.

“It’s important that we develop our self-driving vehicles with the same capability and performance customers expect from all Jaguars and Land Rovers,” said Chris Holmes, head of JLR’s connected and autonomous vehicle research program. “Self-driving is an inevitability for the automotive industry and ensuring that our autonomous offering is the most enjoyable, capable and safe is what drives us to explore the boundaries of innovation. CORTEX gives us the opportunity to work with some fantastic partners whose expertise will help us realise this vision in the near future.”...
https://www.carscoops.com/2018/05/jaguar-land-rover-taking-autonomous-technology-off-road/
 
Waymo will add up to 62,000 FCA minivans to self-driving fleet
https://www.usatoday.com/story/tech/talkingtech/2018/05/31/waymo-add-up-62-000-fca-minivans-self-driving-fleet/659160002/

GM’s self-driving unit (Cruise Automation) gets $2.25 billion from SoftBank’s venture fund
https://www.theverge.com/2018/5/31/17412716/gm-cruise-self-driving-car-softbank-investment
 
Game over?

In a few years we may Waymo everywhere...

As Uber and Tesla struggle with driverless cars, Waymo moves forward

When it comes to self-driving cars, there's Waymo and then there's everyone else.


...The last few months have had a lot of bad news for fans of self-driving cars. ...

In March, an Uber self-driving car struck and killed pedestrian Elaine Herzberg in Tempe, Arizona, forcing Uber to suspend its testing program indefinitely. In May, the National Transportation Safety Board released a preliminary report finding big problems with Uber's software.

The same month, a Tesla customer died in a crash in Mountain View, California, while Tesla's driver-assistance program, Autopilot, was engaged. Tesla is working to upgrade Autopilot to a fully self-driving system, but the leader of that effort, Jim Keller, left the team in April. He was the third Autopilot boss to leave Tesla in the last 18 months.

These and other negative headlines have contributed to a darkening public mood on self-driving cars. A poll released in May found that the proportion of Americans who would be afraid to ride in a self-driving car had risen from 63 percent in late 2017 to 73 percent.

But if the self-driving industry is in crisis, nobody told Waymo. Over the last 18 months, the company has been methodically laying groundwork to launch a commercial driverless car service.

Uber, Nvidia, and Toyota all suspended self-driving car testing in the wake of the March Uber crash—but not Waymo. Waymo continued logging miles in Arizona and elsewhere. And days after the crash, the company announced a deal with Jaguar Land Rover to build 20,000 fully self-driving I-PACE cars.

Then on Thursday, Waymo announced a massive deal for 62,000 Chrysler Pacifica minivans—by far the biggest deal for self-driving vehicles so far. Waymo wouldn't be making deals this big unless the company was very confident that its technology was on track for commercial use within the next year or two....

So it might be true that the rest of the industry is failing to live up to early self-driving car hype. But Waymo is in a class by itself...
https://arstechnica.com/cars/2018/06/as-uber-and-tesla-struggle-with-driverless-cars-waymo-moves-forward/
 
This is the sort of independent statistical review of accident data we need to see from Tesla et al before they're allowed to make claims of accident reductions. Via GCC:
IIHS HLDI study finds Subaru crash avoidance system cuts pedestrian crashes
http://www.greencarcongress.com/2018/06/20180611-hldi.html

A recent analysis by the Insurance Institute for Highway Safety (IIHS) Highway Loss Data Institute (HLDI) found that Subaru’s EyeSight system (earlier post) cut the rate of likely pedestrian-related insurance claims by a statistically significant 35%.

According to IIHS, 2016 data show pedestrian deaths account for 16% of all auto collision fatalities, and the numbers are on the rise—up 46% since reaching their lowest point in 2009. The increase, IIHS points out, has been mostly in urban and suburban areas where safe crossing locations might be scarce, including on busy roads designed to funnel vehicle traffic toward freeways, and mainly in the dark.

  • The data clearly show that EyeSight is eliminating many crashes, including pedestrian crashes.

    —Matt Moore, HLDI Senior Vice President
Subaru’s EyeSight performs several functions, including forward collision warning, automatic emergency braking, adaptive cruise control, lane departure warning and lead vehicle start alert. It also includes pedestrian detection, enabling the system to brake automatically for pedestrians in addition to other vehicles. . . .

To study the system’s effect on pedestrian crashes, analysts looked at bodily injury liability claims that lacked an associated claim for vehicle damage. Past HLDI investigations have found that such claims tend to represent injured pedestrians or cyclists.

They compared the rate of these claims per insured vehicle year for Subaru vehicles with EyeSight, compared with the rate for the same models without the optional system.

The first generation of EyeSight, which used black and white cameras, was available in the US on the 2013–14 Legacy and Outback and the 2014–16 Forester. The second generation, introduced on the Legacy and Outback in 2015 and on the Forester in 2017, uses color cameras and has longer and wider detection ranges and other improvements.

EyeSight was offered for the first time on the Crosstrek and the Impreza sedan and hatchback in 2015. Only the second-generation system was offered on these vehicles. . . .

Looking at the Legacy, Outback, Forester, Crosstrek and Impreza individually, HLDI found reductions in claim frequency for each of them, though only the results for the Legacy and Outback were statistically significant.

HLDI also separated out first-generation and second-generation results for the Legacy, Outback and Forester. The first-generation system reduced claim frequency 33%, while the second-generation system lowered it 41%.
There's a graph comparing different models.
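For what it's worth, the comparison HLDI describes is simple enough to sketch. The numbers below are made up purely to illustrate the method (claim frequency per insured vehicle year, with vs. without the option); they are not HLDI's data:

[code]
# Illustrative only -- invented claim counts, not HLDI data.
def claim_frequency(claims, insured_vehicle_years):
    """Bodily-injury-liability-only claims per insured vehicle year."""
    return claims / insured_vehicle_years

without_eyesight = claim_frequency(claims=100, insured_vehicle_years=100_000)
with_eyesight = claim_frequency(claims=65, insured_vehicle_years=100_000)

reduction = 1 - with_eyesight / without_eyesight
print(f"Claim frequency reduction: {reduction:.0%}")   # 35% with these made-up numbers
[/code]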
 
More evidence that there will never be a shortage of human stupidity, via GCR:
Autopilot Buddy defeats Tesla's safety systems; it is not your friend
https://www.greencarreports.com/new...s-teslas-safety-systems-it-is-not-your-friend

At $179 it's more expensive than a coke can and duct tape, but does the same job:
Autopilot Buddy is a small weight that attaches to one side of the steering wheel. It's designed to fool the car's Autopilot-linked torque sensor into thinking the driver has their hands on the steering wheel. . . .

According to Autopilot Buddy's website, it works on Model S and Model X and the company is developing one for the Model 3. . . .
 