
I feel like part of the problem with the kind of autopilot crashes you describe here is how inexplicable they are to humans. Whilst humans can be dangerous drivers, the incidents they cause generally have a narrative sequence of events that are comprehensible to us -- for instance, driver was distracted, or visibility was poor.

But when a supposedly 'all-seeing always watching' autopilot drives straight into a large stationary object in clear daylight, we have no understanding of how the situation occurred.

This I think has a couple of effects:

1) The apparent randomness makes the idea of these crashes a lot more scary -- psychologically we seem to have a greater aversion to danger we can't predict, and we can't tell ourselves the 'ah but that wouldn't happen to me' story.

2) Predictability of road incidents actually is a relevant piece of information. As a road user (including pedestrian), most of my actions are taken on the basis of what I am expecting to happen next, and my model for this is how humans drive (and walk). Automated drivers have different characteristics and failure modes, and that makes them an interaction problem for me.



In my opinion, the underlying assumption autopilots are built on is wrong. It is assumed that the road is free to drive on.

Only when the vehicle computer detects a known object on the road that it knows should not be there does it apply the brakes or try to steer around it.

I would feel safer if the algorithm assumed the negative case by default and only gave the "green light" once it had determined that the road is free to drive on. In the case of unknown (not yet learned) road obstructions, the worst needs to be assumed.

That's where the 'unexplainable' crashes are coming from. Something the size of an actual truck is obstructing the road, but the system couldn't quite classify it because the truck has tipped over and is lying sideways across the road. Not yet learned by the algorithm, so: can't be that bad, green light, no need to avoid or brake.
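To make that concrete, here is a rough sketch of the inversion I mean (hypothetical names and thresholds, not any vendor's real code): instead of braking only for recognized obstacles, treat every cell of the planned path as obstructed until perception positively confirms it is clear.

    CLEAR_CONFIDENCE = 0.95   # assumed threshold for "positively confirmed clear"
    BRAKE_DECEL = 4.0         # m/s^2, assumed comfortable braking
    DT = 0.1                  # control period in seconds

    def path_is_clear(path_cells, perception):
        # Default-deny: anything unknown or unclassified counts as an obstruction.
        for cell in path_cells:
            obs = perception.observe(cell)          # hypothetical sensor-fusion query
            if obs.confidence < CLEAR_CONFIDENCE:   # can't confirm it's clear -> assume blocked
                return False
            if obs.occupied:                        # confirmed obstacle, classified or not
                return False
        return True

    def next_speed(path_cells, perception, current_speed):
        if path_is_clear(path_cells, perception):
            return current_speed                                 # "green light": keep going
        return max(0.0, current_speed - BRAKE_DECEL * DT)        # otherwise bleed off speed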


> Only when the vehicle computer detects a known object on the road that it knows should not be there does it apply the brakes or try to steer around it.

The problem with Tesla's "No LIDAR ever, cameras are good enough" approach is that it fails to detect emergency vehicles: they filter stationary returns out of the radar signal as noise[1], and Tesla's ML models probably can't reliably identify oblique vehicles and semi-trailers as obstacles.

1. Makes sense in isolation: frequent radar returns from roadside and overhead signs would be a pain to deal with
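To illustrate why that shortcut drops stopped vehicles too (toy numbers, not Tesla's actual code): a stationary target's Doppler range-rate is roughly the negative of your own speed, so an overhead sign and a stopped fire truck produce the same signature, and a naive filter that discards "stationary" returns discards both.

    EGO_SPEED = 30.0      # m/s, assumed ego vehicle speed
    STATIONARY_TOL = 0.5  # m/s tolerance

    def looks_stationary(range_rate):
        # A stationary object closes on you at exactly your own speed.
        return abs(range_rate + EGO_SPEED) < STATIONARY_TOL

    radar_returns = [
        {"range_m": 150.0, "range_rate": -30.1},  # overhead sign... or a stopped fire truck
        {"range_m": 60.0,  "range_rate": -5.0},   # slower vehicle ahead
    ]
    # The naive filter keeps only moving targets and silently drops everything else.
    tracked = [r for r in radar_returns if not looks_stationary(r["range_rate"])]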


Probably an artifact of the older versions.

See the Tesla AI Day. I expect the new stuff to deal with this a lot better.


What is the reasoning behind "no lidar"? Cost?


The stated reason is "your eyes don't shoot lasers, so a camera is good enough", but the implied reason is surely cost. With how fast the price of lidar is dropping and its abilities are increasing (think solid-state lidar), I wonder how long it will be until the first Tesla with lidar rolls down the production line, or whether Elon is too proud to ever allow that.


> It is assumed that the road is free to drive on.

Trying to remember whether the opposite of this is how human drivers are taught, or whether it's implicit in how we move about the world. My initial gut reaction says yes, and this is a great phrasing of something that has always bothered me about automated driving.

Perhaps we should model our autopilots after horses: refusal to move against anything unfamiliar, and biased towards going back home on familiar routes.


In my high school’s Drivers Ed class I distinctly remember the one-question pop quiz: “What is the most dangerous mile of road?”

The answer was “the mile in front of you”

Additionally there was some statistic about the frequency of accidents within a very short distance of the driver's residence, which seemed to underscore the importance of being aware of just how much your brain filters out the "familiar" in contrast to a newly stimulating environment.


I had always assumed the "close to home" numbers were just bad statistics, because I never saw them control for % of driving that was done "close to home".

If I google it, I get like three pages of law firms.


In my opinion, the underlying assumption autopilots are built on is wrong. It is assumed that the road is free to drive on. Only when the vehicle computer detects a known object on the road that it knows should not be there does it apply the brakes or try to steer around it. I would feel safer if the algorithm assumed the negative case by default and only gave the "green light" once it had determined that the road is free to drive on.

I agree, but it will up the false alarm rate in a system without good depth perception for all objects. This is tough with cameras only. Reflective puddles are a problem; they're hard to range with vision only. Anything that doesn't range well, which is most very uniform surfaces, becomes a reason to slow down. As you get closer, the sensor data gets better and you can usually decide it's safe to proceed.

Off-road autonomous vehicles have to work that way, but on-road ones can be more optimistic.

Waymo takes a hard line on this, and their vehicles drive rather conservatively as a result. They do have false-alarm problems and slowdowns around trouble spots.
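One way to picture that "slow down until the data gets better" behavior is a sketch like the following (assumed numbers, not Waymo's actual planner): cap speed so the vehicle can always stop within the distance it has positively confirmed to be clear, and keep re-evaluating as it gets closer and the ranging improves.

    import math

    MAX_DECEL = 4.0  # m/s^2, assumed comfortable hard braking

    def speed_cap(confirmed_clear_m):
        # v^2 = 2*a*d: the fastest speed from which we can stop inside the confirmed-clear distance.
        return math.sqrt(2.0 * MAX_DECEL * confirmed_clear_m)

    # A reflective puddle 40 m ahead that won't range: cap to ~17.9 m/s (~64 km/h)
    # until we get close enough to confirm the road beyond it is clear.
    print(speed_cap(40.0))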


Would you rather optimize for a faster overall fleet, or for a fleet with stress-free driving, no incidents, and no need to intervene or to be worried?

If the system gets faster over time, even better. But I cannot imagine huge adoption unless the system actually becomes reliable. I am pretty much in favor of the Waymo approach.


Having high false-positive rates with only single or dual sensors just shows how 'bad' we still are at controlled, secure automated driving.


I agree. In the Northeast, at least, pothole avoidance is a critically important skill. Any "autopilot" without it would be fairly useless around me, as I'd have to take over every 30 seconds to avoid ending up with a flat tire. I have adaptive cruise control, and that's about as far as I'll trust a computer to drive given the current tech.


My problem with those crashes is that they are entirely explicable: The car is blind to stationary objects in the road. (My best guess at the logic is they assume that "anything stationary cannot possibly be in the road, right?")

To me, that blindness is simply unacceptable. If there is anything in the road, whether identified or not, it should automatically be flagged as a hazard. That flag should only be removed if it is detected to be moving in a way such that it will be somewhere else when you get there.

I have Subaru EyeSight. It has no problem seeing stationary objects. What's Tesla's problem?


I’m not sure about newer models without radar, but the older ones explicitly discard stationary returns on their radar. As I understand it, without elevation data it can’t know if it’s a bridge you’ll pass under, a soda can in the road, or a stopped car - so just ignore it all.

Of course the vision system is supposed to compensate for this, and it performs poorly on objects it doesn’t see often, like emergency vehicles.


The vision system is supposed to be able to determine an accurate depth map based on a combination of stereo vision and depth-from-defocus. I've seen demos of the real-time depth map, and it looks high-resolution and accurate to about 5-10cm.

So, if they have the input data, why is it being ignored by autopilot?
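For reference, the textbook stereo relation is depth = focal_length * baseline / disparity; a generic sketch of that kind of depth-map computation looks like the following (plain OpenCV with placeholder numbers, obviously not Tesla's actual pipeline).

    import cv2
    import numpy as np

    FOCAL_PX = 1000.0   # assumed focal length in pixels
    BASELINE_M = 0.3    # assumed distance between the two cameras, in metres

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching; StereoBM returns fixed-point disparities scaled by 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    with np.errstate(divide="ignore", invalid="ignore"):
        depth_m = FOCAL_PX * BASELINE_M / disparity   # depth = f * B / d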


Tesla’s website[0] states it’s monocular depth estimation. I haven’t heard of them doing any form of stereo.

[0] https://www.tesla.com/autopilotAI


Why should it matter how often it sees something? Or even if it's something the car has never seen before? All it should care about is whether there is an obstacle, not what the obstacle is. Whether it's an emergency vehicle, a sofa, a boulder, a canoe, a table saw, or a dolphin, you don't want to hit it!


That is, how often it's seen in the training data, which is pulled from data in the wild.

It’s simply not possible to do depth estimation like this without priors. That’s one of the serious limitations of such systems - you have to train on every class of object you don’t want to hit.


Then they are doing it wrong. There are all manner of things that can end up in the road that have never been (and will never be) classified. If their system must classify a thing to not hit the thing, then they will kill people. It's gross negligence to work so hard to not, at the very minimum, install two cameras for stereo vision.


100% agree. I think depth is critical and monocular estimation doesn’t cut it.


Another aspect of unpredictability is that drivers are expected to be alert and vigilant while using ADAS features, but I get the impression that Tesla's implementation sometimes does things that are completely unexpected. Sometimes you might have to react immediately to something you didn't see coming, because you didn't expect the car to suddenly try to steer into a concrete pillar or something.

It's one thing to have to deal with inexplicable behavior from other cars, but to have to deal with inexplicable behavior from your own car seems quite a bit more unnerving.


I think the problem we're seeing here is that Tesla's autopilot system is on the cusp of a fully automated driving experience and that feels good enough to the driver. Yet it's not quite good enough, as we can see from the mistakes it has made.

Honestly, I see this as a necessary transition pain towards fully automated vehicles. No matter how you slice it there's going to be periods where fully automated driving systems aren't quite there yet but are good enough 97% of the time that human drivers let their guard down. It's going to take some sacrifices to get to fully autonomous driving.

The good news is that even with these accidents, self-driving features are a bazillion times safer than human drivers. It sure seems like the occasional collision with a stationary object is going to throw a great big wrench into self-driving safety statistics, but it isn't even a rounding error compared to the sheer number of accidents caused by human drivers.


> to have to deal with inexplicable behavior from your own car seems quite a bit more unnerving.

And yet, tens of thousands of drivers are working as unpaid beta testers for Tesla. Mind-boggling.


> I feel like part of the problem with the kind of autopilot crashes you describe here is how inexplicable they are to humans.

I don't see why these are inexplicable to humans. It's certainly no more difficult to explain than, say, a (non-adaptive) cruise control in a car from 2000 doing the same thing.

> Whilst humans can be dangerous drivers, the incidents they cause generally have a narrative sequence of events that are comprehensible to us -- for instance, driver was distracted, or visibility was poor.

But that is arguably a sufficient explanation for these Tesla crashes as well. The driver being distracted or inattentive or unable to see clearly is a requirement for all of these Tesla crashes, as far as I know.


Perhaps 'unintuitive' is a better word to convey what I mean -- as in, there isn't an easily understandable (non-technical) narrative chain of events, there's just 'opaque box malfunctioned'. The cruise-control example you give feels a bit different, as CC doesn't claim to include automated collision avoidance, whereas something labelled 'autopilot' does.


It's perfectly explainable. You have a blind machine with an imperfect sensorium trying to describe an elephant. Correct identification is just getting lucky. The layers of ML improve the odds but can never achieve 100%. The whole scheme is playing dice with other people's safety.



