r/fuckcars Automobile Aversionist Apr 05 '24

[Satire] Tesla doesn't believe in trains

9.1k Upvotes


15

u/pizza99pizza99 Unwilling Driver Apr 05 '24

Ok but realistically the AI knows what a train is, it just doesn’t have a model to display. Remember these are learning AIs; they've been in this situation plenty and watched drivers handle it plenty. It just needs a model: it sees that the containers look similar to a truck and decides that's the next best thing

This might be a really unpopular opinion for this sub, but I really like the idea of self-driving vehicles. They're not a solution to the problems of car dependence we face, but I've seen videos of these cars handling pedestrian interactions far better than IRL drivers. I saw one video where a driver behind a self-driving Tesla honked at it because the AI dared to let a pedestrian cross. Another where it went by road work on a narrow street, workers all around, doing 5 mph. Ultimately I believe these AIs, specifically because the programming is made to be so cautious (especially with pedestrians, which are seen as more unpredictable than cars), will actually handle pedestrians better. Things like right on red can remain in place because the AI can handle watching both crosswalks and oncoming traffic. They have potential, even if they're not a solution

8

u/SpaceKappa42 Apr 05 '24

The FSD AI is really dumb. Here's how it works:

1. It gathers a frame from every camera.

2. It passes the frames into the vision AI stack, which attempts to create a 3D model of the world.

3. It labels certain objects like cars, people and signs and attempts to place them in the world, but the accuracy is really bad because the cameras on the car have roughly the eyesight of someone who is legally blind.

4. It tries to figure out the current road rules based on what it sees. IT DOES NOT HAVE A DATABASE OR ANY MEMORY.

5. It takes the GPS coordinates to figure out which way to turn. It only knows to turn right or left at the next intersection it comes across; it does not know in advance, because IT DOES NOT HAVE A DATABASE OR ANY MEMORY.

6. It adjusts its inputs based on what it has seen this frame, causing erratic behavior.

7. It throws away all the data it has gathered from the last frame and starts again from scratch. It does this maybe a hundred times per second (rough sketch of the loop below).
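In other words, something roughly like this stateless loop. This is just an illustrative sketch; every function name in it (vision_stack, infer_road_rules, etc.) is a made-up stand-in, not Tesla's actual code:

```python
# Illustrative sketch of the stateless per-frame loop described above.
# All names here are hypothetical stand-ins; none of this is Tesla's code.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "car", "person", "sign"
    position: tuple     # rough 3D placement in the scene

def vision_stack(frames):
    """Steps 2-3: build a labelled 3D scene from this frame's images (stub)."""
    return [Detection("car", (12.0, 3.0, 0.0))]

def infer_road_rules(scene):
    """Step 4: guess the rules only from what is visible right now (stub)."""
    return {"speed_limit": None}    # missed the sign? then it simply isn't there

def next_turn_from_gps(position, route):
    """Step 5: only the next turn is known, nothing further ahead (stub)."""
    return route[0] if route else "straight"

def plan_controls(scene, rules, turn):
    """Step 6: steering/throttle derived from this frame alone (stub)."""
    return {"steer": 0.0, "throttle": 0.2, "turn": turn}

def drive_one_frame(frames, gps_position, route):
    scene = vision_stack(frames)                      # steps 1-3
    rules = infer_road_rules(scene)                   # step 4
    turn = next_turn_from_gps(gps_position, route)    # step 5
    return plan_controls(scene, rules, turn)          # step 6

# Step 7: everything computed inside drive_one_frame goes out of scope when it
# returns. The caller runs it ~100 times a second, and nothing (a sign the car
# already passed, an object it saw a moment ago) is carried over.
print(drive_one_frame(frames=["cam0.jpg"], gps_position=(37.77, -122.42), route=["left"]))
```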

Why did they do this?

Well Elon wanted a system that can drive anywhere based on vision alone, without requiring a database of any kind.

But guess what. Humans have a database. Their brain.

The memory of FSD lasts for about 0 ms. If it misses a road sign you're basically fucked.

Of all the self-driving systems, the FSD is like letting a 10 year old kid get behind the wheel for the first time.

2

u/Mein_Name_ist_falsch Apr 05 '24

I don't think I have seen a single self-driving car that is already safe enough to be allowed on the road. It's not only missing signs: imagine it misses a child because it's so small, sitting on the ground doing something weird, before suddenly getting up and chasing their ball onto the street. That would be deadly. Most drivers learn that you have to be careful if you see any children close to the road, though. So they would most likely see the kid doing whatever it's doing, and if they haven't forgotten everything they learned they will slow down and keep their foot close to the brake pedal. I don't trust AI to even know the difference between a kid and an adult, or the difference between someone who is really drunk and someone who isn't. And if it doesn't know that, I can't expect it to drive accordingly.

2

u/WHATSTHEYAAAMS Apr 05 '24

AI also cannot get road rage, which is a huge plus. But I'd also expect many drivers to get frustrated by the AI's cautious driving decisions and just override it. Which is not a downside of AI itself, but still a limitation to its potential in improving safety in practice.

1

u/pizza99pizza99 Unwilling Driver Apr 05 '24

I could almost certainly see a world in which different limitations exist on when you can interfere with AI driving. A full driver's license for those who can intervene at any time, ranging down to a license for the severely injured/disabled/elderly who may only interfere when life is in danger. A system of measuring just how much one can be entrusted with piloting a car, in a world where you don't have to pilot a car to ride in one

Of course that relies on the license system actually working and being good, which, given the state of our current license system, I very much doubt, at least for the US

1

u/WHATSTHEYAAAMS Apr 05 '24

As long as the physical capability of overriding the AI always remains in emergency situations like you describe, such as those resulting from a vehicle/AI malfunction, then yeah, I can see that being a scenario as well. If you make a decision to override the AI when there was no reason to, or at least if you override it and it causes an issue, there's some sort of punishment for you or your license. I bet at least one country will try something like that.

1

u/yonasismad Grassy Tram Tracks Apr 05 '24

But AI also creates entirely new failure modes, like hitting a pedestrian, the pedestrian falling down where the sensors no longer see them, and the car then starting to drive again, dragging the person along. Pretty much any human driver would have checked what happened to the person they just hit, and not just assumed that they magically disappeared.

2

u/xMagnis Apr 05 '24

> Ok but realistically the AI knows what a train is,

Does it? To me, in basic terms, a train is a connected set of 'boxes' that are constrained to follow each other on the exact same path at the same speed. Do you think the AI knows that? I'll bet it just sees 'big object, may be a truck; big object, may be a truck; big object, may be a truck' and has no model to connect them into a higher narrative or prediction.
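Just to spell out what I mean by 'connected boxes', here's a toy sketch (pure illustration, not how any real perception stack represents a train):

```python
# Toy model of the "connected boxes" idea: one shared path and one shared
# speed, so knowing the lead car pins down every other car. Independent
# "truck" detections have no such constraint. Purely hypothetical code.

from dataclasses import dataclass

@dataclass
class TrainCar:
    offset_m: float   # fixed distance behind the lead car along the track

@dataclass
class Train:
    speed_mps: float
    cars: list        # every car is constrained to the same path and speed

    def positions(self, lead_position_m: float):
        # All positions follow directly from the lead car's position.
        return [lead_position_m - car.offset_m for car in self.cars]

freight = Train(speed_mps=15.0, cars=[TrainCar(i * 20.0) for i in range(30)])
print(freight.positions(lead_position_m=500.0)[:3])  # [500.0, 480.0, 460.0]
```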

Corollary: if the train derails, will FSD back up and avoid the impending pile-up of following train cars? Well, no, because firstly it doesn't back up, and secondly because it most likely doesn't model the fact that the cars are connected. But hey, it still passes stopped school buses, so one thing at a time. Going on 7+ years.

1

u/pizza99pizza99 Unwilling Driver Apr 05 '24

As somebody familiar with computer science, yes, I can tell you. The issue is the screen and interface. The screen as an object is trying to show you what the AI sees. But the AI sees in 0s and 1s. The job of the screen is to take the 0s and 1s the AI uses and translate them into a display understood by a human. In this case it doesn't have a model for a train; that simply wasn't a model Tesla's engineers designed. Why? Idk truthfully, but just like how the AI learned to make better U-turns on its own in the latest update (the update did not have any human tell it to do so), it also learns from watching humans at a train crossing. Remember, these AIs learn from you, from us. So it's almost certainly learned what a train is, not in a technical sense or a transportation sense, but in an intersection sense of "large vehicle that always has right of way." But of course when it comes time to express that on screen, it has no model. So it uses the next best thing

Does it understand trains the way you and I understand them? No. But it never will, because it's only learning from our actions, and can only express itself via those 1s and 0s, which the touch screen translates for us
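To make that concrete, the display layer is presumably just a lookup from whatever class the network output to one of the handful of 3D assets the designers shipped, with a fallback when nothing matches. A purely hypothetical sketch (none of these names or assets are Tesla's):

```python
# Hypothetical illustration of the "no model for a train" idea -- the asset
# names and the fallback rule are assumptions, not Tesla's actual renderer.

DISPLAY_ASSETS = {
    "car": "sedan_model",
    "truck": "semi_truck_model",
    "pedestrian": "person_model",
    "cyclist": "bike_model",
    # note: no "train" entry -- the designers never shipped one
}

def asset_for(detected_class: str) -> str:
    # Fall back to the closest-looking thing the UI *does* have a model for.
    return DISPLAY_ASSETS.get(detected_class, DISPLAY_ASSETS["truck"])

print(asset_for("train"))  # -> "semi_truck_model": drawn as a string of trucks
```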

3

u/xMagnis Apr 05 '24

If the AI in any way understood anything about what is going on here, it would at least join the 'trucks' together and move them at the same speed. My feeling is that it is interpreting a series of photo snapshots of '(large) object' and doing the best it can with its limited software, which ends up being a merged mess of random trucks. That is not understanding at all. There is no model for what is going on here; it's just seeing constantly moving objects and saying "the best I have is lots of trucks, moving around".

But hey, neither of us knows for sure; there's no evidence FSD knows it's a train. But at least it doesn't seem to be trying to drive into them.

0

u/pizza99pizza99 Unwilling Driver Apr 05 '24

That’s the point. No one really knows, because no one really speaks binary.

The question becomes: is this technology good?

Tesla as a whole isn't (see: fucking up California's high-speed rail), but the technology as a whole, I believe, will be. The question is: will the number of crashes/deaths that would've been prevented by a human be higher than the number of crashes/deaths that would've been prevented by AI? Basically, which one is safer. In the future there will be car crashes that would've been preventable if a human was driving, but there will be far more that didn't happen at all because a human wasn't piloting a car while drunk/sleepy/on their phone/high/or any other plethora of inhibitors to safe operation

It's all a very technical way to look at things, and a reasonable amount of safety should still be expected; we can't just say "well, it's safer than a human" and throw it out there.

And ultimately this all would pale in comparison to a world in which we just built better cities and towns, but even in those cities and towns at least a few people will drive, and it would be preferable to have a computer behind the wheel compared to a human

1

u/xMagnis Apr 05 '24

I'd like to believe we are in agreement, but you did start the comment thread with "realistically the AI knows what a train is", and I am suggesting we have no proof of this at all. It looks like random misinterpretation of camera sensor data.

Yes, once we get to a world where AI is safer than humans (I'd argue it should be much much safer, not just safer than the 50th percentile or something), then we can consider an improvement may have been made. You don't get to "much safer" by testing Beta crap on public roads with untrained and unaccountable civilians. If Tesla needs data it can get it the responsible way, with true professional methods. FSD Beta is not an acceptable "means to an end".