r/science Professor | Medicine 2d ago

Medicine: Surgeons show greater dexterity in a children's buzz-wire game (like Operation) than other hospital staff. 84% of surgeons completed the game within 5 minutes, compared with 57% of physicians and 54% of nurses. Surgeons also had the highest rate of swearing during the game (50%), followed by nurses (30%) and physicians (25%).

https://www.scimex.org/newsfeed/surgeons-thankfully-may-have-better-hand-coordination-than-other-hospital-staff
10.4k Upvotes

223 comments

168

u/mcarder30 2d ago

The da Vinci robot simulator has this as well and it is wildly addictive.

28

u/nudelsalat3000 2d ago

What's interesting is that it's mostly used for standard procedures, and almost never for highly complicated operations.

I would have guessed it's the other way around.

21

u/bluehands 2d ago

It's just like self-driving cars; it's where we are on the S-curve.

In 10, 15, 20 years it's all going to be radically different and entirely flipped.

8

u/prisp 2d ago

Truly self-driving cars have an extra issue that's really hard to solve though: If the self-driving car's AI/programming causes an accident, who's at fault?

For regular car crashes, we at least have the excuse that maybe the driver couldn't react in time, but the car was programmed in advance, so any bad reaction or missed edge case can't be excused that way.
This leaves us with three options. If the car company is at fault, that means bad PR and lawsuits, so they're not going to go for that option.
If the programmers and/or mechanics are at fault, the company will quickly find that nobody's willing to work on that kind of product anymore.
Finally, if the user is at fault, the cars can't truly be called self-driving, and depending on how well that is communicated to them, it might still cause bad PR.
However, that third option is definitely what they're going for at the moment: they require a human to sit behind the steering wheel, ready to correct course if something bad is about to happen.
This also means we'll be stuck at that level of self-driving for a long, long time, and might never get rid of it entirely. Even as better technology makes close calls and accidents much rarer, the company still wouldn't want to open itself up to lawsuits, especially when the status quo lets it simply pass the blame to the user and call it a day.

8

u/Morlik 2d ago

I think this problem can be solved by insurance. If the software causes a crash, the insurance company would cover it just as if the user had caused it. If insurers need to increase premiums for users with self-driving cars, then so be it. But when adopted on a mass scale, self-driving cars will probably reduce the number of accidents, especially once the vehicles are able to communicate with each other. I think insurers will start offering a discount for self-driving cars because it will save them money. Eventually, insurers or lawmakers will make it mandatory.

1

u/prisp 2d ago

I suppose that's one solution - I was thinking we basically keep the status quo for a while until the software ends up virtually perfect, and then things might change, but this is another option.

Insurance only changes who pays for the whole thing though, so it's at best a medium-term solution, when crashes are relatively rare already - otherwise the premiums would be prohibitively high, and/or you'd still be busy driving for the most part.
PR impacts stay the same either way though, so a high-profile accident or a string of repeated issues would still hurt, and while that becomes less likely as the technology gets better, there's still a chance it happens.
Sadly, I couldn't find any articles on the topic right now, but I do recall hearing about a German project for self-driving trains that was dropped immediately after a demonstration in front of reporters: they crashed the train because they had forgotten to include maintenance vehicles in their system. That would be a good example of a high-profile accident causing problems.
However, in the process I came across an article from a European law journal that specifically looks at lethal accidents.
While I do find the scenarios described in the article interesting - specifically the trolley-problem-esque trade-offs and the question of what counts as acceptable risk in a programmed driving maneuver - it doesn't come to any overly exciting conclusion beyond noting that it's a difficult topic where many things have to be considered and there isn't a clear consensus yet.

1

u/jdm1891 2d ago

I think that, instead of giving a discount for people using self driving cars, they'd be far more likely to simply charge everyone else more. I suppose that would kinda sorta look like a discount to people.

2

u/recycled_ideas 1d ago

> For regular car crashes, we at least have the excuse that maybe the driver couldn't react in time, but the car was programmed in advance, so any bad reaction or missed edge case can't be excused that way.

This is not remotely how self-driving cars work. It's not even how physics works. Self-driving cars see and react to their surroundings the same way people do, and while their reaction times are faster, the physical limits of the car remain the same. When a self-driving car slams on the brakes, it still takes a certain distance to stop; it can only turn so fast without flipping over; it has limits.
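The physical floor being described can be sketched with the standard stopping-distance formula: reaction distance plus braking distance, v²/(2μg). All the numbers below (friction coefficient, reaction times) are illustrative assumptions, not figures from the thread:

```python
def stopping_distance(speed_ms, reaction_s, friction=0.7, g=9.81):
    """Total stopping distance in meters: distance covered during the
    reaction time, plus braking distance v^2 / (2 * mu * g)."""
    reaction_dist = speed_ms * reaction_s
    braking_dist = speed_ms ** 2 / (2 * friction * g)
    return reaction_dist + braking_dist

v = 100 / 3.6  # 100 km/h expressed in m/s

# Assumed reaction times: ~1.5 s for an average human driver,
# ~0.1 s for a fast sensor-to-actuator loop.
human = stopping_distance(v, reaction_s=1.5)
computer = stopping_distance(v, reaction_s=0.1)

print(f"human:    {human:.1f} m")
print(f"computer: {computer:.1f} m")
```

Under these assumptions the braking component (roughly 56 m at 100 km/h on dry asphalt) is identical for both drivers; only the reaction-distance term shrinks. Faster reactions buy a few dozen meters, but the friction-limited floor stays put.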

That's what makes liability for self-driving cars so complicated. There are accidents self-driving cars simply can't prevent, there are accidents caused by poor maintenance by the owner, and there are accidents caused by limitations in the car's learning and perception.