You are viewing a single comment's thread from:

RE: The Moral Dilemmas of Self Driving Cars

in #technology · 7 years ago

This could be fixed by asking the driver delicate questions first and coding the answers into the car.

Questions like: if X happens, does the car do A, B, or C? That can easily be coded into the car.

The same goes for situations: if situation X presents itself, then the car must do A, B, or C.

The car wouldn't have to decide for itself; it would only need to apply the decision previously coded into it.
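To make the idea concrete, here is a minimal sketch in Python of how the driver's answers could be stored as a simple lookup table that the car consults instead of deciding on its own. The scenario and action names are hypothetical, not taken from any real system:

```python
from enum import Enum


class Action(Enum):
    BRAKE_HARD = "brake hard"
    SWERVE_LEFT = "swerve left"
    SWERVE_RIGHT = "swerve right"


# Answers collected from the driver before the trip (hypothetical examples).
driver_preferences = {
    "pedestrian_ahead": Action.BRAKE_HARD,
    "oncoming_vehicle": Action.SWERVE_RIGHT,
    "obstacle_on_road": Action.SWERVE_LEFT,
}


def decide(scenario: str) -> Action:
    """Return the action the driver pre-selected for this scenario."""
    return driver_preferences[scenario]


print(decide("pedestrian_ahead"))  # Action.BRAKE_HARD
```

In this sketch the car performs no moral reasoning at run time; it only replays whatever choice the driver made when answering the questionnaire.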


This is another good idea; the car could then behave accordingly. But I think most people will choose self-preservation, even if it means taking more lives to save their own.

Not to mention, the car cannot possibly generate questions and answers for every possible situation; it has no way of predicting every single outcome. It could ask you whether to hit the people, the child, or the other car, but what if there is instead water on either side? Does it then drive off the road? What if it's a cliff? What if the road has no obstacles but a tree falls onto it? What if a strong gust of wind knocks the car off course enough to require either going into the ditch or giving the driver whiplash? And so on. This is something neither people nor computers can code in, because there is an infinite number of possibilities.
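A small, self-contained sketch of that problem (again with hypothetical names): any situation the driver was never asked about falls through the pre-coded table, so the car still needs a default rule, and choosing that default is exactly the moral decision the questionnaire was supposed to settle:

```python
# Hypothetical pre-coded answers; scenarios the driver was never asked about
# are simply missing from the table.
driver_preferences = {
    "pedestrian_ahead": "brake hard",
    "oncoming_vehicle": "swerve right",
}


def decide(scenario: str) -> str:
    # "fallen_tree" or "water_on_both_sides" were never in the questionnaire,
    # so they all end up at whatever default the manufacturer picked.
    return driver_preferences.get(scenario, "brake hard")


print(decide("fallen_tree"))  # "brake hard": the default, not a driver choice
```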

This would make liability a lot more interesting. It would no longer be clear that an accident is the manufacturer's fault...

... and that is why I think manufacturers would consider it a great idea.