The Moral Dilemmas of Self-Driving Cars

in #technology · 7 years ago


It doesn't happen often that a new technology causes a paradigm shift throughout the world, but we are on the very cusp of such a shift. I'm talking, of course, about self-driving cars.

Although self-driving cars, or autonomous vehicles, are already a reality, and many people are familiar with the technology and may even have used it, they are still not mainstream; the technology itself needs a lot of work before it can replace your conventional car for good.

Companies like Tesla and Google have played an instrumental role in the development of such cars and have brought them from the realm of science fiction to reality much sooner than anyone expected. I was certainly surprised to learn that they were already on the roads this early in the 21st century.

Anyway, as with any revolutionary technology, there are a million questions attached to it. And for a technology that is supposed to make decisions on its own, and on our behalf, some really deep questions arise about things we consider uniquely human, like ethics and morals.

Moral Decisions Made By A Machine?


Each year, hundreds of thousands of people lose their lives in road accidents throughout the world. It is said that most of these accidents are due to human error. That is where self-driving cars step in.

Since autonomous vehicles will have computing power that can make much faster decisions than we can, and since future autonomous vehicles could be interconnected, we could see a 90% reduction in road accidents once the world has switched to self-driving cars.

However, accidents will still happen, because technology, no matter how cutting-edge, still fails from time to time. The problem is that here, lives are involved, along with split-second decisions that can seal someone's fate.

And for the first time, we will be relying on machines to make ethical and moral decisions for us. Can we really build them to the point where they can do that? How would a machine even decide?

Which Life To Spare?


Let's say you are in an autonomous car and a small child suddenly runs out in front of it. The car now has only two options: steer left or steer right, because braking alone would still mean colliding with the child.

Now, let's say that steering left kills pedestrians, while steering right makes the car collide with other cars, killing the passengers. What does the car do?

Many experts have suggested that an autonomous car should make whichever decision costs the fewest lives. While that sounds perfectly logical, would people want to ride in a car that could sacrifice them to save others, when in reality human drivers do the exact opposite and try to save themselves first?

Moreover, would it be okay to let a child be killed to save an adult's life? We are reaching the depths of human thought and feeling, and maybe we should question our own morality before encoding anything into the machines.
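To see what the "fewest lives lost" rule might look like in code, here is a minimal Python sketch. The maneuver names and casualty estimates are hypothetical, invented for this scenario; a real vehicle would have to estimate such numbers from noisy sensor data in a split second.

```python
# Hypothetical sketch of the "fewest lives lost" rule.
# Maneuvers and casualty estimates are made up for illustration.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected casualties."""
    return min(options, key=lambda o: o["expected_casualties"])

options = [
    {"name": "brake_straight", "expected_casualties": 1},  # still hits the child
    {"name": "steer_left",     "expected_casualties": 3},  # hits the pedestrians
    {"name": "steer_right",    "expected_casualties": 2},  # hits the other car
]

print(choose_maneuver(options)["name"])  # -> brake_straight
```

Notice how mechanical the "logical" rule is: it hits the child, because one casualty is fewer than two or three. That this answer feels deeply wrong to many people is the whole dilemma.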

Comments

In my opinion the answer is very clear: the car should make the decision where the fewest possible lives are lost or harmed.

[animated gif]

That would be the most logical, yes.

The gif is way too cute and entertaining. Your opinion could be to kill them all and I still wouldn't be able to resist the urge to upvote!

I don't think any AI coder will ever have to program moral choices into an autonomous vehicle. They will only implement efficient mechanical responses based on the limits of the car's brakes and steering. When the car 'sees' a hazard, it will hit the brakes and steer away to avoid hitting anything. If it still plows into a person or a barrier, then it was beyond the laws of physics to avoid the accident. So it basically takes the same action a human driver would, just with lower decision latency.
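For what it's worth, that purely mechanical policy is easy to sketch. Here is a toy Python version, with made-up numbers and conventions (negative angles meaning "left"), that simply brakes and steers toward the direction with the most room, no moral reasoning involved:

```python
# Toy sketch of a purely mechanical hazard response: brake hard and
# steer toward the clearest path. All numbers and conventions are made up.

def respond_to_hazard(clearances):
    """clearances: steering angle (radians) -> metres to the nearest obstacle.
    Returns (brake, steering_angle)."""
    clearest = max(clearances, key=clearances.get)  # direction with most room
    return True, clearest

# Obstacle 5 m ahead, 12 m of room to the left, 2 m to the right.
brake, angle = respond_to_hazard({0.0: 5.0, -0.3: 12.0, 0.3: 2.0})
print(brake, angle)  # True -0.3  (brake and steer left)
```

Whether the car then clears the hazard is, as the comment says, down to physics rather than ethics.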

For me, the most interesting question is who gets the liability: will the carmakers be responsible, or will the car owner or passengers take the blame?
I can't wait for these to become more commonplace, because I feel like even the worst self-driving car is far better than a human operator who is texting behind the wheel.

Yes, the liability part is a whole different set of questions that will be asked soon, and regulations will need to be clear from the get-go.

The only thing that makes sense to me is to hold the manufacturer of the self-driving system liable in at-fault accidents. With that tremendous liability, it will be incumbent upon manufacturers to make their systems as close to accident-proof as possible, because otherwise they will be sued into oblivion.

The main problem is that the car HAS to make a decision. It's not possible to make no decision, because not making a decision is still a decision to do nothing. So we all have to face the problem of cars deciding about human lives.

Exactly!! It's the rare but certain-to-happen situations where the car WILL HAVE TO decide which lives to save.

No decision is still a decision in this case.

I recently was in a self-driving car in Las Vegas and it was the most helpless feeling. Not for me... at all!

Really? It must have been awesome!

It was but it was creepy at the same time. :)

Was it some kind of novelty ride or were you actually going somewhere?

Interesting, thought-provoking topic! Especially the part about a machine being forced to choose between two bad outcomes. I'm not sure human choice and reaction could be replaced or replicated in that instance.

Yeah, the intricacies of human decision-making might not be replicable, and we might have to make do with coded decision-making.

Yes, self-driving cars will have these problems, and it is not an easy task to decide whom to kill (crash into) and whom to spare. As you said, some smart people are trying to figure out a solution for this.
If you want to try it yourself, I highly recommend http://moralmachine.mit.edu/ , where you decide for yourself who is killed and who is spared, and I can tell you: it is really difficult to do.

These questions certainly are very difficult. That is why so much brainpower is going into trying to find the right answers. (If there even are right answers for these.)

This reminds me of the famous Trolley Problem, once a thought experiment, now a reality.
There is still much debate around this topic.
In any case, the use of self-driving cars will inevitably reduce the number of incidents on the road.
That's already a "win".

Oh yes, the 90% reduction in road accidents that experts are estimating is already a huge win. The ongoing debate relates to the rare situations where the car will have to choose which life to spare. But yes, overall, self-driving cars will save a lot of lives.

That is a lot of questions to think about that might not let me sleep tonight :P

LOL yeah, I think it is especially because these relate to the core of humanity.

We can replicate it; these are just rules. What you wrote is a set of rules ingrained in us. We just need to ingrain them in the machines and hope they don't learn to break them as we do 🤔🤔

True! And as AI advances, it will get better at handling such life-and-death decisions.

This is a very important issue. Researchers are working on artificial intelligence to figure out which is the best moral choice and whether a machine can learn it.

It is indeed very important. More so because in the future, almost everyone is going to be affected by it.

> Each year, hundreds of thousands of people lose their lives in road accidents throughout the world. It is said that most of these accidents are due to human error.

And that's a fact. I was a truck driver for almost 4 million miles. People turn off their brains when they turn on their cars.

there IS no question.
If robot cars will save lives,
DO IT.

end of story.

4 million miles? That's a lot! You must have seen everything that happens on the road. I agree, people surely turn off their brains lol!

And yes, there is no question about it. Replacing the human driver with a robot car will save lives, by as much as 90% if the estimates are to be believed.

That's an easy solution! God dang, the things you guys cloud your minds with. All you do is equip each car with an airbag on the bottom of the car. When such a decision comes, the airbag deploys, sending the car soaring over the kid and the pedestrians. Done... just saved both their lives.

It's easier said than done. Do you know how much thrust you would need to send a car soaring over people's heads? I can't tell if you are joking or not.

HAHAHAHA come on man, of course I'm joking! You crazy? But it might be crazy enough to just work. Yeah man, a Tesla Model S is like freaking over 5,000 lbs curb weight; you would need some massive thrust to jump that thing.

This could be fixed by asking the driver some delicate questions first and coding the answers into the car.

Questions like: if X happens, should the car do A, B, or C? That can easily be coded into the car.

The same goes for whole situations: if situation X presents itself, the car must do A, B, or C.

The car wouldn't have to decide for itself; it would only need to carry out the decision previously coded into it. (A rough sketch of this idea follows below.)
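As a toy illustration of this idea (the situation names and answers are hypothetical, purely for the sketch), the pre-coded answers could be nothing more than a lookup table:

```python
# Hypothetical sketch: the driver's answers to the "delicate questions"
# stored as a simple lookup table, consulted at the moment of crisis.

driver_preferences = {
    "unavoidable_pedestrian_collision": "protect_pedestrians",
    "unavoidable_vehicle_collision":    "protect_occupants",
}

def decide(situation):
    # No reasoning at run time: just look up the pre-coded answer.
    return driver_preferences.get(situation, "emergency_brake")

print(decide("unavoidable_pedestrian_collision"))  # protect_pedestrians
print(decide("fallen_tree_on_road"))               # emergency_brake (not in the table)
```

The fallback default also hints at the limit of this approach: any situation not in the table gets a generic answer, which is exactly the objection raised in the reply below.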

This is another good idea. The car could then behave accordingly. But I think most people will choose self-preservation, even if it means taking more lives to save their own.

Not to mention, the car cannot possibly generate questions and answers for every possible situation. It has no way of predicting every single possible outcome. It could ask you whether to hit the people, the child, or the car, but what if there is water on either side instead: does it then drive off the road? What if it's a cliff? What if the road has no obstacles, but a tree falls onto it? What if a strong wind knocks the car off course enough to force a choice between going into the ditch and giving the driver whiplash? And so on. This is something neither people nor computers can code in, because there is an infinite number of possibilities.

This would make liability a lot more interesting. It would no longer be clear that an accident is the manufacturer's fault...

... and that is why I think that manufacturers would think it's a great idea.

This problem reminds me of that Black Mirror episode where a dating app simulates many dating situations in order to test two people's relationship viability. I'm imagining an AI that simulates the adult's and the child's lives a couple thousand times each in order to decide which life has more value. We'd still have to establish the value-giving parameters, though...

I haven't watched that particular episode, so I can't place the reference. But yeah, this sure could get complex, determining the value of people's lives like that.

I don't think the question is whether it will kill the many or the few, since these cars are designed by corporations with a vested interest in avoiding legal casualties. The real question is: is it the car's fault, the child's fault, or the company's fault? Who pays for the accident?

Yeah, many more questions to ask.

It is said that every human being has a specialty that takes a long time to recognize, but every line you write shows your importance. Sir, you are a very good writer. I will read your every post very carefully.

Thank you so much for the kind words :)

You cannot answer these questions with right or wrong. The difference with a self-driving car is that you have to think about these things before they happen, not in the instant of an accident.

That may be right. There might be no right and wrong in these cases.

AI will rely on big data about what people do in similar situations and do the same. After many years and millions of miles of driving, the amount of data these cars will collect about human driving decisions will be monumental. Every situation they encounter will resemble something from the past. We are not talking about rules: AI follows a regression model of great complexity. It can also do more than what people have done: it can collect data about its own past actions and pick out what was successful. So if, in similar past situations, it sometimes minimized deaths and sometimes made less successful decisions, the AI will choose the most successful action. This is not a rule; it is just a global constraint.
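As a toy illustration of that last point, here is a hypothetical Python sketch that scores each past action by its average outcome in similar situations and picks the best one. A real system would be a complex learned model over millions of logged miles, not a four-row list, but the principle is the same:

```python
# Hypothetical sketch: choose the action that was most successful
# (lowest average casualties) in similar past situations.

from collections import defaultdict

# Made-up log of (situation, action, casualties) from past incidents.
log = [
    ("child_runs_out", "brake_straight", 1),
    ("child_runs_out", "steer_left",     3),
    ("child_runs_out", "brake_straight", 0),
    ("child_runs_out", "steer_left",     2),
]

def best_action(situation):
    totals = defaultdict(lambda: [0.0, 0])  # action -> [casualty sum, count]
    for s, action, casualties in log:
        if s == situation:
            totals[action][0] += casualties
            totals[action][1] += 1
    # Lowest average casualties in the past wins.
    return min(totals, key=lambda a: totals[a][0] / totals[a][1])

print(best_action("child_runs_out"))  # -> brake_straight (average 0.5 vs 2.5)
```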