You are viewing a single comment's thread from:

RE: AI is stupid: Humans Judgment Matters

in #steemit · 7 years ago (edited)

I think he is entitled to his opinion (like everyone else). Every time technology threatens to push the boundary of our moral and ethical understanding into territory it is unfamiliar with or unable to deal with (nuclear weapons, GM crops, genome sequencing, etc.), there are opposing forces trying to work out where to draw the line. Yet I have never seen any technology that is purely good or evil, because it is still just an extension of human thought and behaviour. Science fiction writers have been exploring these same themes for a long time, and I am sure humanity will find ways to create catastrophe using AI and then try to figure out some way to save the world, if only so we can have more science fiction movies and books to enjoy :D

I'm not sure how much regulations and laws have prevented people who want to do something very foolish from doing exactly that. But they have ensured that people who want to manipulate the law know exactly how to plan their legal arguments.

I guess the answer to your question is that he is both right and wrong (is that in between or not?).


I think the difference people point to with AI is that our previous tools lacked agency, whereas AI will either become sophisticated enough to have agency of its own or, at the very least, we will be surrendering more of our human agency to the AIs.

Here is a recent article I read that discusses various algorithms for autonomous vehicles, but I fail to see how they can be applied consistently when people have different values:
https://www.fastcodesign.com/90149351/this-game-forces-you-to-decide-one-of-the-trickiest-ethical-dilemmas-in-tech
If we have trouble programming cars when there are traffic rules and insurance policies, how far would we get with other applications of AI?
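
To make the "different values" point concrete, here is a minimal toy sketch (purely hypothetical, not taken from the article or the game it describes): the same crash scenario scored under two made-up value weightings produces opposite decisions, so there is no single behaviour a programmer could pick that matches everyone's ethics. All option names, profiles, and numbers below are invented for illustration.

```python
# Toy illustration: the same dilemma ranked under two hypothetical
# "ethics profiles" with different weights per group at risk.

# Each option is described by rough counts of who is put at risk.
options = {
    "swerve_into_cyclist": {"pedestrians": 0, "cyclists": 1, "passengers": 0},
    "brake_and_hit_van":   {"pedestrians": 0, "cyclists": 0, "passengers": 2},
}

# Two hypothetical value weightings (higher weight = harm counts for more).
profiles = {
    "protect_passengers_first": {"pedestrians": 5,  "cyclists": 5,  "passengers": 10},
    "protect_vulnerable_first": {"pedestrians": 10, "cyclists": 10, "passengers": 3},
}

def harm_score(option, weights):
    """Lower total weighted harm is 'better' under that profile's values."""
    return sum(weights[group] * count for group, count in option.items())

for name, weights in profiles.items():
    choice = min(options, key=lambda o: harm_score(options[o], weights))
    print(f"{name}: choose {choice}")

# Output: the two profiles pick different options for the identical scenario,
# which is exactly the consistency problem with "just program the values in".
```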

That only deals with one AI. What if there's an AI in the van and it acts differently? There's a very real chance they both mow down the cyclist, collide and damage both vehicles, cause the miscarriage, and injure the men. Dumb AI.

I guess that's my point. If we can't get the simpler problems right, what are the chances we will get the more complex problems right?