Morality is subjective.
It's simple, there are no moral absolutes in nature.
How can something under another person's control teach another about morality?
Easy: suppose you have a pet, and it is under your control. You train it, feed it, and care for it. Are you saying you can learn nothing about morality from these interactions? At the most basic level, you can learn how to care for another.
But will this tell us absolute right and wrong? Maybe not. AI doesn't answer every question. What AI does is, for lack of a better term, number crunching. It does the heavy lifting that our brains cannot handle. It can augment our ability to follow the rules we tell it we want to adhere to, help us avoid contradicting ourselves, help us become disciplined, and, most importantly, process big data.
For example, I don't know much about you or the morals of your country. Suppose I want to be perceived as a moral person there. Then I need a data set on moral attitudes toward different topics in your country. AI can be helpful because it can interpret and analyze that data, so that I can understand that a large percentage of women in your country feel a certain way about a certain issue, for a certain reason.
As a very limited human, I cannot know, for instance, how women in your country might react to a decision I make. AI could analyze data and offer advice based on how women in your country are expected to react to certain decisions. Some of this might seem simple, but it is largely a matter of number crunching.
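The "number crunching" described here can be sketched as a simple aggregation: given survey responses tagged by demographic group, compute the share of each group that agrees on each topic. This is a minimal illustration with made-up toy data; the group and topic names are hypothetical, not real survey results.

```python
from collections import defaultdict

def attitude_breakdown(responses):
    """Given responses as (group, topic, agrees) tuples, return the
    fraction of each group that agrees on each topic."""
    counts = defaultdict(lambda: [0, 0])  # (group, topic) -> [agree, total]
    for group, topic, agrees in responses:
        bucket = counts[(group, topic)]
        if agrees:
            bucket[0] += 1
        bucket[1] += 1
    return {key: agree / total for key, (agree, total) in counts.items()}

# Toy survey: hypothetical data for illustration only.
survey = [
    ("women", "issue_x", True),
    ("women", "issue_x", True),
    ("women", "issue_x", False),
    ("men", "issue_x", True),
    ("men", "issue_x", False),
]
shares = attitude_breakdown(survey)
# shares[("women", "issue_x")] is 2/3; shares[("men", "issue_x")] is 1/2
```

Real systems would of course work at far larger scale and with messier data, but the principle is the same: tally and summarize, so a person can see patterns they could never track by hand.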
To put it in another context: if we are marketing a product, we will want to know how potential customers are responding to the changes we make. Staying in this symbiosis, in the customer's good graces, is a matter of analyzing customer sentiment.
On a certain level, morality is similar. We do not know how people will react to what we do or say. In the future we will not be able to afford trial and error, because saying the wrong thing could mean being blackballed, censored, or demonetized for life. To avoid these exceptionally harsh consequences, we must rely on analysis of current sentiment, trends, and feelings: what the people of 2018 perceive as moral.
Jeez.
You're right.
Doesn't make it any less scary though.
The change is here.