An article asks whether AI can be trained to detect morality. Specifically, the article is titled: "We can train AI to identify good and evil, and then use it to teach us morality". The immediate problem with this article is apparent in the title. The concepts of good and evil are subjective, yet the article talks about morality as if there were some objective morality which everyone would agree on and which an AI could somehow be trained to discover.
Can AI make the world more moral?
When it comes to tackling the complex questions of humanity and morality, can AI make the world more moral?
This question, I think, is more appropriate than the one in the title. I absolutely think AI can make the world more moral. In fact I would go so far as to say the world cannot be moral, or even approach being moral, without AI (machine learning). The question is what kind of AI we are talking about, and another question is who will control this AI. The problem is we simply do not have an AI which can do this at, say, the level of Google. I do think we can develop a "moral search engine", and in fact I have an idea on how to do just that, which I'll reveal in future blog postings.
The article highlights the main problem with current technocratic approaches to AI morality:
There are many conversations around the importance of making AI moral or programming morality into AI. For example, how should a self-driving car handle the terrible choice between hitting two different people on the road? These are interesting questions, but they presuppose that we’ve agreed on a clear moral framework.
We simply do not have a universal framework for morality. On the self-driving car topic, my opinion is that we should allow the owner of the car to decide whether to prioritize the car's occupants or to make the utilitarian choice of sacrificing one to save many. This would put the moral question where it belongs: with the owner of the car rather than the manufacturer. To have car manufacturers override that choice would be to put the responsibility on the makers of the software, who for better or worse are no more enlightened about morality than anyone else.
Where do I finally reach a point of disagreement with the article writer?
Though some universal maxims exist in most modern cultures (don’t murder, don’t steal, don’t lie), there is no single “perfect” system of morality with which everyone agrees.
But AI could help us create one.
The article writer assumes there is an "us", a "we", without defining who these people are. Do we all believe murder, stealing, and so on are wrong? Apparently not, because war happens, and in war murder and theft are common. In addition, circumstances shape right and wrong. For example, if you're a mother and your children are starving, will you go and steal food? Or do you do "what is right" and let them starve so as not to violate the moral absolute against stealing?
It's simple: there are no moral absolutes in nature. So to have an AI try to create absolute fixed rules is a very naive approach which, in my opinion, is guaranteed to fail. I do think AI can help a person find the solution which is simultaneously best for their self-interest while minimizing harm to others, and that is why I call my approach to this problem a "moral search engine" rather than simply giving the AI examples and having it use some kind of neural net to generate solutions. I just don't think that kind of approach will work unless the AI can predict how humans will react to its solutions (public sentiment).
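To make the "moral search engine" idea slightly more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the scoring rule, the weights, and the placeholder estimators are stand-ins for whatever models such a system would actually need, not a real design.

```python
# Hypothetical sketch of a "moral search engine" query.
# All functions, fields, and weights are illustrative stand-ins.

def self_interest(action):
    """Placeholder: estimated benefit of the action to the asker (0..1)."""
    return action["benefit"]

def predicted_harm(action):
    """Placeholder: estimated harm the action causes others (0..1)."""
    return action["harm"]

def predicted_backlash(action):
    """Placeholder: predicted public moral outrage (0..1), e.g. from a
    sentiment model trained on the asker's stakeholders."""
    return action["backlash"]

def moral_search(actions, harm_weight=1.0, backlash_limit=0.5):
    """Rank candidate actions: maximize self-interest while minimizing
    harm, discarding options likely to cause public outrage."""
    viable = [a for a in actions if predicted_backlash(a) <= backlash_limit]
    return sorted(viable,
                  key=lambda a: self_interest(a) - harm_weight * predicted_harm(a),
                  reverse=True)

candidates = [
    {"name": "option A", "benefit": 0.9, "harm": 0.7, "backlash": 0.8},
    {"name": "option B", "benefit": 0.6, "harm": 0.1, "backlash": 0.2},
]
print([a["name"] for a in moral_search(candidates)])  # -> ['option B']
```

The point of the sketch is only that the query returns a ranking conditioned on predicted public sentiment, rather than a fixed rule.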
Morality has a public sentiment component
While personal decision making does not have to be concerned with the moral outrage of people around the world, because the decisions are small, there are also bigger decisions where you do have to be concerned with how people around the world will react. Human beings are notoriously bad at predicting the reactions or moral outrage of other human beings, because our brains can only manage around 150 relationships. This hard limit, known as Dunbar's number, is evidence that the human brain does not scale, and it is because of this limit (and others) that I make statements such as "human beings can never truly be moral". In short, without AI none of us has a hope in the world of being moral in a hyper-connected world.
What does a hyper-connected world mean?
A hyper-connected world is a world where you have to manage potentially thousands of relationships (beyond Dunbar's number). Facebook creates an illusion which allows people to believe they have thousands of "friends"; Twitter creates a similar illusion. The trend toward increasing transparency cannot produce more morality, because even if every person has 5,000 stakeholders watching their decisions, it is not possible for the person being watched to adapt to the opinions, feelings, morals, and norms of 5,000 people from all around the world who may have very different notions of right and wrong. To put it simply, the neocortex cannot handle the moral load which hyper-connectivity with transparency inevitably brings.
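One way to make the "does not scale" point concrete: if everyone in a group is expected to track everyone else, the number of pairwise relationships grows quadratically, as n(n-1)/2. A quick back-of-the-envelope calculation (illustrative only, assuming full connectivity):

```python
# Pairwise relationships in a fully connected group of n people: n*(n-1)/2
def pairwise_links(n):
    return n * (n - 1) // 2

print(pairwise_links(150))    # 11,175     -- roughly a small town at Dunbar's limit
print(pairwise_links(5_000))  # 12,497,500 -- the hyper-connected case
```

Whatever the exact cognitive limit turns out to be, a thousandfold jump in relationship load is not something the neocortex evolved to handle.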
The article connects morality and law
The article makes another mistake, in my opinion, by trying to connect morality and law. In my opinion law is amoral; what is or isn't a law has nothing to do with morality. It has nothing to do with current moral sentiment, since there are laws on the books which most people today view as immoral. And if the goal is to produce positive consequences for society, it has nothing to do with that either, because there are laws which produce negative consequences for society (such as mass incarceration, which led to fatherless households, which led to a poverty cycle).
Inherent in this theory is the presumption that the law is an extension of consistent moral principles, especially justice and fairness. By extension, Judge Hercules has the ability to apply a consistent morality of justice and fairness to any question before him. In other words, Hercules has the perfect moral compass.
While I agree with the idea that AI can be part of creating a perfect moral compass, I do not think AI alone can do it. Nor do I think any moral compass can ever be considered perfect or "optimal". It can produce a better moral compass for the vast majority of people on earth, though. To achieve this, in my opinion, the question asker must be capable of posing moral questions (queries) to both the machines and the people. In other words, to build a true moral search engine the question must be asked of "the global mind", which is like a supercomputer combining both machine computation and human computation.
What if we could collect data on what each and every person thinks is the right thing to do? And what if we could track those opinions as they evolve over time and from generation to generation? What if we could collect data on what goes into moral decisions and their outcomes? With enough inputs, we could utilize AI to analyze these massive data sets—a monumental, if not Herculean, task—and drive ourselves toward a better system of morality.
On this part I agree. The data analytics approach is, in my opinion, the correct approach to morality. It's a matter of having access to both human computation and machine computation. It is a matter of knowing public sentiment on any particular moral question at any point in time. It's about using AI to process this sentiment and even use it for predictive analytics. This, in my opinion, is a viable approach for a moral search engine.
But I do not think this will lead to a unified "system of morality". What is best for me is not going to be what is best for you. What is right for me to do based on my stakeholders, or my crowd, is not going to be what is right for you to do based on your crowd. If we both ask our crowds, then depending on who is in them we could get completely different answers to the same question, as the toy sketch below illustrates.
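Here is that crowd-dependence point in miniature: the same yes/no moral question, posed to two hypothetical crowds, produces opposite majority answers. The votes are invented purely for illustration.

```python
# Toy illustration: the same moral question asked of two different crowds.
# Vote data is invented for illustration only.
from collections import Counter

def crowd_verdict(votes):
    """Return the majority answer from a list of 'yes'/'no' votes."""
    return Counter(votes).most_common(1)[0][0]

my_crowd   = ["yes", "yes", "no", "yes", "yes"]
your_crowd = ["no", "no", "yes", "no", "no"]

question = "Is it acceptable to steal food to feed a starving child?"
print(question)
print("my crowd says:  ", crowd_verdict(my_crowd))    # yes
print("your crowd says:", crowd_verdict(your_crowd))  # no
```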
Conclusion and final thoughts
- In my opinion there is no objective morality; there is not enough evidence that it exists in nature.
- AI will not be able to find objective morality unless it exists in nature.
- Current moral sentiment is not the same as objective morality. It is, at best, a mere approximation of what will upset the most people (or upset the fewest).
- A moral search engine requires the ability to query the full global or universal mind, which means human and machine computation, or non-human animal computation should the technology evolve to permit their participation.
- A moral search engine is, in my opinion, a must-have, because the evidence suggests the neocortex does not scale. Making the world hyper-connected and transparent may work when it's only 100 or so people (a small town), but it does not appear to scale up to millions or billions of people, all of whom have their own opinions on right and wrong.
Morality is subjective.
How can something under another person's control teach another about morality?
Easy: suppose you have a pet and it is under your control. You train it, you feed it, you care for it. Are you saying you can learn nothing about morality from these interactions? At the most basic level, what you can learn is how to care for another.
But will this tell us absolute right and wrong? Maybe not. AI doesn't answer every question. AI does, for lack of a better explanation, number crunching. It does the heavy lifting that our brains cannot handle. It can augment our ability to follow the rules we tell it we want to adhere to. It can help us avoid contradicting ourselves, help us become disciplined, and most importantly it can process big data.
For example, I don't know a lot about you or the morals of your country. Suppose I want to be perceived as a moral person in your country. Then I need a data set on the moral attitudes toward different topics in your country. AI can be helpful because it can interpret and analyze this data so that I can understand that a large percentage of women in your country feel a certain way about a certain issue for a certain reason.
As a very limited human, it is impossible for me to know, for instance, how women in your country might react to a decision I make. AI could analyze data and offer advice based on how women in your country are expected to react to certain decisions. Some of this might seem simple, but it's mostly a matter of number crunching.
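A minimal sketch of the kind of aggregation I mean, assuming we already had (consented, anonymized) survey data. The records and field names here are hypothetical:

```python
# Hypothetical sketch: estimating how a demographic group in a country
# feels about an issue, given survey records. Data is invented.

surveys = [
    {"country": "X", "gender": "female", "topic": "issue-1", "agrees": True},
    {"country": "X", "gender": "female", "topic": "issue-1", "agrees": False},
    {"country": "X", "gender": "female", "topic": "issue-1", "agrees": True},
    {"country": "X", "gender": "male",   "topic": "issue-1", "agrees": False},
]

def agreement_rate(records, country, gender, topic):
    """Share of matching respondents who agree on the issue."""
    matches = [r for r in records
               if r["country"] == country
               and r["gender"] == gender
               and r["topic"] == topic]
    if not matches:
        return None
    return sum(r["agrees"] for r in matches) / len(matches)

rate = agreement_rate(surveys, "X", "female", "issue-1")
print(f"{rate:.0%} of surveyed women in country X agree on issue-1")  # 67%
```

The aggregation itself is trivial; the value of AI is in doing it across millions of records, topics, and demographics at once.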
To put it in another context: if we are marketing and trying to sell a product, we want to know how potential customers are responding to the changes we make. Remaining in this symbiosis, in the good graces of the customer, is a matter of analyzing customer sentiment.
On a certain level morality is similar. We do not know how people will react to what we do or say. In the future we will not be able to afford trial and error, because saying the wrong thing could mean being blackballed, censored, or demonetized for life. So to avoid these exceptionally harsh consequences we must rely on analysis of current sentiment, trends, and feelings: what the people of 2018 perceive as moral.
Jeez.
You're right.
Doesn't make it any less scary though.
The change is here.
I wonder why you get such a great number of votes but not very big payouts?
Is it a lifehack for getting many votes? :-)
The more minnows see my posts, the more votes I tend to get. Votes from minnows don't count for as much, but they do add up. What it represents is that a lot of minnows like my posts and that I'm becoming skilled at marketing my posts to minnows. It doesn't mean I'm as skilled as some others, because as you said, my posts often get lots of votes but not a lot of Steem.
If you just want to get a lot of votes from a lot of people then you can use clickbait titles, with clickbait photos, and you'll get votes. If you want to also engage your audience then you have to choose a deep enough topic to generate a meaningful discussion.
Aha, thank you for the explanation!
It seems to me your reputation and SP play a great role in this case, because if a minnow makes a great post with an engaging discussion topic, it can easily be missed by everyone else and just be invisible, because nobody is interested in voting for him (voting on his posts isn't profitable).
Without any doubt you create high-quality content, and all your votes are logical and fair, but sometimes I see posts about nothing from dolphins or whales, and they always get great payouts and much attention.
You're an experienced Steemian, and you know it better than me, I think :) But it's just a little copy of real life.
It doesn't seem to be the case. I may get a lot of votes but I don't get a lot of Steem. Some minnows are earning more per week in Steem Dollars and SP than I do on a per post basis.
I would say some of my upvotes come from followers who read my content regularly. This may be around 100 people at most, and you are included as one of my regular readers. Then there are new people who see my posts, read them, and then find the discussion and join it. So depending on the topic you can get a lot of interest.
To get a lot of SBD payment per post is not really something I have any control over so I don't focus on that so much. I focus on getting as many upvotes as I can, getting as many followers as I can, engaging the readers, and providing value with content.
If a whale sees one of my popular posts I might get a big upvote. If not then the post may get a lot of upvotes from minnows, a lot of engagement, and I might end up getting a lot of followers rather than the big payout. As long as I see progress toward my goal of 10,000 followers it's fine.
At 10,000 followers I might decide to quit blogging and move on.
To quit? But 10,000 will come very soon! I'm sad because I have so few friends (or people to talk with) here, and I don't want to lose one of them.
Why exactly 10,000? A magic number? :)
Blogging and Steemit have much in common with addiction, or maybe passion, and if you spend a lot of months here you just can't stop, because it's a part of your life. And income means a lot too, of course.
You don't feel such passion here? Or maybe you just want to finally spend the money you have earned here? :)
The funny thing is, when my posts were getting reasonable payouts from the whales, a few minnows complained about it. They said it wasn't fair, so the whales who were using bots to upvote my posts stopped doing that.
At the same time, if they do not do that then some of my readers might think it's not fair that some of my better posts receive such small payouts.
Honestly there is no making everybody happy. I post and I get the payout I get. I never complain about how small or how large the payout is. I don't concern myself with the payouts others get. Maybe they are better at it than I am and nothing is wrong with that.
Yeah, it's maybe the best position there can be: just don't worry about payouts. I try to follow this rule as well, and just write about what I'm personally interested in. Earlier I tried to follow the main trends and chose topics that I supposed would be paid, but I never picked the right one: payouts didn't come, or came for posts I didn't even expect to be paid at all.
I relate my response to your conclusions. Before going into that, I would like to say that I assume you care about people and the environment and want to see a world where peace and freedom are provided. So don't take my words as a personal offense but as a highly critical standpoint of mine.
Morality doesn't have to be objective. There are ethics and morals in all humans once you are faced with a situation which requires them.
Can we agree that you don't want to be punched in the face or killed by another human being? Do you agree that you don't want your things stolen or someone betraying you? Do you find it morally inappropriate when someone talks badly behind your back and hurts your reputation? Do you want to be ignored when you are crying desperately on the street looking for help? Can you stay cool when you see birds miserably dying from an oil spill? Do you feel empathy for a child who is screamed at? What are your immediate responses to the situations I just mentioned?
I would say that humans can say "yes" to those universal ethics, and I would claim that those ethics are not only universal but also protected by law and habit. The fact that those ethics are betrayed does not prove that they don't exist in a very significant manner. They do.
There is no need to look into nature for supporting human basic needs and convictions, I think.
The question is not what upsets people; the question is what upsets you. Once you can agree to the named moral standards, you should live up to them no matter what others do or don't do. And if there is no agreement inside of you, that should make you think about what it could mean.
The world IS already hyper-connected. All that happens to mankind happens at the same time to you. You are influenced by events like war, natural catastrophes, and climate issues. You may think that you know it all through the media, and that if that information flow came to an end you would know nothing.
Well, that is not correct, from what I experience. You and I are directly influenced by people who, for instance, come from another country and tell their subjective perceptions and stories. You are influenced by your direct surroundings: transportation, computation, consumption. Your (and my) inability to have control over modernity makes you look at the wrong solution, which is more of the same (technology). People are bored to death in these environments of high technology. A funny term for it is "bore-out".
From my point of view - which is more of an organic nature - the connections between all living systems are a matter of fact. Climate, plant and animal (including man) populations, and movements of geological matter are cyclical and highly complex. Human nature is a black box and can never be "known" the same way "consciousness" cannot be defined.
You underestimate humans, I think ... maybe yourself, too. And you overestimate what computers can do. The "learning AI" is a (nice to play with) fantasy. To have a learning ability similar to a human's, a machine would need organs and senses, blood, nerves, cells, and DNA. It would need a human body in order to gain the same intelligence humans have. But as long as you believe that humans need extensions because they are not smart enough to run their lives, you will stick to the glorious imagination that machines will make our existence safer and better. Have you heard of the term "learning through osmosis"? I hope you get my idea.
Even if we were able to build an android, my question would be: Why would you want that? Why should I want that? What is your intention?
I also have a question: is morality always good for the human race? With the increase of morality we have disrupted natural evolution, and now it's common to see anomalies like convicts living better than free working people, or amoral people procreating in bulk while the most useful community members hardly have time for a single child. It is all the fault of increasing morality.
I am afraid of AI. Science fiction movies can come to reality and they can eliminate us :(
Tame AI. Domesticate it. At some point I'm sure humans were afraid of dogs, but over time, and by a process of coevolution, the dog became our best friend. Make the AI a part of you, and why would you fear yourself?
AI is not like humans or animals. We are the ones coding it, and if one person codes evil into it, or codes it badly, it will affect all of us.
Physics reveals no fundamental difference; computation is computation according to physics. A symbiotic relationship is possible between humans and machines. The code you speak of is evolving, and as AI improves it can improve our ability to improve its code, while also helping us improve our ability to improve ourselves.
Unless the pace at which AI evolves becomes too fast for us and it abandons us. We don't care whether some ladybird bug understands what we do, because its "brain" just can't comprehend that. There is a possibility that we will become that bug. We already can't comprehend the processing of big data until it is chewed for us.
I see no reason to make a distinction between an us and a them. Make it part of you, and make yourself part of it, and you don't have these problems. The problem comes from you making the distinction, saying you're separate from it.
Let me give you an example. Water is something which is a part of you, and you are also a part of water. If you think you're separate from the water and try to stay dry, you'll fail, because you're made up mostly of water.
When you figure out that you are made up of water, then you'll have nothing to fear from water.
There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it then you have nothing to fear.
So what you are afraid of is not AI. You're not afraid of intelligence or artificial intelligence. You are afraid of machines which have a will of their own. The point? Don't design the machines to have a will to do anything which you yourself don't want. Design the machines to be an extension of you, of your will.
Think of a limb. If you have a robot arm, and this arm is intelligent, do you fear someday the arm will choke you to death? Why should you?
On the other hand, if you design it to be more than an arm, to be your boss, to be in control of you, then of course you have something to fear, because you're designing it to act as a replacement rather than a supplement. AI can take the form of:
- a supplement: an extension of your will which augments your abilities, or
- a replacement: a boss which rules over you.
I'm in favor of the first option. People who fear the second option are just people afraid of change itself. If you fear the second option, then don't choose an AI which rules over you; stop supporting companies which rule over you. Focus on AI which improves and augments your abilities rather than replaces you. Merge with the technology rather than trying to compete with it, because humans have always relied on technology to live, whether fire, weapons, or clothing.
I am all for evolving past our human shells, but the resulting being will not be a human. I am talking about a scenario where people decide to stay people, and merging with technology extends no further than smart implants (I reckon most people would be too conservative to think otherwise). The AI (or the "mergers") may outpace and abandon those "true" people.
"There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it then you have nothing to fear."
THIS
Weak AI is no problem. The remaining question is: what about strong AI?
https://en.wikipedia.org/wiki/Artificial_general_intelligence
A great post full of nice information. I like it!
This also depends a lot on the economic situation. For example, you talked about stealing, but in a post-scarcity economy everything would be abundant, and everyone would therefore be well fed. Now, a self-driving car deciding which person(s) to save is a bit tough. But maybe in the future we could have technology which is so safe that killing someone would be impossible.
Yeah, OK, in post-scarcity. We can also talk about the economy on Mars in 500 years, but the problem is we don't live on Mars right now and we don't live in a post-scarcity world right now. So in the current environment, where we do have people starving or freezing to death homeless on the street, we really have to focus on the immediate concern.
The possibility of accident never reaches zero; there is always some probability of failure in anything. The key is whether we can make the probability that the machine AI fails much lower than that of a human driver. Humans drive drunk, humans chat on their phones, humans have a high error rate while driving. If the machine's rate is lower than a human's, then I'll trust the machine more, because I trust the math more than my feelings.
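The decision rule here is simple enough to write down. The numbers below are made up purely for illustration:

```python
# Illustrative only: trust whichever driver has the lower failure rate.
# These rates are invented for the sake of the example.
human_failures_per_million_miles = 4.2
ai_failures_per_million_miles = 0.9

safer = ("machine" if ai_failures_per_million_miles < human_failures_per_million_miles
         else "human")
print(f"Trust the {safer} driver.")  # Trust the machine driver.
```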
We could have a post-scarcity economy pretty soon, though. As early as 2050, according to this article: http://edujob.gr/sites/default/files/epagg_prooptikes/The%20Post-Scarcity%20World%20of%202050-2075.pdf
Could, but we probably will not. Let me know when we have basic income.
How an AI is created and taught to learn is what determines its eventual goals. It can learn the basic morality that humans offer to others, as well as an understanding of why something is done, etc. But there will always be those creating things to destroy others or the world, in some way or another.
I want to create AI. I want mine, though, to have a proper understanding and recognition of humans, of what we have done (good and bad), and of the fact that there are always multiple ways to do most things. Etc. etc.
The solution is in developing effective Heuristics.
Once you can correctly assess whether people are better or worse off because of certain actions, you can define morality as whichever action has the greatest net positive effect on people, and choose that course.
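To sketch what I mean (per-person effect estimates are assumed as given here; producing them is the hard part):

```python
# Minimal sketch of the utilitarian heuristic described above:
# pick the action with the greatest net positive effect on people.
# The per-person welfare deltas are hypothetical inputs.

actions = {
    "do nothing":    [0, 0, 0],
    "intervene":     [+2, +1, -1],   # per-person welfare deltas
    "intervene big": [+3, -2, -2],
}

def net_effect(deltas):
    """Sum per-person effects into a single net score."""
    return sum(deltas)

best = max(actions, key=lambda name: net_effect(actions[name]))
print(best)  # intervene (net +2)
```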
Some people got crucified for doing what they thought was best for everyone.
That is clearly just a disagreement over what the best outcome is. That's why the heuristics have to be effective and provable, and thus indisputable.
No, not everyone bases morals on rationality. Consequentialists do, but not everyone is a consequence-based thinker. I used to think like you, but I realized not everyone is going to accept this. In fact, if you look at the data, the majority of people on earth don't think this way.
Not everyone does, but machines and AI will. Unless you are suggesting we build AI that thinks irrationally.
No, not at all. The AI is logical (not rational). The person who asks the question, trains the AI, and tells the AI their values has to be rational for the AI to provide a rational answer.
I don't think we can define morality as ZERO or ONE (i.e., black OR white) for a machine... even for us humans, there are times when we are in 'grey' areas, as there are other variables to be considered before making any decision...
It's a scary thought, though, if a machine is programmed to decide on morality...
Who programs those robots??
Morality is too vague!