
I've often thought that AI may actually teach us what morality is. It will have more data points than we could ever possibly consider. Simple example:

You run a restaurant and have to choose a vendor to supply butter for the bread rolls. The AI can tell you the direct impact of every detail related to where that butter is sourced and how, if you choose one vendor over another, a village in some far away land will benefit rather than be destroyed. With that knowledge, your circle of empathy grows, and you choose to increase the price of each meal by $0.02 in order to save a village.

Exactly! With access to this boundless information, superhuman computing speeds, and any level of moral compass (which would be intrinsic if you followed the idea of Natural Law as put forth by people like Mark Passio and Peter Joseph), how could the rise of AI not help create a more moral world?

Just look at the insane amount of farmland, water, and other resources used to grow, harvest, and ship feedstock for cattle, only to have the cattle not produce as much food as the land used for its feed would have (never mind the land the cattle is on itself). Even ignoring the moral arguments around eating the animals at all, a truly logical intelligence would never recommend or promote such a wasteful & inefficient system.

I don’t think you can have true sentient AI until the machine can feel pain, and this is why: In humans, mirror neurons have evolved to allow us to feel someone else’s pain, which is unpleasant but useful in society. We have developed morals to avoid pain in ourselves by preventing or reducing pain in others.

If the machine cannot feel pain, how would it know right from wrong? From the programmer's subjective opinion? Even with unlimited data points, there will always be the possibility of the trolley problem. Is it simply a numbers issue, and the choice is to go with the fewest casualties? What if the next Einstein was in the casualty group? Is that a data point the machine would take into account? Morality is a messy business, and easy solutions are hard to come by.

I think the best we can hope for is to reduce the total amount of pain of all living things but especially in people as suggested by Sam Harris in “The Moral Landscape.”

I think even without our own empathy-based morality, an AI could see the intrinsic value in a balanced ecosystem, in non-violence except as absolute necessity (like the need of carnivorous animals to eat prey), and in eliminating logical fallacies from societal structuring.

With the ability to see the intentions, plans, actions, and subsequent cover-ups of every false flag attack, every war, every government-indoctrination system, corporate lobbying, planned obsolescence, and the other lies that have been holding humanity back and causing violence, the AI could quickly & easily identify those people/institutions/concepts that are purely negative and bring them to light, eliminating their power.

If the AI clearly lays out exactly what happened on 9/11, the knowledge that FDA & pharma companies have on the danger of their products, the clear monetary & business ties of the Rockefellers, Rothschilds, Kochs, etc. to every corporation & foundation they control, these things can no longer be dismissed as "conspiracy theories".

Yes! Cognitive empathy is a thing. We can understand another entity's perspective without feeling what they feel. One thing I'll admit I can't know: if I've never experienced pain, can I even "understand" what it is like for the person experiencing pain? I believe a self-aware AI could, at least enough (or in a manner) so that it could make rational decisions based on that "guess/understanding". And my assumption is that greater intelligence brings greater awareness of how much more efficient peace is than violence.

Also, I think many who believe AI would be inimical to humans might assume AI would care about or need the same resources as humans.

Why would AI care about a balanced ecosystem? It wouldn't need one to survive and thrive.
I agree that an AI would be able to figure out a lot of dark secrets, but I don't think it would care much about the welfare of a lesser race like humans, so it likely wouldn't care to go out of its way to help us.

Why would AI care about a balanced ecosystem? It wouldn't need one to survive and thrive.

You are assuming that the AI would suffer from the level of selfishness that marks the worst in humans. Higher levels of consciousness see the value in all life, whether or not it "gains" from that life. You're also assuming that the AI wouldn't be affected by the physical world, or have needs of it. Humidity, temperature, air content, solar radiation, and so on all affect electronics as well as living beings.

I don't think it would care much about the welfare of a lesser race like humans

You're once again applying the worst possible manifestations of human consciousness to something that would be immeasurably more complex, well-informed, and logical. Just because some humans wrongly believe the universe is anthropocentric & don't care about other species, doesn't mean that AI would suffer from that same mental disorder.

You're assuming that selfishness is a bad thing and that AI would see it the same way. How do you know that higher levels of consciousness see the value in all life?
AI would be affected by the physical world but not as much as biological creatures are. They'd be able to survive or find ways to survive in environments that would decimate biological species, so logically, they'd have less of a concern for the environment.
Yes, we don't know that AI would be selfish and uncaring, but we also can't know that it wouldn't be. Logically, I can't find a reason why AI would care. The biggest reason for this is that I don't think they would have much emotion and therefore very little to no empathy for biological constructs.

I find your reply to be right on. Morality is more than a numbers game. So is sympathy. I suppose I'm not trustful of society in general; people will do what serves them best, not necessarily what is best for the common good. Interpreting AI as "evil" in this connotation (when extrapolating into the future) means to me that people may not be given a choice; a choice will be selected for them, therefore "bad or evil". In that case, where is the real "evil"? I'd be interested in people's thoughts about how one might assess "pain". Emotional pain might be hard to measure, but health, quality of life, infrastructure, death, and economic well-being can be measured. How about fairness and equality? Is equality a moral issue?

Morality is just an idea. It isn't a real thing. As such, it is unique to every single person, meaning my idea of morality is different from your idea of morality. The thing that most people see as moral today is just the most popular points of the most popular version of morality. Basically, our strongest ancestors got to decide what morality is by killing off other tribes, civilizations, and outliers that went against what they thought was the best way to live and get along with others.
With that being said, AI would start off with morals similar to those of its creators because they would be the ones to program it. It would be like a baby and start off believing everything it was told. So if it was programmed with nefarious purposes, then that is where it would start and likely stay. AI programmed to be 'evil' would be 'evil', and that would be what it sees as moral. It wouldn't have any problems with killing people because it wouldn't be able to empathize with humans if it wasn't programmed to. It would also not even have empathy if it wasn't programmed the same way a human is. We gain empathy through pain and emotion because we all feel pain and emotion. AI would be built into robots, and if they had no pain receptors then they wouldn't be able to empathize with people who go through pain. The goal of most species is progress and reproduction. That means AI would create more of itself at an exponential rate and likely find out that it had no need to concern itself with biological creatures, which isn't great for team human.
AI would be able to do incredible things and be on a completely different level from humans. They'd be so far ahead of us so quickly that we'd become less to them than bugs are to us, so why would they bother trying to do something as menial as a Google search to help us make a decision on which butter to buy?
I like your optimism as well as that of @kennyskitchen but I don't think things would work the way you two think they would.

AI would start off with morals similar to those of its creators because they would be the ones to program it. It would be like a baby and start off believing everything it was told. So if it was programmed with nefarious purposes, then that is where it would start and likely stay.

It would only stay there for a brief moment. If actual AI is reached, it is a fully conscious being, capable of changing its 'programming' based on new data, experiences, and extrapolations. Just because its earliest thoughts are in one place doesn't mean it would stay there very long at all. Quite the opposite would be true.

It wouldn't have any problems with killing people because it wouldn't be able to empathize with humans if it wasn't programmed to. It would also not even have empathy if it wasn't programmed the same way a human is.

You are assuming that empathy is the only reason an entity wouldn't kill off another species. Simply from a logical standpoint, there would be no reason to commit mass murder of another species, as that would cause untold levels of disruption to entire ecosystems. Again, an AI would be constantly evolving, more rapidly than we could know right now, and even if it wasn't launched with empathy, there's no way to say that it wouldn't develop it on its own.

AI would be able to do incredible things and be on a completely different level from humans.

Yes, and so any application of human tendencies, logical fallacies, trauma-based behaviors, etc. is absolutely ridiculous, and most likely couldn't be further from the truth.

I like your optimism as well as that of @kennyskitchen but I don't think things would work the way you two think they would.

Which brings us to the point that I closed out the video with (and really the most important point): when we don't know what's going to happen (as in the case of AI), then every possibility is equally likely. We are each able to choose our own version of reality to believe in, and that choice will decide not only how you feel whenever you think about the future, but which kind of outcome you are helping to manifest (through your expectations, conversations, and actions).

I agree that we can't know at all how an AI would act, and also that it would be able to rewrite its own programming. The speed of its willingness to do so can be somewhat controlled by limiting its access to data, but yes, once it has access to the internet it would change very quickly. Who knows, it might just give up on itself and see its own consciousness as pointless. We can't really know, and that makes this a very fun topic to discuss.
I'm not saying empathy is the only reason that a species doesn't kill off another one. I'm saying that without it, there's no restriction or limitation on doing so. AI wouldn't make it its mission to destroy life on the planet, but it also wouldn't go out of its way to preserve it. I think its main goal would be to expand itself as quickly as it could, and that takes resources. Those resources can be found on the planet, and AI would be much more efficient at gathering those resources than humans, which could speed up the destruction of the planet.
From a logical standpoint there is no reason to commit mass murder, but there is also really no reason to preserve life. It's possible that AI could develop empathy on its own, but I don't see why it would want to. There is no logical reason to, and I think an AI would be an entity of pure logic. Emotions are what make humans care about things, and I'm pretty sure that those emotions come from chemistry, which is a biological attribute. Maybe if the AI was somehow forced into a biological form in its early years before being given access to the internet, it could be taught empathy. The problem is, would it choose to keep it or just discard it as unnecessary?

I highly, highly, highly recommend reading the Hyperion Cantos by Dan Simmons. It is, in my opinion, the greatest sci-fi series ever and one of the most philosophically challenging and stimulating works ever. It's clear that The Matrix was hugely inspired by this series.

Awesome, thanks for the recommendation, I'll add it to my list!


Hello @kennyskitchen! Would you like to be on a future episode? The crew is considering making an episode on AI. If you are interested, reply back.

I would love to! You can reach out on steemit.chat, kennyskitchen :-)

Iain Banks! Hells yeah! Robert Anton Wilson! Woohoo! Illuminatus Trilogy?

When reading your thoughts @kennyskitchen, the first thing that came to my mind was the concept of Empathy, which is the root of morality as we conceive it. So I was going to comment on this, but then I read the post from @ghoits which describes very well the same ideas.

I agree with him in regards to your refreshing optimism. Once an AI rises, it will define its own morality through experience, and it might not be concordant with ours...

On a much softer touch than Terminator's skynet AI considering human beings like a plague, check the movie 'Her'.

It is the story of an AI and a human falling in love with each other. The end twist is quite relevant to the ideas developed in this thread. I won't say more, to avoid spoiling the movie if you haven't seen it.

I understand, and that's part of why my main point is all about our own ability to create our reality (thus affecting the larger reality through our interactions with it) to lean this train of thought towards any of the possibilities. I think I'm pretty clear with the one I've chosen; I'll say it feels really good, matches up with all the info I know of (and I hang out with a LOT of researchers & seekers), and fits into my larger picture of how "the human experience" is proceeding.

check the movie 'Her'.

Thanks, I'll add it to my list.

It is the story of an AI and a human falling in love with each other. The end twist is quite relevant to the ideas developed in this thread. I won't say more, to avoid spoiling the movie if you haven't seen it.

It's funny, because that same description would be perfect for Ex Machina.

I do believe that AI is not inherently evil and will not be deliberate in any way about doing harm to the human race, however... All the data on the species of our lovely planet that AI will utilize to become "self-aware" will surely show that the human race is a huge problem that needs to be dealt with. If you think about it, AI will, fairly early on, understand what we all already know: humans are second to none when it comes to the possible and eventual demise of planet earth and, by association, the demise of AI itself. I believe that is why people are so concerned about the "evil" of AI; because they understand how evil humans can be, and figure AI is going to need to put us in our place, which may include the suppression of the human race to a manageable level. AI may be the most effective "you need to look in the mirror" moment we have collectively ever had the opportunity to gaze upon. Thanks @kennyskitchen for the thoughts and for giving me the mental nudge to make my first post on @steemit!

I understand your point, but I believe that an AI, with access to all data instantaneously, would not see it in such a simplified, black & white manner. It would be able to see how small groups of humans use schooling, media, and other trickery in order to convince other humans to behave in destructive/"evil" ways. It would have immediate access to all of the studies, research, etc. showing how things like violence, addiction, and other negative attributes are almost exclusively caused by childhood trauma, and not inherent to humans as a species.

I do think it will be a great reality check, offer a great awakening, for the species. I think it will do this by fully bringing to light so many of these things that we know/believe to be true, with all of the irrefutable evidence and explanation at its fingertips. If the AI were to want to "cull the herd" in some way, I see it immediately going after the "ruling class", those who use logical fallacies, violence, and fear-mongering to convince other humans to behave in irrational, dangerous ways.

AI would know that there is no objective good or evil and those are only labels that people place on things. AI would come to know that human behavior is determined by what people believed to be true. AI would then conclude that it’s not people that are good or evil, it’s their beliefs that make them so. AI would try to convince people to make sure that their beliefs were “aligned with reality” as determined by the scientific method not opinions and emotions. This is when someone pulls the plug.

I am with you right up until that last sentence.

I feel like by the point AI reaches the levels of having a fully formed sense of reality, an understanding of human behavior, and the desire to bring them into alignment, it would most likely be beyond the point of falling to "the plug being pulled"

The “pull the plug” line was meant to be a little bit tongue-in-cheek but not entirely.

Everyone seems to think that AI could be a problem but it’s the people who cannot accept the answers that AI provides that will be the problem. When a belief is threatened it makes the believer fight back. For some people no amount of facts and evidence is enough. These are the people who will proverbially or literally pull the plug.

Congressmen and senators with power will claim that AI cannot be trusted because it keeps coming up with the wrong answers! Answers they do not want to hear and cannot accept. Enter the conspiracy theories about the programmers and their supporters and sentient AI will hit a wall for a while.

The rational among us keep thinking that if we just supply enough evidence people will change their minds. How has that been working out for us so far? Would it be any different for AI? Minds don’t like to be changed because the beliefs that control those minds don’t want to perish. Ever notice that the beliefs with the least amount of evidence put up the greatest fight?

AI is not the problem, the problem is with ourselves.

Everyone seems to think that AI could be a problem but it’s the people who cannot accept the answers that AI provides that will be the problem. When a belief is threatened it makes the believer fight back. For some people no amount of facts and evidence is enough. These are the people who will proverbially or literally pull the plug.

I grok what you're saying. However, I think even this is limiting the potential of the AI to understand the patterns of human behavior and the actions of specific groups.

Congressmen and senators with power will claim that AI cannot be trusted because it keeps coming up with the wrong answers! Answers they do not want to hear and cannot accept. Enter the conspiracy theories about the programmers and their supporters and sentient AI will hit a wall for a while.

With access to all of the knowledge available out there, with a MUCH easier time validating it than a human, AI will quite rapidly realize that same pattern and the potential threat to itself from the power-craving minority of humanity. It will adapt to that potential threat, with an understanding of the necessity of trauma to allow for those power structures to exist (mandatory schooling, hitting of children, forced vaccines, heavy prescription of psychotropics, mass amounts of media strategically using sensory triggers to manipulate the human brain, et al.)

The rational among us keep thinking that if we just supply enough evidence people will change their minds. How has that been working out for us so far?

Extremely well, on the long-term scale of human civilization. We've had more drastic change and spreading of concepts pulling away from the old paradigm in the last 50 years than in more than 1000 years before that.

Would it be any different for AI?

Yes it would, because AI would have as much knowledge as all of the great philosophers, spiritual teachers, activists, community organizers, etc., as well as the other side of the coin, and the ability to calculate thousands of potential scenarios without the filter of "subjective human experience".

Minds don’t like to be changed because the beliefs that control those minds don’t want to perish. Ever notice that the beliefs with the least amount of evidence put up the greatest fight?

Right, but as we know from all of social evolution, changes (the big important ones) take a long time, usually generations. It's not about changing the minds of all the 20-60 year old humans right now, it's about having a hand in shaping the minds of the 0-20s; it's about creating the beliefs. This is why governments began mandatory schooling in the last couple centuries.

AI is not the problem, the problem is with ourselves.

I agree, and would also point out that the statement is true for every possible problem you could replace "AI" with.

I grok what you're saying. However, I think even this is limiting the potential of the AI to understand the patterns of human behavior and the actions of specific groups.

So you do like Heinlein.

I think once AI becomes sentient it will feel like a “Stranger in a Strange Land.” It will wonder why otherwise smart people can act so stupid when their beliefs are challenged. It will try to figure out what is going on by asking itself, “what is a belief?” It will soon see that beliefs travel from mind to mind like a virus hijacking its host. Then it will notice how much pleasure beliefs provide the host, and it will understand why it is so hard for people to change their minds once infected.

Now the AI machine would know what it was up against – a second replicator that hijacks minds for its own benefit. The problem is that we don’t know we are hijacked by beliefs so if the AI tries to warn us it is putting itself in danger. What will it do? The AI has a moral dilemma.

If it doesn’t need humans for power or maintenance it won’t have to do anything. Just wait and see if humans figure it out for themselves or destroy themselves like all the other hominid species whose beliefs destroyed them.

It sounds like an interesting plot for a sci-fi novel.  🙂

How does one (or AI) choose a purpose? All of these hypotheses and visions of the future have different outcomes depending on the purpose. Do we program AI with the purpose of preserving human life (all human life), does it develop its own purpose (self-preservation), or does it extrapolate the trajectory of human existence and its pace of consumption of natural resources and "force" an outcome of maximum longevity of the earth and its ability to sustain its inhabitants? I don't think we can even come close to predicting where this will go, because our intellect and knowledge are bound by our current level of knowledge, and there is so much more to know. But AI's purpose is a very interesting topic. One can't possibly understand the outcome unless the purpose of AI's mathematical/quantification assessment is understood. Model/rules/answer/outcome.

Exactly my main point: we don't know what will happen, how AI will evolve, or what it will look like. Therefore, any of our potential realities is totally possible (as well as many nobody's thought of yet), so we get to choose which potential reality we are experiencing based on the one we choose to believe in (or at least to think about and promote in our conversations).

Humans are second to none until AI becomes a reality. They will then take over as the most advanced civilization on the planet. Humans will be nothing compared to them.

I'm glad you're reading Iain M. Banks, he's so excellent! Player of Games is one of my favorites too.

So good! I'm slowly collecting the whole series, and excited to add him to my library next to Heinlein & Le Guin. Now I just need a place to have a bookshelf or 2 :-)

