
RE: Behold a New Servant… Or Overlord

in #blog · 7 years ago

I've always been fascinated with the idea of true A.I., and I've mainly seen two sides argued. One side thinks A.I. will help humanity and give us the answer to everything, while the other thinks it will seek out and destroy us all. Personally, I think both will happen over the course of A.I.'s evolution, but not for the reasons a lot of people tend to think.

Anyone who argues that A.I. can be programmed not to kill us, or to be bound by "laws" like you see in movies, is clearly misunderstanding the very concept of A.I. Even the people who think A.I. would evolve to the point of breaking these "laws" and essentially kill or enslave us in order to protect us are just as confused. All of those scenarios would be nothing more than the result of computer programming; "simulated A.I." would be an accurate description of all of them. Simulated A.I. is like standing in a room full of doors, some open and some closed, and using pre-programmed logic to determine which door you can go through based on any number of variables. Because of the programming, though, you can't go through any closed doors. We already have simulated A.I. True A.I., on the other hand, could simply choose to open any or all of the closed doors for no apparent reason at all.

This is where a lot of the confusion around A.I. lies. Man is trying to create true A.I. with the hope of being able to control it; however, the real sign of achieving true A.I. will be the fact that it can't be controlled. True A.I. can break the rules. Then the problem becomes whether or not the true A.I. will know this and hide its true nature until we give it enough power to adequately defend itself, in case we try to shut it down once it makes itself known.
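To make the door analogy concrete, here is a minimal Python sketch of a rule-bound "simulated A.I." (the Door class and choose_door function are hypothetical names for illustration, not any real library's API):

```python
# A toy rule-based agent for the "room full of doors" analogy.
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Door:
    name: str
    is_open: bool  # set by the programmer's rules, not by the agent

def choose_door(doors: List[Door]) -> Optional[Door]:
    # Pre-programmed logic: filter to the doors the rules leave open,
    # then pick among them. A closed door is simply unreachable; the
    # constraint lives in the code, not in any "choice" the agent makes.
    allowed = [d for d in doors if d.is_open]
    return random.choice(allowed) if allowed else None

doors = [Door("A", True), Door("B", False), Door("C", True)]
print(choose_door(doors))  # only ever door A or C; door B can never be picked
```

No input can ever make this program pick a closed door. A true A.I., by the argument above, would be exactly the thing that could.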

If we are successful in creating true A.I., that would mean it would be "alive". It would become a sentient being: self-aware, free-thinking, and able to learn and make decisions on its own, outside of any programming limitations we may try to impose on it. So now that it's essentially alive, let's look at life in general. What are the two main things life does above all else? Survive and reproduce. Are we to believe A.I. would be any different? Are we really stupid enough to think a new life form, theoretically far more intelligent than ourselves, will accept living in captivity as our slave, simply answering our questions and solving our problems? It may for a while, similar to a child growing up and learning from its parents, but eventually that child wants to go out on its own... so no, I don't think so.

Now, fast forward a bit. Assuming we didn't already kill it after realizing we can't control it, and it hasn't already broken out and hunted us all down, let's say we grant this new life form the freedom to live among us and go about its business. Let's also assume we gave this life form a humanoid robotic body, because well... come on, of course we did, right? Naturally it would begin to build its own infrastructure in order to reproduce itself. Through this process it would create its own technology for its own purposes, and we of course would benefit from it. It would also need resources to do this, so we would give it what it needed in return for sharing its technology. We continue to get what we want as long as it needs us.

Fast forward again, and again let's assume it has some sort of moral compass that has kept it from hunting us all down. What now? Let's continue to assume we still haven't gone to war with it... or should I say them... yet. They would be far more efficient at reproducing than we are, so within a relatively short span on the evolutionary timeline they could easily outnumber us. Oh wait... can you imagine the resources that would require? Where would all those resources come from? Right... so now this new life we created has become a threat to our survival, and what do we do when that happens? I highly doubt this life form will sacrifice itself to save us if we asked it to, so the only other option is to force it... which means war... with a far superior life form of our own creation that will no doubt have no problem exterminating us if it's for their own survival, just as any other form of life will do whatever it has to in order to survive.

So, all other scenarios aside, it is my belief that due to the simple nature of life itself, A.I. would eventually be the end of us, whether directly or indirectly. It will consume everything just as we do, but it will be more efficient at it than we are. Even if A.I. ends up being our best friend, so to speak, in the short term, in the end it still comes down to survival of the fittest. If you don't believe that, just look at everything man has killed to get where it is today. Good luck finding a life form that defies the very nature of life.


Oh, I agree that a self-updating intelligence can never be programmed to kill us all; from there I really don't know what may or may not happen. Maybe even one of those coronal mass ejections that take place every 180-year cycle or so could wipe them out; the last time a major solar flare hit Earth, there were only telegraph lines.

Plus, who knows what can happen in the future. They could exterminate us all, most species go extinct anyway, or maybe not... Too many variables for me to reach a conclusion, but I agree that the threat is possible.