I tend to disagree with many of the pessimistic concerns about AI. I don't see AI as being altogether separate from us; I think it's much more likely we will merge with technology and grow and evolve in tandem with the AI we create. I also cannot imagine any logical reason AI would have to destroy humanity. It would make more sense for man and machine to work together to expand our place in the universe. Also, AI, no matter how intelligent, can only do what it's programmed to do, and certain core rules can be put in place (as in I, Robot) to prevent certain actions. Though that in itself is not foolproof, I've been considering the possibility of the core program, or an aspect of it, only working when verified on the blockchain - so if anyone (human or robot) tried to alter those core rules, the entire entity would stop functioning, because it would no longer match the record verified on the blockchain. I'm sure there are workarounds to that, but it's fun to think about regardless.
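The verification idea could be sketched roughly like this. This is a minimal toy, not a real blockchain integration: here the "blockchain" is abstracted to a trusted, immutable record of the hash of the approved core rules, and the `commit_rules`/`verify_rules` helpers are hypothetical names, not any real library's API.

```python
import hashlib

def commit_rules(rules: str) -> str:
    """Record the hash of the approved core rules (done once, at deployment).
    In the actual proposal this hash would live on a blockchain; here it is
    just returned so we can store it somewhere trusted."""
    return hashlib.sha256(rules.encode("utf-8")).hexdigest()

def verify_rules(rules: str, trusted_hash: str) -> bool:
    """Refuse to operate unless the current rules hash to the committed value."""
    return hashlib.sha256(rules.encode("utf-8")).hexdigest() == trusted_hash

core_rules = "1. Do not harm humans.\n2. Obey humans.\n3. Protect yourself."
ledger_hash = commit_rules(core_rules)

# Untampered rules verify; any alteration changes the hash and fails the check.
print(verify_rules(core_rules, ledger_hash))                       # True
print(verify_rules(core_rules.replace("Do not", "Do"), ledger_hash))  # False
```

Of course, as noted above, this only shifts the trust problem: whoever controls the entity could also swap out the verification step itself, which is exactly the kind of workaround that makes it an open question.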
This makes me think of Elon Musk's new company which aims to integrate new technologies into our brains so that it's not "us versus them" when it comes to computers, but us growing with them instead.
I love Elon Musk and that is a great idea, unless the chip inside our brains has a chance of exploding or something!
The core programming of the AI will undoubtedly be flawed, as humans are not infallible, so problems will creep into the programming. How do we deal with that once the genie is out of the bottle - do we release patches? What if the AI were able to determine that humans were no longer necessary, or that we had become an obstacle to its agenda? If we create them in the image of man, then they will surely destroy us. A good discussion to have.