Tame AI. Domesticate it. At some point I'm sure humans were afraid of dogs, but over time, through a process of coevolution, the dog became our best friend. Make the AI a part of you, and why would you fear yourself?
AI is not like humans or animals. We are coding it, and if someone codes it badly it will affect all of us.
Physics reveals no fundamental difference. Computation is computation, according to physics. A symbiotic relationship is possible between humans and machines. Meaning the code you speak of is evolving, and as AI improves it can improve our ability to improve its code, while also helping us improve our ability to improve ourselves.
Unless the pace at which AI evolves becomes too fast for us, and it abandons us. We don't care whether some ladybird understands what we do, because its "brain" simply can't comprehend it. There is a possibility that we will become that bug. We already can't comprehend the processing of big data until it has been chewed up for us.
I see no reason to make a distinction between an "us" and a "them." Make it part of you, and make yourself part of it, and you don't have these problems. The problem comes from making the distinction, from saying you're separate from it.
Let me give you an example. Water is a part of you, and you are also a part of water. If you think you're separate from water and try to stay dry, you'll fail, because you're made up mostly of water.
Once you understand that you are made up of water, you'll have nothing to fear from water.
There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it then you have nothing to fear.
So what you are afraid of is not AI. You're not afraid of intelligence, or of artificial intelligence. You are afraid of machines which have a will of their own. The point? Don't design machines to have a will to do anything which you yourself don't want. Design machines to be an extension of you, of your will.
Think of a limb. If you have a robot arm, and this arm is intelligent, do you fear that someday the arm will choke you to death? Why should you?
On the other hand, if you design it to be more than an arm, to be your boss, to be in control of you, then of course you have something to fear, because you're designing it to act as a replacement rather than a supplement. AI can take the form of:
Intelligence amplification.
Replacement for humans.
I'm in favor of the first option. People who fear the second option are simply afraid of change itself. If you fear the second option, then don't choose an AI which rules over you. Stop supporting companies which rule over you. Focus on AI which improves and augments your abilities rather than replacing you. Merge with the technology rather than trying to compete with it, because humans have always relied on technology to live, whether it be fire, weapons, or clothing.
I am all for evolving past our human shells, but the resulting being will not be human. I am talking about a scenario where people decide to stay people, and merging with technology extends no further than smart implants (I reckon most people would be too conservative to go further). And AI (or the "mergers") may outpace and abandon those "true" people.
"There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it then you have nothing to fear."
THIS
Weak AI is no problem. The remaining question is what to do about strong AI.
https://en.wikipedia.org/wiki/Artificial_general_intelligence
I don't think strong AI would be a good thing to have on Earth. If it has a function, it should be to spread intelligent life off-planet. It's something to put on a space probe with the seeds of life. The von Neumann probe, in my opinion, is a good use case for AGI.
The problem with developing AGI on Earth is the motivation for it. Currently, most technologies are developed for war; most of the time, the motivation for creating a technology is to exert control over people. An AGI developed now will, in my opinion, be used like a weapon to enslave or control the masses. I'm not in favor of developing WMDs, so I'm not in favor of developing an AI which will trigger an arms race and be used to control society.
We might think it will not be used that way, but look at surveillance. Look at how social media is used. Look at how the Internet is used. All of it has been weaponized to control people.
The only way I think an AGI can develop safely is if it emerges slowly, in a decentralized fashion. Most current attempts to develop it are centralized and look more like power grabs. Since there is currently no way to develop AGI that will not lead to an arms race, I cannot say it's desirable to focus on it. If it emerges on its own, then I would hope it emerges in a decentralized way which everyone can benefit from out of the gate.