The biggest problem with AI is that it will never be able to reach a state of self-consciousness, and therefore will never be able to make contextual and logical choices in certain very specific situations.
There is no need for consciousness (the perception/qualia of things being observed). Purely logical conclusions about oneself, and actions taken based on them, qualify as self-conscious behavior. The self is just another object in the environment that can be observed.
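To make that point concrete, here is a minimal toy sketch in Python (all names hypothetical, not any real system) of an agent whose world model contains itself as just another observable object, and whose choice of action follows from purely logical conclusions about its own observed state - no qualia involved.

```python
# Toy sketch: "self" is just another entry in the agent's world model.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.battery = 1.0
        # The world model treats the agent itself like any other object.
        self.world_model = {
            name: {"battery": self.battery},
            "charger": {"position": (0, 0)},
        }

    def observe_self(self) -> None:
        # Update the model's entry for the object that happens to be this agent.
        self.world_model[self.name]["battery"] = self.battery

    def act(self) -> str:
        # A purely logical conclusion about oneself drives the action.
        if self.world_model[self.name]["battery"] < 0.2:
            return "navigate to charger"
        return "continue current task"


agent = Agent("robot-1")
agent.battery = 0.1      # simulate a low battery reading
agent.observe_self()
print(agent.act())       # -> "navigate to charger"
```

Whether this counts as "self-consciousness" is exactly the definitional question being argued here; the sketch only shows that self-representation plus action based on it is mechanically straightforward.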
It is hard to say at this point. Look at how much we've accomplished technologically in the past 10 years. Who can say we won't accomplish three times that in the coming years? I think superintelligent AIs are bound to emerge at some point in the near future, and I look forward to seeing how the tech community deals with it.
I'm waiting for my favorite AI to come to life :P That would probably be the only way I could believe in a future where AI is capable of self-consciousness.
RoboCop is a pretty nice movie. Watch Ex Machina too - not much on the action side, but it presents some excellent thoughts on AI.
Ex Machina - exciting movie, with a really good storyline and a wonderful finale!
Why think this is true? If "self-consciousness" means "able to self-represent", this is possible for machines today. If it means "have experience / feelings / qualia", then it's equally a mystery how the meat in our heads can have it - and as @deepsynergy points out in another reply to you, "conscious" in this sense has little to do with capacity for action. If it means something else, I'd have to hear what.