The word "intelligence" means "reading between the lines".

You read my reply with human cleverness.
Human cleverness is a product of thought - memory - knowledge - experience - language.

You read my reply with human intelligence, "reading between the lines", to grasp the meaning.
Human intelligence is not the product of thought - memory - knowledge - experience - language.

Human cleverness (thought - memory - knowledge - experience - language) is measurable.

Human intelligence is not measurable.

Therefore the cessation of human cleverness is the awakening of human intelligence.

Therefore there is no Artificial Intelligence, only Artificial Cleverness based on Human Cleverness.

PS: Read between the lines, and feel it, in your mind, in your blood, with your whole body!

Are you a philosophical idealist who believes in the infinite creativity of the human mind? That humans have agency, but an AI is only ever an agent of its programmer? Perhaps, like William Dembski, you think AI is nothing other than the inherited tricks of its programmers and trainers? Or is this more of a Zen thing?
Maybe you've read Nagel's essay on what it is like to be a bat.
It's a fun topic.

I think this requires a thorough debate on what it is to be human: not just the limitations but also the extremes. What are the limits of human experience?

Just to give one example: if one assumes the brain is just a neural net, then one can argue that a better artificial neural net can be built. But the brain is more than point-to-point interactions at the synapses; there are also electromagnetic fields through which neurons communicate. (What are EEGs otherwise?) This implies that the geometry of the brain is as much a part of its function as its electro-bio-chemistry. But that means that to build a better brain means building... a better brain!
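To make the point-to-point model concrete, here is a minimal sketch (sizes and weights are purely illustrative, not a claim about real neuroanatomy) of what the "brain is just a neural net" premise actually includes. Notice that nothing in it represents geometry, distance, or field effects between units:

```python
# A minimal sketch of the "point-to-point" model the argument refers to:
# every interaction is a weighted synapse between two units, nothing else.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    """One layer of point-to-point connections: the output depends only on
    the weighted sum of inputs at each 'synapse' -- there is no term for
    geometry, distance, or electromagnetic field effects between units."""
    return np.tanh(weights @ x + biases)

# Tiny two-layer net: 4 inputs -> 8 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

x = rng.normal(size=4)                      # a stimulus
out = layer(layer(x, W1, b1), W2, b2)       # response
print(out)
```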

This is just one example, but in essence: assuming limitations on human abilities because one's preconceived model builds in such limitations does not in itself prove that surpassing that model is the same as being "better" than human.

Great reply.

David Bohm wrote a book, Thought as a System.

I think we know too little to comfortably (i.e. based on evidence) put to rest the argument of how a mind functions. I take an emergent perspective on intelligence, and that colours what I think can ground a mind. The brain can ground a mind, and a brain will have physical limitations. To what extent the physical limitations of the brain are also limitations on the mind is not known.
But let's just say we accept that the brain is some form of neural net - the reality is we're not even close to building a neural net that matches the scale of the brain. That's even before going into how we would train that neural net. In a practical sense, the level of sophistication is not there yet.
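As a rough back-of-envelope on that scale gap - the brain figures are commonly cited estimates (~86 billion neurons, ~100 trillion synapses), and the artificial parameter count is an assumed illustrative value, not a survey of current models:

```python
# Rough back-of-envelope on scale; brain figures are commonly cited
# estimates, the ANN size below is an illustrative assumption.
brain_neurons  = 8.6e10   # ~86 billion neurons
brain_synapses = 1.0e14   # ~100 trillion synapses ~ trainable connections

large_ann_params = 1.0e11  # assume an ANN with ~100 billion weights

print(f"neurons: {brain_neurons:.1e}, synapses: {brain_synapses:.1e}")
print(f"synapse/parameter ratio: {brain_synapses / large_ann_params:.0f}x")
# -> the brain still has ~1000x more connections than such a net,
#    before even considering training signals, timing, or energy budget.
```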

I am an Israeli hacker educated in complex adaptive systems and cybernetics.

It's fun to read this reply from a clever person. HINT

You should try Ray Kurzweil's How to Create a Mind.

Off topic here, but CAST is one of my favourite topics. CAST is one reason why I think co-existing dumb AIs are a bigger risk than the singularity. CAST is also how I look at information flows around parts of a complex system - there are particular patterns that help you segment the system and thus understand it more easily. For example, thinking about a database diagram as a CAST meant I could quickly derive rules for where to look for the conceptual objects the first time I looked at the diagram, and then point out where problems might occur.
It's an extremely helpful conceptual tool. Equilibria, dampening effects, amplifying effects - all good stuff.
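For anyone unfamiliar with those terms, here is a deliberately minimal toy sketch of the dampening/amplifying distinction - a single linear feedback loop and nothing more, with hypothetical gain values chosen just for illustration:

```python
# A toy linear feedback loop: x_{t+1} = gain * x_t.
# |gain| < 1 dampens toward an equilibrium at 0; |gain| > 1 amplifies.
def run(gain, x0=1.0, steps=6):
    xs, x = [x0], x0
    for _ in range(steps):
        x *= gain
        xs.append(round(x, 3))
    return xs

print("dampening:", run(0.5))   # decays toward the equilibrium
print("amplifying:", run(1.5))  # grows step after step
```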

Dumb AIs are indeed a bigger risk; I call dumb AIs the beast system. :)

Once the human becomes the ghost in the machine, it's over for the human.


Ray is one of my heroes. I cautiously tend towards thinking he is probably correct. He is at least correct enough that we should try to create minds as he describes and see where that leads us. The journey will teach us a lot, even though I don't think the destination will be where we think.
There are some big research hurdles to even contemplate what Ray is proposing, though. We're only just starting to work with AI that feeds back into itself (e.g. RNNs); previously our neural networks were sense-response machines that worked in one direction only. And we have engineering problems dealing with things at the scales needed to pull off Ray's brain simulation. But yeah, let's give it a shot.
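A minimal sketch of that "feeds back into itself" point - a toy recurrent cell whose hidden state is an input to its own next step, unlike a purely one-direction feedforward net (sizes and weights here are illustrative, not any particular RNN architecture):

```python
# Toy recurrent cell: the hidden state h feeds back into itself,
# so the response at step t depends on the whole input history.
import numpy as np

rng = np.random.default_rng(1)
W_x = rng.normal(size=(3, 2)) * 0.5   # input -> hidden
W_h = rng.normal(size=(3, 3)) * 0.5   # hidden -> hidden (the feedback)

h = np.zeros(3)                       # initial state
for t, x in enumerate([np.ones(2), np.zeros(2), np.ones(2)]):
    h = np.tanh(W_x @ x + W_h @ h)    # state persists across steps
    print(t, h.round(3))
```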