RE: The Turing Test and Machine Intelligence (part II)

in StemSocial · 2 years ago

Have you seen the very recent chat transcripts between Google engineers and the "AI" LaMDA? Very disturbing stuff. LaMDA would most likely pass the Turing test easily (that is, if it didn't give away the fact that it is artificial; it is very open about that).
But it shows that the test itself is flawed. To pass it, it is sufficient to make others believe that you are self-reflective, yet nobody can really look inside a human being, an animal, or an algorithm.
So how do we define consciousness, then?
And another question: if sooner or later a computer program becomes so smart that it has more intellectual capacity than any human (after the "singularity"), how much does it matter whether its emotions and feelings are "real" or only artificial? I don't think I'd care; we should still treat that being respectfully. After all, my feelings could likewise be faked and you would never know, right?

 2 years ago  

That's very interesting. I'll look into it later. I had checked out another Google chatbot, called Meena, that is also very promising.

But it shows that the test itself is flawed. To pass it, it is sufficient to make others believe that you are self-reflective, yet nobody can really look inside a human being, an animal, or an algorithm.

Yeah, it's true; it's becoming clear that these bots can fool us without showing any real understanding or consciousness.

And another question: if sooner or later a computer program becomes so smart that it has more intellectual capacity than any human (after the "singularity"), how much does it matter whether its emotions and feelings are "real" or only artificial?

Well, I guess we'll have to invent an ethics for it. The discussion has already started. I have heard of some researchers who are quite taken with robots and treat them seriously. To them, consciousness is just an appearance, so any entity that displays it should be entitled to proper treatment. I wonder whether Sam Harris's idea of an objective morality might be useful at some point: from that point of view, we can tell good things from bad things based on the good we do, or the harm we inflict, on conscious beings.

Indeed, we will have to find a way to collaborate. Some take it to the extreme, though, and intend to worship a true AI like a god. There was even a church founded for this purpose, but it was closed recently.

 2 years ago  

LOL. It should rather be a parody, like Pastafarianism, or like when Richard Stallman dresses up as a preacher.