RE: AI is stupid: Humans Judgment Matters

I'm not certain that's true. Sure, we don't have a general AI that can act like a human, but there has been astounding development in AI in the last few years. Much of it is due to increased access to data, processing power, and pre-built libraries. Many useful techniques were invented back in the '80s, and it has only recently become apparent how useful they are.
General intelligence might be far off, but current AI is proving very useful and effective, even outperforming humans at some tasks, thanks to dedication and access to data. Humans don't sit around looking at millions of MRIs until they can reliably pick out the ones that show cancer; they learn from far fewer examples.

Astounding progress, for sure. Though identifying something like cancer has a provable measurement to work from, and the AI is identifying the presence of real objects. Quality, being a cultural construct, is not fixed, which makes the predictive value of an AI trained on historical data far less useful.
It's that ability to learn from a few examples that leads the Idealists to think that humans do something quite special.
Here's an AI that is good at judging creativity with the benefit of a ton of hindsight.

It doesn't have to be perfect, though. The question is: given enough data on people's opinions of what quality is, could it differentiate quality well enough to be useful? I'm not certain the answer isn't yes. Perhaps there are quite a few cases where it would fail, but could it bring some quality to the top? Let's say it was built as an upvote bot with 70% accuracy at predicting what a statistically significant portion of the audience considers quality articles. Would that not be helpful?
Not saying it would be worth the time to build, but it would probably be helpful. Also likely a fun project to play around with.
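
For the curious, here's a rough sketch of what the core of such a bot might look like, assuming you had already pulled post bodies and vote outcomes off the chain. The toy dataset, the `should_upvote` helper, and the 0.7 threshold are all illustrative, not anything from the Steem API:

```python
# A minimal sketch of the upvote-bot idea: a bag-of-words classifier
# trained on posts the community has already judged. The toy data below
# stands in for real post bodies and vote outcomes pulled from the chain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data: 1 = the community upvoted it heavily, 0 = it sank.
posts = [
    "In-depth analysis of witness voting, with sources and charts",
    "Original photography with a write-up of technique and location",
    "A detailed tutorial on running a full node, step by step",
    "Long-form essay on curation incentives, carefully argued",
    "follow me i follow back upvote pls",
    "nice post",
    "check out my blog link link link",
    "upvote for upvote spam spam spam",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def should_upvote(post_text, threshold=0.7):
    """Vote only when the model is fairly confident the audience would agree.

    The 0.7 threshold echoes the '70%' figure above, though a probability
    cutoff and held-out accuracy are, strictly speaking, different things.
    """
    p_quality = model.predict_proba([post_text])[0][1]
    return p_quality >= threshold

print(should_upvote("A step-by-step tutorial with detailed analysis and sources"))
```

Most of the real work would be in the labelling: deciding which historical vote patterns count as "a statistically significant portion of the audience thought this was quality".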

Such a bot would be very useful for a particular community to have. Provided the bot is not the only means of assigning rewards within the community, its false positives would not be so bad, particularly if the bot kept learning what the community liked.
Were I to produce a user interface for Steem, this would be my point of difference: AIs that learn what individuals would probably like to see. You can do that from very broad metrics and by observing user behaviour. But this is AI at the level of the tools, and it acts as an assistant to users.
However, if the bot had way too much SP, it would become economically attractive for bad actors to learn how to fool it. So, yeah, I'd rather tune the bot to learn individual preferences to get around that.
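
If I were sketching that assistant, it could be as small as one lightweight online model per account, nudged each time the user votes on or scrolls past a post. Everything below is illustrative: `record_interaction` and `rank_feed` are hypothetical helpers, not part of any Steem client, and a real version would feed in those broad metrics (author, tags, payout history) alongside the post text:

```python
# A sketch of the per-user assistant: one small online model per account,
# updated from observed behaviour (voted or read fully = liked, skipped = not).
from collections import defaultdict
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# HashingVectorizer is stateless, so it needs no fitting and suits online updates.
vectorizer = HashingVectorizer(n_features=2**18)
user_models = defaultdict(lambda: SGDClassifier(loss="log_loss"))

def record_interaction(user, post_text, liked):
    """Nudge this user's model after each observed action."""
    X = vectorizer.transform([post_text])
    user_models[user].partial_fit(X, [1 if liked else 0], classes=[0, 1])

def rank_feed(user, candidate_posts):
    """Order a user's feed by their own model's predicted interest."""
    model = user_models[user]
    if not hasattr(model, "coef_"):  # no observations yet: leave the order alone
        return list(candidate_posts)
    scores = model.predict_proba(vectorizer.transform(candidate_posts))[:, 1]
    return [post for _, post in sorted(zip(scores, candidate_posts), reverse=True)]

record_interaction("alice", "a detailed tutorial on running a node", liked=True)
record_interaction("alice", "follow me follow back nice post", liked=False)
print(rank_feed("alice", ["nice post pls upvote", "step by step node tutorial"]))
```

Per-user models also blunt the economics of gaming: instead of one big bot with a pile of SP behind it, an attacker would have to fool thousands of small models, none of which controls much stake on its own.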