Elon Musk, Bill Gates, and just about everyone else in technology have warned that AI could take over the world and wipe humanity off the face of the Earth. And Google's AI guru recently predicted that humans and machines will merge within 20 years! No wonder: today we see an unprecedented surge in product development using machine learning and AI, from frivolous apps like a Netflix-show-inspired tool that turns a photo into a scary poster, to remarkable developments like Airbnb's recent AI that transforms design sketches into product source code.
Interestingly, a market as large as online reviews has seen very little AI at all, even though it would seem to be a natural breeding ground for it. Impartiality has long been the industry's biggest pain point: how does one distinguish a genuine review from a fake one? Human common sense often fails at the task. Recent studies suggest that deep learning language models (recurrent neural networks, or RNNs) can easily evade human detection: well-designed neural networks are now capable of producing realistic online reviews, and a user study showed that such fake reviews consistently avoid detection by real users. In other words, there are AI programs trained to generate fake reviews that are indistinguishable from real reviews written by humans. On the defensive side, a group of researchers at the University of Chicago[1] recently trained a neural network that can detect machine-generated reviews with high accuracy.
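One intuition behind such defenses is that machine-generated text tends to have a subtly different low-level character distribution than human writing. The sketch below illustrates that idea in miniature by comparing a candidate review's character frequencies against a reference corpus of trusted reviews; the corpora, threshold, and scoring are all hypothetical simplifications, not the Chicago researchers' actual method.

```python
from collections import Counter
import math

def char_distribution(text):
    """Normalized character-frequency distribution of a text."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL divergence D(p || q) over the union of both character sets,
    with a small epsilon to avoid log(0) on unseen characters."""
    chars = set(p) | set(q)
    return sum(
        p.get(ch, eps) * math.log(p.get(ch, eps) / q.get(ch, eps))
        for ch in chars
    )

# Hypothetical data: a reference set of trusted human reviews
# and a candidate review to score against it.
human_reference = "Great food and friendly staff. The pasta was excellent."
candidate = "The food is great. The food is great. The staff is great."

score = kl_divergence(char_distribution(candidate),
                      char_distribution(human_reference))
# The further the candidate drifts from the human reference
# distribution, the more suspicious it is.
print(round(score, 3))
```

A real system would of course build the reference distribution from millions of reviews and learn the decision boundary rather than hand-pick a threshold, but the core signal, statistical divergence from human writing, is the same.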
In view of these advances, the Revain development team has brought AI into its platform as well. Its first and foremost function is the first filtration stage: every submitted review goes through automatic moderation powered by the state-of-the-art IBM Watson AI, built on machine learning and neural networks. Similar systems are being deployed by major companies such as Instagram. Here is how it works:
Thanks to the Tone Analyzer feature of IBM Watson, Revain can automatically determine the emotional component of a review. Take an obviously biased comment like this one: 'Oh, this place sucks, such a bullshit, never ever again shall I be back here!!' Tone Analyzer will instantly detect suspicious features and flag the review as likely fake, using criteria such as Anger, Disgust, and Sadness levels, Language Style, and Social Tendencies. Reviews like this never even reach the second, manual verification level.
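The filtering logic described above can be sketched as a simple threshold check over per-tone scores. The tone names, score values, and thresholds below are hypothetical placeholders in the general shape a tone-analysis service such as IBM Watson Tone Analyzer might return; this is an illustration of the two-stage idea, not Revain's actual implementation.

```python
# Hypothetical per-tone suspicion thresholds on scores in [0, 1].
SUSPICIOUS_TONES = {"anger": 0.75, "disgust": 0.70, "sadness": 0.80}

def first_stage_filter(tone_scores):
    """First, automatic moderation stage.

    Returns (passed, reasons): a review whose emotional tone exceeds
    any per-tone threshold is rejected with the offending tones
    listed; only passing reviews would move on to the second,
    manual verification stage.
    """
    reasons = [tone for tone, threshold in SUSPICIOUS_TONES.items()
               if tone_scores.get(tone, 0.0) > threshold]
    return (len(reasons) == 0, reasons)

# Example: an angry, disgusted rant is stopped at stage one,
# while a calm review passes through to human moderators.
print(first_stage_filter({"anger": 0.91, "disgust": 0.84}))
print(first_stage_filter({"joy": 0.65, "anger": 0.10}))
```

In production one would tune the thresholds on labeled data and combine tone with other signals (language style, account history), but the gatekeeping pattern stays the same: cheap automatic checks first, expensive human review second.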
But how much autonomy is too much for a machine trained by a human? It is clear that in some circumstances and industries we will have to let go of the reins. Deploying AI in the online reviews market has great potential to serve users well, and decision-making power is of real importance here: letting the algorithm decide whether a review is genuine saves time and effort while improving both the efficiency of a review platform and the health of the reviews ecosystem in general.
[1] Automated Crowdturfing Attacks and Defenses in Online Review Systems. University of Chicago, 2017