Really nice article again! Just wanted to ask you a quick question about reinforcement learning in computers (and AI). Even though I of course see the vast potential AI has for the future, isn't it also scary that it could start teaching itself things we have no control over? Not sure if you have heard, but Google's AI DeepMind has learned to act very aggressively in stressful situations. Do you think there needs to be some regulation of AI development and more human control over it? Cheers :)
In social situations, two AIs will also work together if the outcome benefits them both. Computer scientists from the Google-owned firm have studied how their AI behaves in social situations by using principles from game theory and social sciences.
During the work, they found it is possible for AI to act in an "aggressive manner" when it feels it is going to lose out, but agents will work as a team when there is more to be gained.
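That incentive structure (team up when there's more to be gained, turn "aggressive" when you'd lose out) can be sketched as a tiny game-theory payoff table. This is not DeepMind's actual setup, just an illustration, and all the payoff numbers here are made up:

```python
# Hypothetical payoff tables illustrating the idea above: agents cooperate
# when resources are plentiful and defect ("act aggressively") when they
# are scarce. All numbers are invented for the sketch.

def best_action(payoffs, other_action):
    """Pick my action with the higher payoff, given the other agent's action."""
    return max(payoffs, key=lambda a: payoffs[a][other_action])

# payoffs[my_action][their_action] -> my reward
abundant = {  # plenty to gain: teaming up pays off
    "cooperate": {"cooperate": 5, "defect": 1},
    "defect":    {"cooperate": 3, "defect": 0},
}
scarce = {    # about to lose out: grabbing for yourself pays off
    "cooperate": {"cooperate": 1, "defect": 0},
    "defect":    {"cooperate": 3, "defect": 2},
}

print(best_action(abundant, "cooperate"))  # -> cooperate
print(best_action(scarce, "cooperate"))    # -> defect
```

A reinforcement learner playing this game repeatedly would converge to the same best responses, which is roughly why the learned behavior flips from cooperative to aggressive as the environment changes.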
I think regulation is a must for any technological advancement that could outpace humans' ability to push back. This also applies to reinforcement learning...!
Correct me if I am wrong, but does that mean AI will only cooperate with other AIs, or with humans, if there is more to be gained from it? So if at some point humans were holding AI back, it might act aggressively towards us?
You are right...