Hey, @neavvy. How you doin' buddy?
Thanks for bringing your post to my attention.
Going through your posts made me ask myself several questions about morality and humanity that, I think, are a long way from even being acknowledged, let alone answered:
- Can we really believe what AI suggests to us? Can AIs be influenced by the thoughts or actions of the people who created or hired them?
- How far should we allow AIs to penetrate our lives? And will that even matter?
- Should there be a universal regulating authority keeping a constant check on AI development, 'cause there sure seems to be an "anything goes" mentality when it comes to building these systems?
I am sure these questions will spark huge debate and moral reflection, but only if questions like these are asked will there be any form of collective understanding.
Great article buddy. Keep going.
Thank you for your comment, dear @reverseacid. I really appreciate it a lot. I'm sorry for such a late reply, but I was on holiday and had very limited access to the Internet.
Yes, at the current stage of development, AI's perception of reality strictly depends on the "settings" made by the person who created it.
As little as possible.
I think something like that is a must-have nowadays. Although AI does not currently pose any threat to humanity, I think its development should be constantly monitored.