In his address to the Harvard School of Computer Science a few years ago, Steve Ballmer said: "Machine learning will be the next era of computer science ... I don't think being afraid of any innovation is a good thing." (http://dataconomy.com/2014/11/steve-ballmer-advocates-machine-learning-as-the-next-era-of-computer-science/)
That is ridiculous. Costs and risks accompany every innovation and every new technology, and AI is no different. Fear is a perfectly normal and healthy response to change and uncertainty that carry potential costs and risks, whether the innovation is in pharmaceuticals, nuclear energy, or AI. Just ask thalidomide victims, former residents of Chernobyl, or, within a few years, Uber drivers. AI will displace most workers, and it will be used to kill people in future wars. I am not saying that it can be or should be stopped. But there are clear risks and costs associated with AI. A neural network with the capabilities of a housefly can do serious harm if it is embedded in a malicious device. And once neural networks are used to design neural networks, their future capabilities will become unpredictable.