The slippery slope of AI

in #science · 7 years ago


Source: technologyreview.com

Over the past few years we have witnessed amazing advances in the use of AI (artificial intelligence), such as self-driving cars, programs that assist doctors in diagnosing patients, OCR (optical character recognition), cryptocurrencies and, don't forget, HFT (high-frequency trading). There are many more that are not included in this list.

The one main issue is: who is in control of, or oversees, the conclusions arrived at by the AI algorithm? The answer for now is 'no one'. This is because the coder understands how the algorithm uses the parameters set forth in the program but cannot predict the outcome.

Let's attempt to understand an HFT program in a very simplified, hypothetical case. The program is instructed to buy a stock if the price hits a certain level and to continue buying until the price moves off that target price. Logically this makes sense: as the program buys more stock, the price should naturally rise above the target price, causing the program to stop buying. Conversely, the program sells a stock when the target price is reached, which causes the price to fall and the program to stop selling.

Now consider the situation where the program is in buying mode and 'bad news' comes out. The program is not aware of the bad news, but the human traders are. The human traders will sell the stock, and this will inhibit the rise in price. Since the price does not move off the target price, the program will continue to buy. If we project this logic, the entity employing the HFT will end up with a boatload of bad stock. Understand that this is very simplified; I tend to believe a real algorithm would take in some current information about the stock's company. I just wanted to point out that AI programs have shortcomings. Recall the saying 'garbage in, garbage out'.
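The hypothetical buy rule above can be sketched in a few lines of code. This is a toy illustration only, not a real trading system; the function name, the one-lot-per-tick buying, and the price tolerance are all my own assumptions made up for the example.

```python
# A minimal sketch (hypothetical) of the simplified buy rule described above:
# buy while the price sits at the target, stop once it moves off the target.

def run_buy_program(prices, target, tolerance=0.01):
    """Buy one lot per tick while the observed price stays at the target.

    prices:    sequence of observed prices, tick by tick
    target:    the trigger price for the buy rule
    tolerance: how close to the target still counts as "at" the target
    """
    shares = 0
    for price in prices:
        if abs(price - target) <= tolerance:
            shares += 1   # price is at the target: keep buying
        else:
            break         # price moved off the target: stop buying
    return shares

# Normal case: the program's own buying pushes the price up,
# so it stops after only a couple of lots.
print(run_buy_program([100.0, 100.0, 100.2], target=100.0))  # 2

# 'Bad news' case: human selling pins the price at the target,
# so the program keeps accumulating stock it shouldn't want.
print(run_buy_program([100.0] * 50, target=100.0))  # 50
```

The failure mode is visible in the second call: nothing in the rule itself knows about the news, so as long as outside selling holds the price at the target, the loop just keeps buying.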

Let's turn our attention to 'self-learning' AI programs. Since the code is mostly proprietary, we are not able to precisely determine what constitutes an error (in the AI's logic) or how the algorithm is altered to correct the error. Since the AI program independently corrects its own code, the program is the only one knowledgeable about the change. Where is the oversight if the program errs in correcting the supposed error?

I am not taking the position that all improvements arising from the growth in AI are bad. I am concerned about our increasing dependence on AI systems.



It is amazing to see the advances. Google's Go bot was in the news. Now it learns simply by playing itself!!! Scary. It played almost 5 million games of Go against itself in 3 days.
http://www.npr.org/sections/thetwo-way/2017/10/18/558519095/computer-learns-to-play-go-at-superhuman-levels-without-human-knowledge

Thank you for bringing up that article, I missed it. Some of the advancements in AI are truly amazing.

Indeed. It is especially true in this case that takes the learning from "watching" humans play out of the equation. Amazing and scary all at the same time!

Scary indeed. That's my biggest fear. Did you ever see 'Westworld'? I know that was just fiction; my fear is that it was actually faction.

Never saw it but you are in good company with your concerns...like Elon Musk and Stephen Hawking.

In short, 'Westworld' was about a fantasy park staffed by automatons. Patrons could fulfill their dreams (e.g. a duel at high noon). The robots went awry and started killing the patrons.

I'm not sure if I like being in the company of Elon but I'm okay with Stephen.

Thank you for the good post..

You are very welcome. Thanks for reading.

They need to be installing kill switches in all AI robots lol, but I guess the AI would realize one has been installed and override or deactivate it.

That is a good idea. You're probably right about the AI deactivating it. Thanks for your comment.