
Basically, when looking at ML (and Google DeepMind), just think of it as mirroring how we learn. It isn't exact, but that is the simplest framework.

The neural network is set up to mimic the synapses in our brains. As data is fed in, it is reinforced through repetition, similar to humans. You might not remember something you hear once, but by the 50th time, you get it.

These models are the same way.
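The repetition idea can be sketched in a few lines of plain Python. This is a toy illustration, not how any production model actually trains: a single "synapse" (one weight) is nudged toward a target a little on each exposure, so one pass leaves it far off while many passes lock it in. All names here (`train`, the learning rate `lr`) are made up for the example.

```python
def train(repetitions, lr=0.1):
    """Toy one-weight learner: repeated exposure strengthens the 'synapse'."""
    w = 0.0               # initial synapse strength: no knowledge yet
    x, target = 1.0, 2.0  # the "fact" to learn: input 1.0 should map to 2.0
    for _ in range(repetitions):
        error = w * x - target   # how wrong the current guess is
        w -= lr * error * x      # reinforce: nudge the weight toward the target
    return w

print(train(1))   # after one exposure: still far from the target
print(train(50))  # after 50 exposures: very close to the target
```

After a single repetition the weight has barely moved; after fifty it has essentially converged, which is the "hear it once vs. hear it fifty times" analogy in miniature.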

That said, they are limited, at the moment, to context. Their understanding of the world is limited since they reside in a box.

Thank you for this lucid explanation.

I appreciate it.

The next level, many believe, is context through embodied AI, i.e., models in robots and cars (things that move around and have a spatial component).

I am not sure about the question on self-driving cars.