Google DeepMind's Developments with Neural Networks
The world is a confusing place, particularly for an AI. However, a neural network developed by British AI firm DeepMind gives computers the ability to understand how objects relate to each other, and this could help bring the world into focus.
Humans use this sort of logical thinking – known as relational reasoning – all the time, whether we are choosing the biggest steak at the market or piecing together evidence from a crime scene. The ability to transfer abstract relations – such as whether one thing is to the left of another, or larger than it – from one domain to another gives us a powerful mental toolset with which to understand the world. “It’s a fundamental part of our intelligence,” says Sam Gershman, a computational neuroscientist at Harvard.
What’s intuitive for humans is incredibly difficult for machines to grasp. It is one thing for an AI to learn how to perform a specific task, such as recognising what is in an image, but transferring skills learned via image recognition to textual analysis – or any other reasoning task – is a massive challenge. Machines capable of such versatility across tasks would be one step closer to general intelligence, the kind of smarts that lets humans shine at many different activities.
DeepMind has designed a neural network that specialises in this type of abstract reasoning and can be plugged into other neural nets to give them a relational-reasoning upgrade. The researchers trained the AI on pictures depicting three-dimensional shapes of various sizes and colours. It analysed pairs of objects in the pictures and tried to work out the relationship between them.
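The pairwise idea described above can be sketched in code. The following is a minimal, untrained illustration (not DeepMind's implementation): one small network `g` scores every ordered pair of object features, the scores are summed, and a second network `f` maps the sum to answer logits. All sizes, parameter names, and the random weights are hypothetical, chosen only to make the sketch runnable.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Two-layer perceptron with ReLU, standing in for both g and f.
    h = np.maximum(0, x @ W1 + b1)
    return h @ W2 + b2

def relation_network(objects, g_params, f_params):
    """Score every ordered pair of objects with g, sum the pairwise
    outputs, then map the pooled sum through f to answer logits."""
    n = len(objects)
    pair_feats = [np.concatenate([objects[i], objects[j]])
                  for i in range(n) for j in range(n)]
    g_out = mlp(np.stack(pair_feats), *g_params)  # (n*n, hidden)
    pooled = g_out.sum(axis=0)                    # order-invariant pooling
    return mlp(pooled[None, :], *f_params)[0]     # one score per answer

rng = np.random.default_rng(0)
d, h, n_answers = 8, 16, 4                        # hypothetical sizes
g_params = (rng.normal(size=(2 * d, h)), np.zeros(h),
            rng.normal(size=(h, h)), np.zeros(h))
f_params = (rng.normal(size=(h, h)), np.zeros(h),
            rng.normal(size=(h, n_answers)), np.zeros(n_answers))

objects = rng.normal(size=(5, d))                 # five "objects" from an image
logits = relation_network(objects, g_params, f_params)
```

Because the module sums over all pairs, its output does not depend on the order in which objects are listed, which is what lets the same component be bolted onto different upstream networks.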
The team then asked it questions like “What size is the cylinder that is left of the brown metal thing that is left of the big sphere?” The system answered these questions correctly 95.5 percent of the time – slightly better than humans. To demonstrate its versatility, the relational reasoning part of the AI then had to answer questions about a collection of short stories, responding correctly 95 percent of the time.
Still, any practical applications of the system are a long way off, says Adam Santoro at DeepMind, who led the study. It could eventually be valuable for computer vision, however. “You can imagine an application that automatically describes what is happening in a particular image, or even video for a visually impaired person,” he says.
Outperforming humans at a narrowly defined task is not surprising, says Gershman. We are still a long way from computers that can understand the jumbled world we live in, and Santoro agrees. DeepMind’s AI has begun this huge task by understanding variations in size, colour, and shape, but there is far more to relational reasoning than that. There is a lot more work ahead for the team.