
Computers Learn ‘Learning’, Take A Step Towards Human Intelligence

The scientists’ approach, known as Bayesian Program Learning (BPL) and described in a paper published in the journal Science, represents a remarkable advance in the drive to mimic aspects of human cognition with computer systems – one with far-reaching applications.


Humans and machines were given an image of a novel character (top) and asked to produce new versions.

Above and below you’ll see examples of generative work demonstrating the learning abilities of this new program (Mr. Learning Computer, we’ll call it), taken straight from the original research paper.

Whereas standard pattern-recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns by “explaining” the data provided to the algorithm – in this case, the sample character.

Artificial intelligence researchers at New York University, the University of Toronto in Canada and the Massachusetts Institute of Technology reported Thursday that they have developed an algorithm that captures human-level learning abilities, allowing machines to surpass humans on a narrow set of vision-related tasks.

The researchers challenged the computer to perform a deceptively simple task: identifying, parsing and copying handwritten characters from alphabets around the world.

Armed with that model, the system then analyzed hundreds of motion-capture recordings of humans drawing characters in several different writing systems, learning statistics on the relationships between consecutive strokes and substrokes as well as on the variation tolerated in the execution of a single stroke.
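As a rough illustration of the kind of stroke statistics described above – a toy sketch, not the authors' actual model – the following Python snippet learns bigram transition probabilities between made-up stroke primitives from a few hypothetical example drawings, then samples a fresh stroke sequence for a new character. All primitive names and training data here are invented for illustration.

```python
import random
from collections import defaultdict

START, END = "<start>", "<end>"

def learn_transitions(drawings):
    """Count how often each stroke primitive follows another."""
    counts = defaultdict(lambda: defaultdict(int))
    for strokes in drawings:
        seq = [START] + strokes + [END]
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    # Normalize the counts into conditional probabilities P(next | prev).
    return {
        prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for prev, nxts in counts.items()
    }

def sample_character(transitions, rng):
    """Generate a new stroke sequence by walking the learned chain."""
    strokes, current = [], START
    while True:
        names, probs = zip(*transitions[current].items())
        current = rng.choices(names, weights=probs)[0]
        if current == END:
            return strokes
        strokes.append(current)

# Hypothetical training drawings, each a list of stroke primitives.
drawings = [
    ["down", "arc-right", "dot"],
    ["down", "arc-right"],
    ["down", "cross", "dot"],
]
transitions = learn_transitions(drawings)
print(transitions[START])  # prints {'down': 1.0} – every example begins with a down stroke
print(sample_character(transitions, random.Random(0)))
```

The real model goes much further – it also captures the variation tolerated within a single stroke – but the basic move is the same: turn many observed drawings into statistics that can generate plausible new ones.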

“I feel that this is a major contribution to science, of general interest to artificial intelligence, cognitive science, and machine learning”, says Zoubin Ghahramani, a professor of information engineering at the University of Cambridge.

Building machines that can learn from as little data as humans do has proven to be a hard task.

Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines and a co-author of the article, echoed the sentiment.

Robots may soon be able to pick up new concepts more efficiently.

Researchers pitted humans against machines in trying to draw realistic characters.

This illustration gives a sense of how characters from alphabets around the world were replicated through human vs. machine learning.

“But if you want a system that can learn new words very quickly for the first time that it’s never heard before, we think you’d be best off using the approach we’ve been developing”, Tenenbaum said. The last ingredient, he added, was “learning to learn” – the idea that knowledge of previous concepts can help support the learning of new concepts.

The result, the researchers report, is a breakthrough in artificial intelligence: a machine-learning program that mimics how humans learn.

The authors asked both humans and computers with the algorithm to reproduce a series of handwritten characters after being shown a single example of each character – and then they compared the outputs from both machines and humans.

The research highlights how, for all our imperfections, people are actually pretty good at learning things.

When the researchers compared the computer-generated results with characters produced by humans, they found them to be “mostly indistinguishable”.


“I think for the more creative tasks – where you ask somebody to draw something, or imagine something that they haven’t seen before, make something up – I don’t think that we have a better test”, Tenenbaum told reporters on a conference call.

“Still, the paper is an invaluable reminder that we need methods that can generalize from small numbers of examples, both to model human abilities and to move AI forward”, said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence.

The team focused on a large class of simple visual concepts – handwritten characters from alphabets around the world – building their model to “learn” this large class of visual symbols, and to make generalizations about it, from very few examples. Crucially, they don’t treat characters as static visual objects: “Your representation of one of these visual concepts is itself a program that can generate probabilistically different outcomes”.
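That last quoted idea – a concept represented as a small program that re-runs with variation – can be sketched in a few lines of Python. Everything here (the control points, the Gaussian "motor noise", the function names) is invented for illustration and is not the paper's model; the point is only that calling the concept twice yields two similar but non-identical tokens.

```python
import random

def make_character(control_points, wobble=0.05):
    """Return a generative program for one character concept."""
    def draw(rng):
        # Each token perturbs every ideal control point with a little
        # Gaussian motor noise, so no two tokens are exactly alike.
        return [
            (x + rng.gauss(0, wobble), y + rng.gauss(0, wobble))
            for (x, y) in control_points
        ]
    return draw

# A made-up character: three ideal pen control points.
concept = make_character([(0.0, 0.0), (0.0, 1.0), (0.5, 0.5)])
rng = random.Random(42)
token_a = concept(rng)
token_b = concept(rng)
# Two tokens of the same concept are similar but never identical.
```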
