Teaching computers common sense

Abhinav Gupta stands near one of the computer clusters used in his research at one of the computer server areas on campus at Carnegie Mellon University in Pittsburgh.

Published Nov 25, 2013

Pittsburgh - Researchers are trying to plant a digital seed for artificial intelligence by letting a massive computer system browse millions of pictures and decide for itself what they all mean.

The system at Carnegie Mellon University is called NEIL, short for Never Ending Image Learner. In mid-July, it began searching the Internet for images 24/7 and, in tiny steps, is deciding for itself how those images relate to each other. The goal is to recreate what we call common sense — the ability to learn things without being specifically taught.
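In outline, that means a loop that never terminates: fetch images, label them, and fold the new labels back into a growing knowledge base. A minimal sketch of such a crawl-and-learn cycle is below; the three callables are hypothetical stand-ins for illustration, not NEIL's actual interfaces.

```python
import time

def never_ending_loop(crawl_batch, classify, update_knowledge):
    """Skeleton of a 24/7 'crawl, see, learn' cycle.

    All three callables are hypothetical stand-ins:
    crawl_batch() fetches a batch of web images, classify() labels
    one image, and update_knowledge() folds the new labels into a
    growing knowledge base.
    """
    while True:
        images = crawl_batch()                      # crawl: fetch web images
        labels = [classify(img) for img in images]  # see: detect objects/scenes
        update_knowledge(labels)                    # learn: revise associations
        time.sleep(1)                               # pace the crawl politely
```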

It's a new approach in the quest to solve computing's Holy Grail: getting a machine to think on its own using a form of common sense. The project is being funded by Google and the Department of Defense's Office of Naval Research.

“Any intelligent being needs to have common sense to make decisions,” said Abhinav Gupta, a professor in the Carnegie Mellon Robotics Institute.

NEIL uses advances in computer vision to analyse and identify the shapes and colours in pictures, but it is also slowly discovering connections between objects on its own. For example, the computers have figured out that zebras tend to be found in savannahs and that tigers look somewhat like zebras.
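One way to picture how such associations can emerge: if each image is tagged with the objects and scene a vision system detects, a relationship like "zebra can be found in savannah" falls out of simple co-occurrence statistics. The sketch below illustrates that idea on made-up data; the sample annotations and the 75% threshold are assumptions for illustration, not NEIL's actual data or algorithm.

```python
from collections import Counter

# Hypothetical per-image annotations: the scene label and object labels
# a vision system produced for each image (illustrative data only).
images = [
    {"scene": "savannah", "objects": ["zebra", "acacia"]},
    {"scene": "savannah", "objects": ["zebra", "lion"]},
    {"scene": "jungle",   "objects": ["tiger"]},
    {"scene": "savannah", "objects": ["zebra"]},
]

object_counts = Counter()   # how often each object appears overall
pair_counts = Counter()     # how often an object appears in a given scene

for img in images:
    for obj in img["objects"]:
        object_counts[obj] += 1
        pair_counts[(obj, img["scene"])] += 1

# Propose "X can be found in Y" when X shows up in scene Y in at least
# 75% of the images containing X (threshold chosen arbitrarily here).
for (obj, scene), n in pair_counts.items():
    if object_counts[obj] >= 3 and n / object_counts[obj] >= 0.75:
        print(f"{obj} can be found in {scene}")  # -> zebra can be found in savannah
```

Statistics like these are only as good as the images behind them, which hints at why a system learning this way can also land on skewed associations.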

In just over four months, the network of 200 processors has identified 1 500 objects and 1 200 scenes and has connected the dots to make 2 500 associations.

Some of NEIL's computer-generated associations are wrong, such as “rhino can be a kind of antelope,” while some are odd, such as “actor can be found in jail cell” or “news anchor can look similar to Barack Obama.”

But Gupta said having a computer make its own associations is an entirely different type of challenge from programming a supercomputer to do one thing very well, or fast. For example, in 1985, Carnegie Mellon researchers programmed a computer to play chess; 12 years later, a computer beat world chess champion Garry Kasparov in a match.

Robert Sloan, an expert on artificial intelligence and head of the Department of Computer Science at the University of Illinois, Chicago, said the NEIL approach could yield interesting results because just using language to teach a computer “has all sorts of problems unto itself.”

“What I would be especially impressed by is if they can consistently say 'zebra, zebra, zebra' if they see the animal in different locations,” Sloan said of the computers.

Gupta is pleased with the initial progress. In the future, NEIL will analyse vast numbers of YouTube videos to look for connections between objects.

“When we started the project, we were not sure it would work,” he said. “This is just the start.”

Neither Mountain View, Calif.-based Google nor the Office of Naval Research responded to questions about why they're funding NEIL, but there are some hints. The Naval Research website notes that “today's battlespace environment is much more complex than in the past” and that “the rate at which data is arriving into the decision-making system is growing, while the number of humans available to convert the data to actionable intelligence is decreasing.”

In other words, computers may make some of the decisions in future wars. The Navy's website notes: “In many operational scenarios, the human presence is not an option.”

NEIL's motto is “I Crawl, I See, I Learn,” and the researchers hope to keep NEIL running forever. That means the computer might get a lot smarter. - Sapa-AP
