New AI system mimics how humans visualize and identify objects
UCLA and Stanford University engineers have demonstrated a computer system that can discover and identify the real-world objects it “sees” based on the same method of visual learning that humans use.
The system is an advance in a type of technology called “computer vision,” which enables computers to read and identify visual images. It could be an important step toward general artificial intelligence systems—computers that learn on their own, are intuitive, make decisions based on reasoning and interact with humans in a much more human-like way. Although current AI computer vision systems are increasingly powerful and capable, they are task-specific, meaning that their ability to identify what they see is limited by how much they’ve been trained and programmed by humans.
Even today’s best computer vision systems cannot create a full picture of an object after seeing only certain parts of it—and the systems can be fooled by viewing the object in an unfamiliar setting. Engineers are aiming to build computer systems with those abilities. A human can understand that they are looking at a dog even if the animal is hiding behind a chair and only the paws and tail are visible, and can easily intuit where the dog’s head and the rest of its body are—but that ability still eludes most artificial intelligence systems.
Current computer vision systems are not designed to learn on their own. They must be trained on exactly what to learn, usually by reviewing thousands of images in which the objects they’re trying to identify are labeled for them. Computers, of course, also can’t explain their rationale for determining what the object in a photo represents: AI-based systems don’t build an internal picture or a common-sense model of learned objects the way humans do.
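The dependence on labeled examples described above can be illustrated with a deliberately tiny sketch. This is not the researchers' system, and the feature vectors and labels below are made-up stand-ins for real images; the point is simply that a supervised classifier can only answer with categories it was explicitly given labels for.

```python
# Minimal sketch of supervised classification: a nearest-neighbor
# classifier over labeled "images" (here, toy feature vectors).
# It has no internal model of what a dog *is* -- only labeled examples.
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    return min(train, key=lambda ex: euclidean(ex[0], query))[1]

# Labeled training set: each "image" is a hypothetical feature vector.
train = [
    ((0.9, 0.1, 0.2), "dog"),
    ((0.8, 0.2, 0.1), "dog"),
    ((0.1, 0.9, 0.8), "cat"),
    ((0.2, 0.8, 0.9), "cat"),
]

print(nearest_neighbor(train, (0.85, 0.15, 0.15)))  # a dog-like input
# An input unlike anything in the labeled set (say, a bird) is still
# forced into a known category -- the system cannot say "unknown",
# and it cannot explain its choice.
print(nearest_neighbor(train, (0.5, 0.5, 0.5)))
```

The sketch makes the article's two points concrete: the classifier's vocabulary is fixed by its human-supplied labels, and its "rationale" is just a distance comparison, not a common-sense model of the object.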
The engineers’ new method, described in the Proceedings of the National Academy of Sciences, shows a way around those shortcomings.