BALTIMORE — Many people call the human brain the ultimate computer. Until now, scientists thought there were clear differences in how these thinking machines “see” the world. Researchers from Johns Hopkins University say it’s actually “spooky” how similar artificial intelligence and the human mind function. Their study reveals both actually detect 3D objects in the same way.
Neuroscience professor Ed Connor says our brains detect three-dimensional shapes like bumps, hollows, shafts, and spheres in the early stages of object vision. It turns out that artificial intelligence networks, which are trained to recognize visual objects, apparently do the same thing.
The study reveals that neurons in the brain’s V4 region, an early stage of the brain’s object vision pathway, identify not only 2D shapes but 3D ones too. The Johns Hopkins team then uncovered an almost identical system among the artificial neurons of AlexNet, a deep neural network for computer vision, which detects 3D shape at an early stage called layer 3.
Study authors say that recognizing 3D shape early helps the brain interpret what kind of real-world object it is seeing.
“I was surprised to see strong, clear signals for 3D shape as early as V4,” says Connor, director of the Zanvyl Krieger Mind/Brain Institute, in a university release. “But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels.”
AI ‘most promising models for understanding the brain’
Researchers say one of the constant obstacles facing artificial intelligence is recreating human vision. With AlexNet, scientists are getting closer by using high-capacity Graphics Processing Units (GPUs), typically seen in gaming computers, which help handle the flood of image data the networks are trained on.
The team ran the same image-response tests on both natural and artificial neurons. The results show that neurons in V4 and units in AlexNet’s layer 3 handle visual information in remarkably similar ways.
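The kind of comparison described above can be sketched in a few lines. The data and analysis choice here are purely illustrative assumptions (synthetic responses and a simple Pearson correlation), not the study’s actual methods: the idea is that if a biological neuron and an artificial unit respond to the same set of stimuli in a similar pattern, their response profiles will be strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a V4 neuron and an AlexNet layer-3 unit are shown
# the same stimuli, and we record one response per stimulus.
# All values here are synthetic stand-ins.
n_stimuli = 200
shape_signal = rng.normal(size=n_stimuli)  # stand-in for 3D-shape content

# Each "unit" responds to the shared shape signal plus independent noise.
v4_responses = shape_signal + 0.3 * rng.normal(size=n_stimuli)
alexnet_responses = shape_signal + 0.3 * rng.normal(size=n_stimuli)

# Pearson correlation between the two response profiles: a high value
# means both units are tuned to the stimulus set in a similar way.
r = np.corrcoef(v4_responses, alexnet_responses)[0, 1]
print(f"response-profile correlation: {r:.2f}")
```

In this toy version the correlation comes out high because both units share the same underlying tuning signal, which is the intuition behind calling the two systems “spookily” similar.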
“Artificial networks are the most promising current models for understanding the brain. Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence,” Connor says.
The Johns Hopkins team adds that their research may lead to future studies on how AI and the human brain are slowly becoming one and the same.
The study appears in the journal Current Biology.