DURHAM, N.C. — One of the noblest characteristics of scientists is that they constantly question their own work and the work of their peers. Researchers need to make sure their findings are valid and “reproducible” — that is, that the same experiment, performed again, will yield the same results. Scientists from Duke University have discovered a reproducibility issue with brain scans that calls into question many studies on brain activity. They admit the problem even conflicts with most of the team’s own past research.
Research scientists often use functional magnetic resonance imaging (fMRI) to measure the brain activity of individuals as they perform tasks. The technology has been in use since the 1990s. Many studies using fMRI claim it’s possible to predict how an individual will behave in a task from the activity of their brain while it’s being scanned.
After performing robust statistical analyses on more than 1,000 fMRI brain scans, the Duke researchers no longer think this can be done. They say that fMRI studies are great for identifying the general brain structures involved when people are performing an activity, but not for predicting an individual’s behavior.
“Scanning 50 people is going to accurately reveal what parts of the brain, on average, are more active during a mental task, like counting or remembering names,” says senior author Ahmad Hariri, a professor of psychology and neuroscience at Duke University, in a university release.
The Problem With Brain Scans Revolves Around Blood Flow
The trouble comes when analyzing an individual’s fMRI. The scans measure blood flow in the brain to identify the areas demanding more oxygen and energy, because those areas are working harder to complete a task. This measure is useful for revealing the general brain structures involved in a mental task, since researchers can pool the fMRI scans of many individuals to reach a conclusion.
However, an individual’s level of blood-flow activity is likely to differ from scan to scan. That makes the imaging an unreliable measure for predicting a person’s future behavior.
Surprising Findings, Controversial Conclusion
In the meta-analysis, the authors combined the fMRI data from 90 experiments they performed in the past so they could analyze the data of all 1,008 people studied at the same time. In these experiments the researchers had taken at least two fMRI scans of each individual. The results show that “the correlation between one scan and a second is not even fair, it’s poor,” according to Hariri.
Next, the team performed a similar analysis on two scans of 45 individuals, taken about four months apart, from the “Human Connectome Project,” an open-access database of fMRI scans. Hariri calls it: “Our field’s Bible at the moment.” They found that the correlation between the two brain scans was weak for six out of seven measures; the correlation for the language processing measure was fair.
Lastly, the team reanalyzed the data collected through a New Zealand study that includes two fMRI scans from 20 individuals taken two to three months apart. In that review, the correlation from one scan to the next in an individual was poor.
“The bottom line is that task-based fMRI in its current form can’t tell you what an individual’s brain activation will look like from one test to the next,” says Hariri, who voices his personal frustration since the findings of this meta-analysis negate a lot of his own research. “This is more relevant to my work than just about anyone else’s! This is my fault. I’m going to throw myself under the bus. This whole sub-branch of fMRI could go extinct if we can’t address this critical limitation.”
How Can Researchers Use Brain Scans Moving Forward?
Hariri and his team have been using fMRI scans of 1,300 individuals to try to find biomarkers of individual differences in the way people process thoughts and emotions. Now, he feels he cannot move forward with this study, since a second scan of the same person would likely look quite different.
These types of meta-analyses are important because they tell scientists whether they are moving in the right direction. “This is a good wakeup call,” says Russell Poldrack, the Albert Ray Lang Professor of Psychology at Stanford University, who was a co-author on one of the papers reanalyzed.
Despite their conclusion, Hariri and Poldrack aren’t entirely discouraged. They know their study’s debunking of a common research tool in psychology and neuroscience may draw negative reactions from their peers. “There’s three things you can do,” Poldrack says. “You can just up and quit, you can stick your head in the sand (and act as if nothing has changed), or you can dig in and try to solve the problems.” Hariri and his team are already digging in to try to solve the problems.
The study is published in Psychological Science.