Uncorrected statistics in brain imaging

I admit I’m prejudiced, and I’ve enjoyed watching the recent blast of public exposure for problems in statistical inference in fMRI, undermining many of the field’s central papers, spurred most centrally by Dorothy Bishop’s spanking of some often-cited works (“Time for neuroimaging (and PNAS) to clean up its act”).

[Image: the xkcd comic on statistical significance]

Daniel Bor (“The dilemma of weak neuroimaging papers”) does a very nice job of explaining the corrected vs. uncorrected stats error, along with offering some chastising of the field. And others have picked on these problems recently: Bennett et al. (“Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction”) made a similar point in their terrific and very funny salmon study, and Vul et al. captured some worries about way-too-high correlations in social-psychology fMRI studies (“Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition”) in a widely circulated paper.

I think it’s clear that this is a pretty big smackdown of something going on regularly in the actual practice of brain imaging studies. But I want to offer a couple of quick, off-the-cuff comments on the underlying biases that, in a way, supported the errors. Bor does a good job of saying “boy, we should be way more careful with our stats” (and who could disagree with that?), but if we don’t understand why we were so careless and mistaken, we won’t really be any better off, even if we fix this particular statistical error.

I believe such errors were so widespread, among so many practitioners who should (and do) know better, because of the reductive biases at work here. If you assume that there are, or must be, regular neural correlates for whatever human activity or behavior you’re interested in (as opposed to thinking such links are often fragile, temporary, holistic, and contextual), then fMRI becomes the telescope or microscope with which we see those things. And failures to see just mean that you haven’t adjusted the focus correctly yet. After all, it takes me a while to get a nice clear focus on the bacteria with my microscope; but the blurry looks before that don’t count against what I see once the focus is set. We assume that there is something to be seen there, and that the correlates we find should be regular and even causal, and so we tweak and twiddle until we see the thing we are looking for: in these cases, a correlation between the target behavior and the brain activity with less than a 5% probability of arising by mere chance. Which, as xkcd so nicely reminds us, is exactly what should turn up about 5% of the time just by chance; run that test independently over tens of thousands of voxels without correcting for multiple comparisons, and spurious “hits” are all but guaranteed.
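To see the arithmetic behind that 5%, here’s a minimal simulation (my sketch, not from the post or any of the papers cited; the voxel and subject counts are arbitrary, and Bonferroni stands in for whatever correction a real analysis would use). Every “voxel” is pure noise, yet testing each at p < .05 without correction reliably “finds” activation in about 5% of them:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical sizes -- my choices for illustration, not from the post.
n_voxels = 10_000   # independent tests, standing in for voxels
n_subjects = 20
alpha = 0.05

# Pure-noise "brain data": no voxel has any real effect.
data = rng.normal(size=(n_voxels, n_subjects))

# One-sample t-test per voxel: is the mean "activation" nonzero?
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=1)

uncorrected_hits = int(np.sum(p_vals < alpha))
bonferroni_hits = int(np.sum(p_vals < alpha / n_voxels))

print(f"Uncorrected 'significant' voxels: {uncorrected_hits} of {n_voxels} "
      f"({100 * uncorrected_hits / n_voxels:.1f}%), all false positives")
print(f"Bonferroni-corrected 'significant' voxels: {bonferroni_hits}")
```

Run it and you should see roughly 500 spurious “active” voxels uncorrected and essentially none after correction, which is the dead salmon’s entire point.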

The point isn’t that we don’t have ways to avoid this particular statistical error (clearly, we do), but that the assumption that behaviors have regular neural correlates with causal significance kept us from seeing and attending to an error we should easily have detected. In our rush and excitement to discover the (assumed-to-be-there) neural correlates (and causes) of human activity (with, I’ll add, no actual model whatsoever of how those brain states are implicated in the causal story of human behavior), we were distracted from giving careful, critical attention to whether we had actually discovered what we thought we had.

The basic moral of the story is not that we carelessly screwed up our statistics; it’s that those screw-ups were completely in keeping with, and likely in part motivated by, our reductionist, localization-fallacy biases about cognitive/behavioral neuroscience and the supposed neural correlates and causes of complex human behavior. It’s only by keeping those assumptions on the surface and explicitly calling them into question that we will do our best to avoid these kinds of mistakes in the future.

15. March 2012 by Ron
Categories: philosophy, Uncategorized