The Challenge of fMRI Interpretation

July 27, 2010

Functional Magnetic Resonance Imaging (fMRI) is one of the most popular imaging modalities used to investigate the brain.  A 2008 review of fMRI relates that, at the time of its publication, over 19,000 peer-reviewed fMRI articles had been published, with new publications appearing at a rate of approximately eight per day in 2007.  For all its popularity, however, a number of issues surround fMRI, including not only the temptation to use dubious statistical methods to ensure finding some result (whether through a lack of statistical understanding or a wilful disregard for scientific ethics) but also the more fundamental question of just what can be concluded from changes in the fMRI BOLD signal.  Given that the aforementioned 2008 review estimated that approximately 43% of fMRI studies were involved in “functional localization and/or cognitive anatomy associated with some sort of cognitive task or stimulus”, the interpretation of fMRI results with respect to cognition is clearly an important question.

The question of the meaning of the BOLD response has once more come to prominence, thanks to a recent Nature paper by Lee et al. that combines the promising new optogenetic technique with fMRI.  Optogenetics involves introducing light-activated transmembrane ion channels into targeted cells via a viral vector.  The virus can be tailored to target specific cell types, allowing much finer control over neuronal activation than other techniques such as implanted electrodes.  Mo Costandi’s excellent (and more timely) review has already been posted, but a few aspects of the study deserve to be specifically highlighted, so I will briefly go through the methods and results again.

Lee et al. first injected the virus into the motor cortex of mice, targeting excitatory neurons.  Following the injection, they anaesthetized the mice and optically stimulated the targeted neurons while gathering fMRI readings.  Nicely fitting expectations, the increased activity of the optically stimulated neurons resulted in a corresponding increase in the BOLD signal 3-6 seconds after stimulation onset, while there was no change in BOLD signal for control mice injected with a saline solution.  The BOLD signal began to drop within 6 seconds of stimulus cessation, returning to baseline after approximately 20 seconds.  The researchers next repeated the experiment, this time targeting inhibitory neurons.  Once again, optically driving the infected neurons resulted in an increase in the BOLD signal, but this time the zone of increased activity was surrounded by a smaller area of decreased BOLD response.  Importantly, as Chris Chatham rightly points out, the result of optically driving inhibitory neurons is still a net increase in the BOLD signal.  Unfortunately, much of this aspect of the experiment was relegated to the supplementary information, leading to an incorrect report by Scicurious that optically driving inhibitory neurons actually decreased the BOLD response.
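The time course reported above closely matches the canonical “double-gamma” haemodynamic response function commonly used to model BOLD data: a gamma-shaped rise peaking around five seconds, followed by a shallow undershoot.  Here is a minimal sketch of such a model; the function and its parameter values are the conventional modeling defaults (an assumption of mine for illustration, not anything taken from Lee et al.):

```python
import math

def gamma_pdf(t, shape, scale=1.0):
    """Gamma probability density; defined as zero for t <= 0."""
    if t <= 0:
        return 0.0
    return (t ** (shape - 1) * math.exp(-t / scale)) / (math.gamma(shape) * scale ** shape)

def hrf(t, peak_shape=6.0, undershoot_shape=16.0, ratio=1 / 6):
    """Canonical double-gamma haemodynamic response to a brief stimulus:
    a positive gamma peak minus a smaller, later gamma undershoot."""
    return gamma_pdf(t, peak_shape) - ratio * gamma_pdf(t, undershoot_shape)

# sample the model response over 30 seconds
ts = [i * 0.1 for i in range(300)]
vals = [hrf(t) for t in ts]
peak_time = ts[vals.index(max(vals))]
print(f"modelled BOLD response peaks ~{peak_time:.1f} s after stimulus onset")
```

With these default parameters the modelled peak falls within the 3-6 second window reported in the study, and the response dips below baseline afterwards before settling back to zero.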

While the study showed that increased neuronal activity reliably evokes a BOLD response, that does not necessarily mean that an evoked BOLD response is always due to increased neuronal activity.  In fact, as Mo Costandi briefly mentioned in his post, Yevgeniy B. Sirotin and Aniruddha Das performed an experiment showing that the BOLD response could be preemptively evoked irrespective of actual neuronal activity.


Researchers should continue to be careful when interpreting fMRI data.  The BOLD response is a measure of haemodynamic changes rather than neuronal activity, and though this study does suggest that increased activity in a neuronal population fairly reliably evokes a BOLD response, the converse is not necessarily true.  Likewise, the neuronal activity driving the fMRI signal could come from inhibitory as well as excitatory neurons, complicating functional interpretations of BOLD responses.  Inhibiting inhibition to yield excitation is a valid computational strategy (and one commonly proposed in functional models of the basal ganglia and its role in voluntary movement), which raises the question of whether regions of reduced BOLD response are still vital to the cognitive function in question or are simply the result of metabolic conservation due to increased load elsewhere.  Furthermore, a region undergoing a careful balance of excitatory and inhibitory input might show only marginal haemodynamic alterations despite having a functional role in cognition.
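A toy rate model makes the disinhibition logic concrete.  The three-unit chain below uses entirely hypothetical numbers, loosely analogous to the striatum → GPi → thalamus pathway (the names `relu` and `chain` are illustrative inventions of my own):

```python
def relu(x):
    """Linear-threshold firing rate: no negative rates."""
    return max(0.0, x)

def chain(drive_a):
    """Steady-state rates in a disinhibition chain A -| B -| C.
    B fires tonically unless inhibited by A; C receives tonic
    drive but is inhibited by B."""
    a = relu(drive_a)
    b = relu(1.0 - a)   # tonic drive 1.0, inhibited by A
    c = relu(1.0 - b)   # tonic drive 1.0, inhibited by B
    return a, b, c

print(chain(0.0))  # A silent: B active, C suppressed
print(chain(1.0))  # driving inhibitory A silences B, releasing C
```

Driving the inhibitory unit A raises activity at both A and C while lowering it only at B, so “inhibition” at the circuit level need not produce a net decrease in population activity.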

Thus, while Lee et al. have provided an excellent study demonstrating powerful applications of optogenetic techniques as well as lending support to fMRI data, their study does not actually validate fMRI interpretations.  Further combination of optogenetic and fMRI techniques will likely allow more detailed probing of the complex relationship between the BOLD response and neuronal mechanisms.  For now, neuroscientists must remain careful of the underlying assumptions and possible alternatives involved in the interpretation of fMRI data.  Caution is particularly warranted in relation to fMRI since its aesthetic appeal tends to be hard to resist*.

*If anyone could send me a link of a similar study performed on neuroscientists as opposed to undergraduate students, it would be much appreciated.

Sloppy Language in Science on Human Uniqueness

November 15, 2009

Mental and behavioural attributes are notoriously difficult to quantify, whether one is speaking of an individual or a population. While specific, well-qualified claims can be made about intellectual abilities or behavioural adaptability, general sweeping statements about the ‘uniquely human trait of…’, though rife within the popular and scientific literature of psychology and neuroscience, tend to crumble under close scrutiny. Such unqualified statements irk me: I believe they foster an undue reverence for the human brain (despite our heavy reliance on model organisms for neuroscience research), reminiscent of Descartes’ baseless theological labelling of all non-humans as mindless automatons. Of course, I am not suggesting that there are no unique or laudable aspects of our species’ cognitive abilities. Our penchant for complex linguistic communication, for example, is quite impressive and has helped enable our species to accomplish some truly impressive feats, but in no way is communication, or even language, a uniquely human endeavour, and nailing down just what makes our language quintessentially human (other than being used by humans) is astonishingly difficult. The difficulty of making cross-species generalizations is further compounded by our inability to break away from our own human perspective. Just as we must be careful not to incorrectly attribute cognitive abilities and methods to non-humans through overly zealous anthropomorphism, we must be wary of missing the abilities of other animals through the sheer alien nature of those abilities (while we are now quite comfortable with the idea of bats using echolocation to navigate and hunt, for example, when sonar was a top-secret military technology it was a startling and preposterous revelation).

Given the sloppiness of such ‘uniquely human’ generalizations, therefore, I was quite astonished to see a ScienceNOW article not only referencing Descartes in the first sentence but following it with the question, “What imbues us with this uniquely human sense of self-awareness?” The news brief, written by Greg Miller, is apparently based on a paper in Nature Neuroscience1 (I say apparently because I was unable to find an explicit reference). Reading through the paper itself, I noticed no such sentiments. Rather, the paper served as a brief and informative summary of a study on interoceptive awareness using a comparison test of heartbeat sensation in a brain lesion patient and uninjured control subjects. Greg Miller does a perfectly fine job of summarizing the actual methods and results of the paper in the rest of his news brief, so I will not dwell on those here. I simply take issue with his opening lines.

The definition of self-awareness is itself a matter of some contention, but in the sense that this study comments upon (awareness of changes in visceral function), I find it bizarre that anyone would even contend that such a sense is uniquely human. After all, rat behaviour changes markedly following an injection of cortisol2 despite cortisol’s extremely low penetration of the blood-brain barrier3. Interestingly, in Vinogradova and Zhukov’s paper2 the behavioural response to cortisol injections was often of an opposite nature in two breeding strains selected for high or low rates of acquiring active avoidance behaviour. Such differences are at least plausibly suggestive of an ambiguity in the interpretation of the visceral changes brought on by the cortisol injection, similar to the famous ambiguity between fearful and lustful physiological arousal demonstrated by Dutton and Aron4. With two populations bred for opposite learning tactics toward fearful stimuli, it is reasonable that they would also differ in their likelihood of interpreting a shift in physiological state as anxiety (which is to a large extent the expectation of fear and horror in the near future).

Moving beyond self-awareness of what is going on within one’s own body to more abstract notions of the self, the mirror test is a classic tool for exploring the subject (although I have my own reservations about the efficacy of the mirror test, those can be saved for another time). Just four short days after Greg Miller claimed that only humans were self-aware, another ScienceNOW news brief came out describing the success of pigs at learning to use a mirror to find food, a measure which, in the words of lead author Donald Broom, gives pigs at least “some degree of self-awareness”. The article also conveniently provides a list of other animals that have passed the mirror test: elephants, dolphins, magpies, gray parrots, and some primates (including humans, which the article for some reason listed separately).

While sweeping statements on human uniqueness are appealing to both our species’ vanity and as a nice opening line to elevate the profundity of the topic at hand, they are misleading and not backed up by evidence. Such language does a disservice to the field of neuroscience and discounts the contribution of model neurological organisms and comparative neuroanatomy to our understanding of the brain and its function.

1 Khalsa, Sahib S., David Rudrauf, Justin S. Feinstein, and Daniel Tranel. 2009. The pathways of interoceptive awareness. Nature Neuroscience, Advanced online publication.

2 Vinogradova, E. P. and D. A. Zhukov. 2008. Changes in anxiety after administration of cortisol to rats selected for the ability to acquire active avoidance. Neuroscience and Behavioral Physiology, 38:781-783.

3 Pardridge, William M. and Lawrence J. Mietus. 1979. Transport of steroid hormones through the rat blood-brain barrier. The Journal of Clinical Investigation, 64:145-154.

4 Dutton, D. G. and A. P. Aron. 1974. Some evidence for heightened sexual attraction under conditions of high anxiety. Journal of Personality and Social Psychology, 30:510-517.

Adaptive Control and Learned Sensitivity Derivatives

November 3, 2009

One popular method for modeling motor control is to treat the system as an adaptive controller.  The brain learns, through progressive experience, the sequence of output signals yielding the set of muscle contractions that will result in the desired action. Such a description is still highly nebulous, but it provides an established and well-developed mathematical and conceptual framework. Fleshing out the details* in relation to the underlying biological system provides a test of the applicability of adaptive control theory to biological motor control, as well as helping to guide the exploration of the otherwise overwhelmingly complicated field of cognitive control.

In a given control system, the variables that relate changes in the control signals to changes in the resultant system performance are referred to as sensitivity derivatives. In a highly readable paper, M. N. Abdelghani, T. P. Lillicrap, and D. B. Tweed argue that sensitivity derivatives are not innate properties of human motor control systems but are instead learned1. They propose a novel, elegant, and biologically feasible mechanism for learning sensitivity derivatives which is well worth a look for anyone interested in adaptive control models of biological systems.

Rather than get too involved in the details of specific control architectures, however, I would like to look more closely at the question of learned versus innate sensitivity derivatives. The primary evidence for learned sensitivity derivatives is the ability of a motor system to recover from a reversal in the sign (or a change from zero**) of the relationship between control signals and system response. For example, if the nerve fibres innervating the extensor and flexor muscles in a limb are surgically swapped, a system relying on an innately known relationship between motor output and resultant muscle response (with learning focused solely on the magnitude of the desired muscle flexion) would be unable to recover. Conversely, a system capable of learning sensitivity derivatives would eventually recover after an appropriate learning period.
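The distinction can be sketched numerically. In the toy one-dimensional reaching task below (my own illustrative code, not the actual learning mechanism proposed by Abdelghani et al.), a controller learns a gain w by gradient descent using its current belief about the sensitivity derivative dy/du. A learner with a hard-wired derivative fails catastrophically when the plant’s sign is reversed mid-run, while one that re-estimates the derivative recovers:

```python
def run(flip_after, adapt, steps=400, lr=0.1):
    """Reach for target 1.0 with a plant y = g * u.
    adapt=False: the controller assumes an innate, fixed dy/du = +1.
    adapt=True: the controller re-estimates dy/du from a small probe."""
    g = 1.0        # true plant gain; its sign flips to mimic nerve transposition
    w = 0.0        # controller gain being learned
    d_est = 1.0    # controller's estimate of the sensitivity derivative dy/du
    target = 1.0
    errors = []
    for step in range(steps):
        if step == flip_after:
            g = -g                      # surgical reversal of the wiring
        u = w * target                  # motor command
        y = g * u                       # plant response
        err = target - y
        errors.append(abs(err))
        if adapt:                       # finite-difference probe of the plant
            du = 0.01
            d_est = (g * (u + du) - y) / du
        w += lr * err * d_est * target  # gradient step on 0.5 * err**2
    return errors

fixed = run(flip_after=200, adapt=False)
adaptive = run(flip_after=200, adapt=True)
print(f"error just before reversal: {fixed[199]:.2e}")
print(f"final error, fixed derivative: {fixed[-1]:.2e}")
print(f"final error, learned derivative: {adaptive[-1]:.2e}")
```

Both learners converge before the reversal, but afterwards the fixed-derivative learner pushes its gain in exactly the wrong direction and its error explodes, while the adaptive learner detects the new sign and settles back onto the target.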

What is absolutely fascinating is the variation in recovery across species. Unfortunately, the systematic experimental protocols required to test recovery from various re-wirings of the nervous system are rather brutal, and thus I was not able to find many recent examples. The zoologist R. W. Sperry not only engaged in a number of such experiments during the 1940s but also produced a lengthy article reviewing similar studies throughout the preceding decades2. I was unable to find a non-mammalian study involving transposition of nerve fibres in the limbs, but Sperry did perform an experiment in which he rotated the retinas of a group of salamanders. They never adapted to the altered state, and their visuomotor coordination remained permanently impaired3. In mammals, Sperry notes that recovery of coordination almost always occurs in humans and usually occurs in dogs and cats2. The recovery of cats has been verified in a more modern experiment4, as has that of monkeys5. Surprisingly, however, rats showed a complete lack of recovery following transposition of the extensor and flexor nerves in their forelimbs6.

Considering the ubiquity of rodents as model organisms in neuroscience, this is a somewhat disconcerting functional difference. It is unclear whether the inability of rodents to recover is due to proteomic differences at the level of the synapse or to larger-scale organizational differences (unsurprisingly, since we are not entirely sure how motor learning and neuronal adaptation are performed in either humans or rodents), but it is an important difference that should remain in the back of a neuroscientist’s mind.

The question of learned versus innate sensitivity derivatives fascinates me, however, for somewhat less practical reasons. Recovery from the surgical transposition of nerves, after all, was not exactly a selective factor in vertebrate evolutionary history. Since it is not found in all vertebrate species, the ability to explicitly learn sensitivity derivatives is clearly an evolutionary development in neuronal architecture rather than simply the manner in which motor control initially evolved. Given the propensity of biological systems to re-use existing architectures with minor tweaks, it is possible that the expansion of motor repertoires provided by the development of a control network capable of modifying sensitivity derivatives (such as implicit supervision) did not end there, but actually opened up the possibility for an entire host of more complicated and nuanced behaviours.

*No pun intended.
**A fascinating example of this sort of recovery is a surgical treatment for facial palsy using hypoglossal nerve transposition. With facial palsy resulting from damage to the facial nerve, surgeons are able to cut the facial nerve and attach in its place part of the hypoglossal nerve that formerly innervated the tongue. Although patients initially move their face whenever they try to move their tongues, they are eventually able to recover independent control of both face and tongue.
1 Abdelghani, M. N., T. P. Lillicrap, and D. B. Tweed. 2008. Sensitivity derivatives for flexible sensorimotor learning. Neural Computation, 20:2085-2111.
2 Sperry, R. W. 1945. The problem of central nervous reorganization after nerve regeneration and muscle transposition. The Quarterly Review of Biology, 20:311-369.
3 Sperry, R. W. 1943. Effect of 180 degree rotation of the retinal field on visuomotor coordination. Journal of Experimental Zoology, 92:263-279.
4 Yumiya, H., K. D. Larsen, and H. Asanuma. 1979. Motor readjustment and input-output relationship of motor cortex following crossconnection of forearm muscles in cats. Brain Research, 177:566-570.
5 Brinkman, Cobie, R. Porter, and Julie Norman. 1983. Plasticity of motor behavior in monkeys with crossed forelimb nerves. Science, 220:438-440.
6 Sperry, R. W. 1942. Transplantation of motor nerves and muscles in the forelimb of the rat. Journal of Comparative Neurology, 76:283-321.