Friday 19 May 2017

Why not to do EEG+fMRI

Recording simultaneous EEG and fMRI data sounds like it might be fruitful. But I will argue that it 1) gains nothing over recording in separate sessions, and 2) encourages poor design and analysis.

1) What do you gain from simultaneous EEG+fMRI?

The main gain is the ability to examine trial-to-trial correlations. But how might this be useful?

Let us assume that we have two experimental conditions, A and B. We might find that the contrast (A-B) gives a region of increased BOLD signal, and also leads to a particular temporal profile of increased Gamma oscillatory power. These data could have been obtained from two separate studies. They yield two causal relationships: the manipulation A/B increases BOLD, and A/B changes Gamma power. What next?

The usual approach is to then examine trial-to-trial correlations. So we might show that, for example, on trials where Gamma power is higher, BOLD signal is also higher. This gains us nothing, since it's simply caused by A/B. This fact would be known even from the separate sessions. But let us examine just A trials, or just B trials. We might find a correlation here too, and this truly harnesses the simultaneity. What do we conclude from these correlations? We conclude that, even without a manipulation, Gamma correlates from trial to trial with BOLD in a region. This superficially seems interesting, but in truth we have no knowledge of why Gamma varies from trial to trial, nor why BOLD varies from trial to trial. So the fact that they correlate tells us little. There is no experimental manipulation, so it's not a good test of any hypothesis.
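
To make this concrete, here is a minimal simulation sketch in Python (the generative model and all numbers are invented for illustration, not taken from any real dataset): the pooled correlation is inflated by the A/B manipulation, which separate experiments would already have revealed, while the within-condition correlation reflects only an uncontrolled shared fluctuation of unknown origin.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                               # trials per condition

# Invented generative model: condition A raises both signals, and an
# uncontrolled trial-to-trial fluctuation is shared between them.
A = np.repeat([1, 0], n)              # 1 = condition A, 0 = condition B
shared = rng.normal(0, 1, 2 * n)      # uncontrolled fluctuation common to both
bold = 2.0 * A + shared + rng.normal(0, 1, 2 * n)
gamma = 1.5 * A + 0.5 * shared + rng.normal(0, 1, 2 * n)

# Pooled across conditions: the correlation is inflated by the A/B
# manipulation, which two separate experiments would already have shown.
print("pooled:", np.corrcoef(bold, gamma)[0, 1])

# Within one condition: the correlation that simultaneity buys us, which
# reflects only the uncontrolled 'shared' fluctuation, whose origin is unknown.
print("within A:", np.corrcoef(bold[A == 1], gamma[A == 1])[0, 1])
```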

What if we find different correlations in the different conditions? What if A trials show a correlation between BOLD and Gamma, but B trials do not? Prima facie, this condition-by-correlation interaction seems more interesting, since we have caused the correlation. But in fact this could arise for many reasons, not least because there might be greater variability on A trials, in BOLD, Gamma, or both. But let’s assume for argument that there is equal variability in conditions A and B. This trial-to-trial variability is of course uncontrolled, so we must model it as noise. To explain our data we require two sources of uncontrolled variability (noise “1” and “2”). What we must say is this: “A results in noise 1 influencing both BOLD and Gamma, whereas B results in noise 1 influencing BOLD and noise 2 influencing Gamma”. This is the only inference we can make (see footnote). Interpreting this is naturally condemned to be rather vague, because we are trying to examine uncontrolled noise correlations.
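
To illustrate how easily such an interaction can arise from differing variability alone, here is another small sketch (again with invented numbers): the coupling between the two signals is identical in both conditions, yet a clear within-condition correlation appears only where the shared fluctuation happens to be larger.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # trials per condition

def within_condition_corr(shared_sd):
    """Identical BOLD-Gamma coupling; only the size of the shared
    trial-to-trial fluctuation differs between conditions."""
    shared = rng.normal(0, shared_sd, n)
    bold = shared + rng.normal(0, 1, n)
    gamma = 0.5 * shared + rng.normal(0, 1, n)
    return np.corrcoef(bold, gamma)[0, 1]

print("condition A (large fluctuations):", within_condition_corr(2.0))
print("condition B (small fluctuations):", within_condition_corr(0.5))
```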

The correct way to examine such a relation between BOLD and Gamma would be to manipulate Gamma or BOLD specifically in another way. For example, we might introduce conditions C and D, which are similar to A and B but include a task feature known to increase Gamma, say through some aspect of task-relevant processing. This gives a conclusive result without resorting to the hack of simultaneity.

2) Simultaneous recording methods encourage us to do bad analysis

It should be clear now that we buy little from simultaneity. The only way to harness simultaneity is to examine correlations across trials, within single conditions. These correlations arise from natural variability between trials. In other words, this is fundamentally the study of unexplained noise, not the study of manipulations -- which puts it in the same game as resting state analysis.

The variability is, sadly, not well characterised: in particular, it is usually non-normally distributed (consider an RT distribution), and different participants will have different means, variances and skewness. Correlations must be done within subjects and care must be taken regarding assumptions. It is not clear in this case whether it is valid to use mixed linear models to combine data from different subjects. These problems all arise because we are trying to correlate two signals whose noise sources we have not controlled.
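
One defensible approach, sketched below on placeholder data (the arrays are simulated, and the Fisher-z plus group-level t-test is my suggested workflow rather than anything established here), is to compute a rank-based correlation within each subject and only then test the per-subject coefficients across subjects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Placeholder single-trial data: in a real analysis these would be per-trial
# Gamma power and BOLD estimates from one condition, one pair per subject.
subjects = [(rng.normal(size=80), rng.normal(size=80)) for _ in range(20)]

# Rank-based (Spearman) correlation within each subject avoids assuming
# normality of the trial-to-trial distributions.
rhos = np.array([stats.spearmanr(g, b)[0] for g, b in subjects])

# Fisher z-transform the coefficients, then test them across subjects.
z = np.arctanh(rhos)
print(stats.ttest_1samp(z, 0.0))
```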

More importantly, what hypotheses do we actually have about the origin of trial-to-trial variability? We might include history / sequential effects, learning, fatigue and motivational changes. Surely these should be manipulated, instead of digging into trial-to-trial correlation. There is also true noise, such as thermal / synaptic channel noise, and unmodellable “attentional lapses” -- and perhaps we might argue that these things are best ignored for now!

It would be far better to compare manipulations that are known to give rise to different Gamma or BOLD signals than to tap into uncontrolled trial-to-trial variability. This surmounts many of the problems of correlating noise. If you are not keen to perform these manipulations, it's probably because you don't have appropriate hypotheses: you probably should not be doing this trial-to-trial analysis in the first place! Stick to separate EEG and fMRI experiments.




FOOTNOTE

Numerically, if BOLD has independent noise ε and Gamma has independent noise η, then for each trial i, the simplest expression of the effect of condition is

BOLD_i = β_0 + β_1·A_i + ε_i
Gamma_i = γ_0 + γ_1·A_i + η_i

where the βs and γs are parameter estimates. To model correlations across trials due to natural variability, we can include additional terms in the Gamma equation:

Gamma_i = γ_0 + γ_1·A_i + η_i + γ_2·ε_i + γ_3·A_i·ε_i

where γ_2 is the correlation of Gamma with the BOLD fluctuations, and γ_3 is the interaction of the BOLD fluctuations with condition. My argument is that we cannot attach any causal significance to γ_3 (even though it seems like there is some causality at work), because we have no idea what the distinction between the fluctuations and the means actually reflects.
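
For concreteness, the footnote's model could be fitted roughly as follows. This is a sketch on simulated placeholder data; in particular, estimating the BOLD fluctuation as the residual after removing the condition effect is my assumption about how one would operationalise ε_i, not part of the original argument.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400

# Placeholder single-trial data following the footnote's generative form.
A = rng.integers(0, 2, n)                 # condition indicator
eps = rng.normal(0, 1, n)                 # BOLD fluctuation (epsilon_i)
bold = 1.0 + 2.0 * A + eps
gamma = 0.5 + 1.5 * A + 0.8 * eps + 0.4 * A * eps + rng.normal(0, 1, n)

df = pd.DataFrame({"A": A, "bold": bold, "gamma": gamma})

# Estimate the BOLD 'fluctuation' as the residual after the condition effect.
df["eps_hat"] = smf.ols("bold ~ A", data=df).fit().resid

# gamma ~ A * eps_hat expands to A + eps_hat + A:eps_hat; the A:eps_hat
# coefficient plays the role of gamma_3 in the footnote.
fit = smf.ols("gamma ~ A * eps_hat", data=df).fit()
print(fit.params)
```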