Q. I understand that when designing a blocked fMRI paradigm the duration of each block should be such that the frequency of changes from block to block is different from the frequency of fluctuations in the field of the magnet. Does this make sense? If so, what block lengths are to be avoided?
A. Hopefully the magnet does not have any significant fluctuations over time, at least none that are periodic. For our magnet the cold head cycle (giving the dull background thumping sound that is constantly on) may cause some mild phase fluctuations at the centre of the magnet. This cycle time is about 0.5 sec (i.e., 2 Hz). A more common source of periodic fluctuation is the effect of breathing (which causes small magnetic field fluctuations in the head from the lungs filling and emptying). These effects (with time constants of 3-10 secs) are likely to be more prominent than the cold head effects. Obviously, there may be other magnet-related instabilities that are not periodic.
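As a rough check, here is a minimal sketch (Python) that compares the fundamental alternation frequency of a blocked paradigm with the periodic sources mentioned above. The bands and block lengths are illustrative assumptions based only on the numbers quoted in this answer, not scanner specifications.

def block_design_frequency(block_length_s):
    # One full cycle of an ON/OFF blocked paradigm is one ON block plus
    # one OFF block, so the design repeats every 2 * block_length_s seconds.
    return 1.0 / (2.0 * block_length_s)

# Periodic nuisance sources quoted above, as (low, high) bands in Hz.
nuisance_bands = {
    "cold head (~0.5 s cycle)": (2.0, 2.0),
    "breathing (3-10 s periods)": (1.0 / 10.0, 1.0 / 3.0),
}

for block_length in (1.5, 5.0, 20.0, 30.0):   # seconds (illustrative values)
    f = block_design_frequency(block_length)
    overlaps = [name for name, (lo, hi) in nuisance_bands.items() if lo <= f <= hi]
    print("block length %.1f s -> paradigm at %.3f Hz, overlaps: %s"
          % (block_length, f, overlaps or "none"))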
Q. We are interested in imaging medial temporal lobe, and have considered whether it might help us to image along the longitudinal axis of the hippocampus. I understand that using some planes for imaging introduces more noise into the signals due to the demands placed on the coils. Does this make sense?
A. On our 3T system there should be little difference in the demands placed on the different axes of the gradient coil. When the read direction is z the acoustic noise is slightly less than when the read direction is x or y. But the temporal stability of the images should not be any different. There may, however, be reasons for selecting a particular plane if you are trying to image "difficult" areas (inferior frontal lobes, inferior temporal lobes). In these areas there are macroscopic field gradients established in the head which cannot be shimmed out with the available room temperature shims. This results in image distortion and signal dropout in these areas. If the internal field gradients have a preferential direction that aligns with the slice direction then you may experience increased through-slice signal dephasing. This is particularly the case when you use a thick slice. In these cases, it is better to select a slice direction that is perpendicular to the predominant internal field gradient. For the frontal lobes this is probably sagittal. For the temporal lobes there is likely little difference, as the internal field gradients unfortunately have a rather complicated spatial pattern.
Q. What factors influence the susceptibility distortions in gradient-echo EPI data?
A. There are two effects that susceptibility boundaries have on gradient-echo EPI data sets. The first is the geometric distortion, which is caused by the long effective dwell time in the phase-encoding dimension. Ultimately this is governed by how fast the readout gradient can be oscillated to give a +ive and -ive pair of echoes in the EPI echo train. At the FMRIB Centre we are able to oscillate the readout gradient at a maximum of about 1200 Hz (an effective phase-encode direction bandwidth of 2400 Hz). This will still cause some residual distortions, which can in theory be further corrected post hoc if a field map is also collected. The second problem is the signal dropout that you get in regions where the susceptibility differences cause a local field gradient. This gets worse with increasing echo time (TE), but obviously the BOLD effect gets better with increasing TE (at least up to a point). So there is a trade-off. We typically use a TE of 30 ms for our studies, which is approximately the average brain T2* value, and is therefore the optimum value. In fMRI experiments that interrogate areas of significant susceptibility distortion (and signal dropout) it may be desirable to use a shorter TE value. Note that in spin-echo sequences the signal may be enhanced in some areas and reduced in others.
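To see why a TE of roughly T2* is the usual optimum, here is a small sketch assuming a simple monoexponential decay model, under which the BOLD signal change scales approximately as TE * exp(-TE / T2*). The 30 ms T2* figure is the average brain value quoted above; other regions (and field strengths) will differ.

import numpy as np

t2star_ms = 30.0                      # assumed average brain T2*
te_ms = np.linspace(5.0, 80.0, 16)    # candidate echo times, 5 ms steps

# BOLD sensitivity under the simple model: peaks where TE equals T2*.
bold_sensitivity = te_ms * np.exp(-te_ms / t2star_ms)

print("most sensitive TE of those tested: %.0f ms" % te_ms[np.argmax(bold_sensitivity)])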
Q. What are the pros and cons of increasing the readout bandwidth?
A. As explained in the previous answers, the level of distortion artefact in EPI can be reduced by reducing the time spent acquiring each line in the image. This is done practically by increasing the receiver bandwidth (also called 'sw'). By increasing the range of frequencies that are sampled, the time spent sampling each point is decreased, and hence the distortion can be reduced. So, for example, increasing the bandwidth from 100 to 200 kHz reduces the effect of a distortion that was 4 pixels to only 2 pixels. This reduction in distortion is obviously beneficial for any fMRI study, so what are the disadvantages of such a change?
Firstly there is a penalty in signal-to-noise. Doubling the bandwidth decreases the signal-to-noise ratio by a factor of sqrt(2). This could have implications for the number of averages required in an fMRI run, although it should be noted that by increasing the bandwidth we are increasing the thermal (image) noise, but not the physiological noise which is often the dominant factor in the contrast-to-noise ratio of an fMRI time series.
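As a rough back-of-the-envelope illustration of this trade-off (not a measured calibration), distortion scales as 1/bandwidth and thermal SNR as 1/sqrt(bandwidth); the 4-pixel shift at 100 kHz is the example figure used above.

import math

reference_bw_khz = 100.0          # reference bandwidth from the example above
reference_shift_pixels = 4.0      # illustrative distortion at that bandwidth

for bw_khz in (100.0, 200.0, 300.0):
    shift = reference_shift_pixels * reference_bw_khz / bw_khz    # distortion ~ 1/BW
    snr_factor = math.sqrt(reference_bw_khz / bw_khz)             # thermal SNR ~ 1/sqrt(BW)
    print("%.0f kHz: ~%.1f pixel shift, thermal SNR x%.2f relative to 100 kHz"
          % (bw_khz, shift, snr_factor))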
Whatever bandwidth is used, the same total gradient area is required to maintain the resolution. Therefore, since increasing the bandwidth reduces the acquisition time, the gradient strength required increases. This has two effects. Firstly, the heat dissipated by the wires in the gradient coil will increase in proportion to the increase in bandwidth. The vicinity of the wires in the head gradient coil cannot rise above 50 degrees centigrade, and a thermal cutout will stop the experiment automatically if this happens. This could limit the duration that a particular scan run can run for, and may mean that a gap must be left between runs to let the head gradient coil cool down. The three gradient axis coils (X, Y and Z) do not all respond in the same way to heavy gradient demands. This means that it might be possible to avoid the gradient coils overheating just by changing the slice orientation, or turning on or off the 'Swap phase/frequency' option.
Another effect of the increased gradient strength is the extra acoustic noise that this produces. Recently, a subject scanned with a 64 x 64 matrix and a 300 kHz bandwidth found the level of noise so distracting that they could not perform the activation task properly. This could also be dangerous if acoustic levels are too great.
So what is recommended? If you are not too concerned with the level of distortion in your current images then it is probably not worth changing to a higher bandwidth. If, however, signal loss and distortion are making it difficult to accurately detect activations then an increase in bandwidth may be worthwhile. For a 64 x 64 matrix an increase from the usual 100 kHz to 200 kHz will make a significant difference to the level of distortion. When doing this, though, the points above (the SNR penalty, gradient heating and acoustic noise) should be kept in mind.
Q. I don't understand the Nyquist theorem. Why are we restricted to just obeying the Nyquist theorem and sticking to frequencies of less than 1/(twice the dwell time)? Surely if we sample more often we get better reproduction? If you are, for example, sampling music then you can sample as often as you like.
A. You are absolutely right that you can sample the tune (i.e., digitize it) as fast as you want. If the music has frequencies that run up to (say) 20 kHz (the typical upper limit of the human ear) and you sample it with a bandwidth of 100 kHz then you'd think you had better represented the music. Which of course you have (although you can't hear it). However, the electronics of the sampling (microphone and recorder) have some inherent noise within them. This is often known as Johnson noise, and has a fairly flat spectrum. So if I make the recording with a 100 kHz bandwidth then I will get a noisier recording than if I made the recording with a 20 kHz bandwidth. This is why people use filters so that they only sample the frequencies they want to sample. That way the signal-to-noise is optimized. Since I can only hear frequencies up to 20 kHz there is little point in representing higher ones, since all I get is the same sensation of the music, but noisier.
OK, so now to Nyquist...
Let's say that I decide (arbitrarily) to sample with a dwell time of 10 microseconds. That means I am collecting a data point every 10 microseconds. If I try to digitize a sine wave with a frequency of 1 kHz then I can do a very good job, as I have 100 data points per period of the oscillation (a 1 kHz signal has a period of 1000 microseconds). If I digitize a 10 kHz signal then I can still do pretty well, as I will have 10 points per period, and my Fourier transform analysis will be able to pick out the frequency at 10 kHz. Now consider the example of trying to digitize a signal at 50 kHz. You will only have 2 points per oscillation. If the signal is at *exactly* 50 kHz then I will be able to sample only at (say) the maximum peak and minimum trough of the sine wave. I have no other data points. But I can tell that it is a 50 kHz signal. If, however, the signal is 50,001 Hz (50 kHz plus 1 Hz) then I will have fewer than 2 samples per period, and it turns out that if you draw it out (or simulate it on a computer) you get the same signal as if you had a -49,999 Hz source. So instead of your Fourier transform telling you that you have a 50,001 Hz signal it would tell you that you have a -49,999 Hz signal (i.e., the signal and the digitization frequency are beating with one another). Similarly, if you have a 60 kHz signal and you digitize it every 10 microseconds then your Fourier transform would tell you that you had a -40 kHz signal. This is aliasing, where the signal is put in the wrong place because you didn't sample it frequently enough. This is also a statement of the Nyquist theorem, which says that if you want to represent a frequency properly then you need to sample it at least twice per oscillation. Also note that we have a simple relationship that we have "derived":
For a 10 microsecond dwell time we have found that we have problems if the frequency is above 50 kHz; in other words, the highest frequency we can properly represent is 1/(2 x DW). Note that if we sample in a complex (real and imaginary) mode (as we do in MRI) then we can tell the difference between -50 kHz and +50 kHz. If we only sample the signal with a single (real) ADC then we can't even tell if the signal has +ive or -ive frequencies.
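Here is a small numerical illustration of the aliasing argument (a sketch using numpy's FFT, with complex sampling as in MRI): a 60 kHz signal sampled with a 10 microsecond dwell time shows up at -40 kHz.

import numpy as np

dwell = 10e-6               # 10 microsecond dwell time -> 100 kHz total bandwidth
n = 1000                    # number of complex samples
t = np.arange(n) * dwell

signal = np.exp(2j * np.pi * 60e3 * t)   # a 60 kHz complex exponential

spectrum = np.abs(np.fft.fft(signal))
freqs = np.fft.fftfreq(n, d=dwell)       # frequency axis from -50 kHz to +50 kHz

print("60 kHz input appears at %.0f kHz" % (freqs[np.argmax(spectrum)] / 1e3))   # -40 kHz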
So to summarize:
If you only sample the real channel then the above sampling theory arguments tell us that the highest signal frequency we can sample is half the sampling frequency (I define the sampling frequency as 1/DW), and we can't tell the difference between +ive and -ive frequencies (i.e., the bandwidth we can accurately see is from 0 to 1/(2DW)).
If you sample both real and imaginary channels (as we do in MRI) then the highest signal frequency you can sample is still half the sampling frequency, but you can now tell the difference between -ive and +ive frequencies. So the range of frequencies that you can accurately represent is everything from -1/(2DW) to +1/(2DW). Anything outside this range of frequencies will not be properly represented, as the Fourier transform will put the signal in the wrong place (alias). Also, if we define the bandwidth (BW) as the range of frequencies that we can properly represent then we have BW = 1/(2DW) - (-1/(2DW)) = 2/(2DW) = 1/DW. This is why we can summarize the Nyquist theorem as saying that BW = 1/DW.
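A quick numerical check of the real-versus-complex point (another numpy sketch; the 10 kHz test frequency is arbitrary): a purely real recording of a 10 kHz cosine produces equal peaks at +10 kHz and -10 kHz, so the sign of the frequency is ambiguous, whereas the complex recording puts all of the signal at +10 kHz.

import numpy as np

dwell = 10e-6                                # 10 microsecond dwell time
n = 1000
t = np.arange(n) * dwell
freqs = np.fft.fftfreq(n, d=dwell) / 1e3     # frequency axis in kHz

real_signal = np.cos(2 * np.pi * 10e3 * t)        # real channel only
complex_signal = np.exp(2j * np.pi * 10e3 * t)    # real and imaginary channels

for name, sig in [("real only", real_signal), ("complex", complex_signal)]:
    spectrum = np.abs(np.fft.fft(sig))
    peaks = sorted(float(f) for f in freqs[spectrum > 0.5 * spectrum.max()])
    print("%s: peaks at %s kHz" % (name, peaks))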
So the optimum sampling is to set the bandwidth so that you accurately represent the range of frequencies that you know are present. Any less and you will get aliasing. Any more and you will introduce noise. So yes, you can open up the bandwidth and sample the NMR signal as often as you want, but you'll get a noisier image (try it and see). The SNR scales as 1/sqrt(BW).
Q. Does the MR scanner keep good time?
A. The scanner does keep good time, but we do not guarantee that the TR value that you request is exactly the value that you get. This is because we pad the TR time (to get even slice sampling over the TR duration) with a calculated number of microseconds based on the various options selected. It is quite tough to remember to account for every microsecond in every option, and as a consequence the amount we pad the sequence by may not be exactly the right amount to give a (say) 3.0000 second TR. We are fairly sure that the EPI TR is going to be correct to the nearest one or two milliseconds, but over a multi-volume acquisition this can result in a drifting of the scanner timing versus the stimulus timing. The safest option is to periodically re-sync the stimulus to the scanner using the TTL triggers that can be programmed in the protlist. Alternatively you can calibrate the scanner timing with an oscilloscope and compensate in the requested TR (e.g., specify a TR of 2.99998 seconds instead of 3.0). If this is enough of a pain for people then we can get the fine-tooth comb out and account for every microsecond in the sequence.
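To get a feel for how a small TR error accumulates, here is a trivial sketch; the 2 ms per-TR error and the run length are made-up illustrative numbers (the answer above only promises the TR is correct to the nearest one or two milliseconds).

tr_error_s = 0.002     # assumed worst-case timing error per TR
n_volumes = 200        # assumed run length (10 minutes at TR = 3 s)

drift_s = n_volumes * tr_error_s
print("after %d volumes the scanner and stimulus timing can differ by up to %.1f s"
      % (n_volumes, drift_s))   # 0.4 s in this example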
Q. I am characterising the time course of a voxel taken from a single slice of fMRI data. I want to determine the precise time of acquisition of the first time point of the data in that voxel, so that I can accurately re-create the time course.
The TR for these data was 3 seconds and there were 21 slices.
A. The slices are collected throughout the TR, usually starting with slice 1 and ending with slice 21 (although sometimes we set an option that interleaves the slices so that it does 1, 3, 5, ..., 21, 2, 4, ..., 20). More than likely it is sequential rather than interleaved.
What you should do is look in the procpar file in the .fid directory (e.g., series_2.1.fid etc.). If you search for the line that says:
"pss 1 1 1000000 -1000000 0 2 1 264 1 64"
then the line after that one tells you the slice order. The first number on that next line is the number of slices in the volume (e.g. 21) then the other numbers show the slice location of the slices, and are in the order that the scanner collected the slices for each volume. The time of slice "n" in the list is TR/num_slices*(n-1). So, e.g., the 5th slice location in the list for a 21 slice volume with TR=3secs will be collected 571ms into the TR period.
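If you want to script this, here is a minimal sketch of turning the slice-order line described above into per-slice acquisition times. The example list is hypothetical; in practice you would read the line that follows the "pss" entry in the procpar file.

def slice_times(slice_order_line, tr_s):
    # First value is the number of slices; the rest are the slice locations
    # in the order they were acquired within each TR.
    values = slice_order_line.split()
    n_slices = int(values[0])
    locations = values[1:1 + n_slices]
    spacing = tr_s / n_slices
    # Slice n in the acquisition list starts at TR/num_slices * (n - 1).
    return {loc: spacing * i for i, loc in enumerate(locations)}

# Hypothetical 21-slice sequential acquisition (locations in mm), TR = 3 s.
example_line = "21 " + " ".join(str(-50 + 5 * i) for i in range(21))
times = slice_times(example_line, tr_s=3.0)
fifth_location = list(times)[4]
print("5th slice in the list starts at %.0f ms" % (times[fifth_location] * 1000))   # ~571 ms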