FBIRN:TuesPMRoadmapMarch2006


Lee's presentation

  • --issues that come up when implementing a smoothness-correction paradigm; for example, smoothing done prior to level-one stats reduces noise in the temporal domain (barring the effect of large vascular artifacts; smoothing works best on white noise, which vascular artifacts are not).
  • --smoothing for temporal noise reduction is a different issue from smoothing for site variance reductions.
  • --Stephen S.: the field is likely to move in the direction of wavelets, as optimal smoothing, rather than gaussians.
  • --smoothness equalization has to be done as the last thing before the GLM, after the pre-processing motion corrections, time-slice corrections, normalizations, etc.
  • --How much to smooth to?--to the smoothest raw data; for Phase I, it was 6.83 mm. (Everything calculated on the residual image.)
  • --still have to do this for Phase II, but we know where in the pipeline it should go and that the smooth-to target is likely 7 to 8 mm (less than SPM's usual default of 8, more than FSL's default of 5).
  • --smoothness can be used as a diagnostic--that's the basis of the Weisskoff analysis.
  • --Greg B used AFNI tools to graph smoothness over time as a diagnostic--found evidence of stepwise changes, spiking, etc.
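The smooth-to-target step above can be sketched numerically. Because Gaussian FWHMs combine in quadrature, raising data with intrinsic smoothness f_i to a target f_t requires an extra kernel of FWHM sqrt(f_t^2 - f_i^2). The following Python/NumPy sketch is illustrative only (the function name and the isotropic-voxel assumption are ours, not part of the FBIRN pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_to_target(vol, intrinsic_fwhm, target_fwhm, voxel_size):
    """Apply the extra Gaussian smoothing needed to bring a volume with
    intrinsic smoothness `intrinsic_fwhm` (mm) up to `target_fwhm` (mm).
    Assumes isotropic voxels of `voxel_size` mm."""
    # Gaussian FWHMs add in quadrature, so the extra kernel needed is:
    extra_fwhm = np.sqrt(max(target_fwhm**2 - intrinsic_fwhm**2, 0.0))
    # Convert FWHM (mm) to sigma (voxels): FWHM = 2*sqrt(2*ln 2)*sigma
    sigma_mm = extra_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(vol, sigma=sigma_mm / voxel_size)
```

With the Phase I target of 6.83 mm, a site whose raw data had 5 mm intrinsic smoothness would get an extra kernel of sqrt(6.83^2 - 5^2) ≈ 4.65 mm FWHM; data already at the target gets no additional smoothing.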

Chris's presentation: Took data from 3 of the sites (MN, NM, MGH), first visit, Auditory Oddball, and looked at QA issues.

  • --The QA was just a flag/review but not changing the data.
  • --AFNI/AIRT, Duke tools, Gablab tools: They each look at slightly different aspects of the data.
  • --motion correction: Do we register to the middle volume or the first one (what if there is movement in the middle volume)? They used MCFLIRT. Chris to send the data to Lee, Bryon, and Doug.
  • --Were the Siemens sites running with PACE on? MN was, as was MGH. The data come out corrected by PACE--the acquisition is modified on the fly during the scan based on the estimated movement. 3dvolreg on PACE data should show some transients but not multi-frame movements.
  • --using head-movement numbers in the analysis is completely different for PACE data vs. normal data: PACE corrections are relative to the previous corrections, but it's not clear when in the process each correction occurred, and you can't get back to the uncorrected data.
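One simple way to flag the movement transients and stepwise changes discussed above is framewise displacement computed from the six rigid-body motion parameters. This is purely an illustrative sketch, not one of the FBIRN QA tools; the column order (three rotations in radians, then three translations in mm, as MCFLIRT writes them) and the 50 mm head-radius convention are assumptions:

```python
import numpy as np

def framewise_displacement(params, radius=50.0):
    """Sum of absolute frame-to-frame changes in the six rigid-body
    motion parameters. Rotations (radians, columns 0-2) are converted
    to arc length on a sphere of the given radius."""
    d = np.diff(np.asarray(params, dtype=float), axis=0)
    d[:, :3] *= radius  # rotation (rad) -> displacement (mm) at the head surface
    return np.abs(d).sum(axis=1)

# Flag volumes with a sudden jump (the 0.5 mm threshold is illustrative):
# fd = framewise_displacement(motion_params)
# flagged_volumes = np.where(fd > 0.5)[0] + 1
```

A per-volume trace like this makes single-frame spikes (expected with PACE) easy to distinguish from sustained multi-frame drifts.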

Action item: Determine who collected using PACE.

Open questions:

  1. Is motion correction doing what we want it to do?
  2. Were the subjects sleeping during the non-movement portions?
  3. Behavioral data: Are the number of responses and the accuracy easily available?
  4. What metrics do we want to store in the database to let users decide whether to include each dataset?
  5. How do we handle the artifacts/movement?
  6. Suggestions for minimum QA data: behavioral responses, behavioral accuracy, etc.


Gary's points: The breathhold calibration has to have a behavioral control; we have to measure the inspiration level. Feedback to the subject to inspire to a particular level is important, as is training in how to hold one's breath. Would it be helpful to have feedback on the screen rather than through an LED?

Getting it into the visual display, or as auditory feedback, would be the most useful--many sites have goggles and can't see LEDs outside the display. We'll need to put up a VAS (visual analog scale) that relates to the amount they breathed in.

Eprime Service Pack 3 (due out soon) is supposed to support game ports, which would allow feedback to the computer, or USB A/D converters as input. (Strother: Eprime was abandoned at the Rotman Institute because it couldn't do what was needed.)

Can schizophrenia (Sz) patients do this using the visual input? They were able to do it at MGH using the current methods.

Lee found chronic smokers had an elevated BH response compared to non-smokers.

Can breathhold depth be offset by breathhold duration? --It's a complicated process, and no.

How do you pick the value that they should all breathe to? --Gary thinks he has a value that will work across young/old/etc. Children and adults differ in lung volume, so that needs to be normalized.

Subjects will have to have two belts on them: One for breathholding (connected to game port or A/D converter) and one for respiration. Unless the one for breathholding can be used for both.

From GE: one can get heart rate and respiration sampled at some rate by default. From Siemens: you have to write your own pulse sequence to save these data, so they will have to be collected externally.

Eprime vs. others: Presentation is now expensive, no longer free. Jim V. offers CIGAL (a C program that runs on Windows).

Pipeline implication: How to put the BH correction and/or the SFNR into the pipeline? Gary's version works on AFNI as part of the pre-processing steps.

  • What IS Gary's version? He gets a per-voxel scalar from the BH timeseries and then multiplies the time series of interest by those scalars. Alignment issues are key, etc.
  • Doug sees how that can be worked into FIPS almost as it stands. He needs the module from Gary.
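Doug's point suggests the module boils down to a per-voxel multiply. The following is a hedged sketch of one way such a scalar map could be derived from a breathhold run and applied to a task run; it is our illustration, not Gary's actual AFNI module, and the regression, normalization, and clipping choices are all assumptions:

```python
import numpy as np

def bh_scale_map(bh_data, bh_regressor):
    """Estimate a per-voxel breathhold (BH) response amplitude by simple
    linear regression on a BH regressor, then invert the relative
    amplitude to get a scaling map. bh_data is (X, Y, Z, T)."""
    T = bh_data.shape[-1]
    ts = bh_data.reshape(-1, T)
    reg = bh_regressor - bh_regressor.mean()
    beta = ts @ reg / (reg @ reg)          # per-voxel BH amplitude
    rel = beta / beta.mean()               # relative vascular reactivity
    # Clip to guard against near-zero or negative fits (assumed safeguard):
    return (1.0 / np.clip(rel, 0.1, None)).reshape(bh_data.shape[:-1])

def apply_bh_scaling(task_data, scale):
    # Multiply every timepoint of the task run by the per-voxel scalar.
    return task_data * scale[..., np.newaxis]
```

As noted in the discussion, alignment between the BH run and the task run is the key practical issue: the scalar map is only meaningful if the two runs are in register voxel-for-voxel.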

Question about multi-channel coils: Would we consider using them? Answer: We've considered it, but at the time multi-channel coils were highly varied, so we stayed away from them. Do they add an additional source of noise that depends on distance to the channel? Yes. Do they create more heterogeneity in the brain pattern? Yes, and at UCSD they correct for the signal heterogeneity (the noise can't be corrected for).