TractographyWorkshop Core1 ActionPlan
At the Workshop we agreed to complete the following:
1) Define a NA-MIC-endorsed DWI pre- and post-processing pipeline that uses NA-MIC toolkit software when available, and other freely available software if unanimously agreed upon by the group (e.g., some FSL tools are in widespread use, but for academic use only), for this project. Any tools not compatible with NA-MIC licensing that are essential to the pipeline will be put on a short list for future NA-MIC development.
2) Curate and post in the NA-MIC Publication database one or more sets of DWI data to be used within NA-MIC for analytic tool development, testing and calibration.
3) Complete a rigorous analysis of the properties of the tractography approaches in use or under development within NA-MIC Core 1 teams on these data sets, including test-retest reliability.
4) Prepare and submit for publication a scholarly report of this work, with Sonia Pujol taking the lead under the mentorship of CF Westin, Ross, Guido and Randy. All participants in the work will share authorship.
Brief summary of the presentations and comparative analysis of tractography methods/approaches:
There were both common and disparate results across tractography approaches. In particular, some of the data sets produced outlier results for several of the algorithms. We were all struck by the range of results from the different tractography algorithms even after controlling for most of the preprocessing steps. The presentations also highlighted how the exact details of the preprocessing steps and the choice of ROIs affected the outcome of the different algorithms. This led to a unanimous agreement to choose only one preprocessing pipeline for the next phase of our work, which greatly simplified the decision-making for that phase.
Methods for Phase 2 NA-MIC DWI tractography analysis
1) All participants agreed to continue, so the list of algorithms will be the same as presented in Santa Fe, with the potential addition of others if needed. That will be decided at the January AHM.
2) Agreed to change datasets in favor of one with more directions and two identical sessions (test-retest) so that within-subject reliability can be assessed for each algorithm. The candidate dataset under consideration is the 10-subject MIND Reliability data from MGH. Sylvain volunteered to make nrrd headers for the 10 MIND subjects' MGH test/retest data (a minimal header sketch follows this list), with help from Jeremy, Vince, and Randy as needed. Sylvain has the initial dataset and will report back on any problems before preparing the rest.
3) Will use the same 5 tracts plus the corpus callosum (CC). The ROIs need to be redone as volumes rather than planes. Agreed to use the same definitions for locating the centroid of each ROI and then expand it to make a volume ROI (see the sketch after this list). Sonia and Randy to make a first pass in 1 subject, validate with Marek's lab, and then send around to be sure the ROIs work with all of the algorithms.
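For orientation, the sketch below shows the general shape of a detached NRRD header for DWI data following the Slicer/Teem DWMRI key/value convention (modality, b-value, and per-volume gradient directions). All sizes, spacings, filenames, and values are placeholders for illustration and are not the MIND acquisition parameters.

 NRRD0005
 # detached DWI header sketch; all numeric values are placeholders
 type: short
 dimension: 4
 space: right-anterior-superior
 sizes: 128 128 60 61
 space directions: (2,0,0) (0,2,0) (0,0,2) none
 kinds: space space space list
 endian: little
 encoding: raw
 space origin: (0,0,0)
 measurement frame: (1,0,0) (0,1,0) (0,0,1)
 data file: dwi.raw
 modality:=DWMRI
 DWMRI_b-value:=1000
 DWMRI_gradient_0000:=0 0 0
 DWMRI_gradient_0001:=1 0 0
 DWMRI_gradient_0002:=0 1 0
 ...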
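As a concrete illustration of expanding an ROI centroid into a volume ROI, the Python sketch below simply marks all voxels within a fixed radius of the centroid. The radius, image geometry, and coordinates are illustrative assumptions only, not the agreed ROI definitions.

 import numpy as np

 def sphere_roi(shape, centroid_ijk, radius_mm, spacing_mm):
     """Binary volume ROI: 1 within radius_mm of the centroid voxel, 0 elsewhere."""
     kk, jj, ii = np.indices(shape)                       # voxel index grids
     d2 = (((kk - centroid_ijk[0]) * spacing_mm[0]) ** 2 +
           ((jj - centroid_ijk[1]) * spacing_mm[1]) ** 2 +
           ((ii - centroid_ijk[2]) * spacing_mm[2]) ** 2)
     return (d2 <= radius_mm ** 2).astype(np.uint8)

 # Hypothetical example: a 5 mm radius sphere around a CC centroid in a 60x128x128 volume
 roi = sphere_roi((60, 128, 128), centroid_ijk=(30, 64, 64),
                  radius_mm=5.0, spacing_mm=(2.0, 2.0, 2.0))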
Preprocessing stream:
1) Start with DWI data and NIfTI header + gradient directions
2) Field map correction is available (for UMN, Iowa, MGH) if we verify that this is helpful
3) Eddy Current Correction (affine registration) (BWH)
4) Put into nrrd format (BWH)
5) Weighted least squares tensor estimation using the Teem library (FSL won't match on this step; a numerical sketch of this estimation follows this list) (BWH)
6) T1 white matter mask co-registered to eddy current corrected DWI data (NA-MIC affine registration tool) (FreeSurfer white matter + ? vs. EMSegmentation; Sonia/Sylvain to determine based on what works best)
7) ROIs will be drawn in DWI/DTI space
8) Affine registration transformation to bring retest into test space (use this for mapping ROIs and outcome label maps only, from test to retest, for each subject; see the resampling sketch after this list)
9) Each group will be responsible for implementing their own algorithm, starting at whatever point in this stream is appropriate for their software. All agreed NOT to use alternate methods to accomplish any of the steps listed above.
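As a numerical illustration of the weighted least-squares tensor estimation in step 5 (the pipeline itself uses the Teem library), the Python sketch below fits the log-linear tensor model for a single voxel using the common choice of squared-signal weights; masking, outlier handling, and noise-floor details are omitted.

 import numpy as np

 def wls_tensor(signals, bvals, bvecs):
     """Weighted least-squares diffusion tensor fit for one voxel.

     signals : (N,) measured DWI signals (including b=0 volumes)
     bvals   : (N,) b-values
     bvecs   : (N, 3) unit gradient directions (ignored where b=0)
     Returns (S0, D) with D the symmetric 3x3 tensor.
     """
     gx, gy, gz = bvecs[:, 0], bvecs[:, 1], bvecs[:, 2]
     # Design matrix for the log-linear model ln(S) = ln(S0) - b g^T D g
     X = np.column_stack([
         np.ones_like(bvals, dtype=float),
         -bvals * gx * gx, -2 * bvals * gx * gy, -2 * bvals * gx * gz,
         -bvals * gy * gy, -2 * bvals * gy * gz, -bvals * gz * gz,
     ])
     y = np.log(np.clip(signals, 1e-6, None))
     w = signals.astype(float) ** 2        # standard WLS weights ~ S^2
     XtW = X.T * w                         # X^T W with W = diag(w)
     beta = np.linalg.solve(XtW @ X, XtW @ y)
     S0 = np.exp(beta[0])
     D = np.array([[beta[1], beta[2], beta[3]],
                   [beta[2], beta[4], beta[5]],
                   [beta[3], beta[5], beta[6]]])
     return S0, D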
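A short sketch of step 8's use of the test-retest affine transform to carry label maps across sessions, with scipy nearest-neighbour resampling standing in for whichever NA-MIC registration tool actually produces and applies the transform. The matrix/offset convention shown (retest voxel indices mapped to test voxel indices) follows scipy.ndimage.affine_transform; the usage line is hypothetical.

 import numpy as np
 from scipy.ndimage import affine_transform

 def map_labels_to_retest(label_map_test, matrix, offset, retest_shape):
     """Resample a test-space integer label map onto the retest grid.

     matrix and offset map retest voxel indices to test voxel indices
     (the convention expected by scipy.ndimage.affine_transform).
     order=0 (nearest neighbour) keeps label values integer.
     """
     return affine_transform(label_map_test, matrix, offset=offset,
                             output_shape=retest_shape, order=0,
                             mode='constant', cval=0)

 # Hypothetical usage with an identity transform (stand-in for the real registration output):
 # roi_retest = map_labels_to_retest(roi_test, np.eye(3), np.zeros(3), retest_shape)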
Outcome metrics:
1) Space carved (Casey's DTIprocess tool, which generates a volume label map from the tracelines). This will give volumes, overlap, mean and standard deviation of FA, trace, and mode. Use these for test-retest metrics (a sketch of such metrics follows this list). Pass these label maps to Sonia; she will generate these measures for the AHM presentation.
2) Casey's FiberCompare multiple traceline visualization tool
3) Sonia will also use these results to explore ways to analyze them, e.g., the STAPLE algorithm to find common agreement (specificity and sensitivity)
4) User interface, hardware/software (processor speed, platform, RAM), operator time
5) Further discussion of how best to parameterize tracts at January AHM
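To make the test-retest metrics in item 1 concrete, here is a small Python sketch computing tract volume, Dice overlap, and within-tract FA mean/standard deviation from binary label maps. It assumes the retest label map has already been resampled into test space (preprocessing step 8) and that both label maps are non-empty; the function name and interface are illustrative, not the DTIprocess tool's.

 import numpy as np

 def tract_metrics(label_test, label_retest_in_test_space, fa_test, voxel_volume_mm3):
     """Test-retest metrics from binary tract label maps (illustrative sketch only)."""
     a = label_test.astype(bool)
     b = label_retest_in_test_space.astype(bool)
     dice = 2.0 * np.logical_and(a, b).sum() / float(a.sum() + b.sum())  # overlap
     return {
         "volume_test_mm3": float(a.sum()) * voxel_volume_mm3,
         "volume_retest_mm3": float(b.sum()) * voxel_volume_mm3,
         "dice_overlap": dice,
         "fa_mean_test": float(fa_test[a].mean()),
         "fa_std_test": float(fa_test[a].std()),
     }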
Next steps:
1) T-con November 16th, 2 PM EST / noon MST. Agenda items include feedback on the sample data set & ROIs
2) The next face-to-face gathering will be at the AHM; Randy has scheduled time on the Wednesday agenda to continue this project
3) Proper implementation of the DTI gradient orientation system in ITK, nrrd, Teem, etc. (Casey/Tom to file a bug report, bring it up in an upcoming Engineering T-con, and plan for work on it at the next Project Week)
Miscellaneous Notes
Group explored potential data sets that are available as needed: UNC, n=1, 10 acquisitions with 6 directions; MIND, n=10, 8 acquisitions, 2x at each of 3 sites with 6 directions and 2x at 1 site with 60 directions.
Return to Contrasting Tractography Project Page