Mbirn: Semi-Automated Shape Analysis (SASHA) Project
Semi-Automated SHape Analysis (SASHA)
- Background: SASHA is a collaborative project aimed at developing a morphometry pipeline that integrates subcortical segmentation and shape analysis using tools developed at different sites. Clinical imaging data acquired at one site (WashU) are analyzed with morphometry tools from two other sites: FreeSurfer subcortical segmentation at MGH, followed by shape analysis (Large Deformation Diffeomorphic Metric Mapping, LDDMM) at JHU. A visualization tool from a fourth site (BWH) has been extended to enable viewing of all the derived results on a single platform. To drive the integration of the pipeline, preliminary data from R. Buckner (WashU) were analyzed (45 subjects: 21 nondemented controls, 18 with very mild Alzheimer's disease, and 6 with semantic dementia).
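The data flow behind this pipeline is a chain of per-site processing stages. The sketch below is only an illustration of that chain under assumed interfaces; the function names, arguments, and file formats are placeholders, not the actual tool APIs.

# Illustrative sketch of the SASHA data flow only; function names, arguments,
# and file formats are placeholders, not the real tool interfaces.
from pathlib import Path

def freesurfer_segment(t1_volume: Path) -> Path:
    """MGH stage: FreeSurfer subcortical segmentation of a WashU T1 scan."""
    ...

def lddmm_shape_analysis(segmentation: Path, template: Path) -> Path:
    """JHU stage: LDDMM mapping of a segmented structure to a template,
    yielding deformation-based shape metrics."""
    ...

def visualize(metrics: Path) -> None:
    """BWH stage: load the derived results into the extended viewer."""
    ...

def run_subject(t1_volume: Path, template: Path) -> None:
    segmentation = freesurfer_segment(t1_volume)             # WashU data -> MGH segmentation
    metrics = lddmm_shape_analysis(segmentation, template)   # segmentation -> JHU LDDMM
    visualize(metrics)                                       # derived results -> BWH viewer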
Preprocessing methods at MGH
Mbirn: SASHA preprocessing methods and effort log
UPDATES
2006.05.02 (A. Kolasny)
- The 101-subject lddmm processing has been completed. This required 244,824 CPU hours of processing and 40 TB of storage.
The SDSC and NCSA TeraGrid sites, the BIRN SDSC cluster, and the JHU CIS cluster were used to process the 40,804 lddmm jobs. We will now begin the statistical analysis and visualization processing.
2006.03.14 (A. Kolasny)
- Added an additional 10 TB of storage to the JHU CIS storage repository. This will be used to complete the right hippocampus processing. We are experimenting with sshfs and unionfs to assist with cluster processing.
2006.02.10 (A. Kolasny)
- Completed lddmm processing for the 101 left hippocampus data sets. This computation required a total of 13 CPU years of computing. We utilized the JHU CIS cluster, the SDSC BIRN cluster, and the TeraGrid for the processing. The processing required 20 TB of storage, which is held on the BIRN rack and the TeraGrid /gpfs-wan storage repository. Started processing the right hippocampus data.
2005.12.02 (A. Kolasny)
- Completed lddmm processing on the TeraGrid for the 56x56 left hippocampus data sets.
2005.09.08 (T. Brown)
- Completed lddmm runs on the TeraGrid, comparing each hippocampus to a scaled version of itself. Comparisons for both the left and right sides (the 45-subject hippocampus data set) are complete.
2005.07.19 (J. Jovicich)
- MGH continues quality control of the 100 FreeSurfer hippocampus segmentations (50 control and 50 AD brains). A set of 20 brains is almost ready to go to JHU. The process is slow because it involves:
- full-brain FreeSurfer segmentation (done once)
- right & left hemisphere hippocampal surface smoothing
- inspection of each brain in three orthogonal views and fine-tuned manual edit corrections (about 4 brains per day)
- repeat surface smoothing and repeat inspection
- Getting the reviewed hippocampi to JHU soon is crucial so that they can start running them on the TeraGrid, which will help us use the CPU time originally requested and help us apply for future TeraGrid user time.
2005.05.17 (A. Kolasny)
- Statistical analysis has been performed on the unscaled (raw) and scaled (normalized for scale) hippocampus data sets. Three technical reports have been prepared, and four drafts are being prepared for publication in the relevant literature. (A. Kolasny)
- R scripts are being documented. An internal Rweb server has been created. The key libraries used in R have been identified and installed on the internal Rweb server. Scripts used for the statistical analysis will be refined in the coming months and incorporated into an automated statistical analysis process. (A. Kolasny)
- An additional 100 segmented hippocampus data sets (50 AD, 50 matched controls) from R. Buckner are to be quality assured at MGH before being sent to JHU for the validation analysis. (J. Jovicich)
- Scriptable conversion routines between the MGH segmentation output and the LDDMM shape analysis input are under development (a minimal sketch of this kind of conversion step follows this list). (T. Brown)
- The goal is to perform 100 x 100 hippocampus comparisons on the TeraGrid and to perform the statistical analysis on the resulting data by October 2005. (T. Brown)
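The conversion routines mentioned above are not described in detail here. As a minimal sketch of the kind of step involved, the snippet below extracts the hippocampus labels from a FreeSurfer aseg volume as binary masks, assuming nibabel is available and the standard FreeSurfer color-table IDs (17 for the left and 53 for the right hippocampus); file names and the output format are illustrative, not the actual interchange format used with LDDMM.

# Minimal sketch (not the actual conversion scripts): extract the hippocampus
# labels from a FreeSurfer aseg volume as binary masks.
# Assumes nibabel and the standard FreeSurfer color-table label IDs.
import numpy as np
import nibabel as nib

ASEG_LABELS = {"left_hippocampus": 17, "right_hippocampus": 53}

def extract_hippocampus_masks(aseg_path, out_dir):
    aseg = nib.load(aseg_path)                     # e.g. <subject>/mri/aseg.mgz
    data = np.asarray(aseg.dataobj)
    for name, label in ASEG_LABELS.items():
        mask = (data == label).astype(np.uint8)    # 1 inside the structure, 0 elsewhere
        out = nib.Nifti1Image(mask, aseg.affine)   # keep the voxel-to-RAS affine
        nib.save(out, f"{out_dir}/{name}.nii.gz")

# Example call (hypothetical paths):
# extract_hippocampus_masks("subj01/mri/aseg.mgz", "subj01/lddmm_input")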
2005.02.14 (J. Jovicich)
- Preliminary LDDMM results seem very promising in terms of finding reliable metrics for classifying subjects by disease. The results will be presented at the mBIRN Miami meeting. The current plan is to complete the LDDMM analysis of a second batch of data for reproducibility purposes.
- The new data set is from R. Buckner (WashU): 50 young controls, 50 old controls, 50 AD subjects
- Processing status & plans:
- Brian is pre-processing the 150 brains (subcortical segmentations with latest software/atlas)
- Brian will QA the asegs
- Silvester will then run Peng's smoothing code
- Brad/Randy/Silvester will QA the smoothed asegs
- Silvester/Heidi will provide the results to JHU
- JHU will complete shape & stats analysis
- A manuscript on LDDMM and multidimensional scaling (MDS) for classifying AD will be led by MIM (an illustrative MDS sketch follows this list)
- Bruce Fischl provided Polina with scripts that will let her read MGH's automated subcortical segmentation results. This should enable her to start using these data as input to her classifier tools. We will then be able to use this as a template for the LDDMM data once MIM can provide it to her in a format that she can use.
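The planned MDS analysis is not specified on this page. Purely as an illustration, assuming the LDDMM runs yield a symmetric matrix of pairwise shape distances, a low-dimensional embedding could be computed with scikit-learn's MDS on precomputed dissimilarities and the resulting coordinates passed to a classifier or plotted by group.

# Illustration only: embed a placeholder matrix of pairwise LDDMM shape
# distances with multidimensional scaling.  Assumes scikit-learn; the
# distance matrix here is random stand-in data, not project results.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_subjects = 45                              # size of the pilot data set
d = rng.random((n_subjects, n_subjects))
distances = (d + d.T) / 2.0                  # symmetrize the placeholder matrix
np.fill_diagonal(distances, 0.0)             # zero self-distance

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)        # one 2-D point per subject

print(coords[:5])                            # coordinates for classification or group plots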
2005.01.20 (T. Brown)
- Poster abstract "Pattern classification of hippocampal shape analysis in a study of Alzheimer's Disease" submitted for the 2005 Human Brain Mapping Conference.