2007 Annual Scientific Report


Back to 2007_Progress_Report

An electronic copy of this report that was mailed to NIH on May 31, 2007 (pdf)

1. Introduction

The National Alliance for Medical Imaging Computing (NA-MIC) is now in its third year. This Center comprises a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, and to oversee the training and dissemination of these tools to the medical research community. The driving biological projects (DBPs) for the first three years of the Center came from schizophrenia, although the methods and tools developed are clearly applicable to many other diseases.

In the first year of this endeavor, our main focus was to develop alliances among the many cores to increase awareness of the kinds of tools needed for specific imaging applications. Our first annual report and all-hands meeting reflected this emphasis on cores, which was necessary to bring together members of an interdisciplinary team of scientists with such diverse expertise and interests. In the second year of the Center our emphasis shifted from the integration of cores to the identification of themes that cut across cores and are driven by the requirements of the DBPs. We saw this shift as a natural evolution, given that the development and application of computational tools became more closely aligned with specific clinical applications. This change in emphasis was reflected in the Center's four main themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. In the third year of the Center, collaborative efforts have continued along each of these themes among computer scientists, clinical core counterparts, and engineering partners. We are thus quite pleased with the focus on themes, and we note that our progress has not only continued but that more projects have come to fruition with respect to publications and presentations from NA-MIC investigators, which are listed on our publications page.

Below, in the next section (Section 2) we summarize our progress over the last year using the same four central themes to organize the progress report. These four themes include: Diffusion Image analysis (Section 2.1), Structural analysis (Section 2.2), Functional MRI analysis (Section 2.3), and the NA-MIC toolkit (Section 2.4). Section 3 highlights four important accomplishments of the third year: advanced algorithm development in Shape and DTI analysis, the newly architected open source application platform, Slicer 3, and our outreach and technology transfer efforts. Section 4 summarizes the impact and value of our work to the biocomputing community at three different levels: within the center, within the NIH-funded research community, and externally to a national and international community. The final section of this report, Section 5, provides a timeline of Center activities.

In addition, the end of the first three years of the Center marks a transition from the first set of DBPs, which focused entirely on schizophrenia, to a new set that spans a wider range of biological problems. The new DBPs continue to include neuropsychiatric disorders such as Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill), along with a direction that is new but synergistic for NA-MIC: Prostate Interventions (Johns Hopkins University). Funding for the second round of DBPs starts in the next cycle, but the PIs were able to attend the recent All-Hands Meeting and start developing plans for their future research in NA-MIC.

Finally, we note that Core 3.1 (Shenton and Saykin) is in the process of applying for a Collaborative R01 to expand current research with NA-MIC, which ends on July 31, 2007. Both Drs. Shenton and Saykin have worked for three years driving tool development for shape measures, DTI tools, and path analysis measures for fMRI as part of the driving biological project in NA-MIC. They now plan to expand this research in a Collaborative R01 by working closely with Drs. Westin, Miller, Pieper, and Wells to design, assess, implement, and apply tools that will enable the integration of MRI, DTI, and fMRI in individual subjects, and to develop an atlas of functional networks and circuits based on a DTI atlas (i.e., structural connectivity), which will be integrated with a network of functional connectivity identified from fMRI probes of attention, memory, emotion, and semantic processing. We mention this here because this will be, to our knowledge, the first DBP to apply for further funding to continue critical work begun with NA-MIC.

(Please note that this report is available on the NA-MIC wiki at the following url: http://www.na-mic.org/Wiki/index.php/2007_Annual_Scientific_Report)

2. Four Main Themes

This year's activities focus on four main themes: Diffusion Image Analysis, Structural Analysis, Functional MRI Analysis, and the NA-MIC Kit. Each of the following sections begins with an overview of the theme, provides a progress update and list of key investigators, and concludes with a set of links to additional information for individual projects in that theme.

These thematic activities involve scientists from each of the 7 NA-MIC cores (Appendix).

2.1 Diffusion Image Analysis Theme

Progress

Over the past year, we have continued to develop a number of tools relevant to diffusion tensor estimation, fiber tractography, and geometric and statistical diffusion tensor analysis. Some of these tools have already been integrated into dedicated diffusion software (e.g., FiberViewer, UNC) and into the Slicer platform. Several groups are focusing on more sophisticated methodology as alternatives to conventional tractography for more robust extraction of fiber bundles (Georgia Tech, Utah). Others combine tractography results from individuals into fiber cluster atlases (MIT). UNC has started developing brain mapping tools for group data analysis, such as white matter atlases, and tools for data integration that combine and interface among different imaging modalities (structural MRI, fMRI, and DTI) to better estimate anatomical and functional connectivity. The current reporting period is clearly characterized by a significantly increased effort to apply NA-MIC DTI tools developed by Core-1 partners to clinical data provided by Core-3 DBPs, in multiple clinical projects involving several psychiatric populations. Below we provide detailed progress in the area of diffusion image analysis.

Fiber Tract Extraction and Analysis

  • Since last year, all of the algorithms for fiber tractography and anisotropy estimation have been implemented in both the “Fiber Viewer” and “Slicer” packages, and the resulting methods are now being applied to clinical studies. The FiberViewer tool and its application in a large clinical UNC DTI study have been published in archival medical image analysis journals. Utah recently extended the tractography-based concept with a new approach based on solving a PDE, thus replacing the tractography-based extraction of fiber bundles with a fully volumetric process. Characterization of bundles is then approached with a new regression method (IPMI'07). In a similar spirit, the BWH/MIT/UNC groups computed diffusion measures along a specific fiber bundle of interest, parametrized by arc length, and applied this to the cingulum bundle. Georgia Tech has developed a method called "Finsler Active Contour DWI", a mathematically more sophisticated alternative to standard tractography that makes use of Finsler geometry and solves a new anisotropic energy functional for the extraction of 3D curves from tensor data. Preliminary applications to the cingulum bundle in the PNL schizophrenia dataset of our Core-3 partner are very promising.
  • In addition to clinical studies, teams have been working on other methods to define and estimate brain connectivity. The most important developments in this regard were volumetric connectivity and stochastic tractography measures. For volumetric connectivity, a PDE-based approach to white matter connectivity from DTI has been developed, founded on the principle of minimal paths through the tensor volume. This method computes a volumetric representation of a white matter tract given two endpoint regions (a simplified minimal-path sketch appears after this list). In addition, statistical methods for quantifying the full tensor data along these pathways have also been developed. Directional PDE-based flows have been proposed and implemented for a similar purpose. Further, an approach called "stochastic tractography" has been developed to calculate the probability that two regions are connected by a fiber tract. This should be especially useful when the tract must pass through regions characterized by low diffusion anisotropy.
  • Some initial attempts have been made at population-based analysis of DT-MRI. One method, in development by UNC in collaboration with Utah, is based on the unbiased non-rigid registration of a population to a common coordinate system. The registration jointly produces an average DTI atlas, which is unbiased with respect to the choice of a template image, along with a diffeomorphic correspondence for each image. The method includes calculation of features driving non-linear, population-based diffeomorphic deformation to an average template, nonlinear mapping of tensor fields, and a statistical framework for region-based and tract-based analysis of the whole population in the geometry of the atlas space as well as in the respective space of the original MR-DWI. The anatomically significant correspondence provides a basis for comparison of tensor features and fiber tract geometry in clinical studies. This development responds to the need of the clinical community for efficient, automatic analysis of large populations of images as an alternative to case-by-case user-supervised processing, similar in spirit to the SPM package used for fMRI analysis. A different route is taken in a project by MIT/BWH: tractography applied to the whole brain of a population of subjects is used to create a model (atlas) of fiber tract clusters. This method can be applied to subdivide the corpus callosum left-right tracts into smaller entities that connect particular subareas of the cortex. A key issue of this research is the geometric characterization of bundles of streamlines obtained by tractography.
  • Validation of DTI tools is pursued via a joint collaboration between UCI, MGH, UNC and MIT. Initial work used the DTI Studio tool provided by Susumu Mori. Extending this validation to include Slicer was difficult due to incompatibility of data, the thick-slice DTI data available for the preliminary analysis, and the use of very different formats not yet fully integrated into ITK. This effort will be a priority for the Core-5 training group, which will integrate validation of methods as a key part of its training workshop activities.
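
The minimal-path idea referenced in the connectivity item above can be illustrated with a small sketch. The Python fragment below is a deliberately simplified, hypothetical illustration: it treats the DTI volume as a voxel graph whose edge weights come from a scalar cost (here the reciprocal of FA, a simplification of the full tensor-based metrics used in the actual PDE formulations) and runs Dijkstra's algorithm between two seed voxels. All array and function names are assumptions made for illustration only.

```python
import heapq
import numpy as np

def minimal_path(cost, start, goal):
    """Dijkstra's algorithm on a 3D voxel grid (sketch).

    cost  : 3D array of per-voxel traversal costs (e.g. 1 / (FA + eps));
            a scalar cost is a simplification of the tensor-based metrics
            used in the actual PDE formulations.
    start : (i, j, k) seed voxel in the first region of interest.
    goal  : (i, j, k) voxel in the second region of interest (assumed reachable).
    Returns the list of voxels on the lowest-cost path.
    """
    shape = cost.shape
    dist = np.full(shape, np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist[u]:
            continue
        for off in neighbors:
            v = (u[0] + off[0], u[1] + off[1], u[2] + off[2])
            if all(0 <= v[a] < shape[a] for a in range(3)):
                nd = d + 0.5 * (cost[u] + cost[v])  # edge weight: mean voxel cost
                if nd < dist[v]:
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
    # walk back from goal to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical usage: fa is an FA volume computed elsewhere.
# path = minimal_path(1.0 / (fa + 1e-3), start=(20, 30, 25), goal=(60, 32, 27))
```

Dijkstra's algorithm on a 6-connected grid is only a discrete stand-in for the continuous PDE (fast-marching style) solvers described above, but it conveys the minimal-path principle and how a volumetric connectivity measure can be read off the accumulated cost.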

Fractional Anisotropy Analysis

  • We have taken tools developed last year and applied them to our population of chronic schizophrenia subjects in order to investigate two fiber tracts: the cingulum bundle and the corpus callosum. In the case of the cingulum bundle, manually drawn regions of interest and Finsler geometry were used to extract the entire cingulum bundle, and FA was estimated along the tract and compared between groups (the standard FA definition is given below for reference). Corpus callosum cross-sectional area and its probabilistic subdivisions were determined automatically from the structural MRI scans using a model-based deformable contour segmentation. The structural scan was then co-registered with the DTI scan, and the anatomical corpus callosum subdivisions were propagated to the associated FA map and compared between groups, demonstrating deficient interhemispheric communication in schizophrenia.
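
For reference, the fractional anisotropy (FA) used throughout these studies is the standard scalar index computed from the three eigenvalues of the diffusion tensor at each voxel; it ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis):

```latex
\mathrm{FA} \;=\; \sqrt{\tfrac{3}{2}}\,
\sqrt{\frac{(\lambda_1-\bar{\lambda})^2+(\lambda_2-\bar{\lambda})^2+(\lambda_3-\bar{\lambda})^2}
           {\lambda_1^{2}+\lambda_2^{2}+\lambda_3^{2}}},
\qquad
\bar{\lambda} \;=\; \frac{\lambda_1+\lambda_2+\lambda_3}{3}.
```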

Please see a complete summary of clinical applications below.

Integration of fMRI and DTI, Path-of-Interest Analysis

  • Progress has also been made in the development of a tool that combines and integrates functional and anatomical information. The Optimal Path Analysis method has been applied to the Harvard-BWH fMRI dataset, and anatomical connections between regions active during the fMRI experiment have been extracted for each subject. The connectivity has been calculated and compared between groups. This tool is now being ported to Slicer3.

Summary of NA-MIC Clinical DWI Applications

The current reporting period is clearly characterized by a significant increase in the application of tools to clinical DWI data provided by the Core-3 DBP groups. The main reasons are twofold: first, several new tools are ready to be applied to image datasets; second, new high-resolution DTI data from 3-Tesla scanners are more appropriate for the new tools than older data acquired with highly non-isotropic voxels.

  • Applications of NA-MIC DTI tools jointly with the Core-3 partners include the ongoing study of inferior frontotemporal connections (BWH/MIT), applying fiber tractography to extract the uncinate fasciculus and occipito-frontal fasciculus in a study of 27 chronic schizophrenics and 34 controls. The same group studies corpus callosum clustering in the new 3-Tesla dataset provided by PNL (24 subjects: 12 chronic schizophrenics and 12 matched control subjects). Preliminary results show reduced FA and reduced volume in two different clusters.
  • A collaboration between Harvard PNL, MIT and BWH studies diffusion properties of the fornix, a white matter tract interconnecting the frontal and temporal lobes that is likely implicated in schizophrenia. Tractography tools were used to extract the left and right fornix in 34 chronic schizophrenia subjects and 40 matched controls.
  • The atlas-based analysis of DWI in development by UNC has been applied to the 24 3-Tesla schizophrenia DTI datasets from Harvard PNL. Key structures such as the fornix, cingulum, and uncinate fasciculus can be efficiently extracted from the tensor atlas and thus immediately applied to all datasets of the population. Statistical analysis of regions/tracts of interest relevant to the study of schizophrenia is currently being performed jointly with the Core-3 partner PNL.
  • Dartmouth, in collaboration with BWH and UNC, is applying corpus callosum regional analysis to chronic schizophrenia data. BWH, together with MIT and Dartmouth, is currently working on FA analysis of the corpus callosum and the anterior commissure. The same groups also study the uncinate fasciculus bundle in schizophrenia and bipolar disorder, a study that attempts to replicate previous user-guided ROI analyses with the new NA-MIC tools.
  • The cingulum bundle in the PNL 3-Tesla schizophrenia dataset is being analyzed by Georgia Tech, using the newly developed "Finsler Active Contour DWI" method.
  • Corpus callosum probabilistic subdivision and quantitative analysis, originally developed by the UNC shape analysis group, is now being applied to a clinical study of 32 schizophrenic subjects and 42 controls, a joint collaboration between Harvard PNL, BWH and UNC.
  • The path-of-interest method has been successfully tested by Dartmouth to map out the uncinate fasciculus and arcuate fasciculus in a healthy adult, and will be used to study fronto-temporal circuitry in the Dartmouth schizophrenia study. In parallel, BWH, PNL and MIT are applying the "optimal path analysis" method to study anatomical connectivity between regions demonstrating functional activation.
  • A clinical project in recruitment phase between Toronto and BWH is planning a DTI and genetic study of Psychosis across the lifespan.

Key Investigators

  • BWH: Martha Shenton, Marek Kubicki, Marc Niethammer, Sylvain Bouix, Jennifer Fitzsimmons, Katarina Quintis, Doug Markant, Kate Smith, Georgia Bushell, Mark Dreusicke, Carl-Fredrik Westin, Raul San Jose, Gordon Kindlmann
  • MGH: Bruce Fischl, Denis Jen, David Kennedy
  • MIT: Lauren O'Donnell, Polina Golland, Tri Ngo
  • UCI: James Fallon, Martina Panzenboeck
  • UNC: Guido Gerig, Isabelle Corouge, Casey Goodlett, Martin Styner
  • Utah: Tom Fletcher, Ross Whitaker, Saurav Basu, Davis McKay
  • GA Tech: Eric Pichon, John Melonakos, Xavier LaFaucheur, Vandana Mohan, Allen Tannenbaum
  • Dartmouth: John West, Andrew Saykin, Laura Flashman, Paul Wang, Heather Pixley, Robert Roth
  • Isomics: Steve Pieper
  • Kitware: Luis Ibanez

Additional Information

For details of each of the projects in this theme, please see NA-MIC Projects on Diffusion Image Analysis.

2.2 Structural Analysis Theme

Progress

Within NA-MIC's structural analysis theme, the main topics of interest are structural segmentation, registration, and shape analysis. These topics are of course intertwined; for example, segmentation or registration can directly deliver the structural correspondence used in shape analysis, and in turn shape modeling is necessary for good structural segmentation. Below is a selection of progress highlights in the structural analysis theme.

Segmentation
  • Wavelet based structural segmentation: We have developed a spherical wavelet based framework for the segmentation of selected brain structures, such as the hippocampus or the caudate nucleus. An automated segmentation of such structures must be highly accurate and capture high-frequency variations in the surface. Since shape representation is a key component of the segmentation, it must be rich enough to express shape variations at various frequency levels, from low harmonics to sharp edges. Medical object segmentation with deformable models can be combined with statistical shape modeling to obtain a more robust and accurate segmentation. To address this, a decomposable spherical wavelet based shape representation targeted to the population seems natural, in which the shape parameters are separated into groups that describe independent global and/or local biological variations in the population, and a prior induced over each group explicitly encodes these variations. This work presents three novel contributions: shape representation, multiscale prior probability estimation, and segmentation.
  • Directional based segmentation: We have proposed an image segmentation technique based on augmenting the conformal (or geodesic) active contour framework with directional information. In the classical case, the Euclidean metric is locally multiplied by a scalar conformal factor (based on image information) such that the weighted length of curves lying on points of interest (typically edges) is small. We propose to add directionality to the factor, and show that one gets a well-defined minimization problem in the case that the factor defines a Finsler metric. Optimal curves may be obtained using the calculus of variations or dynamic programming. This methodology also makes connections to the important technique of graph-cuts.
  • Statistical PDE methods for segmentation in shape space: This past year, we have proposed another method to perform shape-driven segmentation. In our approach, shapes are represented using binary maps, and linear PCA is utilized to provide shape priors for segmentation. Intensity based probability distributions are then employed to convert the given volume into a binary map representation, and a new energy functional is formulated whose minimization is performed using a parametric model for surface evolution in the shape space. Our algorithm has been applied to the segmentation of the brain caudate nucleus and hippocampus from MRI data. Our validation shows that the proposed algorithm outperforms the log-likelihood based energy, converges in fewer than 5 iterations, and is very robust to initialization. The overall algorithm illustrates the potential for segmentation in shape space.
  • Rule-based segmentation methods: We have continued this past year to develop segmentation methods based on heuristic rules provided to us by our Core 3 partners for segmenting various brain regions of interest in schizophrenia, e.g., the DLPFC and the striatum. The idea is to semi-automate these rules in order to forge an interactive segmentation tool that can greatly shorten the time needed for manual segmentation. Typically, these methods are used in conjunction with a Bayesian classifier, which further helps automate and speed up the segmentation methodology.
  • Tissue and structural segmentation via EM Segmenter: Standard image based segmentation approaches perform poorly when there is little or no contrast along boundaries of different regions. In such cases segmentation is mostly performed manually using prior knowledge of the shape and relative location of the underlying structures combined with partially discernible boundaries. We have developed an automated approach guided by covariant shape deformations of neighboring structures, which serve as an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an Expectation-Maximization formulation of the Maximum A Posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which views them as priors to a Mean Field approximation with a curve length prior. This segmentation framework has been ported into the NAMIC Kit, and a first version of the tool is distributed with Slicer 3. (A simplified sketch of the atlas-guided EM step appears after this list.)
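
To make the EM formulation above concrete, the following Python sketch shows the core loop of an atlas-guided EM tissue classifier for a single-channel image, assuming one Gaussian intensity model per tissue class and a probabilistic atlas as the spatial prior. It deliberately omits the bias-field (inhomogeneity) estimation, the atlas-to-image registration, and the covariant shape deformation terms of the actual EM Segment algorithm; all names are hypothetical and the fragment is only a minimal illustration.

```python
import numpy as np

def em_segment(image, atlas, n_iter=20):
    """Minimal atlas-guided EM tissue classification (sketch).

    image : 1D array of voxel intensities (flattened volume).
    atlas : (K, N) array of per-class spatial prior probabilities,
            assumed already registered to the image.
    Returns (K, N) posterior class probabilities.
    Bias-field correction and the covariant shape model of the full
    EM Segment algorithm are intentionally omitted here.
    """
    K, N = atlas.shape
    mu = np.linspace(image.min(), image.max(), K)   # initial class means
    var = np.full(K, image.var())                   # initial class variances

    for _ in range(n_iter):
        # E-step: posteriors proportional to atlas prior * Gaussian likelihood
        like = np.exp(-0.5 * (image[None, :] - mu[:, None]) ** 2 / var[:, None]) \
               / np.sqrt(2.0 * np.pi * var[:, None])
        post = atlas * like
        post /= post.sum(axis=0, keepdims=True) + 1e-12

        # M-step: update Gaussian parameters from the posteriors
        w = post.sum(axis=1) + 1e-12
        mu = (post * image[None, :]).sum(axis=1) / w
        var = (post * (image[None, :] - mu[:, None]) ** 2).sum(axis=1) / w + 1e-6

    return post

# Hypothetical usage with a 3-class (GM/WM/CSF) atlas:
# labels = em_segment(volume.ravel(), atlas_3xN).argmax(axis=0).reshape(volume.shape)
```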
Shape Analysis
  • Wavelet based shape analysis: A shape representation that encodes variations at multiple scales can be useful as a rich feature set for shape analysis and classification. Building on the existing shape analysis toolset, we extended it to include spherical wavelet coefficients (SWC) as features and compared the results to shape analysis using a SPHARM-PDM representation. The rich SWC feature set allows the differentiation of shape differences at various scales and correlates highly with the existing SPHARM-PDM analysis while offering increased statistical sensitivity. Wavelet coefficient shrinkage and dimension reduction are well understood and have been widely researched for traditional wavelet decompositions, but have been little explored for second-generation wavelets. During the past year, we have developed a Bayesian model on our specific wavelet structure based on a population of surfaces. For each shape, the deviation from the mean is computed and modeled as the sum of an unknown signal and noise. This deviation is encoded by the wavelet transform, and our goal is to estimate the wavelet coefficients belonging to the noiseless signal.
  • Surface flattening for shape analysis: Flattened representations of undulated surfaces constitute an active area of research in the field of medical imaging and visualization, due to their extensive use for registration and shape analysis of various structures of interest. We have presented a method for flattening anatomical surfaces in an area preserving manner, while minimizing the geometrical distortion. This method is based on the theory of optimal mass transport and conformal mapping of surfaces. The key idea here is the use of a multiresolution scheme for the solution of optimal mass transport gradient descent equations which allows a fast and stable solution for optimal transport. The method has been implemented on a GPU, allowing us to flatten a 128 by 128 by 128 volume in about 5 seconds on a standard workstation.
  • Curvature based population correspondence: We have extended the Minimum Description Length (MDL) population based correspondence framework to include curvature based measurements, such as the Koenderink Shape Index S and Curvedness C, in combination with the standard location information. Current methodology in population based correspondence is based mainly on minimizing distribution properties of surface point locations and is thus not invariant to alignment. We have favorably compared our combined "Curvature + Location" MDL to the standard MDL, as well as to the SPHARM approach, in complex structures such as the striatal brain structure (composed of caudate, nucleus accumbens and putamen).
  • Shape analysis toolset: A considerable amount of work was spent on the development aspect of the shape analysis tools. The distributed set of tools is continuously enhanced, and the population based correspondence framework has been released as open source. All the tools, including the visualization tool KWMeshVisu, can be called directly from Slicer 3. The visualization tool allows the overlay of scalar, vector and ellipsoid data onto surfaces via versatile colormaps; the attributed surfaces are then visible within Slicer 3. This lean visualization tool fills a niche and is also used in our cortical thickness analysis tool. Also, while it is entirely possible to run all shape analysis steps by calling the individual modules, this is highly inefficient in a larger study. As a result we are developing a separate shape pipeline Slicer module that uses BatchMake to run the shape analysis pipeline as a distributed, background process. The whole shape analysis pipeline will thus become entirely encapsulated and accessible to the trained clinical collaborator. (A simplified sketch of the per-vertex statistical testing step at the end of such a pipeline appears after this list.)
  • Particle based correspondence: We have developed a method for a multi-object correspondence optimization, and have applied it successfully to a proof-of-concept application for the analysis of brain structure complexes from a longitudinal study of pediatric autism. This new method for constructing compact statistical point-based models of ensembles of similar shapes does not rely on any specific surface parameterization, requires little preprocessing or parameter tuning, and is applicable to a wider range of problems than existing methods, including non-manifold surfaces and objects of arbitrary topology. The method uses a dynamic particle system to optimize correspondence point positions across all structures by simultaneously maximizing both the geometric accuracy and the statistical simplicity of the model.
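
As a simplified illustration of the statistical step at the end of such a shape pipeline (referenced in the toolset item above), the following Python sketch performs a two-sample test at every surface correspondence point. The real tools use richer features (SPHARM-PDM or spherical wavelet coefficients) and multiple-comparison correction, so this is only a schematic and all names are hypothetical.

```python
import numpy as np
from scipy import stats

def vertexwise_group_test(group_a, group_b):
    """Two-sample t-test at every surface correspondence point (sketch).

    group_a : (n_a, n_points) array of a scalar feature per point
              (e.g. distance to the shape centroid) for group A.
    group_b : (n_b, n_points) array for group B.
    Returns per-point p-values; no multiple-comparison correction is
    applied here, unlike the full shape analysis pipeline.
    """
    _, pvals = stats.ttest_ind(group_a, group_b, axis=0)
    return pvals

# Hypothetical usage with features extracted at corresponding points of
# segmented caudate surfaces for patients and controls:
# p = vertexwise_group_test(patient_feats, control_feats)
# significant = p < 0.05   # would then be overlaid on the mean surface
```

The resulting per-point significance values are exactly the kind of scalar attribute that KWMeshVisu can overlay on a mean surface for visual inspection.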

Key Investigators

  • MIT: Kilian Pohl, Sandy Wells, Eric Grimson
  • UNC: Martin Styner, Ipek Oguz, Guido Gerig, Xavier Barbero
  • Utah: Ross Whitaker, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer
  • GaTech: Allen Tannenbaum, John Melonakos, Tauseef ur Rehman, Shawn Lankton, Ramsey Al-Hakim, Eric Pichon, Delphine Nain, Oleg Michailovich, Yogesh Rathi, James Malcolm
  • Isomics: Steve Pieper
  • GE: Bill Lorensen, Jim Miller
  • Kitware: Luis Ibanez, Karthik Krishnan,
  • UCLA: Michael J. Pan, Jagadeeswaran Rajendiran
  • BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt
  • Dartmouth: Andrew Saykin
  • UCI: James Fallon

Additional Information

For details of each of the projects in this theme, please see NA-MIC Projects on Structural Image Analysis.

2.3 Functional MRI Analysis Theme

Progress

During this year, the focus of the algorithms and engineering cores has been on structural and DTI analysis. While we continued to expand the methods and infrastructure in the NAMIC Kit to support fMRI analysis, and to use the analysis tools in clinical studies, the emphasis of this year's work has been on integrating fMRI analysis with other modalities and supporting those modalities.

Clinical Studies

We would like to highlight several clinical studies within NAMIC that focused on fMRI data and its relationship with other imaging modalities:

  • Imaging Phenotypes in Schizophrenics and Controls: Functional connectivity of the DLPFC by genotype was investigated employing partial least squares (PLS) correlation analysis. PLS is a multivariate analytical technique used to summarize large neuroimaging data sets in such a way as to correlate patterns of activation with one or more variables of interest (i.e., DLPFC activity). In the most recent analysis, the DRD1 genotype was used as a grouping variable. This analysis has been submitted for publication (Tura, Turner, Fallon, Kennedy, and Potkin. Genetic Impact on Functional Connectivity in Schizophrenics During a Working Memory Task).
    Working memory performance did not differ significantly between the two cohorts. However, imaging-genetic analysis showed a significant difference (P < 0.05) between the circuitry engaged by each group. Significance and reliability of the resulting imaging-behavioral patterns within each genotype were assessed by 200 bootstrap and 500 permutation tests, respectively. In one group, the circuitry included the temporal pole, the insula, the dorsolateral prefrontal cortex, and Brodmann Areas (BA) 1, 2, 3, 4, 6, 11 and 21, while the other group showed a network comprising the tectum, precuneus, retrosplenial cortex, vermis, substantia nigra, and BA 22, 39, 8, and 9. The DRD1 polymorphic site may characterize circuitry differences in schizophrenic patients.
  • Path-Of-Interest Analysis (joint DTI/fMRI modeling): We collected preliminary data using an application of the “optimal path analysis”. In this analysis, we extracted group fMRI activation due to the Stroop effect (an attentional paradigm in which the incongruent color in which a word is written competes with the color name the word spells, activating areas responsible for conflict monitoring and selection), separately for controls and schizophrenics. This resulted in three clusters of activation: one in the right Dorsolateral Prefrontal Cortex, a second in the Anterior Cingulate Gyrus, and a third in the Medial Parietal Lobe. In the next step, we placed the activation clusters in each individual's space by reversing the normalization parameters used during fMRI analysis. Finally, EPI fMRI scans were co-registered to DTI scans, and the same registration parameters were applied to the activation maps (fMRI results).
    Regions of activation were used as start and destination points for optimal path analysis, which resulted in three separate paths of optimal connectivity for each subject. The probability of each connection was then calculated for each path and each subject, and compared between groups. In our preliminary analysis, we included 10 control subjects and 10 chronic schizophrenics. Our results demonstrated a relationship between Stroop Effect fMRI activation in the medial parietal area and optimal path connectivity between parietal and cingulate activation sites in schizophrenics (rho = -0.56; p = 0.047), which was not observed in controls. These findings suggest that decreased connectivity may lead schizophrenics to rely more on posterior parts of the executive attentional network during performance of the Stroop task.
  • Hippocampal and frontal memory circuitry abnormalities in schizophrenia: Relation of diffusion, morphometric and fMRI markers: We performed a combined DTI, fMRI, and morphometric study on 13 patients with schizophrenia (SZ) and 14 healthy controls (HC). We identified areas of increased trace diffusivity (TD) in the hippocampal and insular regions as well as areas of reduced fractional anisotropy (FA) in left frontal white matter in SZ relative to HC (p<.01). Voxel based morphometry analyses in a subset of these subjects showed corresponding reductions in gray matter density in hippocampal and insular regions in patients relative to controls (p<.01). Analysis of fMRI results from the novel vs. repeated word contrast from the event-related auditory verbal episodic memory encoding/retrieval task in a subset of the subjects indicated reduced activation in frontal and temporal regions, as well as increased activation in posterior cingulate, retrosplenial, and thalamic regions in SZ relative to HC (p<.05). Further analysis showed that left frontal white matter FA was associated with activation in the left and right hippocampi as well as other frontal and temporal regions, but inversely related to activation in the retrosplenial/posterior cingulate region (p<.05). These initial findings indicate a pattern of relationships between structural and functional brain abnormalities in schizophrenia and demonstrate the feasibility of integrated quantitative analyses across modalities.
Methods

During this year, we continued methodological development along two directions:

  • Improving fMRI detectors by incorporating Markov priors on the activation state (a simplified sketch of this idea appears after this list). We integrated the improved detector into Slicer and performed substantial validation of the method using fBIRN data provided by the UC Irvine group. A journal paper on the method has been submitted to IEEE TMI.
  • Improving registration of EPI images to anatomical scans through modeling of the EPI distortions. We demonstrated an initial model that uses segmentation of the structural scan to predict the distortions in the EPI images. The preliminary results are quite encouraging.
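
As a simplified illustration of the Markov-prior idea in the first item above, the following hypothetical Python fragment regularizes a voxel-wise activation statistic with an Ising-style spatial prior using iterated conditional modes (ICM). It is a stand-in sketch, not the detector integrated into Slicer; the threshold, coupling weight, and names are illustrative assumptions.

```python
import numpy as np

def icm_activation(stat_map, threshold=3.0, beta=0.75, n_iter=10):
    """Smooth a binary activation map with an Ising-style Markov prior (sketch).

    stat_map  : 3D array of voxel-wise activation statistics (e.g. t-values).
    threshold : data-term threshold; voxels above it prefer the "active" label.
    beta      : strength of the spatial agreement prior.
    Returns a binary 3D activation map.
    """
    labels = (stat_map > threshold).astype(np.int8)
    for _ in range(n_iter):
        # count active 6-neighbors at each voxel
        # (np.roll wraps at the volume boundary, which is acceptable for a sketch)
        nbr = np.zeros_like(stat_map, dtype=float)
        for axis in range(3):
            for shift in (1, -1):
                nbr += np.roll(labels, shift, axis=axis)
        # net preference for the "active" label: data term plus
        # agreement with the neighborhood (active minus inactive neighbors)
        gain = (stat_map - threshold) + beta * (2.0 * nbr - 6.0)
        new_labels = (gain > 0).astype(np.int8)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Hypothetical usage:
# activation = icm_activation(t_map, threshold=3.0, beta=0.5)
```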

Key Investigators

  • MIT: Polina Golland, Sandy Wells, Wanmei Ou, Claire Poynton
  • BWH: Martha Shenton, Marek Kubicki
  • Dartmouth: Andy Saykin
  • UCI: Jessica Turner, Stephen Potkin
  • Toronto: Jim Kennedy

Additional Information

For details of each of the projects in this theme, please see NA-MIC Projects on fMRI Analysis.

2.4 NA-MIC Kit Theme

Progress

The continuing vision of the NA-MIC Kit is to provide an open source set of software tools and methodologies that will serve as the foundation for medical image computing projects for both academic and commercial use. Key elements of this vision are:

  1. Unrestrictive License. Users of the Kit are free to distribute their derived works under any license suitable to their needs.
  2. Cross Platform. This software set can be adapted to the best available price-performance computer systems for any particular use.
  3. Extensible Application Framework. New techniques and algorithms can be quickly integrated into a working system. Sophisticated user interfaces can be generated easily through automated processes. Sophisticated toolsets such as ITK, VTK, and KWWidgets are available to create and deploy applications quickly.
  4. Quality Software Process. Developers and users can rely on accurate and well documented behavior from all the parts of the Kit.
  5. Sustainable Community. Users are actively involved in the design process of the Kit. Documentation, training materials, and hands-on sessions are available and well publicized to the community.
Slicer3

A major focus of the third year was the implementation of Slicer3 in the NAMIC Kit. This effort addressed Item #3 above, the Extensible Application Framework. The previous two years of the NAMIC project, which entailed gathering requirements from Cores 1 and 3 and developing the computational foundation, toolsets, and software process, came together in the Slicer3 application platform.

Core 2 worked hard to ensure that the Slicer3 application serves, and will continue to serve, as a productive technology deployment platform. The application framework was designed carefully to provide ease of use, both in terms of interaction and software integration. Advanced capabilities, including the ability to launch large-scale grid computing, were designed into the application. Some of the key features of the Slicer3 application completed in the third year include the following.

  • Advanced application framework including a tuned GUI for ease of use, undo/redo capabilities, 2D/3D view windows, and support for advanced interaction techniques such as 3D widgets. The application provides viewers for displaying slices, volumes, and models, including the ability to edit properties. A built-in transformation pipeline enables users to confidently import, edit and display data in a consistent coordinate system.
  • The application is data-driven based on the next generation MRML scene description file format. Backward compatibility to Slicer 2.x MRML files is preserved.
  • A module plug-in architecture and execution model that enables researchers to package and integrate their software into the Slicer3 framework. Plug-in modules can be implemented in a variety of programming languages and are described using a simple XML description (a schematic example appears after this list). When located and loaded into Slicer3, these modules can automatically generate their user interface, which is seamlessly integrated into the Slicer3 GUI.
  • Support for editing and marking data, including support for fiducials and paint and draw editors.
  • The creation of several simple plug-in modules, including the conversion of previous Slicer 2.x modules to the new Slicer3 architecture.
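
To illustrate the plug-in mechanism described in the third item above, here is a schematic example of the kind of XML module description from which Slicer3 can generate a user interface automatically. The module itself (a simple threshold filter) and the exact tags shown are illustrative; the Slicer3 execution model documentation is the authoritative reference for the schema.

```xml
<?xml version="1.0" encoding="utf-8"?>
<executable>
  <category>Filtering</category>
  <title>Example Threshold</title>
  <description>Hypothetical module: thresholds an input volume.</description>
  <parameters>
    <label>Input/Output</label>
    <description>Volumes consumed and produced by the module.</description>
    <image>
      <name>inputVolume</name>
      <label>Input Volume</label>
      <channel>input</channel>
      <index>0</index>
    </image>
    <image>
      <name>outputVolume</name>
      <label>Output Volume</label>
      <channel>output</channel>
      <index>1</index>
    </image>
    <double>
      <name>threshold</name>
      <longflag>threshold</longflag>
      <label>Threshold</label>
      <default>100.0</default>
      <description>Intensity cutoff applied to the input volume.</description>
    </double>
  </parameters>
</executable>
```

Given such a description, Slicer3 can both build the GUI panel for the module and invoke the underlying executable with the corresponding command-line arguments.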
EM Segment

As an application framework, Slicer3 provides tools for loading, viewing, measuring, editing, and saving data. To support advanced medical image analysis, plug-in modules are required in conjunction with the Slicer3 core. To demonstrate the capabilities of the framework we implemented the EM Segment module, a sophisticated and proven method for automatically segmenting complex anatomical structures.

To use this module, users specify parameters defining the image protocol and the anatomical structures of interest. This process results in a template that the module uses to automatically segment large data sets. The template is composed of atlas data and a non-trivial collection of parameters for the EM Segment algorithm (Pohl et al.). Once the parameters are specified, the target images are segmented using the algorithm; if the results are satisfactory, the template is saved and can be used later to segment new images (via the GUI or batch processing). If the results are unsatisfactory, the parameters can be modified and the segmentation re-run.

Besides successfully demonstrating the use of complex algorithms in the Slicer3 framework, this effort also led us to develop tools, including modifications to the underlying KWWidgets GUI toolkit, to support module workflow. With these tools, it is possible to simplify complex modules by dividing the complicated template specification task into a number of smaller, intuitive steps. These steps are enforced by the GUI and reduce the potential for user error, while improving the overall user interface.

Quality Software Process

Building on last year's success with the KDE community, the NAMIC community continued to extend its world-class, open source software process tools: CMake, Dart, CTest, and CPack. These tools form the core of a quality-oriented, test-driven development (TDD) software process (a minimal configuration example follows). In particular, the CPack system is now able to automatically package and distribute code, libraries, executables and installers across all of NAMIC's supported platforms (i.e., Linux, Windows, Mac). This enables the NAMIC developer community to rapidly deploy software tools to the user community.
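
As a minimal configuration example (with hypothetical project and file names), the following CMakeLists.txt fragment shows how a NAMIC-style project wires together CMake, CTest/Dart, and CPack:

```cmake
cmake_minimum_required(VERSION 2.4)
project(ExampleModule)

# Build a small library and an executable that exercises it.
add_library(ExampleLib exampleLib.cxx)
add_executable(ExampleTest exampleTest.cxx)
target_link_libraries(ExampleTest ExampleLib)

# CTest / Dart: registers dashboard targets and lets "ctest" run the tests.
include(CTest)
add_test(ExampleRuns ExampleTest)

# CPack: "make package" builds platform-native archives or installers.
include(CPack)
```

With this in place, developers run the test suite via CTest (optionally submitting results to a Dart dashboard) and produce packages via CPack, which is the workflow the NAMIC software process builds on.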

Key Investigators

  • GE: Bill Lorensen, Jim Miller, Xiaodong Tao, Dan Blezek
  • Isomics: Steve Pieper, Alex Yarmarkovich
  • Kitware: Will Schroeder, Luis Ibanez, Brad Davis, Andy Cedilnik, Sebastien Barre, Bill Hoffman
  • UCLA: Mike Pan, Jagadeeswaran Rajendiran
  • UCSD: Neil Jones, Jeffrey Grethe, Mark Ellisman
  • BWH: Nicole Aucoin, Katie Hayes, Wendy Plesniak, Mike Halle, Gordon Kindlmann, Raul San Jose Estepar, Haiying Liu, Ron Kikinis
  • MIT: Lauren O'Donnell, Kilian Pohl

Additional Information

For details of each of the projects in this theme, please see NA-MIC Kit Projects.

3. Highlights

The third year of the NAMIC project saw continued development and dissemination of medical image analysis software. With the release of the first version of Slicer3, the transfer of this technology is accelerating. Because of NAMIC's strong ties with several large open source communities, such as ITK, VTK, and CMake, NAMIC continues to make significant impact on the nation's broader biocomputing infrastructure. The following are just a few of the many highlights from the third year of the NAMIC effort.

3.1 Advanced Algorithms

Core 1 continues to lead the biomedical community in DTI and shape analysis.

  • NAMIC published an open source framework for shape analysis, including providing access to the open source software repository. Shape analysis has become of increasing relevance to the neuroimaging community due to its potential to precisely locate morphological changes between healthy and pathological structures. The software has been downloaded many times since the first online publication in October 2006, and is now used by several prestigious image analysis groups.
  • The spherical based wavelet shape analysis package has been contributed into ITK, and in the next few months the multiscale segmentation work will be incorporated as well.
  • The NAMIC community has implemented a very fast method for the optimal transport approach to elastic image registration which is currently being added to ITK.
  • The conformal flattening algorithm has been implemented as an ITK filter and is in the NAMIC Sandbox in preparation for formal acceptance into the NAMIC Kit.

3.2 Technology Deployment Platform: Slicer3

Core 2, in conjunction with Algorithms (Core 1) and the DBPs (Core 3), is creating new tools to accelerate the transition of technology to the biomedical imaging community.

  • One of the year's major achievements was the release of the first viable version of the Slicer3 application, which evolved from concept to a full-featured application. The second beta version of Slicer3 was released in April 2007. The application provides a full range of functionality for loading, viewing, editing, and saving models, volumes, transforms, fiducials and other common medical data types. Slicer3 also includes a powerful execution model that enables Core 1 developers (and others in the NAMIC community) to easily deploy algorithms to Core 2 and other biocomputing clients.
  • Slicer3's execution model supports plug-in modules. These modules can be run stand alone or integrated into the Slicer3 framework. When integrated, the GUI to the module can be automatically generated from an associated XML file describing input parameters to the module. A variety of modules were created, ranging from simple image processing algorithms, to complex, multi-step segmentation procedures.
  • To stress test Slicer3's architecture and demonstrate its capabilities, the EM Segment module (http://wiki.na-mic.org/Wiki/index.php/Slicer3:EM) was created and added to Slicer's library of modules. EM Segment is an automatic segmentation algorithm for medical images and represents a collaborative effort between the NAMIC engineering, algorithms, and biological problem cores. The EM Segment module enables users to quickly configure the algorithm to a variety of imaging protocols as well as anatomical structures through a wizard-style, workflow interface. The workflow tools have been integrated into the NAMIC Kit, and are now available to all other modules built on the Slicer3 framework.

3.3 Outreach and Technology Transfer

Cores 4, 5, and 6 continue to support, train, and disseminate to the NAMIC community and the broader biomedical computing community.

  • NAMIC continues to practice the best of collaborative science through its bi-annual Project Week events. These events, which gather key representatives from Cores 1-7 and external collaborators, are organized to gather experts from a variety of domains to address current research problems. This year's first Project Week was held in January and hosted by the University of Utah. It saw several significant accomplishments including the first beta release of the next generation Slicer3 computing platform. The second Project Week is scheduled for June in Boston, MA.
  • Twelve NAMIC-supported papers were published in high-quality peer reviewed conference proceedings (four papers in MICCAI alone). Another paper on the NAMIC software process was published in IEEE Software. All three DTI papers presented at MICCAI last year were NAMIC associated.
  • Several workshops were held throughout the year at various institutions, including the DTI workshop at UNC, the MICCAI Open Source Workshop, and the NA-MIC Training Workshop at the Harvard Center for Neurodegeneration and Repair.

4. Impact and Value to Biocomputing

In NA-MIC's third year, it is evident that NA-MIC has developed a culture, environment, and resources that foster and stimulate collaborative research in medical image analysis, drawing together mathematicians, computer scientists, software engineers, and clinical researchers. These attributes of NA-MIC shape how NA-MIC operates, make NA-MIC a fulcrum for NIH-funded research, and draw new collaborators from across the country and around the world to NA-MIC.

4.1 Impact within the Center

Within the Center, the NA-MIC organization, NA-MIC processes, and the NA-MIC calendar have permeated the research. The organization is nimble, forming ad hoc distributed teams within and between cores to address specific biocomputing tasks. Information is shared freely on the NA-MIC Wiki, on the weekly Engineering telephone conferences, and in the NA-MIC Subversion source code repository. The software engineering tools of CMake, Dart 2 and CTest, CPack, and KWWidgets facilitate a cross-platform software environment for medical image analysis that can be easily built, tested, and distributed to end users. Core 2 has provided a platform, Slicer 3, that allows Core 1 to easily integrate new technology and deliver this technology in an end-user application to Core 3. Core 1 has developed a host of techniques to apply to structural and diffusion analysis, which are under evaluation by Core 3. Major NA-MIC events, such as the annual All Hands Meeting, the Summer Project Week, the Spring Algorithms meeting, and the Engineering teleconferences, are avidly attended by NA-MIC researchers as opportunities to foster collaborations.

4.2 Impact within NIH Funded Research

Within NIH funded research, NA-MIC continues to forge relationships with other large NIH funded projects such as BIRN, caBIG, NAC, and IGT. Here, we are sharing the NA-MIC culture, engineering practices, and tools. BIRN hosts data for the NA-MIC researchers and NA-MIC hosts BIRN wikis. caBIG lists the 3D Slicer among the applications available on the National Cancer Imaging Archive. NAC and IGT use the NA-MIC infrastructure and are involved in the development of the 3D Slicer. BIRN recently held an event modeled after the NA-MIC Project Week. NA-MIC has become a resource on open source licensing to the medical image analysis community.

NA-MIC is also attracting NIH funded collaborations. Two grants have been funded under PAR-05-063 to collaborate with NA-MIC: Automated FE Mesh Development and Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI. Five additional applications to collaborate with NA-MIC via the NCBC collaborative grant mechanism are under consideration. Additional grant applications submitted under other calls are planning to use and extend the NA-MIC tools.

4.3 National and International Impact

NA-MIC events and tools garner national and international interest. There were nearly 100 participants at the NA-MIC All Hands Meeting in January 2007, with many of these participants from outside of NA-MIC. Several researchers from outside the NA-MIC community have attended the Summer Project Weeks and the Winter Project Half-Weeks to gain access to the NA-MIC tools and people. These external researchers are contributing ideas and technology back into NA-MIC.

Components of the NA-MIC kit are used globally. The software engineering tools of CMake, Dart 2 and CTest are used by many open source projects and commercial applications. For example, the K Desktop Environment (KDE) for Linux and Unix workstations uses CMake and Dart. KDE is one of the largest open source projects in the world. Many open source projects and commercial products are benefiting from the NA-MIC related contributions to ITK and VTK. Finally, Slicer 3 is being used as an image analysis platform in several fields outside of medical image analysis, in particular, biological image analysis, astronomy, and industrial inspection.

NA-MIC co-sponsored the Workshop on Open Science at the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2006 conference. The proceedings of the workshop are published on the electronic Insight Journal, another NIH-funded activity.

Over 50 NA-MIC related publications have been produced since the inception of the center.

5. NA-MIC Timeline

This section provides a table of NAMIC timelines from the original proposal that graphically depicts completed tasks/goals in years 1, 2, and 3 and tasks/goals to be completed in years 4-5. Changes to the original timelines have also been described.

2007 Scientific Report Timeline

Appendix A Publications

http://www.na-mic.org/Wiki/index.php/Publications

Appendix B EAB Report

EAB Report

The NA-MIC External Advisory Board (EAB), chaired by Professor Chris Johnson of the University of Utah, met at the annual All-Hands Meeting. After individual presentations by NA-MIC investigators and open as well as closed-door EAB discussion, the Board provided its independent expert assessment of the Center.

This is a password protected link to the EAB report.

Response to EAB Report

The NA-MIC leadership discussed the excellent suggestions made by the EAB, and in the following section we respond to each of them.

EAB: The Center should actively pursue methods to distribute their software and other resources to a broader community. As we noted in last year’s EAB Report, the Center is doing a good job at distributing their software toolkit, however, largely the distribution has been focused internally to NA-MIC collaborators. The EAB recommends that the Center work to more broadly distribute its software to the larger community of neuroimaging researchers. In addition, because the Center’s software tools will also be useful to biomedical researchers outside of neuroimaging, the EAB recommends the Center consider additional methods to “get the word out” about their resources.

NA-MIC: An essential focus in the last two years of the Center has been to broaden the outreach effort to the neuroimaging community and to other biomedical researchers outside of this community. Many algorithms have been developed in ITK/VTK and then ported to Slicer to make them as accessible as possible. We believe that the ITK/VTK platform has been quite widely disseminated. Further, even algorithms in a more preliminary state, running in Matlab and C++, have been distributed, and various researchers have been trained in their use. As an example of pursuing new strategies, NA-MIC has partnered with Kitware to organize joint workshops, e.g., at MICCAI 2005 and MICCAI 2006. These workshops were very successful, included presentations of research papers (reviewed via an open-access mechanism) by NA-MIC and other researchers, and demonstrate the special NA-MIC effort within the more general ITK activities. However, as the EAB has noted, the current training and dissemination funds are limited, and the Center leadership will continue to look for creative solutions to get the word out within these resource constraints.

EAB: The Center is well suited to assess and compare algorithms. An important question in image analysis involves understanding which image-processing algorithm is best suited for a particular task. Given the large number of different image analysis algorithms being created by NA-MIC, the Center is in a good position to evaluate the effectiveness of different algorithms and to make appropriate recommendations.

NA-MIC: This topic has been discussed extensively within the Center, and we have addressed this issue in a manner that is aligned with our core expertise in open source software. More specifically, we will continue to build upon our strength and provide the infrastructure and tools needed for open science, as well as train researchers in the use of these tools. It is our hope that by providing easy access to these tools, we will facilitate the comparison of algorithms for various biomedical tasks. That said, there is often no single algorithm that is optimal for every imaging purpose; a certain segmentation routine may be excellent for one purpose but not for another. This is a well recognized problem in image processing and computer vision, and not a fundamental limitation of medical imaging. Nonetheless, comparisons among different algorithms to determine which are best suited to which questions are important, and we have made some inroads here. Sylvain Bouix, for example, has a paper in press in Neuroimage on evaluating brain tissue classifiers without ground truth, which compares several different segmentation algorithms. Also, in the shape analysis field, every NA-MIC related publication performs the necessary comparisons. Furthermore, Martin Styner and coworkers (Core 1 and 3 investigators) authored a paper on shape analysis comparisons at ISBI 2006. In addition, Martin Styner is co-organizing a caudate and liver segmentation comparison workshop at MICCAI 2007, with currently over 35 registered participating methods. Eric Pichon, Delphine Nain, and Marc Niethammer, during their time at Georgia Tech, published several works on the comparison of segmentation methodologies in various forums such as SPIE and ICIP. Our philosophy is to make a large menu of algorithmic methodologies available in as easy-to-use a form as possible, so as to encourage the kind of comparisons recently done by Drs. Bouix, Styner, and the Georgia Tech group.

EAB: The Center should consider additional techniques to validate and verify techniques and software. The EAB was pleased to see the progress the Center made in quantifying and validating their algorithms and implementations within the NA-MIC Kit. However, because this is such an important issue, the EAB recommends the Center continue to make progress in this area.

NA-MIC: The Center is proactive in the validation and verification of its software implementations, and our record is quite good in this regard. Center researchers have devised original techniques for validation, e.g., the Laplacian method for comparing the accuracy of segmentations, which was developed at Georgia Tech and has been incorporated into ITK. These validation techniques have been used to drive new original algorithms as well. In addition, work within the Center has continued to compare manual measures of brain structure, i.e., the caudate, as a gold standard for automated measures of the caudate in the same subjects. In DTI, validation methodology work includes access of NAMIC researchers to high-resolution DTI of a volunteer with 10 repeated datasets (UNC), to be used for reproducibility studies of quantitative DTI analysis methods. Joint work by Utah and UNC explores the number of gradient directions versus errors of FA and MD measures as functions of spatial direction. The theoretical analysis has recently been confirmed by an in vivo experiment (UNC, ISMRM conference presentation and MICCAI'07 submission). We are also co-organizing a workshop at MICCAI 2007 whose goal is to compare automatic and semi-automatic segmentation methodologies (existing and new) against manual segmentations for the liver in CT and the caudate in MRI. Recent submissions to IPMI (to appear) and MICCAI (under review) from a Utah/UNC collaboration make explicit comparisons between different methods for shape analysis. These kinds of validation efforts will continue to grow within NAMIC and be distributed to the general medical imaging community.

EAB: The Center should generate a set of success criteria. The EAB recommends that the Center create a set of benchmarks of success that it will use in future years to measure the Center’s success and as ways to evaluate the Center’s continuing progress and at the time of the Center’s first renewal.

NA-MIC: We note that we have established benchmarks for success that are similar to criteria used by study sections for reviewing the success of other grant mechanisms, although we also include criteria for success that are specific to NA-MIC. These criteria include:

• Papers in peer reviewed publications

• Technology transfers from Core 1 and Core 2 to Core 3 researchers

• Software released as part of the NA-MIC Kit

• Training for outside researchers in the use of the NA-MIC Kit. The dissemination and training sessions include a web-based feedback mechanism for attendees.