2008 Annual Scientific Report


Back to 2008_Progress_Report




Guidelines for preparation

  • 2008_Progress_Report#Scientific Report Timeline - Main point is that May 15 is the date by which all sections below need to be completed. No extensions are possible.
  • DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under "Other".
  • The outline for this report is similar to the 2007 report, which is provided here for reference: 2007_Annual_Scientific_Report.
  • In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: NA-MIC_Internal_Collaborations.
  • Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.

Introduction (Tannenbaum)

The National Alliance for Medical Imaging Computing (NA-MIC) is now in its fourth year. The Center comprises a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, and to oversee the training and dissemination of these tools to the medical research community. The driving biological projects (DBPs) for the Center's first three years were inspired by schizophrenia research. In the fourth year, new DBPs have been added. Three are centered around diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for VCFS. In a new direction, we have added a DBP on the prostate: brachytherapy needle positioning robot integration.

We briefly summarize the work of NAMIC during the four years of its existence. In year one of the Center, alliances were forged amongst the cores and constituent groups in order to integrate the efforts of the cores and to define the kinds of tools needed for specific imaging applications. The second year emphasized the identification of the key research thrusts that cut across cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to continuing these collaborative efforts in order to deliver solutions to the various brain-oriented DBPs.

Year four has seen progress with the work of our new DBPs. As alluded to above, these include work on neuropsychiatric disorders such as Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill), as well as the prostate interventional work (Johns Hopkins and Queen's Universities). We already have a number of publications, as indicated on our publications page, and software development is continuing as well.

In the next section (Section 3), we summarize this year's progress on the four roadmap projects listed above: Section 3.1, stochastic tractography for Velocardiofacial Syndrome; Section 3.2, brachytherapy needle positioning for the prostate; Section 3.3, brain lesion analysis in neuropsychiatric systemic lupus erythematosus; and Section 3.4, cortical thickness for autism. Next, in Section 4, we describe recent work on the four infrastructure topics. These include Diffusion Image Analysis (Section 4.1), Structural Analysis (Section 4.2), Functional MRI Analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4). In Section 4.5, we outline some of the other key projects; in Section 4.6, some key highlights, including the integration of the EM Segmenter into Slicer; and in Section 4.7, the impact of biocomputing at three different levels: within the Center, within the NIH-funded research community, and externally to a national and international community. The final sections of this report, Sections 5-11, provide updated timelines on the status of the various projects of the different cores of NAMIC.

Clinical Roadmap Projects

Roadmap Project: Stochastic Tractography for VCFS (Kubicki)

Overview (Kubicki)

The goal of this project is to create an end-to-end application that would be useful in evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand anatomical connectivity similarities and differences between the genetically related schizophrenia and velocardiofacial syndrome. Thus we plan to use the "stochastic tractography" tool to analyze abnormalities in the integrity, or connectivity, provided by the arcuate fasciculus, a fiber bundle involved in language processing, in schizophrenia and VCFS.

Algorithm Component (Golland)

At the core of this project is the stochastic tractography algorithm developed and implemented in collaboration between MIT and BWH. Stochastic Tractography is a Bayesian approach to estimating nerve fiber tracts from DTI images.

We first use the diffusion tensor at each voxel in the volume to construct a local probability distribution for the fiber direction around the principal direction of diffusion. We then sample the tracts between two user-selected ROIs by simulating a random walk between the regions, based on the local transition probabilities inferred from the DTI image.

The resulting collection of fibers and the associated FA values provide useful statistics on the properties of connections between the two regions. To constrain the sampling process to the relevant white matter region, we use atlas-based segmentation to label ventricles and gray matter and to exclude them from the search space. As such, this step relies heavily on the registration and segmentation functionality in Slicer.
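
The following is a minimal, illustrative sketch of the random-walk sampling idea described above, not the NA-MIC/Slicer implementation. It assumes hypothetical inputs: `principal_dir`, an (X, Y, Z, 3) array of unit principal diffusion directions, and `wm_mask`, a boolean white matter mask that constrains the search space; `kappa` is a crude stand-in for the local fiber-orientation uncertainty.

import numpy as np

def sample_direction(mean_dir, kappa, rng):
    """Draw a direction near mean_dir; larger kappa means less spread."""
    proposal = mean_dir + rng.normal(scale=1.0 / kappa, size=3)
    proposal /= np.linalg.norm(proposal)
    # keep the walk from reversing: flip if it points backwards
    return proposal if np.dot(proposal, mean_dir) >= 0 else -proposal

def random_walk(seed, principal_dir, wm_mask, rng, step=1.0, max_steps=200, kappa=10.0):
    """Simulate one tract sample as a random walk constrained to white matter."""
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    for _ in range(max_steps):
        idx = np.round(pos).astype(int)
        if (np.any(idx < 0) or np.any(idx >= np.array(wm_mask.shape)) or
                not wm_mask[tuple(idx)]):
            break                          # left the volume or the white matter
        direction = sample_direction(principal_dir[tuple(idx)], kappa, rng)
        pos = pos + step * direction
        path.append(pos.copy())
    return np.array(path)

Repeating the walk many times from a seed ROI yields a sample of candidate tracts; the fraction reaching the target ROI and the FA values along accepted paths provide the connection statistics described above.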

Over the last year, we first tested the algorithm on the NAMIC dataset of schizophrenia subjects already available, acquired at 1.5T. This step allowed us to optimize the algorithm for our dataset, as well as to develop a data analysis pipeline that could then be easily transferred to other image sets and structures.

The next step, also accomplished this last year, was to apply the algorithm to the new, higher-resolution NAMIC dataset and to study smaller white matter connections, including the cingulum bundle, arcuate fasciculus, uncinate fasciculus, and internal capsule. This step was accomplished and the data were presented at the Santa Fe meeting in October 2007.

Upon completion of the testing phase, we started analysis of the arcuate fasciculus, a language-related fiber bundle, in the new 3T, high-resolution dataset. Our current work focuses on improving the parameterization of the tracts, in order to obtain FA measurements along the tracts.

Engineering Component (Davis)

The Stochastic Tractography Slicer module has been completed and was presented at the AHM in Salt Lake City. It is now part of Slicer 2.8 and Slicer 3. Module documentation has also been created. Current engineering efforts are concentrated on maintaining the module, optimizing it to work with other data formats, and adding new functionality, such as better registration, distortion correction, and ways of extracting and measuring FA along the tracts.

Clinical Component (Kubicki)

Over the last year, we tested the algorithm on the already available NAMIC dataset of schizophrenia subjects acquired at 1.5T. The anterior limb of the internal capsule, a large structure connecting the thalamus with the frontal lobe, was extracted and analyzed in a group of 20 schizophrenics and 20 control subjects. We presented the results showing group differences in FA values at the ACNP symposium in December 2007. Next, stochastic tractography was tested and optimized for the new, high-resolution DTI dataset acquired on a 3T GE magnet.

Upon completion of the testing phase, we started analysis of the arcuate fasciculus, a language-related fiber bundle, in 20 controls and 20 chronic schizophrenics. For each subject, we performed the white matter segmentation and extracted the regions interconnected by the arcuate fasciculus (inferior frontal and superior temporal gyri), as well as another ROI used to guide the tract (a "waypoint" ROI). We presented the preliminary results of the probabilistic tractography and the statistics of FA extracted for each tract for a small set of 7 patients and 12 controls at the AHM in January 2008. The full study is currently underway.

Additional Information

Additional Information for this project is available here on the NA-MIC wiki.

Roadmap Project: Brachytherapy Needle Positioning Robot Integration (Fichtinger)

Overview (Fichtinger)

Numerous studies have demonstrated the efficacy of image-guided needle-based therapy and biopsy in the management of prostate cancer. The accuracy of traditional prostate interventions performed using transrectal ultrasound (TRUS) is limited by image fidelity, needle template guides, needle deflection and tissue deformation. Magnetic Resonance Imaging (MRI) is an ideal modality for guiding and monitoring such interventions due to its excellent visualization of the prostate, its sub-structure and surrounding tissues.

We have designed a comprehensive robotic assistant system that allows prostate biopsy and brachytherapy procedures to be performed entirely inside a 3T closed MRI scanner. The current system applies a transrectal approach to the prostate: an endorectal coil and steerable needle guide, both tuned to 3T magnets and not tied to any particular scanner, are integrated into the MRI-compatible manipulator.

Under the NAMIC initiative, the image computing, visualization, intervention planning, and kinematic planning interface is being implemented as an open source system built on the NAMIC toolkit and its components, such as Slicer3 and ITK. These are complemented by a collection of unsupervised prostate segmentation and registration methods that are of great importance to the clinical performance of the interventional system as a whole.

Algorithm Component (Tannenbaum)

We have worked on both the segmentation and the registration of the prostate from MRI and ultrasound data. We explain each of the steps now.

Prostate Segmentation

We first must extract the prostate. We have considered three possible methods: a combination of Cellular Automata (CA, also known as Grow Cut) with Geometric Active Contour (GAC) methods; employing an ellipsoid to match the prostate in the 3D image; and a shape-based approach using spherical wavelets. More details are given below, and images and further details may be found at GaTech Algorithm Prostate Segmentation.

1. A cellular automata algorithm is used to give an initial segmentation. It begins with a rough manual initialization and then iteratively classifies all pixels into object and background until convergence. It effectively overcomes the problems of weak boundaries and inhomogeneity within the object or background. This in turn is fed into the Geometric Active Contour for finer tuning. We are initially using the edge-based minimal surface approach (the generalization of the standard Geodesic Active Contour model), which seems to give very reasonable results. Both steps of the algorithm are implemented in 3D. An ITK Cellular Automata filter, handling N-D data, has already been completed and submitted to the NA-MIC SandBox. A small sketch of the Grow Cut update rule appears after this list.

2. Spherical wavelets have proven to be a very natural way of representing 3D shapes which are compact and simply connected (topological spheres). We developed a segmentation framework using this 3D wavelet representation and a multiscale prior. The parameters of our model are the learned shape parameters based on the spherical wavelet coefficients, as well as pose parameters that accommodate shape variability due to a similarity transformation (rotation, scale, translation), which is not explicitly modeled with the shape parameters; the surface is transformed according to the pose parameters. We used a region-based energy to drive the evolution of the parametric deformable surface for segmentation. Our segmentation algorithm deforms an initial surface according to the gradient flow that minimizes the energy functional in terms of the pose and shape parameters. Additionally, the optimization method can be applied in a coarse-to-fine manner. Spherical wavelets and conformal mappings are already part of the NA-MIC SandBox.

3. The third method is very closely related to the second. It is based on the observation that the prostate may be roughly modelled as an ellipsoid. One can then employ this ellipsoid model, coupled with a local/global segmentation energy approach which we developed this year, as the basis of a segmentation procedure. Because of the local/global nature of the functional and the implicit introduction of scale, this methodology may be very useful for MRI prostate data.
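
To make the cellular automata step in item 1 concrete, here is a simplified 2D Grow Cut iteration, an illustration only and not the N-D ITK filter mentioned above. Seeds use label 1 for object and 2 for background (0 means unlabeled); image borders wrap around here for brevity, which a real implementation would handle explicitly.

import numpy as np

def grow_cut(image, seed_labels, max_iters=200):
    label = seed_labels.copy()
    strength = (seed_labels > 0).astype(float)     # seeds start at full strength
    max_diff = image.max() - image.min() + 1e-9
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # 4-neighborhood

    for _ in range(max_iters):
        changed = False
        for dy, dx in offsets:
            neigh_label = np.roll(label, (dy, dx), axis=(0, 1))
            neigh_strength = np.roll(strength, (dy, dx), axis=(0, 1))
            neigh_image = np.roll(image, (dy, dx), axis=(0, 1))
            # attack force decays with the intensity difference to the neighbor
            g = 1.0 - np.abs(image - neigh_image) / max_diff
            attack = g * neigh_strength
            win = (attack > strength) & (neigh_label > 0)
            if np.any(win):
                label[win] = neigh_label[win]       # cell is conquered
                strength[win] = attack[win]
                changed = True
        if not changed:
            break
    return label

The resulting object mask would then serve as the initialization of the Geometric Active Contour refinement described above.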

Prostate Registration

The registration and segmentation elements of our algorithm are difficult to separate. Thus, for the 3D shape-driven segmentation part, the shapes must first be aligned through a conformal and area-correcting alignment process. The prostate presents a number of difficulties for traditional approaches since there are no easily discernible landmarks. On the other hand, we observed that the surface of the prostate is roughly half convex and half concave. The concave region may be captured and used to register the shapes; thus we register the whole shape by registering a certain region on it. This concave region is characterized by its negative mean curvature. We treat the mean curvature as a scalar field defined on the surface, and we have extended the Chan-Vese method (in which one separates the means with respect to the regions defined by the interior and exterior of the evolving active contour) to the case at hand on the prostate surface. The method is implemented in C++ and successfully extracts the concave surface region. This method could also be used to extract regions on a surface according to any feature characterized by a scalar field defined on the surface.
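
A compact way to state the idea, written here in a standard Chan-Vese form with H the mean curvature field on the prostate surface and C the evolving curve on that surface (the exact regularization used in the implementation may differ), is

E(c_1, c_2, C) \;=\; \int_{\mathrm{in}(C)} \bigl(H - c_1\bigr)^2 \, dA \;+\; \int_{\mathrm{out}(C)} \bigl(H - c_2\bigr)^2 \, dA \;+\; \mu \,\mathrm{Length}(C),

where c_1 and c_2 are the mean values of H inside and outside C. Minimizing E separates the concave (negative mean curvature) region from the rest of the surface.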

In order to incorporate the extracted region as a landmark into the registration process, instead of matching two binary images directly, we transform the binary images into a form that highlights the boundary region. This is done by applying a Gaussian function to the (narrow-band) signed distance function of the binary image. The transformed image enjoys the advantages of both the parametric and implicit representations of shapes: it has a compact description, as the parametric representation does, and, as in the implicit representation, it avoids the correspondence problem. Moreover, we incorporate the extracted concave regions into such images for registration, which leads to a better result.
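
As a minimal sketch of this boundary-highlighting transform (illustration only, assuming SciPy's Euclidean distance transform; sigma controls the narrow-band width):

import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_image(mask, sigma=2.0):
    mask = mask.astype(bool)
    # signed distance: positive outside the shape, negative inside
    signed = distance_transform_edt(~mask) - distance_transform_edt(mask)
    # Gaussian of the signed distance: ~1 on the boundary, decaying away from it
    return np.exp(-(signed ** 2) / (2.0 * sigma ** 2))

The resulting images can be matched with a standard intensity-based metric, with extra weight placed on the extracted concave regions as described above.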

Finally, in the past year we have developed a particle filtering approach for the general problem of registering two point sets that differ by a rigid body transformation which may be very useful for this project. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. We treat motion as a local variation in pose parameters obtained from running several iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer functions used to tackle the registration task. In contrast with other techniques, this approach requires no annealing schedule, which results in a reduction in computational complexity as well as maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets.

Engineering Component (Hayes)

There are several features of the NA-MIC kit that have been appropriated for the transrectal prostate biopsy module. The most important is the set of "WizardWorkflow" widgets, which were originally developed to guide a user through the logical steps needed to accomplish segmentation of the brain. This "Wizard" framework and its underlying state machine have turned out also to be ideal for developing a GUI that rigidly follows the clinical workflow of interventional procedures, and it has become the backbone of the module. The "fiducial" tool in Slicer3 has also been put to use as a means of annotating biopsy target sites. The final crucial feature that has been put to use is Slicer's ability to display oblique slices through arbitrarily rotated volumes, which is necessary across a broad range of interventional applications.

The fiducials for localizing the needle and robot within the MRI magnet bore will, in the future, be identified by MRI fiducial segmentation software developed for this purpose at JHU.

Clinical Component (Fichtinger)

The current robotic prostate biopsy and implant system has been applied to over 50 patients. The system is being replicated for multicenter trials at Johns Hopkins (Baltimore), NIH (Bethesda), Brigham and Women's Hospital (Boston), and Princess Margaret Hospital (Toronto). Of these, NIH and Princess Margaret have completed ethics board approval and will commence trials in May 2008. Others will follow suit shortly. In the meantime, the VTK-based interface to the system is being converted into a Slicer3 interventional module, and the underlying algorithmic components are being replaced by, and in some cases converted into, NAMIC algorithmic components. Ongoing clinical trials will seamlessly absorb the Slicer3 version of the system, based on detailed functional equivalency tests that are to be conducted. (Note that most IRBs do not require resubmission of the protocol when the interface software is updated, as long as the system's functionality is guaranteed to be intact.)

Additional Information

Additional Information for this project is available here on the NA-MIC wiki.

Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)

Overview (Bockholt)

The primary goal of the MIND DBP is to examine changes in white matter lesions in adults with Neuropsychiatric Systemic Lupus Erythematosus (SLE). We want to be able to characterize lesion location, size, and intensity, and would also like to examine longitudinal changes of lesions in an SLE cohort. To accomplish this goal, we will create an end-to-end application entirely within the NA-MIC Kit allowing individual analysis of white matter lesions. This workflow will then be applied to a clinical sample in the process of being collected.

Algorithm Component (Whitaker)

The basic steps necessary for the white matter lesion analysis application entail, first, registration of the T1, T2, and FLAIR images; second, tissue classification into gray matter, white matter, CSF, or lesion; third, clustering of lesions for anatomical localization; and finally, summarization of lesion size and image intensity parameters within each unique lesion.

Tissue segmentation: We have compared manual tracing of white matter lesions to EM-Segment, itkEMS, and a custom ITK-based k-means+Bayesian classifier. Tests have been successful and a comparative study of each automated technique to manual tracing has shown that further parameter optimization is needed to match the manual classification (specifically, an approach for paraventricular artifacts that manifest as hyperintensity artifacts on FLAIR images).

Engineering Component (Pieper)

Several of the algorithms for this Clinical Roadmap project were already available in software tools utilizing ITK. These tools are being repackaged as a Slicer3 plugin. The EM-Segment module in Slicer3 has been extended to support this Clinical Roadmap by adding a registration module for co-registration of T1, T2, and FLAIR. The EM-Segment module has also been tested with, and now allows, 3 input channels for tissue classification, and has been adapted to allow full control of the weighting of each of the three channels anywhere in the hierarchical tissue classification procedure.

Clinical Component (Bockholt)

So far, the clinical component of this project has involved interfacing with the algorithms and engineering teams to provide the project specifications, feedback, and data (needed for testing). Datasets from 3 SLE and 3 healthy normal volunteers were also collected at 1.5T and 3.0T for use in a public tutorial. During this past year, while training a new NA-MIC engineer, development and programming work has proceeded satisfactorily, and we anticipate being able to apply our lesion classification method and analyses by the end of our project period. The primary accomplishment of this first year has therefore been the development and testing of methods that are necessary for this white matter lesion classification pipeline.

Additional Information

Additional Information for this project is available here on the NA-MIC wiki.

Roadmap Project: Cortical Thickness for Autism (Hazlett)

Overview (Hazlett)

A primary goal of the UNC DBP is to examine changes in cortical thickness in children with autism compared to typical controls. We want to examine group differences in both local and regional cortical thickness, and would also like to examine longitudinal changes in the cortex from ages 2-4 years. To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. This workflow will then be applied to our study data (already collected).

Algorithm Component (Styner)

The basic steps necessary for the cortical thickness application entail, first, tissue segmentation in order to separate white and gray matter regions; second, cortical thickness measurement; third, cortical correspondence to compare measurements across subjects; and finally, a statistical analysis to locally compute group differences. Tissue segmentation: We have successfully adapted the UNC segmentation tool called itkEMS to Slicer, which we use for segmentations of the young brain. We also created a young brain atlas for the current Slicer3 EM Segment module. Tests have been successful, and a comparative study against itkEMS has shown that further parameter optimization is needed to reach the same quality.

Cortical thickness measurement

The UNC algorithm for measuring local cortical thickness, given a labeling of white matter and gray matter, has been developed into a Slicer3 external module. This module lends itself well to regional analysis of cortical thickness, but less so to local analysis due to its non-symmetric and sparse measurements. Ongoing development is focusing on a symmetric, Laplacian-based cortical thickness measure suitable for local analysis.
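
As a rough sketch of the Laplacian thickness idea (illustration only, not the UNC module): Laplace's equation is solved over the gray matter ribbon, with the white matter boundary held at 0 and the outer boundary at 1; thickness is then measured along streamlines of the potential's gradient, which yields symmetric values from both boundaries. The masks and iteration count below are assumptions for the sketch.

import numpy as np

def laplace_potential(gm_mask, wm_mask, n_iters=500):
    """Jacobi relaxation of Laplace's equation on a 3D gray matter mask."""
    gm_mask = gm_mask.astype(bool)
    wm_mask = wm_mask.astype(bool)
    phi = np.zeros(gm_mask.shape)
    phi[~gm_mask & ~wm_mask] = 1.0        # outer (pial/CSF) boundary condition
    phi[wm_mask] = 0.0                    # white matter boundary condition
    for _ in range(n_iters):
        avg = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
               np.roll(phi, 1, 2) + np.roll(phi, -1, 2)) / 6.0
        phi = np.where(gm_mask, avg, phi)  # update interior (GM) voxels only
    return phi

Streamlines of grad(phi) connect the two boundaries; summing the streamline lengths from both ends gives one thickness value per cortical location.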

Cortical correspondence (regional)

For regional correspondence, an existing lobar parcellation atlas is deformably registered using a B-spline registration tool. First tests have been very promising; the release of the corresponding Slicer 3 registration module is scheduled to be finished within the next month, and the regional analysis workflow will thus be available at that time.

Cortical correspondence (local)

Local cortical correspondence requires a two-step process of white/gray surface inflation followed by group-wise correspondence computation. White matter surface extraction and inflation is currently achieved with an external tool and developing a Slicer 3 based solution is a goal in the next year. The group-wise correspondence step has been fully solved, and a Slicer 3 module is already available. Evaluation on real data has shown that our method outperforms the currently widely employed Freesurfer framework.

Statistical analysis/Hypothesis testing

Regional analysis can be done with standard statistical tools such as MANOVA, as there is a limited, relatively small number of regions. Local analysis, on the other hand, needs local non-parametric testing, multiple-comparison correction, and correlative analysis that is not routinely available. We are currently extending the current Slicer 3 module designed for statistical shape analysis to be used for this purpose, incorporating a locally applied General Linear Model and a MANCOVA-based testing framework.

Engineering Component (Miller, Vachet)

Several of the algorithms for this Clinical Roadmap project were already in software tools utilizing ITK. These tools have been refactored to be NA-MIC compatible and repackaged as Slicer3 plugins. Slicer3 has been extended to support this Clinical Roadmap by adding transforms as a parameter type that can be passed to and returned by plugins. Slicer3 registration and resampling modules have been refactored to produce and accept transforms as parameters. Slicer3 has also been extended to support nonlinear transformation types (B-Spline and deformation fields) in its data model.

Clinical Component (Hazlett)

So far, the clinical component of this project has involved interfacing with the algorithms and engineering teams to provide the project specifications, feedback, and data (needed for testing). During this past year, development and programming work has proceeded satisfactorily, and we anticipate being able to test our project hypotheses about cortical thickness in autism by the end of our project period. Therefore, the primary accomplishment of this first year has been the development and testing of methods that are necessary for this cortical thickness work pipeline.

Additional Information

Additional Information for this project is available here on the NA-MIC wiki.

Four Infrastructure Topics

Diffusion Image Analysis (Gerig)

Progress

Key Investigators

Additional Information

Additional Information for this topic is available here on the NA-MIC wiki.

Structural Analysis (Tannenbaum)

Progress

Under Structural Analysis, the main topics of research for NAMIC are structural segmentation, registration techniques and shape analysis. These topics are correlated and research in one often finds application in another. For example, shape analysis can yield useful priors for segmentation, or segmentation and registration can provide structural correspondences for use in shape analysis and so on.

An overview of selected progress highlights under these broad topics follows.

Structural Segmentation

  • Directional Based Segmentation

We have proposed a directional segmentation framework for direction-weighted magnetic resonance imagery by augmenting the Geodesic Active Contour framework with directional information. The classical scalar conformal factor is replaced by a factor that incorporates directionality. We showed mathematically that the optimization problem is well-defined when the factor is a Finsler metric; the energy is sketched below. The calculus of variations or dynamic programming may be used to find the optimal curves. This past year we have applied this methodology to extracting the anchor tract (or centerline) of neural fiber bundles. Further, we have applied it in conjunction with Bayes' rule to volumetric segmentation for extracting entire fiber bundles. We have also proposed a novel shape prior in the volumetric segmentation to extract tubular fiber bundles.
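
Schematically, the direction-weighted energy can be written as

E(C) \;=\; \int_{0}^{L} \Phi\bigl(C(s),\, C'(s)\bigr)\, ds ,

where the usual scalar conformal factor of the geodesic active contour, which corresponds to \Phi(x, v) = \phi(x)\,\lVert v \rVert, is replaced by a Finsler metric \Phi that also depends on the tangent direction C'(s), so that curves are additionally encouraged to align with the locally preferred diffusion direction. (This is a generic statement of the construction, not the exact weight used in the implementation.)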

  • Stochastic Segmentation

We have continued work this year on developing new stochastic methods for implementing curvature-driven flows for medical tasks like segmentation. We can now generalize our results to an arbitrary Riemannian surface, which includes the geodesic active contours as a special case. We are also implementing the directional flows based on the anisotropic conformal factor described above using this stochastic methodology. Our stochastic snake models are based on the theory of interacting particle systems. This brings together the theories of curve evolution and hydrodynamic limits, and as such impacts our growing use of joint methods from probability and partial differential equations in image processing and computer vision. We now have working code written in C++ for the two-dimensional case and have worked out the stochastic model of the general geodesic active contour model.

  • Statistical PDE Methods for Segmentation

Our objective is to add various statistical measures into our PDE flows for medical imaging. This will allow the incorporation of global image information into the locally defined PDE framework. This year, we developed flows which can separate the distributions inside and outside the evolving contour, and we have also been including shape information in the flows. We have completed a statistically based flow for segmentation using fast marching, and the code has been integrated into Slicer.

  • Atlas Renormalization for Improved Brain MR Image Segmentation

Atlas-based approaches can automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. However, the accuracy often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for the atlas training. In this project, we work to improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets shows that the new procedure improves segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures including hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platforms and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.
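
For reference, the overlap measure used in this validation is the Dice coefficient between an automatic and a manual label mask; a minimal helper is sketched below (structure names are illustrative).

import numpy as np

def dice(a, b):
    """Dice coefficient between two binary label masks (1.0 = perfect overlap)."""
    a = a.astype(bool)
    b = b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# example: dice(auto_hippocampus_mask, manual_hippocampus_mask)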

  • Multiscale Shape Segmentation Techniques

The goal of this project is to represent multiscale variations in a shape population in order to drive the segmentation of deep brain structures, such as the caudate nucleus or the hippocampus. Our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We derived a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We applied our algorithm to the caudate nucleus, a brain structure of interest in the study of schizophrenia. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model (ASM) algorithm, by capturing finer shape details.

Registration

  • Optimal Mass Transport Registration

The aim of this project is to provide a computationally efficient non-rigid/elastic image registration algorithm based on the Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the gradient flow PDE approach using multi-resolution and multi-grid techniques to speed up the convergence. We also leverage the computational power of general purpose graphics processing units available on standard desktop computing machines to exploit the inherent parallelism in our algorithm. We have implemented 2D and 3D multi-resolution registration using Optimal Mass Transport and are currently working on the registration of 3D datasets.
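
For reference, a standard statement of the L2 Monge-Kantorovich problem underlying this approach (notation assumed here: \mu_0 and \mu_1 are the two image densities on the domain \Omega, and u is a mass-preserving map between them) is

M \;=\; \inf_{u \in \mathcal{MP}} \int_{\Omega} \lVert u(x) - x \rVert^{2}\, \mu_0(x)\, dx ,
\qquad
\mathcal{MP} \;=\; \{\, u : \mu_0(x) = \lvert \det \nabla u(x) \rvert \, \mu_1(u(x)) \,\}.

The optimal map is the gradient of a convex function, and the gradient-flow PDE mentioned above evolves an initial mass-preserving map toward this optimum.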

  • Diffusion Tensor Image Processing Tools

We aim to provide methods for computing geodesics and distances between diffusion tensors. One goal is to provide hypothesis testing for differences between groups. This will involve interpolation techniques for diffusion tensors as weighted averages in the metric framework. We will also provide filtering and eddy current correction. This year, we developed a Slicer module for DT-MRI Rician noise removal, developed prototypes of DTI geometry and statistical packages, and began work on a general method for hypothesis testing between diffusion tensor groups.

  • Point Set Rigid Registration

We propose a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid body transformation, in which we incorporate stochastic dynamics to model the uncertainty of the registration process. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in the pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm, sketched below. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence as well as provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which results in a reduction in computational complexity and also maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions about the two data sets.
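
The basic ICP building block referred to above is sketched here for 3D point sets: closest-point correspondences followed by an SVD-based (Kabsch) rigid fit. The particle-filter outer loop, which perturbs and reweights the resulting pose parameters, is omitted; this is an illustration, not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, dst_mean - R @ src_mean

def icp(moving, fixed, n_iters=20):
    """A few plain ICP iterations; returns the transformed moving points."""
    tree = cKDTree(fixed)
    current = moving.copy()
    for _ in range(n_iters):
        _, idx = tree.query(current)              # closest-point correspondences
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t
    return current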

  • Cortical Correspondence using Particle System

In this project, we want to compute cortical correspondence on populations, using various features such as cortical structure, DTI connectivity, vascular structure, and functional data (fMRI). This presents a challenge because of the highly convoluted surface of the cortex, as well as because of the different properties of the data features we want to incorporate together. We would like to use a particle-based entropy-minimizing system for the correspondence computation, in a population-based manner. This is advantageous because it does not require a spherical parameterization of the surface, and does not require the surface to be of spherical topology. It would also eventually enable correspondence computation on the subcortical structures and on the cortical surface using the same framework. To circumvent the disadvantage that particles are assumed to lie on local tangent planes, we plan to first 'inflate' the cortex surface. Currently, we are at the testing stage, using structural data, namely point locations and sulcal depth (as computed by FreeSurfer).

  • Multimodal Atlas

In this work, we propose and investigate an algorithm that jointly co-registers a collection of images while computing multiple templates. The algorithm, called iCluster (for Image Clustering), is based on the following idea: given the templates, the co-registration problem becomes simple, reducing to a number of pairwise registration instances; on the other hand, given a collection of images that have been co-registered, an off-the-shelf clustering or averaging algorithm can be used to compute the templates. The algorithm assumes a fixed and known number of template images. We formulate the problem as a maximum likelihood problem and employ a Generalized Maximum Likelihood algorithm to solve it. In the E-step, we compute membership probabilities. In the M-step, we update the template images as weighted averages of the images, where the weights are the memberships, update the template priors, and then perform a collection of independent pairwise registration instances. The algorithm is currently implemented in the Insight Toolkit (ITK) and we next plan to integrate it into Slicer.
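
A schematic sketch of this EM-style alternation follows (not the ITK implementation): memberships in the E-step, membership-weighted template averages and pairwise registrations in the M-step. Here register_to() is a stand-in for a real pairwise registration, and sigma and K are assumed inputs.

import numpy as np

def register_to(template, image):
    # placeholder: a real implementation would register `image` to `template`
    # and return the aligned image
    return image

def icluster(images, K, n_iters=10, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    images = np.asarray(images, dtype=float)                   # stack (N, ...)
    templates = images[rng.choice(len(images), K, replace=False)].copy()
    aligned = images.copy()
    for _ in range(n_iters):
        # E-step: membership probability of each image under each template
        dists = np.array([[np.sum((img - t) ** 2) for t in templates]
                          for img in aligned])
        ll = -dists / (2.0 * sigma ** 2)
        ll -= ll.max(axis=1, keepdims=True)
        memberships = np.exp(ll)
        memberships /= memberships.sum(axis=1, keepdims=True)
        # M-step: templates become membership-weighted averages of the images,
        # followed by independent pairwise registrations to the templates
        for k in range(K):
            w = memberships[:, k].reshape((-1,) + (1,) * (images.ndim - 1))
            templates[k] = (w * aligned).sum(axis=0) / w.sum()
        for i in range(len(images)):
            k = int(memberships[i].argmax())
            aligned[i] = register_to(templates[k], images[i])
    return templates, memberships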

  • Groupwise Registration

We aim to provide efficient groupwise registration algorithms for population analysis of anatomical structures. Here we extend a previously demonstrated entropy-based groupwise registration method to include a free-form deformation model based on B-splines. We provide an efficient implementation using stochastic gradient descent in a multi-resolution setting. We demonstrate the method on a set of 50 MRI brain scans and compare the results to a pairwise approach, using segmentation labels to evaluate the quality of alignment. Our results indicate that increasing the complexity of the deformation model improves registration accuracy significantly, especially in cortical regions.

Shape Analysis

  • Shape Analysis Framework Using SPHARM-PDM

The UNC shape analysis is based on an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input to the proposed shape analysis is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. Group tests can be visualized via P-values and via mean difference magnitude and vector maps, as well as maps of the group covariance information. The implementation has reached a stable framework and has been disseminated to several collaborating labs within NAMIC (BWH, Georgia Tech, Utah). Current development focuses on integrating the existing command line tools into Slicer (v3) via the Slicer execution model. The whole shape analysis pipeline is encapsulated and accessible to the trained clinical collaborator. The current toolset distribution (via NeuroLib) now also contains open data for other researchers to evaluate their shape analysis enhancements.

  • Multiscale Shape Analysis

We present a novel method of statistical surface-based morphometry based on the use of non-parametric permutation tests and a spherical wavelet coefficient (SWC) shape representation. As an application, we analyze two brain structures, the caudate nucleus and the hippocampus. We show that the results nicely complement those obtained with shape analysis using a sampled point representation (SPHARM-PDM). We used the UNC pipeline to pre-process the images, and for each triangulated SPHARM-PDM surface, a spherical wavelet description is computed. We then use the UNC statistical toolbox to analyze differences between two groups of surfaces described by the features of choice, namely the 3D spherical wavelet coefficients. This year, we conducted statistical shape analysis of the two brain structures and compared the results to shape analysis using a SPHARM-PDM representation.

  • Population Analysis of Anatomical Variability

In contrast to shape-based segmentation, which utilizes a statistical model of the shape variability in one population (typically based on Principal Component Analysis), we are interested in identifying and characterizing differences between two sets of shape examples. We use a discriminative framework to characterize the differences in shape by training a classifier function and studying its sensitivity to small perturbations in the input data. An additional benefit is that the resulting classifier function can be used to label new examples into one of the two populations, e.g., for early detection in population screening or prediction in longitudinal studies. We have implemented stand-alone code for training a classifier, jackknifing, and permutation testing, and are currently porting the software into ITK. We have also started exploring alternative, surface-based descriptors which are promising for improving our ability to detect and characterize subtle differences in the shape of anatomical structures due to diseases such as schizophrenia. A minimal example of a permutation test of the kind referred to here is sketched below.
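
The sketch below is a generic non-parametric permutation test for a group difference, included only to illustrate the idea: group labels are shuffled many times, and the observed statistic is compared against the resulting null distribution. The inputs and test statistic are assumptions, not the project's actual feature or statistic.

import numpy as np

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    group_a = np.asarray(group_a, dtype=float)
    group_b = np.asarray(group_b, dtype=float)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = abs(group_a.mean() - group_b.mean())   # observed group difference
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)                # relabel the subjects
        count += abs(perm[:n_a].mean() - perm[n_a:].mean()) >= observed
    return (count + 1) / (n_perm + 1)                 # permutation p-value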

  • Shape Analysis with Overcomplete Wavelets

In this work, we extend the Euclidean wavelets to the sphere. The resulting over-complete spherical wavelets are invariant to the rotation of the spherical image parameterization. We apply the over-complete spherical wavelet to cortical folding development and show significantly consistent results as well as improved sensitivity compared with the previously used bi-orthogonal spherical wavelet. In particular, we are able to detect developmental asymmetry in the left and right hemispheres.

  • Shape based Segmentation and Registration

When there is little or no contrast along boundaries of different regions, standard image segmentation algorithms perform poorly and segmentation is done manually using prior knowledge of shape and relative location of underlying structures. We have proposed an automated approach guided by covariant shape deformations of neighboring structures, which is an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an expectation-maximization formulation of the maximum a posteriori Probability (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which views the results as priors to a Mean Field approximation with a curve length prior. Our method filters out the noise as compared to thresholding using initial likelihoods, and it captures multiple structures as in the brain (where both major brain compartments and subcortical structures are obtained) because it naturally evolves families of curves. The algorithm is currently implemented in 3D Slicer Version 2.6 and a beta version is available in 3D Slicer Version 3.

  • Spherical Wavelets

In this project, we apply a spherical wavelet transformation to extract shape features of cortical surfaces reconstructed from magnetic resonance images (MRI) of a set of subjects. The spherical wavelet transformation can characterize the underlying functions locally in both space and frequency, in contrast to spherical harmonics, which have a global basis set. We perform principal component analysis (PCA) on these wavelet shape features to study patterns of shape variation within a normal population from coarse to fine resolution. In addition, we study the development of cortical folding in newborns using the Gompertz model in the wavelet domain, allowing us to characterize the order of development of large-scale and finer folding patterns independently. We developed an efficient method to estimate the regularized Gompertz model based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) approximation. Promising results are presented using both PCA and the folding development model in the wavelet domain. The cortical folding development model provides quantitative anatomical information regarding macroscopic cortical folding development and may be of potential use as a biomarker for early diagnosis of neurological deficits in newborns.
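
For reference, the Gompertz growth curve has the standard form

f(t) \;=\; a \, e^{-b e^{-c t}} ,

where t is age, a is the asymptotic value of the (wavelet-domain) folding measure, and b and c control the onset and rate of growth. The symbols here are generic rather than the study's specific parameterization; fitting the curve independently per wavelet scale is what allows coarse and fine folding development to be characterized separately.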

Key Investigators

  • MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu
  • UNC: Martin Styner, Ipek Oguz, Xavier Barbero
  • Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer
  • GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm
  • Isomics: Steve Pieper
  • GE: Bill Lorensen, Jim Miller
  • Kitware: Luis Ibanez, Karthik Krishnan
  • UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran
  • BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker

Additional Information

Additional Information for this topic is available here on the NA-MIC wiki.

fMRI Analysis (Golland)

Progress

One of the major goals in analysis of fMRI data is the detection of functionally homogeneous networks in the brain. Over the past year, we demonstrated a method for identifying large-scale networks in brain activation that simultaneously estimates the optimal representative time courses that summarize the fMRI data well and the partition of the volume into a set of disjoint regions that are best explained by these representative time courses.

In classical functional connectivity analysis, networks of interest are defined based on correlation with the mean time course of a user-selected 'seed' region, and the user also has to specify a subject-specific threshold at which correlation values are deemed significant. In this project, we simultaneously estimate the optimal representative time courses that summarize the fMRI data well and the partition of the volume into a set of disjoint regions that are best explained by these representative time courses. This approach to functional connectivity analysis offers two advantages. First, it removes the sensitivity of the analysis to the details of the seed selection. Second, it substantially simplifies group analysis by eliminating the need for the subject-specific threshold. Our experimental results indicate that the functional segmentation provides a robust, anatomically meaningful, and consistent model for functional connectivity in fMRI.
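
A schematic sketch of this joint estimation is given below as a k-means-like alternation between assigning voxels to representative time courses and re-estimating those representatives from normalized fMRI time courses. It illustrates the alternation only; the actual method may use a different similarity model.

import numpy as np

def functional_segmentation(timecourses, n_networks, n_iters=50, seed=0):
    """timecourses: (n_voxels, n_timepoints) array, one row per voxel."""
    rng = np.random.default_rng(seed)
    X = timecourses - timecourses.mean(axis=1, keepdims=True)
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12     # correlation-like
    reps = X[rng.choice(len(X), n_networks, replace=False)]
    for _ in range(n_iters):
        # assignment step: each voxel joins the best-matching representative
        labels = (X @ reps.T).argmax(axis=1)
        # update step: representatives become the mean time course of their voxels
        for k in range(n_networks):
            members = X[labels == k]
            if len(members):
                reps[k] = members.mean(axis=0)
                reps[k] /= np.linalg.norm(reps[k]) + 1e-12
    return labels, reps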

We are currently exploring applications of this methodology to characterizing connectivity in resting-state data in clinical populations. We are also comparing the empirical findings with the results of ICA decomposition, which is commonly used for data-driven fMRI analysis. Our goal in this study is to identify differences in connectivity between patient populations and normal controls.

Key Investigators

  1. MIT: Polina Golland, Danial Lashkari, Bryce Kim
  2. Harvard/BWH: Sylvain Bouix, Martha Shenton, Marek Kubicki

Additional Information

Additional Information for this topic is available here on the NA-MIC wiki.

NA-MIC Kit Theme (Schroeder)

Progress

The NAMIC-Kit consists of a framework of advanced computational components, as well as the supporting infrastructure for testing, documenting, and deploying leading-edge medical imaging algorithms and software tools. The framework has been carefully constructed to provide low-level access to libraries and modules for advanced users, plus high-level application access that non-computer professionals can use to address a variety of problems in biomedical computing. In this fourth year of the NA-MIC project, the potential of this vision has been realized with the first formal release of Slicer3. This release is based on coordinated development of the underlying toolkits, such as VTK, ITK, KWWidgets, and Teem, plus advances in the underlying software process and the beginnings of new facilities for large-scale data processing across computational grids. The following subsections describe some of the important additions to the NAMIC-Kit during this year of work.

Software Releases

The NAMIC-Kit can be represented as a pyramid of capabilities, with the base consisting of toolkits and libraries and the apex representing the Slicer3 user application. In between, Slicer modules are stand-alone executables that can be integrated directly into the Slicer3 application, including GUI integration, while workflows are groups of modules that are integrated together to implement sophisticated segmentation, registration, and biomedical computing algorithms. In a coordinated NAMIC effort, major releases of many of these components were realized over the past year. This includes, but is not limited to:

  • VTK v5.2, the first major VTK release in over two years, that includes significant functional additions including 3D interaction widgets and an entire framework for information visualization;
  • ITK v3.6, integrating many new features and computational improvements including a multi-threaded (parallel) registration framework;
  • CMake v2.6, the leading-edge tool for cross-platform compilation, testing and deployment;
  • KWWidgets, the computing-platform independent GUI library;
  • Slicer3 v3.2, the user application that includes dozens of new modules supporting image processing and analysis, segmentation and registration.

Slicer3 and the Software Framework

One of the major achievements of the past year has been the realization of the Slicer3 execution framework. This framework provides significant flexibility to medical imaging scientists who are developing algorithms, while simultaneously providing automated integration into the Slicer application. The algorithms, which are implemented as Slicer modules, can be written using any of the components from the NAMIC-Kit toolkits, or even using custom code external to the NAMIC-Kit. These modules can then be run stand-alone, independent of the Slicer3 application, or dynamically loaded into Slicer, which seamlessly integrates them into the Slicer GUI. The advantage is that algorithm developers, such as the NAMIC Core 1 participants, can focus on their algorithms without worrying about the complexity of integration into the Slicer application framework. Dozens of modules have currently been developed and are distributed with Slicer. One recent development has been the exchange of modules across the Slicer community, whereby users create modules and then send them to other users, who can drop the modules into their own Slicer application.

Another important feature of the execution framework is that it readily supports batch processing, either local to a machine, or across a computing grid. The key is that modules can be driven by external processing tools such as BatchMake, which is a new addition to the NAMIC-Kit. BatchMake supports simple scripts that can drive these modules across multiple computing platforms, and provides methods for iteration across parameter space.

Software Process

One of the challenges facing developers has been the requirement to implement, test, and deploy software systems across multiple computing platforms. NAMIC continues to push the state of the art with further development of the CMake, CTest, and CPack tools for cross-platform development, testing, and packaging, respectively. These tools have been recognized for their excellence and have been adopted by many large software systems, including KDE 4.0, one of the world's largest open source software systems. In addition, several new software process tools were created this year and are now in use in the NAMIC community as well as in other parts of the world. This includes the testing dashboard server CDash, which is based on modern web protocols and is built to scale robustly to large software systems. The collaboration tools W2W (wiki-2-web) and PubDB (Publication Database) were also created and deployed this year. W2W is a web authoring tool that leverages the simplicity of wiki editing to create static, professional web pages. PubDB is a system for managing, and providing access to, publications, data, and images.

Key Investigators

  • Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman
  • GE - Jim Miller, Xiaodong Tao
  • Isomics - Steve Pieper

Additional Information

Additional Information for this topic is available here on the NA-MIC wiki.

Other Projects

Any Project(s) not covered by the 8 sections above

Highlights (Schroeder)

Advanced Algorithms

Core 1 continues to lead the biomedical community in DTI and shape analysis, and has deployed a comprehensive EMSegmenter work-flow module in the Slicer3 application.

  • EM Segmenter
  • DTI progress

NAMIC-Kit

Core 2, in conjunction with Algorithms (Core 1) and the DBPs (Core 3), is creating new tools to accelerate the transition of technology to the biomedical imaging community.

  • Year 4 of the NAMIC NCBC was extremely active on the software development front. A coordinated effort resulted in the formal release of several key components of the NAMIC Kit, including VTK 5.2, ITK 3.6, CMake 2.6, and Slicer 3.2. All of these software systems have been integrated into the kit.
  • The NAMIC software process, encompassing the tools CMake, CTest, and CPack, continues to garner attention and adoption in other projects around the world. At a formal announcement co-sponsored by Google, KDE announced its official 4.0 release, one of the highlights of this release being the cross-platform support provided by CMake.
  • The NAMIC community continues to lead the biomedical computing community with the creation of new systems to facilitate the testing process and improve community collaboration and communication. CDash, a new software testing server, was created in order to provide scalable, robust support for large, complex software projects such as Slicer3 and the NAMIC-Kit. The Wiki2Web tool was created to simplify the creation of high-quality web pages using the simplicity of wiki authoring. The Publication Database has been deployed and adopted as a repository for publications, data, and images for scientific reference material. BatchMake was introduced to provide large-scale batch computing across local and distributed computing resources.

Outreach and Technology Transfer

Cores 4-5-6 continue to support, train, and disseminate to the NAMIC community and the broader biomedical computing community.

  • The Slicer community held several workshops and tutorials. In June 2007, a satellite event was held for the international Organization for Human Brain Mapping at its annual meeting in Chicago, IL. The eight-hour workshop on Diffusion Imaging Data hosted 50 participants representing nine countries from around the world, 14 states within the US, and 40 different laboratories, including 2 NIH institutes. In addition, the first Slicer3 tutorial was held at the NAMIC AHM meeting in Salt Lake City in January 2008, with subsequent tutorials following, including one at the SPL in Boston and another at UNC-CH.
  • Project Week continues to be a successful NAMIC venue. These semi-annual events are held in Boston in June and in Salt Lake City in January. The events are well attended, with approximately 90 participants, of which about a third are outside collaborators. At the last Project Week in Salt Lake City, approximately 38 projects were realized.
  • NAMIC continues to participate in conferences and other technical venues. For example, NAMIC hosted the Workshop on Open Source and Open Data at MICCAI 2007.

Outreach (Gollub)

NAMIC outreach is a joint effort of Cores 4, 5, and 6. The various mechanisms by which we ensure that the tools developed by NAMIC are rapidly and successfully deployed to the widest possible extent within the scientific community are closely integrated. This begins with the immediate posting of all software tools, interim updates, and associated documentation via the NAMIC and Slicer wiki pages (links). The concerted effort to provide a harmonious visualization and analysis platform (Slicer 3) that enables the integration of the software algorithms of all Core 1 laboratories drives the sequence of development of training materials. With the January 2008 release of Slicer 3 in beta form, we prepared the first of the Slicer 3 based PowerPoint tutorials, which guide new users through the process of loading, interacting with, and saving data in Slicer 3. Given the intense and successful effort at engineering this platform to facilitate the integration of new command-line image analysis modules, our second tutorial targeted software developers. The "Hello World" tutorial guides a programmer, step by step, through the process of integrating a command line tool into Slicer 3. Both of these tutorials are available via the web (link). These tutorials have been thoroughly tested by using them in large workshops (see below) to ensure that they are robust across platforms (Linux, Mac, PC) and can be used successfully by users across a wide range of training backgrounds.

In June 2007, as a satellite event to the international Organization for Human Brain Mapping annual meeting in Chicago, IL, we ran an eight-hour workshop on analysis of Diffusion Imaging Data (link); it was our final Slicer workshop based on the Slicer 2.7 release. The workshop filled rapidly after posting; the 50 participants represented 9 countries from around the world, 14 states within the US, and 40 different laboratories, including 2 NIH institutes. The single "no-show" was due to a European flight cancellation. The attendees, with backgrounds in basic or clinical neurosciences, physics, image processing, or computer science, and ranging from full professors to new graduate students, were very comfortable learning together. The feedback from the workshop attendees was uniformly positive, with 100% reporting that they would recommend the workshop to others and 50% planning to apply the tools and information they learned to their own work.

In January 2008 we debuted the "Hello World" tutorial at the NAMIC AHM in Salt Lake City to an audience of our project members and collaborators. Feedback from this very constructive session was used to make significant improvements in the presentation and delivery of the material. In February 2008 we debuted the user tutorial at a workshop hosted by the Surgical Planning Laboratory at BWH; again, the session was used to make significant improvements in the presentation and delivery of the material. In April 2008 we ran an all-day workshop, hosted by UNC (get details right), for users and developers that incorporated both tutorials. It was attended by approximately 20 individuals from a wide range of backgrounds. Time was taken to ensure that all participants gained an understanding of the new software sufficient for them to use it successfully after the workshop.

This year saw the publication of a peer-reviewed manuscript that describes the NAMIC approach to outreach, including our multi-disciplinary approach, our integration of theory into practice as driven by a clinical goal, and the translation of concepts into skills through interactive, instructor-led training sessions (Pujol S, Kikinis R, Gollub R: Lowering the barriers inherent in translating advances in neuroimage analysis to clinical research applications, Academic Radiology 15: 114-118, 2008, add link to Publication DB).

  • Text here about Project Events 5 & 6 from Tina if not already included elsewhere.
  • Text here about the MICCAI Open Source Workshop if not already included elsewhere (Steve?)
  • Slicer IGT event December 2007 (tina?)
  • Wiki to web
  • Impact as measured by number of downloads of tutorial materials (help someone)
  • Should the DTI tractography validation project be written up somewhere, if so where? I will do it if it isn't already assigned.

Impact and Value to Biocomputing (Miller)

NA-MIC impacts Biocomputing through a variety of mechanisms. First, NA-MIC produces scientific results, methodologies, workflows, algorithms, imaging platforms, and software engineering tools and paradigms in an open environment, contributing directly to the body of knowledge available to the field. Second, NA-MIC science and technology enables the entire medical imaging community to build on NA-MIC results, methods, and techniques; to concentrate on new science instead of developing supporting infrastructure; to leverage NA-MIC scientists and engineers to adapt NA-MIC technology to new problem domains; and to leverage NA-MIC infrastructure to distribute their own technology to a larger community.

Impact within the Center

Within the center, NA-MIC has formed a community around its software engineering tools, imaging platforms, algorithms, and clinical workflows. The NA-MIC calendar includes the All Hands Meeting and Winter Project Week, the Spring Algorithm Meeting, the Summer Project Week, Slicer3 Mini-Retreats, Core Site Visits, Training Workshops, and weekly telephone conferences.

The NA-MIC software engineering tools (CMake, Dart, CTest, CPack) have enabled the development and distribution of a cross-platform, nightly tested, end-user application, Slicer3, that is a complex union of novel application code, visualization tools (VTK), imaging libraries (ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting languages (Tcl, Python). These tools have been essential in developing the Slicer3 imaging platform and distributing it to the NA-MIC community.

NA-MIC's end-user application, Slicer3, supports the research within NA-MIC by providing a base application for visualization and data management. Slicer3 also provides plugin mechanisms that allow researchers to quickly and easily integrate and distribute their technology with Slicer3. Slicer3 is available to all Center participants and the external community through its source code repository, official binary releases, and unofficial nightly binary snapshots.

NA-MIC drives the development of platforms and algorithms through the needs and research of its DBPs. Each DBP has selected specific workflows and roadmaps as focal points for development with a goal of providing the community with complete end-to-end solutions using NA-MIC tools. The community will be able to reproduce these workflows and roadmaps in their own research programs.

NA-MIC algorithms are designed and used to address specific needs of the DBPs. Multiple solution paths are explored and compared within NA-MIC, resulting in recommendations to the field. The NA-MIC algorithm groups collaborate and orchestrate the solutions to the DBP workflows and roadmaps.

Impact within NIH Funded Research

Within NIH-funded research, NA-MIC is the NCBC collaborating center for three R01s: "Automated FE Mesh Development", "Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI", and "An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors". Several other proposals have been submitted and are under evaluation in response to the "Collaborations with NCBC" PAR. NA-MIC also collaborates on the Slicer3 platform with the NIH-funded Neuroimage Analysis Center and the National Center for Image-Guided Therapy. The NIH-funded "BRAINS Morphology and Image Analysis" project is also leveraging NA-MIC and Slicer3 technology. NA-MIC collaborates with the NIH-funded Neuroimaging Informatics Tools and Resources Clearinghouse on distribution of Slicer3 plugin modules.

National and International Impact

NA-MIC events and tools garner national and international interest. Over 100 researchers participated in the NA-MIC All Hands Meeting and Winter Project Week in January 2008. Many of these participants were from outside of NA-MIC, attending the meetings to gain access to the NA-MIC tools and researchers. These external researchers are contributing ideas and technology back into NA-MIC. In fact, a breakout session at the Winter Project Week on "Geometry and Topology Processing of Meshes" was organized by four researchers from outside of NA-MIC.

Components of the NA-MIC kit are used globally. The software engineering tools of CMake, Dart 2 and CTest are used by many open source projects and commercial applications. For example, the K Desktop Environment (KDE) for Linux and Unix workstations uses CMake and Dart. KDE is one of the largest open source projects in the world. Many open source projects and commercial products are benefiting from the NA-MIC related contributions to ITK and VTK. Finally, Slicer 3 is being used as an image analysis platform in several fields outside of medical image analysis, in particular, biological image analysis, astronomy, and industrial inspection.

NA-MIC science is recognized by the medical imaging community. Over 100 NA-MIC related publications are listed on PubMed. Many of these publications are in the most prestigious journals and conferences in the field. Portions of the DBP workflows and roadmaps are already being utilized by researchers in the broader community and in the development of commercial products.

NA-MIC sponsored several events to promote NA-MIC tools and methodologies. NA-MIC co-sponsored the "Third Annual Open Source Workshop" at the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2007 conference. The proceedings of the workshop are published on the electronic Insight Journal, another NIH-funded activity. NA-MIC sponsored three training workshops on NA-MIC tools for the Biocomputing community in this fiscal year and plans to hold sessions at upcoming MICCAI and RSNA conferences.


Introduction

This section of the report gives the milestones for years 1 through 5 that are associated with the timelines in the original proposal. We have organized the milestones by core. For each milestone we indicate the proposed year of completion and a very brief description of the current status. In some cases the milestones include ongoing work, and we have tried to indicate that in the status. We have also included tables that list any significant changes to the proposed timelines. On the wiki page, we have links to notes from the various PIs that give more details on their progress and the status of the milestones.

These tables demonstrate that the project is, on the whole, proceeding according to the originally planned schedule.

Core 1: Algorithms

Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
MIT 1 Shape-based segmentation
MIT 1.1 Methods to learn shape representations Year 2 Completed
MIT 1.2 Shape in atlas-driven segmentation Year 4 Completed
MIT 1.3 Validate and refine approach Year 5 In Progress
MIT 2 Shape analysis
MIT 2.1 Methods to compute statistics of shapes Year 4 Completed
MIT 2.3 Validation of shape methods on application data Year 5 Completed, refinements ongoing
MIT 3 Analysis of DTI data
MIT 3.1 Fiber geometry Year 3 Completed
MIT 3.2 Fiber statistics Year 5 Completed, new developments ongoing
MIT 3.3 Validation on real data Year 5 Completed, refinements ongoing
Utah 1 Processing of DTI data
Utah 1.1 Filtering of DTI Year 2 Completed
Utah 1.2 Quantitative analysis of DTI Year 3 Completed, refinements ongoing
Utah 1.3 Segmentation of cortex/WM Year 3 Completed partially, modified below
Utah 1.4 Segmentation analysis of white matter tracts Year 3 Completed, applications ongoing
Utah 1.5 Joint analysis of DTI and functional data Year 5 Initiated
Utah 2 Nonparametric Shape Analysis Year 5 Completed
Utah 2.1 Framework in place Year 3 Complete
Utah 2.2 Demonstration on shape of neuroanatomy (from Core 3) Year 4 Complete
Utah 2.3 Development for multiobject complexes Year 4 Complete
Utah 2.4 Demonstration of NP shape representations on clinical hypotheses from Core 3 Year 5 Complete, publications in progress
Utah 2.6 Integration into NAMIC-kit Year 5 Incomplete (initiated)
Utah 2.7 Shape regression Year 5 Incomplete
UNC 1 Statistical shape analysis
UNC 1.1 Comparative anal. of shape anal. schemes Year 2 Completed
UNC 1.3 Statistical shape analysis incl. patient variable Year 5 Complete, refinements ongoing
UNC 2 Structural analysis of DW-MRI
UNC 2.1 DTI tractography tools Year 4 Completed
UNC 2.2 Geometric characterization of fiber tracts Year 5 Completed
UNC 2.3 Quant. anal. of diffusion along fiber tracts Year 5 Completed.
GaTech 1.1 ITK Implementation of PDEs Year 2 Completed
GaTech 1.1 Applications to Core 3 data Year 4 Completed
GaTech 1.2 New statistic models Year 4 Completed
GaTech 1.2 Shape analysis Year 4 Completed, refinements ongoing
GaTech 2.0 Integration into Slicer Year 4-5 Preliminary results and ongoing
MGH 1 Registration
MGH 1.1 Collect DTI/QBALL data Year 2 Completed
MGH 1.2 Develop registration method Year 2 Completed
MGH 1.3 Test/optimize registration method Year 3 In Progress
MGH 1.4 Apply registration on core 3 data Year 5 In Queue
MGH 2 Group DTI Statistics
MGH 2.1 Develop group statistic method Year 2 Partially Complete
MGH 2.2 Apply on core 3 data Year 5 In Queue
MGH 3 Diffusion Segmentation
MGH 3.1 Collect DTI/QBALL data Year 2 Completed
MGH 3.2 Develop/optimize segmentation algorithm Year 3 Modified
MGH 3.3 Integrate w/ tractography Year 4 Modified
MGH 3.4 Apply on core 3 data Year 5 Modified
MGH 4 Group Morphometry Statistics
MGH 4.1 Develop/optimize statistics algorithms Year 3 Modified
MGH 4.2 Develop GUI for Linux Year 3 Modified
MGH 4.3 Slicer integration Year 3 Modified
MGH 4.4 Compile application on Windows Year 4 Modified

Timeline Modifications

Group Aim Milestone Modification
MIT 2.2 Methods to compare shape statistics Removed; effort refocused on registration necessary for population studies
MIT 2.4 Software infrastructure to integrate shape analysis tools into the pipeline for population studies. New, morphed into collaboration with XNAT to provide more general population analysis tools. Partially completed.
MIT 4 fMRI analysis including local and atlas-based priors for quantifying activation. New, partially completed. Refinements in progress. Clinical study with Core 1 is in progress.
Utah 2.2 (removed) Feature-based brain image registration. Shift emphasis to shape-based analysis/registration
Utah 2.1 (removed) Cortical filtering and feature detection Effort is subsumed by other Core 1 partners (e.g. see MGH/Freesurfer)
Utah 1.3 (removed) Segmentation of cortex/WM Effort is subsumed by other Core 1-2 partners (e.g. see EM-Segmenter)
Utah 3.0 (removed) Fast implementations of PDEs Real-time filtering is de-emphasized in favor of shape/DTI analysis
Utah 1.5 (added) Joint analysis of DTI and functional data Opportunities/needs within various collaborations
Utah 2.1-2.3 (added, in place of cortical analysis) Shape analysis Nonparametric shape analysis added to address needs of core 3.
Utah 2.7 Shape regression Extension/completion of framework. Opportunities/needs within various collaborations.
UNC 1.2 Develop medially-based shape representation Removed
UNC 1.4 Develop generic cortical correspondence framework (Years 3-5) New
UNC 2.4 DTI Atlas Building (Years 2--4) New
GaTech 2.1 FA analysis New
MGH 4.1 - 4.4 added and then removed, based on personnel changes

Core 1 Timeline Notes

Core 2: Engineering

Core 2 Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
GE 1 Define software architecture
GE 1 Object design Yr 1 Completed
GE 1 Identify patterns Yr 3 Patterns for processing scalar and vector images, models, and fiducials complete; patterns for diffusion-weighted images completed, fMRI ongoing.
GE 1 Create frameworks Yr 3 Frameworks for processing scalar and vector images, models, and fiducials complete; frameworks for diffusion-weighted images completed, fMRI ongoing.
GE 2 Software engineering process
GE 2 Extreme programming Yr 1-5 On schedule, ongoing
GE 2 Process automation Yr 3 On schedule, ongoing
GE 2 Refactoring Yr 3 Complete
GE 3 Automated quality system
GE 3 DART deployment Yr 2 Complete
GE 3 Persistent testing system Yr 5 Incomplete
GE 3 Automatic defect detection Yr 5 Incomplete
Kitware 1 Cross-platform development
Kitware 1 Deploy environment (CMake, CTest) Yr 1 Complete
Kitware 1 DART Integration and testing Yr 1 Complete
Kitware 1 Documentation tools Yr 2 Complete
Kitware 2 Integration tools
Kitware 2 File Formats/IO facilities Yr 2 Complete
Kitware 2 CableSWIG deployment Yr 3 Complete (integration ongoing)
Kitware 2 Establish XML schema Yr 4 Complete, refinements ongoing
Kitware 3 Technology delivery
Kitware 3 Deploy applications Yr 1 Complete (ongoing)
Kitware 3 Establish plug-in repository Yr 2 Incomplete
Kitware 3 Cpack Yr 4-5 Incomplete
Isomics 1 NAMIC builds of slicer Years 2--5 Complete
Isomics 1 Schizophrenia and DBP interfaces Year 3---5 Completed (refinements ongoing)
Isomics 2 ITK Integration tools Year 1---3 Completed
Isomics 2 Experiment Control Interfaces Year 2---5 Migration from LONI to BatchMake Underway
Isomics 2 fMRI/DTI algorithm support Year 2---5 Completed DTI, fMRI Ongoing
Isomics 2 New DBP algorithm support Year 2---5 Ongoing
Isomics 3 Compatible build process Year 1---3 Completed
Isomics 3 Dart Integration Year 1---2 Completed (upgrades ongoing)
Isomics 3 Test scripts for new code Year 2---5 Ongoing
UCSD 1 Grid computing---base Year 1 Completed
UCSD 1 Grid enabled algorithms Year 3 First version (GWiz alpha) available - initial integration with Slicer3 and execution model.
UCSD 1 Testing infrastructure Year 4 Initiated
UCSD 2 Data grid --- compatibility Year 2 Completed
UCSD 2 Data grid --- slicer access Year 2 Completed for version 2.6. In progress for Slicer3
UCSD 3 Data mediation --- deploy Year 1 Incomplete (modification below)
UCLA 1 Debabeler functionality Year 1 Continued Progress
UCLA 2 SLIPIE Interpretation (Layer 1) Years 1--2 In Progress
UCLA 3 SLIPIE Interpretation (Layer 2) Years 1--2 On Schedule
UCLA 3 Developing ITK Modules Year 2 In Progress
UCLA 4 Integrating SRB (GSI-enabled) Year 2 Completed
UCLA 5 Integrating IDA Year 2 Completed
UCLA 5 Integrating External Visualization Applications Year 2 Completed

Core 2 Timeline Modifications

Group Aim Milestone Modification
Isomics 3 Data mediation Delayed pending integration of databases into NAMIC infrastructure

Core 2 Timeline Notes

Core 3: Driving Biological Problems

The Core 3 projects submitted R01 style proposals, as specified in the RFA, and did not submit timelines.

Core 4: Service

Core 4 Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
Kitware 1 Implement Development Farms
Kitware 1 Deploy platforms Yr 1 Complete
Kitware 1 Communications Yr 1 Complete, ongoing
Kitware 2 Establish software process
Kitware 2 Secure developer database Yr 1 Complete, ongoing
Kitware 2 Collect guidelines Yr 1 Complete
Kitware 2 Manage software submission process Yr 1 Complete
Kitware 2 Configure process tools Yr 1 Complete
Kitware 2 Survey community Yr 1 Complete
Kitware 3 Deploy NAMIC Tools
Kitware 3 Toolkits Yr 1 Complete
Kitware 3 Integration tools Yr 1 Complete
Kitware 3 Applications Yr 1 Complete
Kitware 3 Integrate new computing resources Yr 1 Complete
Kitware 4 Provide support
Kitware 4 Establish support infrastructure Yrs 1--5 On schedule, ongoing
Kitware 4 NAMIC support Yr 1 Complete
Kitware 5 Manage NAMIC Software Releases Yrs 1--5 On schedule, ongoing

Core 4 Timeline Modifications

Group Aim Milestone Modification
Kitware 2-5 Various Refined/modified the sub aims

Core 4 Timeline Notes

Core 5: Training

Core 5 Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
Harvard 1 Formal Training Guidelines
Harvard 1 Functional neuroanatomy Yr 1 Complete
Harvard 1 Clinical correlations Yr 1 Complete
Harvard 2 Mentoring
Harvard 2 Programming workshops Yrs 1-5 On schedule, ongoing
Harvard 2 One-on-one mentoring, Cores 1, 2, 3 Yrs 1-5 On schedule, ongoing
Harvard 3 Collaborative work environment
Harvard 3 Wiki Yr 1 Complete
Harvard 3 Mailing lists Yr 1 Complete
Harvard 3 Regular telephone conferences Yrs 1-5 On schedule, ongoing
Harvard 4 Educational component for tools
Harvard 4 Slicer training modules Yr 2-5 Slicer 2.x tutorials complete; two Slicer 3 tutorials complete; translation of 2.x tutorials to Slicer 3 ongoing and on schedule
Harvard 5 Demonstrations and hands-on training
Harvard 5 Various workshops and conferences Yrs 1--5 On schedule, ongoing

Core 5 Timeline Modifications

None.

Core 5 Timeline Notes

Core 6: Dissemination

Core 6 Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
Isomics 1 Create a collaboration methodology for NA-MIC
Isomics 1.1 Develop a selection process Yr 1 Complete
Isomics 1.2 Develop guidelines to govern the collaborations Yr 1-2 Complete
Isomics 1.3 Provide on-site training Yr 1-5 Complete for current tools (ongoing for tool refinement)
Isomics 1.4 Develop a web site infrastructure Yr 1 Complete
Isomics 2 Facilitate communication between NA-MIC developers and wider research community
Isomics 2.1 Develop materials describing NAMIC technology Yr 1-5 On Schedule
Isomics 2.2 participate in scientific meetings Yr 2-5 On Schedule
Isomics 2.3 Document interactions with external researchers Yr 2-5 On Schedule
Isomics 2.4 Coordinate publication strategies Yr 3-5 On Schedule
Isomics 3 Develop a publicly accessible internet resource of data, software, documentation, and publication of new discoveries
Isomics 3.1 On-line repository of NAMIC related publications and presentations Yr 1-5 On Schedule
Isomics 3.2 On-line repository of NAMIC tutorial and training material Yr 1-5 On Schedule
Isomics 3.3 Index and searchable database Yr 1-2 Done
Isomics 3.4 Automated feedback systems that track software downloads Yr 3 Done

Core 6 Timeline Modifications

None.

Core 6 Timeline Notes

Appendix A Publications (Kapur)

These will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.

Appendix B EAB Report and Response (Kapur)

EAB Report

Response to EAB Report