2009 Annual Scientific Report

===Software Process===
One of the challenges facing developers has been the requirement to implement, test, and deploy software systems across multiple computing platforms. NAMIC continues to push the state of the art with further development of the CMake, CTest/CDash, and CPack tools for cross-platform development, testing, and packaging, respectively. In particular, this year saw significant advances in the development of the PHP-based CDash server, which now provides sophisticated query/retrieve, notification, and testing-results navigation. The CMake system continues to grow rapidly both within the NAMIC community and external to it, reaching a level of approximately 1,000 downloads per day in early 2009 (this figure does not include the CMake copies now embedded in Linux distributions such as Debian). Other important additions this year were better support for integration of execution modules into Slicer3, packaging of Slicer3 distributions for more platforms with CPack, and the introduction of GUI (Graphical User Interface) testing with the Squish tool.
  
 
===Software Releases===

Revision as of 06:34, 18 May 2009


Back to 2009_Progress_Report




Guidelines for preparation

  • 2009_Progress_Report#Scientific Report Timeline - Main point is that May 15 is the date by which all sections below need to be completed. No extensions are possible.
  • DBPs - If there is work outside of the roadmap projects that you would like to report, you are welcome to create a separate section for it under "Other".
  • The outline for this report is similar to the 2008 and 2007 reports, which are provided here for reference: 2008_Annual_Scientific_Report, 2007_Annual_Scientific_Report.
  • In preparing summaries for each of the 8 topics in this report, please leverage the detailed pages for projects provided here: NA-MIC_Internal_Collaborations.
  • Publications will be mined from the SPL publications database. All core PIs need to ensure that all NA-MIC publications are in the publications database by May 15.

Introduction (Tannenbaum)

The National Alliance for Medical Imaging Computing (NA-MIC) is now completing its fifth year. The Center comprises a multi-institutional, interdisciplinary team of computer scientists, software engineers, and medical investigators who have come together to develop and apply computational tools for the analysis and visualization of medical imaging data. A further purpose of the Center is to provide infrastructure and environmental support for the development of computational algorithms and open source technologies, as well as to oversee the training and dissemination of these tools to the medical research community. We are currently in year two of our second set of Driving Biological Projects (DBP), three of which involve diseases of the brain: (a) brain lesion analysis in neuropsychiatric systemic lupus erythematosus; (b) a study of cortical thickness for autism; and (c) stochastic tractography for velocardiofacial syndrome (VCFS). The fourth DBP takes the Center in a very new direction: (d) prostate brachytherapy needle-positioning robot integration.

Over the past five years, NA-MIC has made substantial progress toward the attainment of its major objectives. In year one, the Center focused on forging alliances amongst its various cores and constituent groups to assure that the efforts of the cores were well integrated toward the attainment of common and specific goals. To that end a great deal of effort went into defining the kinds of tools that would be needed for specific imaging applications. The second year emphasized the identification of key research thrusts that cut across all cores and were driven by the needs and requirements of the DBPs. This led to the formulation of the Center's four main technical themes: Diffusion Tensor Analysis, Structural Analysis, Functional MRI Analysis, and the integration of newly developed tools into the NA-MIC Tool Kit. The third year of Center activity was devoted to the continuation of collaborative work to develop solutions for the various brain-oriented DBPs. The fourth year was focused on translating collaborative knowledge and work to a new set of DBPs. In the current fifth year, a number of projects have made sufficient progress to warrant introduction as modules in Slicer, thereby making the Core 1 algorithms available to the general medical imaging community. Some of these algorithms are quite general and can be used for purposes far broader than the original DBPs. For example, a new point cloud registration algorithm developed for the prostate brachytherapy needle positioning project also can be used for DWI registration. Likewise, work on DTI/DWI tractography has been applied to the segmentation of blood vessels and soft plaque detection in the coronary arteries.

Year five progress with respect to the current DBPs is relevant to the scope of this Annual Progress Report. As mentioned, we currently have three projects in the area of neuropsychiatric disorders: Systemic Lupus Erythematosus (MIND Institute, University of New Mexico), Velocardiofacial Syndrome (Harvard), and Autism (University of North Carolina, Chapel Hill). A fourth project from Johns Hopkins and Queen's Universities involves the application of core technologies to imaging/robotics-guided treatments in prostate cancer. A number of papers have been published that specifically acknowledge NA-MIC, and significant software development is continuing as well.

Section 3 outlines specific aims fulfilled this year by the four roadmap projects: Section 3.1 describes the Stochastic Tractography Approach for Velocardiofacial Syndrome; Section 3.2 details the application of our work to Brachytherapy Needle Positioning for the Prostate; Section 3.3 outlines the Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus project; and Section 3.4 documents the Cortical Thickness for Autism project. For all of these projects, a synergism of effort has produced working computer modules that are user friendly and accessible to both medical researchers and clinicians.

Section 4 describes year five work on the four infrastructure topics. These include: Diffusion Image Analysis (Section 4.1), Structural Analysis (Section 4.2), Functional MRI Analysis (Section 4.3), and the NA-MIC Toolkit (Section 4.4). Many of the algorithms produced by Cores 1-3 have been integrated into ITK and Slicer, including those concerning shape analysis (e.g., spherical wavelets), new segmentation algorithms (for DTI/DWI tractography and the segmentation of the prostate), and new approaches to registration (e.g., based on particle filtering).

Finally, the last three sections of this Annual Progress Report highlight some of the work that the Scientific Leadership believes is particularly significant to the overall goals of the Center. Section 5 summarizes the benefits of several advanced algorithms, gives a description of the growing NAMIC-Toolkit, and documents the scope of our efforts in technology transfer and outreach. It is essential to emphasize that although the algorithms emanating from this Center were developed to solve specific clinical problems raised by the DBPs, in application, most of these algorithms have far more general utility and far greater potential impact on the medical imaging technical base. To this end, Section 6 draws attention to the impact and value of our work on biocomputing imaging at three different levels: within the Center, within the NIH-funded research community, and externally to the national and international community. To further illustrate the impact of our work, Section 7 provides some updated timelines with specific milestones achieved by the various NA-MIC cores. Section 8 lists publications pertinent to the current reporting period that acknowledge NA-MIC support, and Section 9 provides the External Advisory Report and our considered response.

Clinical Roadmap Projects

Roadmap Project: Stochastic Tractography for VCFS (Kubicki)

Overview (Kubicki)

The goal of this project is to create an end-to-end application that is useful in evaluating anatomical connectivity between segmented cortical regions of the brain. The ultimate goal of our program is to understand similarities and differences in anatomical connectivity between genetically related schizophrenia and velocardiofacial syndrome (VCFS). Thus we plan to use the "stochastic tractography" tool to analyze abnormalities in integrity or connectivity in the arcuate fasciculus fiber bundle, which is involved in language processing in schizophrenia and VCFS.

Algorithm Component (Golland)

The core science involved in this project is the Stochastic Tractography algorithm. This algorithm was developed and implemented collaboratively by MIT and BWH. Stochastic Tractography is a Bayesian approach to estimating nerve fiber tracts from images created by diffusion tensor imaging (DTI).

In this approach, the diffusion tensor is used at each voxel in the volume to construct a local probability distribution for the fiber direction around the principal direction of diffusion. The tracts are then sampled between two user-selected regions of interest (ROIs) by simulating a random walk between the regions, based on the local transition probabilities inferred from the DTI image.

The resulting collection of fibers and the associated FA values provide useful statistics on the properties of connections between the two regions. To constrain the sampling process to the relevant white matter region, atlas-based segmentation is used to label ventricles and gray matter and to exclude them from the search space. This latter step relies heavily on the registration and segmentation functionality of Slicer.
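The sampling scheme above can be sketched in a few lines of Python. This is an illustrative toy, not the Slicer module's code: `principal_dir` is a hypothetical callback standing in for a tensor lookup at a point, and the Gaussian perturbation is a crude stand-in for the actual directional distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tract(start, principal_dir, n_steps=100, kappa=20.0, step=1.0):
    # One stochastic fiber sample: a random walk whose step direction is
    # drawn around the local principal diffusion direction.
    pts = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        mu = principal_dir(pts[-1])
        # Gaussian perturbation + renormalization: a crude stand-in for a
        # directional (e.g., von Mises-Fisher) draw with concentration kappa.
        d = mu + rng.normal(scale=1.0 / np.sqrt(kappa), size=3)
        d /= np.linalg.norm(d)
        pts.append(pts[-1] + step * d)
    return np.array(pts)

# Toy tensor field whose principal direction is everywhere +x; sampling many
# walks from one seed yields a collection of fibers like the one described.
tracts = [sample_tract([0.0, 0.0, 0.0], lambda p: np.array([1.0, 0.0, 0.0]))
          for _ in range(200)]
```

In the real pipeline each walk would additionally be confined to the white-matter mask and terminated at the target ROI.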

Over the last year, we have been working on applying several pre- and postprocessing steps to the algorithm pipeline. These steps include "eddy current" and "geometric distortion correction," which were made available to us by the Utah group, as well as "DTI filtering" (BWH). White matter masks now also can be created based on T2 thresholding within the Slicer Stochastic Tractography module. These masks are more precise, as they do not rely on MRI-to-DTI co-registration.
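The T2-based masking step amounts to simple intensity thresholding in DTI space; a minimal sketch (threshold values here are illustrative, not the module's):

```python
import numpy as np

def wm_mask_from_t2(t2_volume, lower, upper):
    # White-matter mask by intensity thresholding of the T2 (baseline) volume
    # acquired with the DTI data, so no MRI-to-DTI co-registration is needed.
    return (t2_volume >= lower) & (t2_volume <= upper)

vol = np.array([[10, 120, 300],
                [80,  90, 500]])
mask = wm_mask_from_t2(vol, 50, 200)
```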

We also have been working with datasets in which fMRI activations, as well as gray matter segmentations, need to be registered to DTI data to permit seeding within predefined gray matter regions. Significant progress has been made in cross-modality registration, and additional improvement is expected when "geometric distortion correction" becomes part of the analysis pipeline.

Finally, we have been working on improved ways to visualize and quantify Stochastic Tractography output, not only by parametrizing fiber tracts, but also by creating connection probability distribution maps.

Engineering Component (Davis)

This year, the Stochastic Tractography Slicer module was rewritten in Python. The new module was released in December 2008 and presented at the All Hands Meeting in Salt Lake City. The module is now a functional component of Slicer3. Documentation for operating the module also has been created to facilitate user training. Current engineering efforts are focused on maintaining the module, optimizing the module for use with other data formats, and adding new functionality, such as better registration, distortion correction, and ways of extracting and measuring fractional anisotropy (FA) along nerve fiber tracts.

The datasets used with the Stochastic Tractography module are computationally demanding. They involve higher spatial resolutions and many more diffusion directions as compared with White Matter Tractography, which was used previously. In addition, the cortical ROIs tend to be much larger than white matter ROIs. Hence, there is a pressing need for performance improvement. This need can be appreciated by examining the differences between Stochastic Tractography, where literally hundreds of tracts are generated from a single seed, and Deterministic Tractography, where only a single tract is generated. Thus, some effort has been made to economize through multi-threading and parallel processing. A version of the Stochastic Tractography algorithm that uses large computer clusters also has been developed and can be downloaded and installed by individual users with minimal knowledge of parallel computing.
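Because each stochastic sample is independent, the parallelization mentioned above is straightforward; a minimal sketch, with a trivial random walk standing in for the real tract sampler (a thread pool is shown for brevity; the cluster version distributes the same independent samples across nodes):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def sample_endpoint(args):
    # Stand-in for one stochastic tract sample. Each sample is independent,
    # so the hundreds of walks per seed are embarrassingly parallel.
    seed, n_steps = args
    rng = np.random.default_rng(seed)
    pos = np.zeros(3)
    for _ in range(n_steps):
        pos += rng.normal(size=3)
    return pos

with ThreadPoolExecutor(max_workers=4) as pool:
    endpoints = list(pool.map(sample_endpoint, [(s, 50) for s in range(400)]))
```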

Clinical Component (Kubicki)

In this reporting period, we have designed, implemented, and completed several studies to test the Stochastic Tractography algorithm on the newly released 3T NA-MIC data. These datasets consist of high resolution DTI, structural MR data, and automatic anatomical segmentations. Since these data already have been co-registered, cortical ROIs can be used as seeding points for Stochastic Tractography.

The first of these clinical studies was designed to analyze the connections between the inferior frontal and superior temporal lobes, which represent important sites of the language network. The connections between these two regions were measured via Stochastic Tractography in a group of 20 chronic schizophrenia patients and 20 controls and then subjected to comparative analysis. We also examined gray matter volume in destination regions and attempted to estimate the relationship between gray and white matter abnormalities in schizophrenia. The results of this study were presented at the World Psychiatry Congress in Florence, Italy in April, 2009, and later that same month at the Harvard Psychiatry MYSELL conference (Alvarado et al., 2009).

Another current endeavor is the application of Stochastic Tractography to define the connections involved in emotional processing. For this study, we are using cortical segmentations of the anterior cingulate gyrus, orbitofrontal gyrus, and amygdala to trace as well as quantify connections between these regions in healthy controls versus schizophrenia patients. The results of this preliminary study were presented at MYSELL in April, 2009 (Alvarado et al., 2009b). Another presentation will be made at the Biological Psychiatry conference later this year.

We have also been involved in two collaborative efforts. The first involves the use of DTI data acquired at University of California at Irvine (UCI). In this study, we have used the stochastic method to segment and measure the arcuate fasciculus in subjects with schizophrenia and language impairment, as evidenced in ERP data. In another collaboration, we are combining resting state fMRI data with DTI to measure connectivity between regions that form a functional network. Both of these projects are currently under way.

Finally, a paper that discusses the qualitative use of Stochastic Tractography has been accepted for publication in Human Brain Mapping and is currently in press. Here, when we combined fMRI with DTI whole brain data analysis, we identified certain regions that expressed abnormal functional connectivity in schizophrenia. These regions were then assigned to certain anatomical structures (white matter tracts) based on their location and relationship to Stochastic Tractography output.

References

Alvarado J, Terry D, Markant D, Ngo T, Kikinis R, Westin CF, McCarley R, Shenton ME, Kubicki M. Study of language-related white matter tract connections in schizophrenia using diffusion stochastic tractography. Mysell Harvard Research Day, Psychiatry Annual Meeting, 2009a.

Alvarado J, Terry D, Markant D, Ngo T, Kikinis R, Westin CF, McCarley R, Shenton ME, Kubicki M. Study of language-related white matter tract connections in schizophrenia using diffusion stochastic tractography. Mysell Harvard Research Day, Psychiatry Annual Meeting, 2009b.

Lee, et al. 2009.

Additional Information

Additional Information for this project is available here on the NA-MIC wiki.

Roadmap Project: MR-guided Prostate Biopsy Needle Positioning Robot Integration (Fichtinger)

Overview (Fichtinger)

Numerous studies have demonstrated the efficacy of image-guided needle-based therapy and biopsy in the management of prostate cancer. However, the accuracy of traditional prostate interventions that rely on transrectal ultrasound (TRUS) is limited by image fidelity, needle template guides, needle deflection, and tissue deformation. Magnetic Resonance Imaging (MRI) is an ideal modality for guiding and monitoring such interventions because it provides excellent visualization of the prostate, its sub-structure, and surrounding tissues.

We have designed a comprehensive robotic assistant system that permits prostate biopsy and brachytherapy procedures to be performed entirely inside a 3T closed MRI scanner. The current system applies the transrectal approach to the prostate. With this approach, an endorectal coil and steerable needle guide, both tuned for 3T magnets and adaptable to the particular scanner, are integrated into the MRI-compatible manipulator.

Under the NA-MIC initiative, the interface between image computing, visualization, intervention planning, and kinematic planning is being managed by an open-source system built on the NA-MIC toolkit and its components, that is, Slicer3 and ITK. These tools are complemented by a collection of unsupervised prostate segmentation and registration methods that are important to the clinical performance of the interventional system as a whole.

Algorithm Component (Tannenbaum)

Our group (GaTech) has worked on both the segmentation and registration of the prostate from MRI and ultrasound data. This process is described below.

Prostate Segmentation

The first step is to "extract" the prostate. We have provided two methods: a shape-based method and a semi-automatic method. More details are given below; images and further details may be found here.

  1. A shape-based algorithm. This process begins by learning a group of shapes obtained by manually segmenting a set of 3D prostate images. With the shapes represented as the hyperbolic tangent of their signed distance functions, principal component analysis is employed to learn the shape space. Given a new prostate image, we then search the learned shape space to find the shape that best segments the image. The fitness of a shape is evaluated by an energy functional measuring the discrepancy between the statistical characteristics inside and outside the current segmentation boundary. This method is robust to noise in the images. Moreover, the whole algorithm pipeline has been integrated into Slicer3 as a command-line module.
  2. Semi-automatic method. This method is based on a random walk segmentation algorithm. With user-provided initial seed regions inside and outside the object (prostate), the algorithm computes a probability distribution over the image domain by solving a boundary value partial differential equation in which the values at seed regions are fixed at 1.0 or 0.0, depending on whether they are object or background seeds. The resulting distribution indicates the probability of each voxel belonging to the object, and simply thresholding at 0.5 gives the segmentation. Moreover, if the result is not satisfactory, the user can edit the seed regions, and a new result is computed based on the previous one. This algorithm has been integrated into the transrectal prostate MRI module of Slicer3.
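The shape-learning recipe in method 1 can be sketched as follows. All names are illustrative, the image-driven energy search over the shape space is omitted, and `sdf_stack` stands in for flattened signed-distance volumes of the training segmentations:

```python
import numpy as np

def learn_shape_space(sdf_stack, n_modes=2, alpha=1.0):
    # Represent each training segmentation as tanh(alpha * signed-distance
    # function), then run principal component analysis on the stack.
    X = np.tanh(alpha * sdf_stack)          # (n_shapes, n_voxels)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]               # mean shape and top eigen-shapes

def synthesize(mean, modes, coeffs):
    # Candidate segmentations are searched for in this low-dimensional
    # space; an image-driven energy would score each candidate.
    return mean + coeffs @ modes

sdfs = np.random.default_rng(1).normal(size=(10, 64))   # toy training data
mean, modes = learn_shape_space(sdfs, n_modes=2)
candidate = synthesize(mean, modes, np.array([0.5, -0.3]))
```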

Prostate Registration

We developed an affine prostate registration method that treats the prostate images as point sets. An improved iterative closest point algorithm then registers the point sets generated from the two images. The proposed method is robust to large translations and to partially missing image structure. Moreover, the point-set representation is much sparser than a uniform-grid sampling of the image, so the registration is very fast compared with direct 3D volumetric image registration.
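A minimal sketch of the iterative-closest-point idea underlying this registration, restricted to the rigid case for brevity (the project's variant estimates a full affine map and adds the robustness to large translations and partial structure described above):

```python
import numpy as np

def icp_rigid(src, dst, n_iter=20):
    # Classic iterative closest point between two 3D point sets.
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        # 1. Correspondences: nearest dst point for each current src point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # 2. Best rigid fit via the Kabsch/Procrustes solution.
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

rng = np.random.default_rng(2)
pts = rng.normal(size=(100, 3))
moved = pts + np.array([0.1, -0.05, 0.2])   # small pure translation
R, t = icp_rigid(pts, moved)
```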

Furthermore, the registration can be viewed as a posterior estimation problem, in which the distributions of the affine and translation parameters are to be estimated. This posterior is naturally estimated using a particle filter framework, which lets the method handle otherwise difficult cases, such as when one prostate image is acquired supine and the other prone.
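The particle-filter view can be sketched for the simplest case of a pure translation (the project estimates full affine parameters; everything below, including the weighting and annealing constants, is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def pf_register_translation(src, dst, n_particles=400, n_iter=25, sigma=0.5):
    # Posterior estimation over transform parameters: sample particles,
    # weight each by how well it aligns the point sets, resample, perturb.
    particles = rng.normal(scale=5.0, size=(n_particles, 3))
    for _ in range(n_iter):
        scores = np.empty(n_particles)
        for i, t in enumerate(particles):
            # Score: negative mean nearest-neighbor squared distance.
            d2 = (((src + t)[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            scores[i] = -d2.min(axis=1).mean()
        w = np.exp((scores - scores.max()) / 0.1)
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
        particles = particles[idx] + rng.normal(scale=sigma,
                                                size=(n_particles, 3))
        sigma *= 0.9                                           # anneal
    return particles.mean(axis=0)

src = rng.normal(size=(30, 3))
dst = src + np.array([2.0, -1.0, 0.5])
t_hat = pf_register_translation(src, dst)
```

Maintaining a population of particles rather than a single estimate is what lets the method escape the poor local optima that arise in supine-vs.-prone cases.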

More details are given here...

Engineering Component (Vikal, Hayes)

An end-to-end Slicer loadable module that interfaces with the MRI-compatible robotic device to perform MRI-guided prostate biopsy has been developed. Complete end-user tutorial documentation has been uploaded to the project wiki page, together with a sample tutorial dataset (phantom), to facilitate user training. This is one of the first modules that uses Slicer as an interventional tool, in contrast to its traditional use as a post-processing tool.

This has been a year of moving from ideas to implementation: the design conceived earlier was realized during this year. Specifically, the following functionality was implemented.

An intuitive workflow-based GUI guides the user through the four phases of the intervention: device calibration/registration, prostate segmentation, targeting, and verification.

The robotic device's calibration/registration to scanner coordinates is achieved by segmenting fiducial markers in the images. The registration parameters are used in the targeting step to calculate the targeting parameters and needle trajectory. The robotic device's optical encoders are also interfaced: these sensors continuously sense the device rotation and needle angle, the values are sent over a USB interface, and our module reads them on a 500 msec timer event.
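The polling loop just described can be sketched as follows; `read_fn` is a hypothetical stand-in for the device-specific USB read, not the module's actual interface:

```python
import time

def poll_encoders(read_fn, on_update, interval=0.5, n_reads=4):
    # Read the device rotation and needle-angle encoders on a fixed timer,
    # mirroring the module's 500 msec timer event.
    for _ in range(n_reads):
        rotation, needle_angle = read_fn()
        on_update(rotation, needle_angle)
        time.sleep(interval)

readings = []
# Stub sensor that always reports 12.5 deg rotation, 30.0 deg needle angle;
# a short interval is used here so the sketch runs quickly.
poll_encoders(lambda: (12.5, 30.0),
              lambda rot, ang: readings.append((rot, ang)),
              interval=0.01, n_reads=4)
```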

The prostate segmentation algorithm developed by our algorithm core collaborators at Georgia Tech was integrated during the NA-MIC programming week at Utah this year. The details are provided in the previous subsection.

The targeting step enables the user to pick anatomic locations of interest simply by clicking in any of the three slice views. 3D Slicer's arbitrary reformat plane widget is an attractive feature that enables the user/clinician to visualize and pick a target in any desired plane. When a target is picked, the targeting parameters (device rotation and needle angle) required for the device to hit the intended target are calculated and added to the list of targets. Multiple targets can be picked and associated with a particular type of needle (e.g., biopsy or seed). Once a particular target is selected from the list, it is brought into view in all three slice views and highlighted in the 3D view, and the information about the target and its targeting parameters is displayed. Further, the robot's needle trajectory to the target is also visualized in 3D; this is crucial feedback for the clinician.

The biopsy is performed, and a validation volume is acquired while the needle is still in the prostate; this validation volume is then used to perform validation analysis, to determine how accurately the device hit the target.

We have sought and received timely help from the engineering core; on a couple of occasions, functionality in the Slicer 'core' was implemented for our specific requirements. The current engineering efforts are focused on testing the module at various levels and on detecting and fixing bugs. We are in the process of designing a test protocol for functional and clinical evaluation of the software. Work is also under way to add more functionality, e.g., an additional dedicated MR-room display window that will show the chosen 2D image view for a particular target, the required robot targeting parameters, and the sensed robot parameters; the ability to load a previously saved experiment for post-op analysis; and visualization of the robot's anatomical coverage at the calibration step itself, which can be used to reposition the device if necessary.

Clinical Component (Fichtinger)

Since last year, the robotic hardware has gone through major re-engineering and design changes. We have completed detailed hardware safety tests and inaugurated the device into clinical use, treating the first batch of patients just recently. For the sake of clinical safety, we opted not to upgrade the interface software at the same time. In the meantime, all new image processing and visualization functions have been implemented in the 3D Slicer interface alone; we no longer retrofit the older existing software with major new features. The Slicer3-based target planning and device control interface will be inaugurated gradually during the project year.

Additional Information

Additional Information for this project is available here on the NA-MIC wiki.

Roadmap Project: Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus (Bockholt)

Overview (Bockholt)

The primary goal of the MIND DBP is to examine changes in white matter lesions in adults with Neuropsychiatric Systemic Lupus Erythematosus (SLE). We want to be able to characterize lesion location, size, and intensity, and would also like to examine longitudinal changes of lesions in an SLE cohort. To accomplish this goal, we created an end-to-end application entirely within the NA-MIC Kit that allows individual analysis of white matter lesions. This workflow will then be applied to a clinical sample that is in the process of being collected.

Algorithm Component

The basic steps necessary for the white matter lesion analysis application entail, first, registration of the T1, T2, and FLAIR images; second, tissue classification into gray matter, white matter, CSF, or lesion; third, clustering of lesions for anatomical localization; and finally, summarization of lesion size and image-intensity parameters within each unique lesion.
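The clustering and summarization steps can be sketched as follows, assuming registration and tissue classification have already produced a binary lesion mask; all names are illustrative:

```python
import numpy as np
from collections import deque

def label_lesions(mask):
    # Cluster the binary lesion map into 6-connected components.
    labels = np.zeros(mask.shape, dtype=int)
    nxt = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        nxt += 1
        labels[start] = nxt
        q = deque([start])
        while q:                                  # breadth-first flood fill
            z, y, x = q.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = nxt
                    q.append(n)
    return labels, nxt

def summarize_lesions(mask, flair):
    # Per-lesion size and intensity statistics for the final report.
    labels, n = label_lesions(mask)
    return [{"lesion": k,
             "size": int((labels == k).sum()),
             "mean_intensity": float(flair[labels == k].mean())}
            for k in range(1, n + 1)]

mask = np.zeros((1, 5, 5), dtype=bool)
mask[0, 1:3, 1:3] = True      # one 4-voxel lesion
mask[0, 4, 4] = True          # one isolated voxel
flair = np.full(mask.shape, 100.0)
stats = summarize_lesions(mask, flair)
```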

We have improved the morphometric-feature-based segmentation method by incorporating maximum-relevance, minimum-redundancy feature ranking and Support Vector Machine based classification. Additionally, the new method produces a heat map in which each voxel value represents the chance that the voxel is lesion. The heat map allows the user to adjust the threshold used for segmentation to match their sensitivity/specificity preferences.
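The threshold adjustment the heat map enables can be illustrated with a toy sensitivity/specificity computation (the heat and truth values below are made up for the example):

```python
import numpy as np

def sens_spec(heat, truth, threshold):
    # Sensitivity and specificity of the lesion heat map at one threshold;
    # lowering the threshold trades specificity for sensitivity, which is
    # exactly the adjustment the heat map exposes to the user.
    pred = heat >= threshold
    sensitivity = (pred & truth).sum() / truth.sum()
    specificity = (~pred & ~truth).sum() / (~truth).sum()
    return float(sensitivity), float(specificity)

heat  = np.array([0.9, 0.7, 0.4, 0.2, 0.1])   # per-voxel lesion probability
truth = np.array([True, True, True, False, False])
```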

Engineering Component (Pieper)

At the January 2009 NA-MIC project week a first pass of a lesion segmentation tutorial was provided to the community. This was the first end-to-end workflow for this project and represented a significant step in the project. Based on feedback from the community and the target clinical users of these tools, we identified additional steps to improve the system.

The primary engineering effort has been directed to the following projects:

Interface Improvements

We have begun to look at the creation of a high level wizard as a front end to the processing task. This interface would allow users to go through the steps without directly navigating the slicer modules and can also provide state management that will simplify the visualization efforts.

Modularity and Deployment

The current tutorial has been difficult for some users to implement due to the requirement that the lesion detection module be compiled locally on the user's machine. Non-developers understandably find this to be a difficult requirement, so we are integrating the lesion segmentation code into the Slicer3 loadable module project so that pre-compiled versions of the module are available for users. To implement this, we are following the templates provided by the slicer example modules provided on the nitrc.org website. This infrastructure was created via a supplement to NA-MIC provided by the NITRC project. The project page on nitrc.org is being updated as new features are added to the modules.

Core Implementation Support

During this period we have also worked on optimizing the implementation of the core ITK code. This effort has primarily been accomplished by the MIND group, with interactions as needed with the rest of the NA-MIC community.

In addition, ongoing discussions with the rest of the NA-MIC community are encouraging code sharing among projects through modularization of common processing tasks and development of 'best of breed' routines for lesion detection and quantification. These tools are then embodied as slicer modules for use in other applications such as brain tumor change tracking.

Clinical Component (Bockholt)

During the past year, the MIND team attended MICCAI and participated in the MICCAI MS Lesion Challenge. We collected all data for 5 lupus lesion subjects and publicly shared the dataset on NITRC. We researched and developed a novel morphometric-feature-based approach to lesion segmentation using a Naive Bayes classifier, and released it as a Slicer3 plugin that performs lupus lesion segmentation using the novel method.

Following clinical application of the morphometric-feature-based approach and user feedback, we decided to further enhance the approach. The resulting new LupusLesion clinical application (described in the algorithm enhancements above) has now been tested on a clinical SLE dataset of 20 individuals independent of the training dataset. Results from an ROC curve obtained by applying the method to novel cases show a trade-off between sensitivity and specificity, with the best operating point at a sensitivity of 0.86 and a (1 − specificity) of 0.01.

A new version of the Slicer3 LupusLesion clinical application module that uses the improved morphometric feature method is planned for release on or before the 2009 NA-MIC programming week.

During the past year we have prepared a methods paper for submission and presented our methods work at conferences. We have also prepared a clinical paper summarizing the application of the method to a clinical SLE population. Finally, we conducted a formal dissemination event during the 2008 Society for Neuroscience Annual Meeting, where we provided a hands-on tutorial for using the LupusLesion clinical application. We have a second dissemination event planned for an upcoming annual Neurovascular meeting.

Papers and Presentations:

Scully M., Magnotta V., Gasparovic C., Pelligrimo P., Feis D., Bockholt H.J. 3D Segmentation In The Clinic: A Grand Challenge II at MICCAI 2008 - MS Lesion Segmentation. IJ - 2008 MICCAI Workshop - MS Lesion Segmentation. Available http://hdl.handle.net/10380/1449

H. J. Bockholt, V. A. Magnotta, M. Scully, C. Gasparovic, B. Davis, K. Pohl, R. Whitaker, S. Pieper, C. Roldan, R. Jung, R. Hayek, W. Sibbitt, J. Sharrar, P. Pellegrino, R. Kikinis. A novel automated method for classification of white matter lesions in systemic lupus erythematosus. Presented at the 38th annual meeting of the Society for Neuroscience, Washington, DC, 15 – 19 November 2008

H Jeremy Bockholt, Josef Ling, Mark Scully, Adam Scott, Susan Lane, Vincent Magnotta, Tonya White, Kelvin Lim, Randy Gollub, Vince Calhoun. Real-time Web-scale Image Annotation for Semantic-based Retrieval of Neuropsychiatric Research Images. Presented at the 14th Annual Meeting of the Organization for Human Brain Mapping, Melbourne, Australia, 15 – 19 June, 2008.

H Jeremy Bockholt, Sumner Williams, Mark Scully, Vincent Magnotta, Randy Gollub, John Lauriello, Kelvin Lim, Tonya White, Rex Jung, Charles Schulz, Nancy Andreasen, Vince Calhoun. The MIND Clinical Imaging Consortium as an application for novel comprehensive quality assurance procedures in a multi-site heterogeneous clinical research study. Presented at the 14th Annual Meeting of the Organization for Human Brain Mapping, Melbourne, Australia, 15 – 19 June, 2008.

Additional Information

Additional Information for this project is available here on the NA-MIC wiki.

Roadmap Project: Cortical Thickness for Autism (Hazlett)

Overview (Hazlett)

A primary goal of the UNC DBP is to examine changes in cortical thickness in children with autism compared to typical controls. We want to examine group differences in both local and regional cortical thickness, and would also like to examine longitudinal changes in the cortex from ages 2-4 years. To accomplish this goal, this project will create an end-to-end application within Slicer3 allowing individual and group analysis of regional and local cortical thickness. This workflow will then be applied to our study data (already collected).

We developed a specific project for our NA-MIC DBP focused on the goal of obtaining regional and local cortical thickness measurements on our pediatric dataset. A secondary goal is to incorporate this measurement module into the NA-MIC toolkit application, Slicer3. Lastly, the module will be compared to other existing cortical thickness methods (e.g., FreeSurfer).


Algorithm and Engineering

The basic steps of the cortical thickness application are: first, tissue segmentation to separate white and gray matter regions; second, cortical thickness measurement; third, cortical correspondence to compare measurements across subjects; and finally, statistical analysis to locally compute group differences. As part of this project, we will create end-to-end applications allowing individual and group analysis of regional and local cortical thickness. The regional and local cortical thickness analyses are based on separate pipelines, and work in these areas is described below.

REGIONAL: A Slicer3 high-level module performing individual regional cortical thickness analysis was completed this past year: ARCTIC (Automatic Regional Cortical ThICkness). The default pipeline entails first, probabilistic atlas-based automatic tissue segmentation; second, deformable registration of an atlas parcellation; and third, asymmetric cortical thickness measurement. The user can skip some of these steps if the related images, such as tissue segmentation label maps or parcellation maps, are provided. The application provides not only lobar cortical thickness measurements but also tissue segmentation volume information stored in spreadsheets. Moreover, a quick quality control can be performed for each step within Slicer3 using an MRML scene displaying output volumes and surfaces. ARCTIC's first release is publicly available on NITRC (http://www.nitrc.org/projects/arctic/). Documentation has been created for the tool on the NAMIC wiki pages, including two tutorials, which won first prize at the NA-MIC 2009 annual meeting tutorial contest. The pediatric and adult brain atlases used by ARCTIC are also available on MIDAS (http://www.insight-journal.org/midas/collection/view/34). ARCTIC is still in development to improve its integration within Slicer3; by the end of this project year, we expect ARCTIC to be cross-platform, with Windows and Mac executables available on NITRC. Moreover, ARCTIC's source code will soon be available to the community via an SVN repository.


LOCAL: Regarding local cortical thickness analysis, improvement has been made at the pipeline level, as this mesh-based method requires more steps than the regional one. The main components of this pipeline are (1) tissue segmentation, (2) atlas-based ROI segmentation, (3) white matter map creation and post-processing, (4) genus-zero white matter image and surface creation, (5) cortical thickness computation, (6) white matter mesh inflation, (7) sulcal depth computation, and (8) cortical correspondence on inflated meshes using a particle system. Several modules already exist within Slicer3 to perform the first two steps, and many applications covering the intermediate steps have been developed and integrated within Slicer3. The final cortical correspondence module is currently being tested. We expect the whole mesh-based local cortical thickness analysis pipeline to be fully working by the end of the current project year. Work will then focus on integrating this high-level module within Slicer3.

Clinical Component (Hazlett)

Regarding clinical application, ARCTIC has been tested on a pediatric dataset, and we plan to compare it with state-of-the-art applications (e.g., FreeSurfer). Results are available on our project DBP page. A statistical study based on Pearson's correlation is currently in progress using 40+ cases from FreeSurfer's publicly available tutorial dataset.
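
As a sketch of the kind of correlation analysis planned here (not the actual study data), Pearson's r between paired regional thickness values produced by two tools could be computed as follows; the thickness numbers below are invented purely for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired lobar thickness measurements (mm) for the same
# subjects from the two tools; values are illustrative only.
arctic = [2.71, 2.58, 2.93, 3.10, 2.64, 2.88]
freesurfer = [2.65, 2.52, 2.99, 3.02, 2.70, 2.91]
r = pearson_r(arctic, freesurfer)
print(round(r, 3))
```

A high r over many regions and subjects would indicate agreement between the two measurement pipelines.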

Once we have demonstrated adequate validity of the ARCTIC tool, and have completed work on the local cortical thickness pipeline (described above) we plan to conduct group based comparisons (autism vs. typical) examining regional and local cortical thickness differences in our pediatric sample.

During the past year we have prepared a paper (in print) and presented our methods work. See below.

Papers and Presentations:

Oguz I., Niethammer M., Cates J., Whitaker R., Fletcher T., Vachet C., and Styner M., Cortical Correspondence with Probabilistic Fiber Connectivity, Information Processing in Medical Imaging, IPMI 2009, LNCS, in print.


“Use of the Slicer3 Toolkit to Produce Regional Cortical Thickness Measurement of Pediatric MRI Data.” H.C. Hazlett, C. Vachet, C. Mathieu, M. Styner, J. Piven presented at the 8th Annual International Meeting for Autism Research (IMFAR) Chicago, IL 2009.

Additional Information

Additional Information for this project is available here on the NA-MIC wiki.

Four Infrastructure Topics

Diffusion Image Analysis (Gerig)

Progress

The 2008/2009 year showed significant progress towards refining the DWI tools and applying existing implementations to clinical studies. This progress is best documented by a total of 17 publications since last year's report: 9 in high-impact journals (Neuroimage (3), IEEE TMI (2), MEDIA (1), Schizophr Res (4)), 4 in peer-reviewed conference proceedings (MICCAI (3), ISBI (1)), and 3 in scientific workshops (MMBIA, MICCAI). These publications are excellent indicators that NAMIC tools and methodologies are competitive and get published in medical image analysis journals, and that their application to clinical studies, including validation and testing, also gets published in clinically-oriented journals. The methods fall into two categories: individual-subject processing of DWI to extract fiber bundles of interest in particular patients, and large-scale population-based analysis for group comparison and hypothesis testing. Significant progress in both can be reported for this period. Core 1 partners contributed further developments of methods for image preprocessing such as filtering and artifact removal, improved tractography algorithms, methods for clustering streamlines into meaningful tracts, group-wise analysis via computational anatomy tools, and methods for quantitative analysis of tracts to provide parameters for statistical analysis. Core 2 contributed significantly by providing the computational environment for user-guided, interactive DTI analysis, which requires a complex user interface and sophisticated visualization, and by developing plug-in capabilities for more automated processing modules. Core 3 made increasing use of these tools on data from clinical studies, with close handshaking between the engineers of the Core 3 partners, the methods developers of Core 1, and the engineers of Core 2.
Core 5 organized several training courses, including DTI analysis, where participants could learn about the underlying imaging and image analysis concepts and the use of the Slicer software environment.

The following list summarizes the major new contributions to Diffusion Image Analysis during the reporting period.

  • Fiber Tract Modeling, Clustering and Quantitative Analysis (MIT): Development of population-based analysis of DTI via clustering of fiber tracts for automatic labeling continued and resulted in a recent journal publication. As a new research direction, the group is tackling the challenging problem of joint registration and segmentation of DWI fiber tractography, where tract labels are assigned in an iterative framework using registration of bundles to an atlas. This results in nonlinear joint registration of sets of DWI data into a common coordinate space and, at the same time, automatic labeling of the joint tracts. Quantitative analysis in population studies is based upon the correspondence obtained via the clustering and labeling. The group applied this technique to various clinical datasets and reported results at 3 conferences.
  • Stochastic Tractography (MIT, DBP 2): Stochastic tractography was a major research effort of this group during the reporting period. Initial prototype software was integrated into Slicer 3, which brought significant challenges with respect to user interaction, visualization, and the definition of data structures for subsequent statistical analysis. Advantages of stochastic tractography are clearly shown in areas of crossing fibers, uncertainty, and considerable noise, all situations where conventional deterministic tractography methods would fail. This project is a joint collaboration between Core 1, Core 2 and Core 3, nicely demonstrating the close interaction between method development, engineering, and testing and validation in a clinical environment.
  • Geodesic Tractography Segmentation (Georgia Tech): As an alternative to streamline tractography, this project develops a technique for extracting a minimum-cost curve through the tensor field, resulting in an anchor curve between source and target regions specified by the user (journal publication). As an extension, volumetric fiber segmentation based on active contours, using the anchor curves as initialization, has been developed. This led to a framework for tubular surface segmentation that was presented at two conference workshops and also resulted in a journal publication.
  • DTI processing and statistical tools (Utah 1): This research addresses the important problem of correcting artifacts in DWI. Image distortions due to eddy currents in gradient directions and due to susceptibility artifacts of the EPI acquisition are corrected via a combined scheme that aligns the individual gradient images and calculates a nonlinear transformation between the DWIs and a geometrically correct T2-weighted image. The whole pipeline is written in ITK and has been tested on a large number of datasets. The methodology is in print and will be presented at a peer-reviewed conference (IPMI 2009). This group also continued development of the volumetric white matter connectivity tool, i.e., a method dual to tractography that optimizes a shortest path through the tensor field.
  • Population-based analysis of white matter tracts (Utah 2): This is a complete analysis system that starts with a large set of subjects' DWIs and results in a statistical analysis of selected fiber tracts [1]. Steps include calculation of image features; linear and nonlinear registration into a common, unbiased coordinate system; user-guided selection of tracts of interest in atlas space; mapping of tract geometry back into the individual images to collect subject-specific diffusion information; and finally statistical group analysis of along-tract diffusion information. New in this period are the use of a Core 1 methodology for group-wise registration of a population of images (collaboration with the MIT partner [2]) and the development of a statistical framework for tract analysis based on functional data analysis (FDA). The new methods are described in a conference and a journal publication. The whole system was applied to large studies of our Core 3 partner (PNL Harvard) and to pediatric studies of our affiliated clinical partners at UNC.
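
As a toy illustration of the tract-clustering idea above (a deliberate simplification of the spectral and atlas-based methods actually used), streamlines can be embedded as fixed-length feature vectors and grouped with plain k-means; the bundles, the three-point feature descriptor, and the initialization here are synthetic and chosen only for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "streamlines": two bundles of noisy 3D polylines around
# different centers (a stand-in for tractography output).
def make_bundle(center, n=20):
    t = np.linspace(0, 1, 15)
    base = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1) + center
    return [base + rng.normal(0, 0.05, base.shape) for _ in range(n)]

streamlines = make_bundle(np.array([0.0, 0.0, 0.0])) + make_bundle(np.array([0.0, 0.0, 5.0]))

# Embed each streamline as a fixed-length feature vector: its start,
# middle, and end points concatenated (a simple common descriptor).
feats = np.array([np.concatenate([s[0], s[len(s) // 2], s[-1]]) for s in streamlines])

# Plain k-means (k=2): assign each streamline to its nearest centroid,
# recompute centroids, repeat. Deterministic init: one from each end.
centroids = feats[[0, len(feats) - 1]]
for _ in range(10):
    labels = np.argmin(((feats[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([feats[labels == j].mean(0) for j in range(2)])

print(sorted(set(labels.tolist())))
```

The two synthetic bundles end up in separate clusters; the published methods replace these toy features with pairwise tract-distance or atlas-registration information.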

To summarize, these activities include the whole processing pipeline from data input via NRRD format, preprocessing and correction for artifacts and distortions, several choices for tractography tailored to different needs, and output of results for statistical analysis. The most recent progress of DTI tool development based on the point of view of the DBP 2 partner (Harvard) is summarized at [3], with links to all presently active activities.

Key Investigators

  • BWH: Marek Kubicki, Martha Shenton, Sylvain Bouix, Julien von Siebenthal, Thomas Whitford, Jennifer Fitzsimmons, Doug Terry, Jorge Alverado, Eric Melonakos, Alexandra Golby, Monica Lemmond, Carl-Fredrik Westin.
  • MIT: Lauren O'Donnell, Polina Golland, Tri Ngo
  • Utah I: Tom Fletcher, Ross Whitaker, Ran Tao, Yongsheng Pan
  • Utah II: Casey Goodlett, Sylvain Gouttard, Guido Gerig
  • GA Tech: John Melonakos, Vandana Mohan, Shawn Lankton, Allen Tannenbaum
  • GE: Xiaodong Tao, Jim Miller, Mahnaz Maddah
  • Isomics: Steve Pieper
  • Kitware: Luis Ibanez, Brad Davis
  • UNC: Zhexing Liu, Martin Styner

Additional Information

Additional Information for this topic summarizing internal collaborations is available here on the NA-MIC wiki. Details on methods and algorithms for DWI analysis can be found in the algorithm sections of the respective Core-1 partners [4], the DBP-2 project descriptions [5], and the external and internal NAMIC collaboration pages [6].

Structural Analysis (Tannenbaum)

Progress

Under Structural Analysis, the main topics of research for NAMIC are structural segmentation, registration techniques and shape analysis. These topics are correlated and hence research in one often finds application in another. For example, shape analysis can yield useful priors for segmentation, or segmentation and registration can provide structural correspondences for use in shape analysis and so on.

An overview of selected progress highlights under these broad topics follows:

Segmentation

  • Geodesic Tractography Segmentation: We proposed an image segmentation technique based on augmenting the conformal (or geodesic) active contour framework with directional information. This has been applied successfully to the segmentation of neural fiber bundles such as the Cingulum Bundle. This framework has now been integrated into Slicer and is being tested on a population of brain data sets.
  • Tubular Surface Segmentation: We have proposed a new model for tubular surfaces that transforms the problem of detecting a surface in 3D space into detecting a curve in 4D space. Besides allowing us to impose a "soft" tubular shape prior, this also leads to computational efficiency over conventional surface segmentation approaches. We have also developed the moving-endpoints implementation of this framework, wherein the required input is only a few points in the interior of the structure of interest. This yields the additional advantage that the framework simultaneously returns both the 3D segmentation and the 3D skeleton of the structure, eliminating the need for a priori knowledge of endpoints and an expensive skeletonization step. The framework is applicable to different tubular anatomical structures in the body; we have so far applied it successfully to the Cingulum Bundle and to blood vessels.
  • Local-global Segmentation: We have proposed a novel segmentation approach that combines the advantages of local and global approaches to segmentation by using statistics over regions that are local to each point on the evolving contour. This makes it well suited to applications with contrast differences within the structure of interest, such as blood vessel segmentation, as well as applications like neural fiber bundles, where the diffusion profiles of voxels within the structure are locally similar but vary along the length of the fiber bundle itself.
  • Shape-based segmentation: Standard image-based segmentation approaches perform poorly when there is little or no contrast along the boundaries of different regions. In such cases segmentation is mostly performed manually, using prior knowledge of the shape and relative location of the underlying structures combined with partially discernible boundaries. We have presented an automated approach guided by covariant shape deformations of neighboring structures, which are an additional source of prior knowledge. Captured by a shape atlas, these deformations are transformed into a statistical model using the logistic function. The mapping between atlas and image space, structure boundaries, anatomical labels, and image inhomogeneities are estimated simultaneously within an Expectation-Maximization formulation of the maximum a posteriori (MAP) estimation problem. These results are then fed into an Active Mean Field approach, which views them as priors to a Mean Field approximation with a curve-length prior. We have applied the algorithm successfully to real MRI images, and we have also implemented it in 3D Slicer.
  • Re-Orientation Approach for Segmentation of DW-MRI: This work proposes a methodology to segment tubular fiber bundles from diffusion weighted magnetic resonance images (DW-MRI). Segmentation is simplified by locally reorienting diffusion information based on large-scale fiber bundle geometry. Segmentation is achieved through simple global statistical modeling of diffusion orientation which allows for a convex optimization formulation of the segmentation problem, combining orientation statistics and spatial regularization. The approach compares very favorably with segmentation by full-brain streamline tractography.


Registration

  • Optimal Mass Transport based Registration: We have developed a computationally efficient non-rigid/elastic image registration algorithm based on Optimal Mass Transport theory. We use the Monge-Kantorovich formulation of the Optimal Mass Transport problem and implement the solution proposed by Haker et al., using multi-resolution and multigrid techniques to speed up convergence. We also leverage the computational power of general-purpose graphics processing units available on standard desktop machines to exploit the inherent parallelism in our algorithm. We extend the work of Haker et al., who compute the optimal warp from a first-order partial differential equation, an improvement over earlier higher-order methods and those based on linear programming, and further implement the algorithm using a coarse-to-fine strategy, resulting in a phenomenal improvement in convergence. We have applied it successfully to the registration of 3D brain MRI datasets (preoperative and intra-operative), and are currently extending it to the non-rigid registration of baseline DWI to brain MRI data.
  • Atlas Renormalization for Image Segmentation: Atlas-based approaches have demonstrated the ability to automatically identify detailed brain structures from 3-D magnetic resonance (MR) brain images. Unfortunately, the accuracy of this type of method often degrades when processing data acquired on a different scanner platform or pulse sequence than the data used for atlas training. In this work, we improve the performance of an atlas-based whole brain segmentation method by introducing an intensity renormalization procedure that automatically adjusts the prior atlas intensity model to new input data. Validation using manually labeled test datasets has shown that the new procedure improves the segmentation accuracy (as measured by the Dice coefficient) by 10% or more for several structures, including the hippocampus, amygdala, caudate, and pallidum. The results verify that this new procedure reduces the sensitivity of the whole brain segmentation method to changes in scanner platform and improves its accuracy and robustness, which can thus facilitate multicenter or multisite neuroanatomical imaging studies.
  • Point-set Rigid Registration: We have proposed a particle filtering scheme for the registration of 2D and 3D point sets undergoing a rigid body transformation. Moreover, we incorporate stochastic dynamics to model the uncertainty of the registration process. We treat motion as a local variation in the pose parameters obtained by running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence as well as provide a dynamical model of uncertainty. In contrast with other techniques, our approach requires no annealing schedule, which reduces computational complexity and maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches to point-set registration, we make no geometric assumptions about the two data sets. We applied the algorithm to different alignments of point clouds, and it successfully found the correct optimal transformation that aligns two given point clouds despite differing geometry in the local neighborhoods of points within their respective sets.
  • Regularization for Optimal Mass Transport: To extend the flexibility of the existing OMT algorithm, we added a regularization term to the functional being minimized. This term controls the tradeoff between how well two images match after registration versus how warped the transformation map can become. A weighted sum of squared differences is used to penalize having to move mass over long distances; this addition also helps to keep the transformation physically accurate by reducing the likelihood that the transformation grid will fold over itself and keeping the grid smooth.
  • Registration of DW-MRI to structural MRI: Optimal Mass Transport was applied to the problem of correcting EPI distortion in DW-MRI. A mask for white matter in DW-MRI was registered to the white matter mask extracted from the structural MRI of the same patient. Prior to registration, it is important to normalize intensities in the two masks; this was done by dividing the images into regions and uniformly normalizing over each region to ensure that the sums of the intensities are equal. Then, once a transformation between the white matter masks was calculated, this transformation was applied to the original DW-MRI image.
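
For reference, the Monge-Kantorovich formulation used in the Optimal Mass Transport registration item above can be stated compactly in its standard L² form (here μ₀ and μ₁ are the source and target image densities on the domain Ω, and u is the mass-preserving warp; this is the standard textbook statement, not a transcription of the group's implementation):

```latex
\min_{u \in \mathrm{MP}} \; M(u) \;=\; \int_{\Omega} \left\lVert u(x) - x \right\rVert^{2} \, \mu_{0}(x)\, \mathrm{d}x,
\qquad
\mathrm{MP} \;=\; \left\{\, u \;:\; \det\!\left(\nabla u(x)\right)\, \mu_{1}\!\left(u(x)\right) \;=\; \mu_{0}(x) \,\right\}.
```

The constraint set MP contains all mass-preserving maps; the first-order PDE approach evolves u within this set toward the minimizer, which is the gradient of a convex potential. The regularized variant adds a penalty term to M(u) that trades off image match against the smoothness of the transformation grid.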
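
As context for the point-set registration item above, the closed-form least-squares pose step that standard ICP iterates (and around which the particle-filter approach models uncertainty) can be sketched as follows; the function name and the synthetic data are ours, not from the NAMIC codebase:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~ Q[i]
    for paired point sets: the closed-form (Kabsch/Procrustes) step
    performed inside each ICP iteration."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct the sign so R is a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic test: rotate and translate a random cloud, then recover the pose.
rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = best_rigid_transform(P, Q)
err = np.abs(P @ R.T + t - Q).max()
print(err)
```

In full ICP the correspondences are unknown, so nearest-neighbor matching and this pose step alternate; the particle-filter scheme described above perturbs the pose between such iterations to model registration uncertainty.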

Shape Analysis

  • Shape Analysis Framework using SPHARM-PDM: We have provided an analysis framework for objects with spherical topology, described by sampled spherical harmonics (SPHARM-PDM). The input is a set of binary segmentations of a single brain structure, such as the hippocampus or caudate. These segmentations are first processed to fill any interior holes. The processed binary segmentations are converted to surface meshes, and a spherical parametrization is computed for the surface meshes using an area-preserving, distortion-minimizing spherical mapping. The SPHARM description is computed from the mesh and its spherical parametrization. Using the first-order ellipsoid from the spherical harmonic coefficients, the spherical parametrizations are aligned to establish correspondence across all surfaces. The SPHARM description is then sampled into a triangulated surface (SPHARM-PDM) via icosahedron subdivision of the spherical parametrization. These SPHARM-PDM surfaces are all spatially aligned using rigid Procrustes alignment. Group differences between groups of surfaces are computed for simple group-wise comparison using the standard robust Hotelling T^2 two-sample metric. The tool further provides a new statistical method that allows testing for group differences while controlling for subject covariates, via permutation testing of GLM-based MANCOVA metrics. Statistical p-values, both raw and corrected for multiple comparisons, result in significance maps. We provide additional visualization of the group tests via mean difference magnitude and vector maps, maps of the group covariance information, local correlation, and z-scores. We have a stable implementation, and current development focuses on integrating the current command-line tools into Slicer via the Slicer execution model and XNAT integration. A first Slicer module prototype has been developed without XNAT integration.
  • Population studies using Tubular Surface Model: We have proposed a tubular shape model for the Cingulum Bundle which models a tubular surface as a center-line coupled with a radius function at every point along the center-line. This model shows potential for population studies on the Cingulum Bundle, which is believed to be involved in schizophrenia, since it provides a natural way of sampling the structure to build a feature representation of it. We are currently segmenting the Cingulum Bundle from a population of brain data sets, towards performing this population analysis using the Potts model.
  • Automatic Outlining of Sulci on a Brain Surface: We present a method to automatically extract certain key features on a surface. We apply this technique to outline sulci on the cortical surface of a brain, where the data is taken to be a 3D triangulated mesh formed from the segmentation of MR image slices. The problem is posed as an energy minimization that penalizes the arc-length of the segmenting curve, weighted by a conformal factor involving the mean curvature of the underlying surface. The computation is made practical for dense meshes via the use of a sparse-field method to track the level set interfaces and regularized least-squares estimation of geometric quantities.
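
The Hotelling T² two-sample metric used for the group-wise surface comparisons above can be sketched for a single surface point; the 3D "displacement" samples below are synthetic and purely illustrative:

```python
import numpy as np

def hotelling_t2(X, Y):
    """Two-sample Hotelling T^2 statistic for d-dimensional observations
    (rows of X and Y), using the pooled sample covariance."""
    n, m = len(X), len(Y)
    dx = X.mean(0) - Y.mean(0)
    Sx = np.cov(X, rowvar=False)
    Sy = np.cov(Y, rowvar=False)
    S = ((n - 1) * Sx + (m - 1) * Sy) / (n + m - 2)   # pooled covariance
    return (n * m) / (n + m) * dx @ np.linalg.solve(S, dx)

rng = np.random.default_rng(2)
# Hypothetical 3D surface-point coordinates for two groups at one vertex;
# group_b is shifted along x to simulate a local group difference.
group_a = rng.normal(0.0, 0.1, size=(25, 3))
group_b = rng.normal(0.0, 0.1, size=(25, 3)) + np.array([0.3, 0.0, 0.0])
t2 = hotelling_t2(group_a, group_b)
print(t2)
```

In the SPHARM-PDM tool this statistic is computed at every corresponding surface point, and permutation testing converts the resulting map into (corrected) p-values.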

Key Investigators


  • MIT: Polina Golland, Kilian Pohl, Sandy Wells, Eric Grimson, Mert R. Sabuncu
  • UNC: Martin Styner, Ipek Oguz, Nicolas Augier, Marc Niethammer, Beatriz Paniagua
  • Utah: Ross Whitaker, Guido Gerig, Suyash Awate, Tolga Tasdizen, Tom Fletcher, Joshua Cates, Miriah Meyer
  • GaTech: Allen Tannenbaum, John Melonakos, Vandana Mohan, Tauseef ur Rehman, Shawn Lankton, Samuel Dambreville, Yi Gao, Romeil Sandhu, Xavier Le Faucheur, James Malcolm, Ivan Kolosev
  • Isomics: Steve Pieper
  • GE: Bill Lorensen, Jim Miller
  • Kitware: Luis Ibanez, Karthik Krishnan
  • UCLA: Arthur Toga, Michael J. Pan, Jagadeeswaran Rajendiran
  • BWH: Sylvain Bouix, Motoaki Nakamura, Min-Seong Koo, Martha Shenton, Marc Niethammer, Jim Levitt, Yogesh Rathi, Marek Kubicki, Steven Haker

Additional Information

Additional Information for this topic is available here on the NA-MIC wiki.

fMRI Analysis (Golland)

Progress

  • Connectivity Analysis:

One of the major goals in the analysis of fMRI data is the detection of functionally homogeneous networks in the brain. We developed a new method for characterizing functional connectivity patterns from fMRI. In contrast to the seed-based analysis typically employed to identify networks of co-activation, we propose to use clustering to simultaneously estimate the networks and their representative time courses, which effectively replace user-specified seeds. During this year, we validated this method for characterizing functional connectivity patterns from fMRI. To investigate the sensitivity of the analysis to the generative model of the signal, we implemented and compared two distinct algorithms, mixture-model clustering and spectral clustering, in application to this problem. We validated our approach on resting-state fMRI scans of 45 healthy subjects. Our results demonstrate that the detected networks are stable across subjects and across methods. At the same time, we worked with the Harvard DBP to identify relevant clinical data sets in which our approach promises to identify the effects of a disorder. We have started a collaboration to apply the method to a group of schizophrenia patients and normal controls.
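
A minimal sketch of the clustering idea described above (not the NAMIC implementation): after each voxel time course is normalized to zero mean and unit norm, Euclidean k-means groups voxels by temporal correlation, with the cluster centroids playing the role of the representative time courses that replace user-specified seeds. The signals and the initialization below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 120  # number of time points

# Two hypothetical "network" time courses; each voxel follows one of them
# plus noise (a toy stand-in for resting-state fMRI signals).
net = rng.normal(size=(2, T))
labels_true = np.repeat([0, 1], 40)
voxels = net[labels_true] + 0.5 * rng.normal(size=(80, T))

# Zero-mean, unit-norm time courses: Euclidean distance then equals
# 2 - 2*correlation, so k-means clusters by temporal correlation.
Z = voxels - voxels.mean(1, keepdims=True)
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

centroids = Z[[0, -1]]  # deterministic init, no user-specified seeds
for _ in range(10):
    labels = np.argmin(((Z[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([Z[labels == j].mean(0) for j in range(2)])

# Fraction of voxels assigned to their true network (up to label swap).
agreement = max((labels == labels_true).mean(), (labels != labels_true).mean())
print(agreement)
```

The mixture-model and spectral variants studied in the project differ in the assumed generative model of the signal but share this basic structure: estimate networks and their representative time courses jointly.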

  • Distortion Correction for EPI-Based Functional Imaging

We developed and demonstrated a method for correcting the distortions present in echo planar images (EPI) and registering the EPI image to structural MRI scans. Our approach does not require acquiring fieldmaps, modifying EPI acquisition parameters, or having detailed knowledge of the shim system. The technique consists of two steps. First, a classifier is used to segment structural MR into an air/tissue susceptibility model. The resulting tissue map serves as input to a first order perturbation field model to compute a subject-specific fieldmap. The classifier is trained based on MR-CT image pairs, using MR intensities as features and exploiting air segmentation in the CT images to construct labels. Second, a simultaneous shim estimation and registration algorithm is employed to solve for the lower order field perturbations (shim parameters) needed to accurately unwarp and register the EPI data.

Key Investigators

  • MIT: Polina Golland, Danial Lashkari, Archana Venkataraman, Clare Poynton
  • Harvard/BWH: Sylvain Bouix, Marek Kubicki, Carl-Fredrik Westin, Sandy Wells

Additional Information

Additional Information for this topic is available here on the NA-MIC wiki.

NA-MIC Kit Theme (Schroeder)

Summary of Progress

The NAMIC-Kit consists of a framework of advanced computational resources, including libraries, toolkits and applications, as well as the support infrastructure for testing, documenting, and deploying leading-edge medical imaging algorithms and software tools. The framework has been carefully constructed to provide low-level access to libraries and modules for advanced users, plus high-level application access that non-computer professionals can use to address a variety of problems in biomedical computing.

In this fifth year of the NA-MIC project the focus has been on integration. Much of the foundational infrastructure has been established; however, to effectively transition advanced biomedical technology and improve software usability, the various subsystems that compose the NAMIC-Kit have been extended to accommodate advanced algorithmic development and optimize workflow. The activities in this year's efforts can be broadly categorized as follows:

  • Slicer3 and the Software Framework
  • Data integration
  • Software process
  • Software releases

Slicer3 and the Software Framework

One of the major achievements of the past year has been the release of version 3.4 of 3D Slicer in May of 2009. A number of important improvements have been made by the Engineering Core and significant new functionality has been added through other NA-MIC cores and collaborators since the release of version 3.2 in August of 2008. A few notable examples include:

In addition, there have been major extensions to the diffusion imaging tools, registration tools, filters, image guided therapy, and other core changes that enhance the utility and applicability of the software.

Data Integration

One of the keys to effective workflow is the integration of computational tools with data. To this end, XNAT and BatchMake are directly accessible from Slicer3. XNAT, the eXtensible Neuroimaging Archive Toolkit, is an open source software platform designed to facilitate management and exploration of neuroimaging and related data. The XNAT database can now be accessed directly through the Slicer3 file menu, with additional support for data upload and query. BatchMake is a simple, scriptable, cross-platform batch processing tool that now interfaces with XNAT and can be launched from the Slicer3 application. This means that users can interactively configure computational experiments to process data from an XNAT data repository, and then process potentially large collections of data, either locally or distributed across the grid using Condor.

Software Process

One of the challenges facing developers has been the requirement to implement, test and deploy software systems across multiple computing platforms. NAMIC continues to push the state of the art with further development of the CMake, CTest/CDash, and CPack tools for cross-platform development, testing, and packaging, respectively. In particular, this year saw significant advances in the development of the PHP-based CDash server, which now provides sophisticated query/retrieve, notification, and testing-results navigation. The CMake system continues to grow rapidly both in the NAMIC community and external to it, reaching a level of approximately 1,000 downloads per day in early 2009 (this figure does not include the CMake distributions now embedded in Linux distributions such as Debian). Other important additions this year were better support for integration of execution modules into Slicer3, packaging of Slicer3 distributions for more platforms with CPack, and the introduction of GUI (Graphical User Interface) testing with the Squish tool.
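
A minimal example of the cross-platform process these tools support (the project and file names are hypothetical): CMake configures the build, `enable_testing()` and `include(CTest)` register tests and the dashboard targets, and running `ctest -D Experimental` in the build tree builds, runs the tests, and submits the results to the CDash server named in the project's `CTestConfig.cmake`:

```cmake
cmake_minimum_required(VERSION 2.6)
project(ExampleModule)                       # hypothetical project name

enable_testing()
include(CTest)                               # adds Experimental/Nightly/Continuous targets

add_executable(ExampleTest ExampleTest.cxx)  # hypothetical test source
add_test(ExampleRuns ExampleTest)            # a test passes when the executable exits 0
```

The same files drive the build on Windows, Mac, and Linux, which is exactly the cross-platform property the paragraph above describes.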

Software Releases

The NAMIC-Kit can be represented as a pyramid of capabilities, with the base consisting of toolkits and libraries and the apex representing the Slicer3 user application. In between, Slicer modules are stand-alone executables that can be integrated directly into the Slicer3 application, including GUI integration, while workflows are groups of modules combined to implement sophisticated segmentation, registration, and biomedical computing algorithms. In a coordinated NAMIC effort, major releases of these many components were realized over the past year. This includes, but is not limited to:


Key Investigators

The NAMIC Engineering Core has to a great extent realized its goal of engaging a wider biomedical community. This community extends worldwide and has leveraged the efforts of many developers beyond the direct influence of NAMIC, resulting in significant advances at relatively low cost. The senior members of the Core 2 team are the following personnel.

  • Kitware - Will Schroeder (Core 2 PI), Sebastien Barre, Luis Ibanez, Bill Hoffman
  • GE - Jim Miller, Xiaodong Tao
  • Isomics - Steve Pieper, Alex Yarmarkovich, Curt Lisle, Terry Lorber
  • WUSTL - Dan Marcus
  • UCSD - Jeffrey Grethe

Additional Information

Additional Information for this topic is available here on the NA-MIC wiki.

Highlights (Schroeder)

Advanced Algorithms

NAMIC-Kit

Outreach and Technology Transfer

Cores 4-5-6 continue to support, train, and disseminate to the NAMIC community and the broader biomedical computing community.

  • The Slicer community held several workshops and tutorials. In xxx a satellite event was held for the international Organization for Human Brain Mapping at the annual meeting in xxx. The xx workshop on xx hosted xx participants representing xx countries from around the world, xx states within the US and xx different laboratories including xx NIH institutes. In addition, <note how many slicer tutorials were held and where etc>
  • Project Week continues to be a successful NAMIC venue. These semi-annual events are held in June in Boston and in January in Salt Lake City. They are well attended, with approximately 100 participants, of which about a third are outside collaborators. At the last Project Week in Salt Lake City, approximately 51 projects were realized.
  • NAMIC continues to participate in conferences and other technical venues. For example, NAMIC hosted xxx

Tutorial for Autism DBP: Cortical Thickness Measurement

As part of the 2009 NA-MIC All-Hands-Meeting, a "Tutorial Contest" was held in which a panel of judges from across the Cores reviewed submitted entries on the basis of <add criteria here>

The winning entry was the cortical thickness analysis tools for the UNC Autism DBP. It shows the user how to perform an analysis of regional cortical thickness. Two tutorials were included in this entry: 1) the ARCTIC tutorial for automatic analysis, in which the user learns how to load input volumes, run the end-to-end module ARCTIC to generate cortical thickness information, and display output volumes; and 2) the Slicer3 tutorial for step-by-step analysis, in which the user learns how to run the UNC external modules individually within Slicer3 in order to perform a regional cortical thickness analysis.

Impact and Value to Biocomputing (Miller)

NA-MIC impacts Biocomputing through a variety of mechanisms. First, NA-MIC produces scientific results, methodologies, workflows, algorithms, imaging platforms, and software engineering tools and paradigms in an open environment that contributes directly to the body of knowledge available to the field. Second, NA-MIC science and technology enables the entire medical imaging community to build on NA-MIC results, methods, and techniques; to concentrate on new science instead of developing supporting infrastructure; to leverage NA-MIC scientists and engineers to adapt NA-MIC technology to new problem domains; and to leverage NA-MIC infrastructure to distribute their own technology to a larger community.

Impact within the Center

Within the center, NA-MIC has formed a community around its software engineering tools, imaging platforms, algorithms, and clinical workflows. The NA-MIC calendar includes the All Hands Meeting and Winter Project Week, the Spring Algorithm Meeting, the Summer Project Week, Slicer3 Mini-Retreats, Core Site Visits, and weekly telephone conferences. Over the past 18 months, the engineering core has visited each algorithm core site to support the specific infrastructure needs of each group.

The NA-MIC software engineering tools (CMake, CDash, CTest, CPack) have enabled the development and distribution of a cross-platform, nightly tested, end-user application, Slicer3, that is a complex union of novel application code, visualization tools (VTK), imaging libraries (ITK, TEEM), user interface libraries (Tk, KWWidgets), and scripting languages (TCL, Python). The NA-MIC software engineering tools have been essential in the development and distribution of the Slicer3 imaging platform to the NA-MIC community.

NA-MIC's end-user application, Slicer3, supports the research within NA-MIC by providing a base application for visualization, image analysis, and data management. Slicer3 supports multiplanar reformat, oblique reformat, surface and volume rendering, comparison viewers, tracked cursors, and multiple image layer blending. Slicer3 can communicate with an XNAT database to download data and upload results. Slicer3 provides a multi-layer plugin mechanism that allows researchers to quickly and easily integrate and distribute their technology with Slicer3. Plugins can be authored as separate executables, shared libraries, Python scripts, or full first-class Slicer3 modules. These plugins can be distributed with Slicer3 or on a site maintained by the researcher (for instance, on the Neuroimaging Informatics Tools and Resources Clearinghouse). Slicer3 is available to all center participants and the external community through its source code repository, official binary releases, and unofficial nightly binary snapshots. There are 15 training modules on the Slicer3 User Training 101 webpage to educate Slicer3 users on basic image review, using advanced modules, and integrating new technology into Slicer3.
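The "separate executable" plugin style works by self-description: when Slicer3 invokes a command-line module with a special flag, the module prints an XML description of its parameters, from which Slicer3 can auto-generate a GUI panel. The sketch below illustrates this pattern with an invented module name and parameter set; it is not an actual NA-MIC module, and the XML schema shown is a simplified stand-in for the real execution-model schema.

```python
# Sketch of a Slicer3-style command-line plugin module. The module name,
# category, and parameters are hypothetical; the XML layout is simplified.
import sys

MODULE_XML = """<?xml version="1.0" encoding="utf-8"?>
<executable>
  <category>Filtering</category>
  <title>Example Threshold</title>
  <description>Thresholds an image (illustrative only).</description>
  <parameters>
    <label>Settings</label>
    <integer>
      <name>threshold</name>
      <longflag>threshold</longflag>
      <default>100</default>
    </integer>
  </parameters>
</executable>"""

def main(argv):
    if "--xml" in argv:
        # The host application reads this to build the module's GUI panel.
        print(MODULE_XML)
        return 0
    # ... actual image processing would go here ...
    return 0

if __name__ == "__main__":
    main(sys.argv[1:])
```

Because the module is an ordinary executable, the same binary runs stand-alone in a shell, inside a batch system, or inside the Slicer3 GUI without modification, which is the main design benefit of this plugin layer.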

NA-MIC drives the development of platforms and algorithms through the needs and research of its DBPs. Each DBP has selected specific workflows and roadmaps as focal points for development with a goal of providing the community with complete end-to-end solutions using NA-MIC tools. The current roadmap projects are Stochastic Tractography for VCSF, Prostate Biopsy Needle Positioning Robot Integration, Brain Lesion Analysis in Neuropsychiatric Systemic Lupus Erythematosus, and Cortical Thickness for Autism. For each roadmap project, the software tools, exemplar data, and a tutorial are provided to the community to allow others to reproduce the results and apply the workflows in their own research programs. Along with the four roadmap tutorials, five other tutorials were presented at the 2009 Tutorial Contest held at the NA-MIC All Hands Meeting in January 2009.

NA-MIC algorithms are designed and used to address specific needs of the DBPs. Multiple solution paths are explored and compared within NA-MIC, resulting in recommendations to the field. For example, in 2008 and 2009, eight NA-MIC tractography algorithms were evaluated. At the All Hands Meeting in 2008, a distributed group of researchers reported on a qualitative study of the tractography methods. At the All Hands Meeting in 2009, the same group reported back on quantitative measures of sensitivity and specificity. The NA-MIC algorithm groups collaborate on a broad spectrum of methods for structural image analysis, diffusion image analysis, and functional image analysis, and orchestrate the solutions to the DBP workflows and roadmaps. These efforts have led to fundamental advancements in shape representation, shape analysis, and groupwise registration; in diffusion estimation, segmentation, and quantification; and in functional estimation, distortion correction, and clustering.

Impact within NIH Funded Research

Within NIH funded research, NA-MIC is the NCBC collaborating center for four R01s: Automated FE Mesh Development, Measuring Alcohol and Stress Interactions with Structural and Perfusion MRI, An Integrated System for Image-Guided Radiofrequency Ablation of Liver Tumors, and Development and Dissemination of Robust Brain MRI Measurement Tools. Several other proposals have been submitted and are under evaluation for the "Collaborations with NCBC PAR" as well as for other NIH calls.

NA-MIC also collaborates on the Slicer3 platform with the NIH funded Neuroimage Analysis Center and the National Center for Image-Guided Therapy. The NIH funded "BRAINS Morphology and Image Analysis" project is also leveraging NA-MIC and Slicer3 technology. A collaboration with the Simbios NCBC is evaluating NA-MIC tools for model generation from diagnostic images. NA-MIC collaborates with the NIH funded Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC) on distribution of Slicer3 plugin modules. A Slicer3 training session was held at NCI in August of 2008. Slicer3 is listed as one of the DICOM viewers on the National Biomedical Imaging Archive at NCI.

National and International Impact

NA-MIC events and tools garner national and international interest. Over 100 researchers participated in the NA-MIC All Hands Meeting and Winter Project Week in January 2009. Many of these participants were from outside of NA-MIC, attending the meetings to gain access to the NA-MIC tools and researchers. These external researchers are contributing ideas and technology back into NA-MIC. Two of the break out sessions at the Winter Project Week were organized by researchers from outside of NA-MIC. The Project Week in June of 2009 is being expanded to be a joint event for NA-MIC, the Neuroimage Analysis Center, the National Center for Image-Guided Therapy, the Harvard Catalyst, and CIMIT.

Components of the NA-MIC kit are used globally. The software engineering tools of CMake, CDash and CTest are used by many open source projects and commercial applications. For example, the K Desktop Environment (KDE) for Linux and Unix workstations uses CMake and CTest. KDE is one of the largest open source projects in the world. Many open source projects and commercial products are benefiting from the NA-MIC related contributions to ITK and VTK. Slicer3 was downloaded 3300 times during the current reporting period. Slicer3 is also being used as an image analysis platform in several fields outside of medical image analysis, in particular, biological image analysis, astronomy, and industrial inspection.

NA-MIC science is recognized by the medical imaging community. Nearly 150 NA-MIC related publications are listed on PubMed. Many of these publications are in the most prestigious journals and conferences in the field. Overall, there are 269 publications acknowledging NA-MIC support. Portions of the DBP workflows and roadmaps are already being utilized by researchers in the broader community and in the development of commercial products.

NA-MIC sponsored several events to promote NA-MIC tools and methodologies. In 2008 alone, NA-MIC hosted 12 workshops and training sessions at 12 venues, including training sessions at NCI, RSNA, and MICCAI. These workshops and tutorials were individually targeted to the specific needs and interests of clinicians, biomedical engineers, or algorithm developers. Two hundred and fifty clinical, biomedical, and algorithm researchers attended these events.

Timeline (Ross)

<The table needs to be updated>


This section of the report gives the milestones for years 1 through 5 that are associated with the timelines in the original proposal. We have organized the milestones by core. For each milestone we have indicated the proposed year of completion and a very brief description of the current status. In some cases the milestones include ongoing work, and we have tried to indicate that in the status. We have also included tables that list any significant changes to the proposed timelines. On the wiki page, we have links to the notes from the various PIs that give more details on their progress and the status of the milestones.

These tables demonstrate that the project is, on the whole, proceeding according to the originally planned schedule.


Core 1: Algorithms

Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
MIT 1 Shape-based segmentation
MIT 1.1 Methods to learn shape representations Year 2 Completed
MIT 1.2 Shape in atlas-driven segmentation Year 4 Completed
MIT 1.3 Validate and refine approach Year 5 Completed
MIT 2 Shape analysis
MIT 2.1 Methods to compute statistics of shapes Year 4 Completed
MIT 2.3 Validation of shape methods on application data Year 5 Completed, refinements ongoing
MIT 3 Analysis of DTI data
MIT 3.1 Fiber geometry Year 3 Completed
MIT 3.2 Fiber statistics Year 5 Completed, new developments ongoing
MIT 3.3 Validation on real data Year 5 Completed, refinements ongoing
Utah 1 Processing of DTI data
Utah 1.1 Filtering of DTI Year 2 Completed
Utah 1.2 Quantitative analysis of DTI Year 3 Completed
Utah 1.3 Segmentation of cortex/WM Year 3 Completed
Utah 1.4 Segmentation analysis of white matter tracts Year 3 Completed
Utah 1.5 Joint analysis of DTI and functional data Year 5 Ongoing
Utah 2 Nonparametric Shape Analysis Year 5 Completed
Utah 2.1 Framework in place Year 3 Complete
Utah 2.2 Demonstration on shape of neuroanatomy (from Core 3) Year 4 Complete
Utah 2.3 Development for multiobject complexes Year 4 Complete
Utah 2.4 Demonstration of NP shape representations on clinical hypotheses from Core 3 Year 5 Complete
Utah 2.6 Integration into NAMIC-kit Year 5 In progress
Utah 2.7 Shape regression Year 5 Complete, validation ongoing.
UNC 1 Statistical shape analysis
UNC 1.1 Comparative anal. of shape anal. schemes Year 2 Completed
UNC 1.3 Statistical shape analysis incl. patient variable Year 5 Complete, extensions ongoing.
UNC 2 Structural analysis of DW-MRI
UNC 2.1 DTI tractography tools Year 4 Completed
UNC 2.2 Geometric characterization of fiber tracts Year 5 Completed
UNC 2.3 Quant. anal. of diffusion along fiber tracts Year 5 Completed.
GaTech 1.1 ITK Implementation of PDEs Year 2 Completed
GaTech 1.1 Applications to Core 3 data Year 4 Completed
GaTech 1.2 New statistic models Year 4 Completed
GaTech 1.2 Shape analysis Year 4 Completed
GaTech 2.0 Integration into Slicer Year 4-5 Ongoing
MGH 1 Registration Modified (see AR 2008)
MGH 2 Group DTI Statistics Modified (see AR 2008)
MGH 3 Diffusion Segmentation Modified (see AR 2008)
MGH 4 Group Morphometry Statistics Modified (see AR 2008)
MGH 5 XNAT Desktop Years 4-5
MGH 5.1 Establish requirements for desktop version of XNAT Years 4-5 Complete
MGH 5.2 Develop implementation plan for prototype Years 4-5 Complete
MGH 5.3 Implement prototype version Years 4-5 Complete
MGH 5.4 Implement alpha version Year 5 Complete
MGH 6 XNAT Central Years 4-5
MGH 6.1 Deploy XNAT Central, a public access XNAT host Years 4-5 Complete
MGH 6.2 Coordinate with NAMIC sites to upload project data Years 4-5 Incomplete (ongoing)
MGH 6.3 Continue developing XNAT Central based on feedback from NAMIC sites Years 4-5 Complete, refinement ongoing
MGH 7 NAMIC Kit integration Years 4-5
MGH 7.1 Implement web services to exchange data with Slicer, Batchmake, and other client applications Years 4-5 Complete, testing ongoing
MGH 7.2 Add XNAT Desktop to standard NAMIC kit distribution Year 5-6 Incomplete. Modified

Timeline Modifications

Timeline Modifications

Group Aim Milestone Modification
MGH 7.2 Add XNAT Desktop to standard NAMIC kit distribution Testing is underway and XNAT capabilities will be included in NAMIC at the end of Year 5 or early in Year 6

Core 1 Timeline Notes

Core 2: Engineering

Core 2 Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
GE 1 Define software architecture
GE 1 Object design Yr 1 Completed
GE 1 Identify patterns Yr 3 Patterns for processing scalar and vector images, models, and fiducials complete. Patterns for diffusion-weighted images completed; fMRI ongoing.
GE 1 Create frameworks Yr 3 Frameworks for processing scalar and vector images, models, and fiducials complete. Frameworks for diffusion-weighted images completed; fMRI ongoing.
GE 2 Software engineering process
GE 2 Extreme programming Yr 1-6 On schedule, ongoing
GE 2 Process automation Yr 3 Complete
GE 2 Refactoring Yr 3 Complete
GE 3 Automated quality system
GE 3 DART deployment Yr 2 Complete
GE 3 Persistent testing system Yr 5 Complete (ongoing support)
GE 3 Automatic defect detection Yr 5 Complete (ongoing support, revisions)
Kitware 1 Cross-platform development
Kitware 1 Deploy environment (CMake, CTest) Yr 1 Complete
Kitware 1 DART Integration and testing Yr 1 Complete
Kitware 1 Documentation tools Yr 2 Complete
Kitware 2 Integration tools
Kitware 2 File Formats/IO facilities Yr 2 Complete
Kitware 2 CableSWIG deployment Yr 3 Complete (integration ongoing)
Kitware 2 Establish XML schema Yr 4 Complete
Kitware 3 Technology delivery
Kitware 3 Deploy applications Yr 1 Complete (ongoing)
Kitware 3 Establish plug-in repository Yr 2 Complete
Kitware 3 CPack Yr 4-5 Complete
Isomics 1 NAMIC builds of Slicer Years 2-5 Complete (testing ongoing)
Isomics 1 Schizophrenia and DBP interfaces Years 3-5 Completed
Isomics 2 ITK Integration tools Years 1-3 Completed
Isomics 2 Experiment Control Interfaces Years 2-5 Completed
Isomics 2 fMRI/DTI algorithm support Years 2-5 Completed
Isomics 2 New DBP algorithm support Years 2-6 Ongoing
Isomics 3 Compatible build process Years 1-3 Completed
Isomics 3 Dart Integration Years 1-2 Completed (maintenance ongoing)
Isomics 3 Test scripts for new code Years 2-5 Ongoing
UCSD 1 Grid computing---base Year 1 Completed
UCSD 1 Grid enabled algorithms Year 3 First version (GWiz alpha) available - initial integration with Slicer3 and execution model.
UCSD 1 Testing infrastructure Year 4 Completed (testing ongoing)
UCSD 2 Data grid --- compatibility Year 2 Completed
UCSD 2 Data grid --- Slicer access Year 2 Completed
UCSD 3 Data mediation --- deploy Year 1 Modified (see Annual Report 2008)
UCLA 1 Debabeler functionality Year 1 Modified
UCLA 2 SLIPIE Interpretation (Layer 1) Year 1--Year2 Modified
UCLA 3 SLIPIE Interpretation (Layer 2) Year 1--Year2 Modified
UCLA 3 Developing ITK Modules Year2 Modified
UCLA 4 Integrating SRB (GSI-enabled) Year2 Modified
UCLA 5 Integrating IDA Year2 Modified
UCLA 5 Integrating External Visualization Applications Year2 Modified
UCLA 6 DTI Analysis Year 3-6
UCLA 6.1 Implementation of mechanically-based DTI analysis in ITK Year 4 Complete
UCLA 6.2 Integration of command-line module into Slicer Year 5 Complete
UCLA 6.3 Testing/evaluation of DTI analysis module (pilot study) Year 5-6 Ongoing

Core 2 Timeline Modifications

Group Aim Milestone Modification
Isomics 3 Data mediation Delayed pending integration of databases into NAMIC infrastructure

Core 2 Timeline Notes

Core 3: Driving Biological Problems

The Core 3 projects submitted R01 style proposals, as specified in the RFA, and did not submit timelines.

Core 4: Service

Core 4 Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
Kitware 1 Implement Development Farms
Kitware 1 Deploy platforms Yr 1 Complete
Kitware 1 Communications Yr 1 Complete, ongoing
Kitware 2 Establish software process
Kitware 2 Secure developer database Yr 1 Complete, ongoing
Kitware 2 Collect guidelines Yr 1 Complete
Kitware 2 Manage software submission process Yr 1 Complete
Kitware 2 Configure process tools Yr 1 Complete
Kitware 2 Survey community Yr 1 Complete
Kitware 3 Deploy NAMIC Tools
Kitware 3 Toolkits Yr 1 Complete
Kitware 3 Integration tools Yr 1 Complete
Kitware 3 Applications Yr 1 Complete
Kitware 3 Integrate new computing resources Yr 1 Complete
Kitware 4 Provide support
Kitware 4 Establish support infrastructure Yrs 1--5 On schedule, ongoing
Kitware 4 NAMIC support Yr 1 Complete
Kitware 5 Manage NAMIC Software Releases Yrs 1--5 On schedule, ongoing

Core 4 Timeline Modifications

Group Aim Milestone Modification
Kitware 2-5 Various Refined/modified the sub aims

Core 4 Timeline Notes

Core 5: Training

Core 5 Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
Harvard 1 Formal Training Guidelines
Harvard 1 Functional neuroanatomy Yr 1 Complete
Harvard 1 Clinical correlations Yr 1 Complete
Harvard 2 Mentoring
Harvard 2 Programming workshops Yrs 1-5 On schedule, ongoing
Harvard 2 One-on-one mentoring, Cores 1, 2, 3 Yrs 1-5 On schedule, ongoing
Harvard 3 Collaborative work environment
Harvard 3 Wiki Yr 1 Complete
Harvard 3 Mailing lists Yr 1 Complete
Harvard 3 Regular telephone conferences Yrs 1-5 On schedule, ongoing
Harvard 4 Educational component for tools
Harvard 4 Slicer training modules Yr 2-5 Slicer 2.x tutorials complete; more than 10 Slicer3 tutorials and modules available.
Harvard 5 Demonstrations and hands-on training
Harvard 5 Various workshops and conferences Yrs 1--5 On schedule, ongoing

Core 5 Timeline Modifications

None.

Core 5 Timeline Notes

Core 6: Dissemination

Core 6 Timelines and Milestones

Group Aim Milestone Proposed time of completion Status
Isomics and BWH 1 Create a collaboration methodology for NA-MIC
Isomics and BWH 1.1 Develop a selection process Yr 1 Complete
Isomics and BWH 1.2 Guidelines to govern the collaborations Yr 1-2 Complete
Isomics and BWH 1.3 Provide on-site training Yr 1-6 Complete for current tools (ongoing for tool refinement)
Isomics and BWH 1.4 Develop a web site infrastructure Yr 1 Complete
Isomics and BWH 2 Facilitate communication between NA-MIC developers and wider research community
Isomics and BWH 2.1 Develop materials describing NAMIC technology Yr 1-6 On Schedule
Isomics and BWH 2.2 Participate in scientific meetings Yr 2-6 On Schedule
Isomics and BWH 2.3 Document interactions with external researchers Yr 2-6 On Schedule
Isomics and BWH 2.4 Coordinate publication strategies Yr 3-6 On Schedule
Isomics and BWH 3 Develop a publicly accessible internet resource of data, software, documentation, and publication of new discoveries
Isomics and BWH 3.1 On-line repository of NAMIC related publications and presentations Yr 1-6 On Schedule
Isomics and BWH 3.2 On-line repository of NAMIC tutorial and training material Yr 1-6 On Schedule
Isomics and BWH 3.3 Index and searchable database Yr 1-2 Done
Isomics and BWH 3.4 Automated feedback systems that track software downloads Yr 3 Done

Core 6 Timeline Modifications

Dissemination efforts that were ongoing in Year 5 will be extended into the at-cost extension Year 6. The dissemination function is shared between Isomics and BWH.

Core 6 Timeline Notes

Appendix A Publications (Mastrogiacomo)

A list should be mined from the publications database and attached here in MS Word format.

Lee K, Yoshida T, Kubicki M, Bouix S, Westin C, Kindlemann G, Niznikiewicz M, Cohen A, McCarley R, Shenton ME. Increased diffusivity in superior temporal gyrus in patients with schizophrenia: A diffusion tensor imaging study. Schizophr Res 2009;108:33-40.

Appendix B EAB Report and Response (Kapur)

EAB Report

Response to EAB Report