2007 APR NIH Questions and Answers

 Back to 2007_Progress_Report

2007 NA-MIC APR Q&A (MS Word) - this is the final document sent to the NIH.


In a letter from Grace Peng, dated July 31, 2007, the center team asks the following questions:




The weakest and probably most difficult parts of the NA-MIC effort are validation and comparison across algorithms. The validation that is being performed needs to be more systematic and coordinated, like the tractography validation effort. Perhaps methods that engage the user community could be tried. (Ross Whitaker)


The question of validation has been an important and challenging issue within the field of medical image analysis in general, and the image analysis work in NA-MIC is no exception. There has been comparison and validation work in several areas of NA-MIC. For instance, the tractography work and associated publications [1-13] have included comparisons of conventional tractography with stochastic methods [14] and Hamilton-Jacobi formulations [11, 12]. Corouge et al. examine different methods of finding fiber correspondences [13] in order to compute statistics across fibers. Indeed, such comparisons are a necessary requirement for publishing new methodologies. Likewise, in the shape analysis work, Styner et al. [15] have compared alternative methods of parameterizing groups of shapes, Cates et al. [16] have included comparisons of parametric and nonparametric shape correspondences, and Nain et al. [17] have systematically quantified improvements in shape parameterization and analysis with spherical wavelets.

In addition to algorithm validation, there is software validation, which is being accomplished in several ways. The first is a systematic engineering process that focuses on continuous and/or nightly builds, careful tracking of contributions and changes to code, and regular regression testing; our software development environment and process are world class. The second is interaction with biological/medical collaborators within and outside NA-MIC.
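As a concrete illustration of the regression-testing part of this process, the following is a minimal sketch in Python; the algorithm, data, and tolerance are hypothetical stand-ins, but the pattern (compare the current build's output against a baseline archived from a vetted build, and fail if it drifts) is the essence of such nightly tests.

```python
# Minimal sketch (hypothetical algorithm, data, and tolerance) of the kind of
# regression test run as part of a nightly build: the output of an algorithm
# is compared voxel-by-voxel against a baseline produced by a previously
# vetted build, and the test fails if the result drifts.
import unittest
import numpy as np


def run_segmentation(volume):
    # Stand-in for the algorithm under test (here, a fixed threshold).
    return (volume > 0.5).astype(np.uint8)


def load_baseline():
    # In a real nightly test this would load an archived output, e.g.
    # np.load("baseline.npy"); here we synthesize it so the sketch runs.
    volume = np.random.default_rng(42).random((32, 32, 32))
    return volume, (volume > 0.5).astype(np.uint8)


class SegmentationRegressionTest(unittest.TestCase):
    def test_output_matches_baseline(self):
        volume, baseline = load_baseline()
        result = run_segmentation(volume)
        n_diff = int(np.count_nonzero(result != baseline))
        # Allow a handful of boundary voxels to differ between builds.
        self.assertLessEqual(n_diff, 10, f"{n_diff} voxels differ from baseline")


if __name__ == "__main__":
    unittest.main()
```

Because such tests run with every nightly build, a change that silently alters an algorithm's output is flagged within a day rather than discovered later by a collaborator.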

We believe we can do better, and the comments from NIH on our 2006 report in this regard have provided us with some good incentives and directions. One strategy is to move this validation to a level that is more systematic and benefits from greater exposure. This is the case with our project weeks, where we have encouraged collaborators to bring their data and experiment with algorithms in direct contact with the algorithm developers. This increased level of attention is also evident in the upcoming "Tractography Measures Conference" [18]. At this workshop we will have our biological collaborators, with data already preprocessed and available on the Wiki, a set of Core 1 scientists with algorithms and results, and an agreement on a set of measures for comparing different methods for the analysis of DTI. In a recent planning meeting (phone conference) we agreed to invite a selection of prominent non-NA-MIC participants, who we think can come to the table with very useful and competitive DTI analysis methods. We believe this workshop, and the philosophy behind it, sets a new standard in comparing and validating algorithms.
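To give a flavor of what such a measure can look like, the sketch below computes the mean fractional anisotropy sampled along a reconstructed fiber tract. It is an illustration only, not one of the measures agreed upon for the workshop, and the tract, FA volume, and voxel spacing in it are synthetic.

```python
# Illustrative only: one example of the kind of scalar measure on which
# different tractography methods could be compared, namely the mean
# fractional anisotropy (FA) sampled along a reconstructed fiber tract.
import numpy as np


def mean_fa_along_tract(tract_points_mm, fa_volume, spacing_mm):
    """Sample an FA volume at each tract point (nearest voxel) and average."""
    indices = np.round(np.asarray(tract_points_mm) / np.asarray(spacing_mm)).astype(int)
    indices = np.clip(indices, 0, np.array(fa_volume.shape) - 1)
    samples = fa_volume[indices[:, 0], indices[:, 1], indices[:, 2]]
    return float(samples.mean())


# Toy usage: a synthetic FA volume and a straight "tract" of 100 points (in mm).
fa = np.random.default_rng(0).uniform(0.2, 0.9, size=(64, 64, 64))
tract = np.stack([np.linspace(10, 50, 100)] * 3, axis=1)
print(mean_fa_along_tract(tract, fa, spacing_mm=(1.0, 1.0, 1.0)))
```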

These efforts are informed by a great deal of NA-MIC Core 1 experience in image processing and computer vision, and this experience suggests that we should be cautious. For some years, for instance, image denoising algorithms, and in particular wavelet-shrinkage-based algorithms, have been compared on a standard set of photographs, and the literature shows a progression of wavelet-shrinkage strategies that produce progressively better RMS errors [19]. While this progress has been impressive, it has produced a series of algorithms and parameter settings that are particularly well suited to those data sets and do not necessarily generalize to images that are not well represented in that data [20]. Thus, close attention to comparison and validation against particular metrics and data sets tends to skew algorithms toward those measures. Another example is the computer vision problem of object recognition. Databases of test images containing classes of common objects, such as toys and automobiles, of different types, in somewhat natural surroundings and with various degrees of occlusion and lighting, are available [21] for training and testing object recognition algorithms. As with image denoising, the field has benefited significantly, but some have argued that these databases and the metrics used to analyze them have focused researchers on a particular class of algorithms, typically statistical classifiers based on large numbers of low-level features. Because the field generally requires new algorithms to be compared against these databases, it can be difficult for researchers to move beyond the family of techniques that have proven effective in this regard. Thus, insistence on specific databases of images and imaging problems can, in some cases, stifle creativity and discourage researchers from looking at problems from different points of view.
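For reference, the RMS error used as the figure of merit in these denoising benchmarks is simply the root-mean-square difference between the restored image and the clean original, as in the following sketch with synthetic stand-in images.

```python
# The RMS-error figure of merit used in the denoising benchmarks discussed
# above, shown on synthetic stand-in images.
import numpy as np


def rms_error(estimate, ground_truth):
    """Root-mean-square error between a restored image and the clean original."""
    diff = np.asarray(estimate, dtype=float) - np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))


rng = np.random.default_rng(1)
clean = rng.uniform(0, 255, size=(256, 256))
noisy = clean + rng.normal(0, 10, size=clean.shape)
print(rms_error(noisy, clean))  # roughly the noise standard deviation (about 10)
```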

We believe comparisons of different algorithms cannot rely solely on databases of standard images; algorithms must also be evaluated by their direct impact on clinical problems. For example, the upcoming "Tractography Measures Conference" will include extensive participation from our collaborators at Brigham and Women's Hospital (a DBP) who are applying DTI analysis to clinical problems. In general, the DBPs and the associated R01s, along with an extensive list of collaborators at each site who are not formally part of NA-MIC, will provide a richer context for validating and comparing algorithms. To summarize, NA-MIC researchers have a track record of validating and comparing algorithms. However, we are extending our methods and our goals for validating and comparing algorithms to take advantage of the unique set of partners, clinical problems, and software infrastructure available in NA-MIC. At the same time, we are careful to avoid rigid, prescriptive metrics for comparison and maintain that the best evaluation of image analysis algorithms lies in their ability to be used by a wide range of researchers to produce new clinical or scientific results.

References

[1] J.H. Gilmore, W. Lin, I. Corouge, Y. Vetsa, K. Sampath, J.K. Smith, C. Kang, H. Gu, R. Hamer, J. Lieberman, G. Gerig. Early Postnatal Development of Corpus Callosum and Corticospinal White Matter Assessed with Quantitative Tractography. American Journal of Neuroradiology (AJNR), in press.

[2] J. Melonakos, E. Pichon, S. Angenent, A. Tannenbaum. Finsler Active Contours. IEEE PAMI. 2007. In Press.

[3] P.T. Fletcher, S. Joshi. Riemannian Geometry for the Statistical Analysis of Diffusion Tensor Data. Signal Processing, 2006.

[4] L. J. O'Donnell, C-F. Westin, A. J. Golby. Tract-Based Morphometry. MICCAI 2007. In Press.

[5] U. Ziyan, M.R. Sabuncu, L. O'Donnell, C-F. Westin. Fiber Bundle-based Nonlinear Registration of Diffusion MR Images. MICCAI 2007. In Press.

[6] C. Goodlett, P. T. Fletcher, W. Lin, and G. Gerig, "Quantification of measurement error in DTI: Theoretical predictions and validation", in print MICCAI 2007, to appear Nov. 2007

[7] M. Maddah, W. M. Wells, S. K. Warfield, C.-F. Westin, and W. E. L. Grimson, Probabilistic Clustering and Quantitative Analysis of White Matter Fiber Tracts, IPMI 2007, Netherlands.

[8] C. Goodlett, P.T. Fletcher, W. Lin, and G. Gerig. Noise-induced bias in low-direction diffusion tensor MRI: Replication of Monte-Carlo simulation with in-vivo scans. Accepted ISMRM 2007.

[9] J. Melonakos, V. Mohan, A. Tannenbaum. Finsler Tractography for White Matter Connectivity Analysis of Cingulum Bundle. MICCAI'07. In Press.

[10] Y. Rathi, J. Malcolm, A. Tannenbaum. Segmenting Images on the Tensor Manifold. CVPR'07. In Press.

[11] P.T. Fletcher, R. Tao, W. -K. Jeong, R.T. Whitaker, "A Volumetric Approach to Quantifying Region-to-Region White Matter Connectivity in Diffusion Tensor MRI," to appear Information Processing in Medical Imaging (IPMI) 2007.

[12] W.-K. Jeong, P.T. Fletcher, R. Tao, R.T. Whitaker, "Interactive Visualization of Volumetric White Matter Connectivity in Diffusion Tensor MRI Using a Parallel-Hardware Hamilton-Jacobi Solver," to appear IEEE Visualization Conference (VIS) 2007.

[13] I. Corouge, P.T. Fletcher, S. Joshi, S. Gouttard, G. Gerig. Fiber tract-oriented statistics for quantitative diffusion tensor MRI analysis. Medical Image Analysis, 10(5):786-798, 2006

[14] http://www.na-mic.org/Wiki/index.php/Algorithm:MIT:DTI_StochasticTractography

[15] M. Styner, S.C. Xu, M. El-Sayed, G. Gerig. Correspondence Evaluation in Local Shape Analysis and Structural Subdivision. IEEE Symposium on Biomedical Imaging (ISBI) 2007, in print.

[16] J. Cates, P.T. Fletcher, M. Styner, M. Shenton, R. Whitaker. Shape Modeling and Analysis with Entropy-Based Particle Systems. IPMI 2007.

[17] D. Nain, S. Haker, A. Bobick, A. Tannenbaum. Multiscale 3D Shape Analysis using Spherical Wavelets. Proc. MICCAI, Oct 26-29 2005, pp. 459-467.

[18] http://www.na-mic.org/Wiki/index.php/SanteFe.Tractography.Conference

[19] J. Portilla, V. Strela, M. Wainwright, and E. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Imag. Proc., 12(11):1338-1351, 2003.

[20] S. Awate, R. Whitaker. Unsupervised, Information-Theoretic, Adaptive Image Filtering for Image Restoration. IEEE Trans. on Pattern Analysis and Machine Intelligence, 28(3), 2005, pp. 364-376.

[21] http://www.pascal-network.org/challenges/VOC/databases.html

To what problems is DTI best applicable? Is it applicable across age ranges? (Ross Whitaker)

NA-MIC, as a "National Center for Biomedical Computing", is focused on algorithms and software for medical image analysis. Our work is driven by medical and biological problems defined by our collaborators (e.g. the DBPs) both within and outside of the project, and it is their interests and hypotheses regarding DTI that drive the software development. Thus, the question of the appropriateness of clinical problems, in the short term and long term, is not central to the charter of the center. We are providing our clinical collaborators, and the field as a whole, a set of software tools that will help determine how and where DTI is most useful.

That said, the project is informed about the role of DTI in scientific and clinical practice, and this guides our decisions about the allocation of resources in DTI. The conventional wisdom is that DTI is most applicable to problems that entail the evaluation of white matter fiber tracts. Disorders such as multiple sclerosis are particularly appropriate for using DTI to evaluate the white matter lesions that are the sine qua non of the disorder. Other disorders, such as stroke, Alzheimer's disease, and schizophrenia, also demonstrate abnormalities in white matter, and thus DTI may help to further characterize white matter pathology in these diseases.

With respect to whether or not DTI is applicable across age ranges, we now know that white matter undergoes changes throughout the entire life span, including rapid early myelination in newborns, significant changes during adolescence, and neurodegenerative processes that alter basic white matter properties. Several NA-MIC partners (UNC, Utah, BWH) have access to large numbers of DTI datasets of early brain development and apply NA-MIC tools to process these data. Results demonstrate rapid, significant changes of DTI-derived properties throughout the first years of life, with measurements of white matter structuring associated with rapid myelination and axon pruning. Unlike structural MRI, which undergoes a flip in brain tissue contrast at the age of 6 to 8 months, DTI shows a continuous growth pattern that makes it particularly interesting for studying and comparing early growth trajectories across groups. Axon bundles at birth, although not yet myelinated, show sufficient structure for key fiber tracts to be extracted and analyzed. Neuroimaging protocols associated with a variety of disorders and diseases that affect children increasingly include DTI; examples include autism, MPS, Krabbe disease, developmental delay, and, more generally, the early detection of brain abnormalities in high-risk populations. Joint analysis of DTI and structural MRI can reveal the information necessary to study structuring, myelination, maturation, and fiber integrity in neurodevelopment, neurodegeneration, brain injury, and mental disorders.
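The DTI-derived properties referred to above are scalar quantities such as fractional anisotropy (FA) and mean diffusivity (MD), computed from the eigenvalues of the diffusion tensor at each voxel; the sketch below applies the standard formulas to a single synthetic tensor.

```python
# Fractional anisotropy (FA) and mean diffusivity (MD) computed from a single
# symmetric 3x3 diffusion tensor using the standard eigenvalue formulas.
# The example tensor is synthetic.
import numpy as np


def fa_and_md(tensor):
    """Return (FA, MD) for a symmetric 3x3 diffusion tensor."""
    eigvals = np.linalg.eigvalsh(np.asarray(tensor, dtype=float))
    md = eigvals.mean()
    denom = np.sqrt((eigvals ** 2).sum())
    if denom == 0.0:
        return 0.0, 0.0
    fa = np.sqrt(1.5 * ((eigvals - md) ** 2).sum()) / denom
    return float(fa), float(md)


# A strongly anisotropic synthetic tensor (units mm^2/s), as in dense white matter.
print(fa_and_md(np.diag([1.7e-3, 0.3e-3, 0.3e-3])))  # FA near 0.8
```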

NA-MIC's particular focus on tackling basic issues of DWI analysis is not yet sufficiently developed to make all of the analyses associated with these issues possible. As the tools mature, we will see advanced DTI tools for measuring properties of white matter and of fiber tracts, for co-registration between DTI and structural MRI, and for joint statistical analysis. This exciting work is ongoing, and we are pleased to report that NA-MIC researchers are at the forefront.

There is no gold standard for evaluating white matter using DTI post-processing tools, and thus some trial and error will be needed to determine which tools are optimal at which ages. We believe that the tools we develop are sufficiently robust to be used to evaluate white matter changes across a wide range of ages. Furthermore, the underlying infrastructure of NA-MIC goes beyond DTI, and the algorithms we are developing (e.g. for manipulating tensors, solving shortest paths, and comparing large sets of geometric structures) will have applications beyond DTI.
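As a toy illustration of the "solving shortest paths" item in that list, the following sketch runs Dijkstra's algorithm over a small weighted voxel grid. It is a simplified discrete stand-in rather than the volumetric Hamilton-Jacobi approach described earlier, but it conveys the kind of computation that underlies region-to-region connectivity measures.

```python
# Toy illustration: Dijkstra's algorithm on a 6-connected voxel grid, where
# the cost of a step is the average of the two voxels' local costs. A
# simplified discrete stand-in for shortest-path connectivity computations.
import heapq
import numpy as np


def grid_shortest_path_cost(cost, start, goal):
    """Minimum accumulated cost from start to goal over 6-connected voxels."""
    dist = np.full(cost.shape, np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, voxel = heapq.heappop(heap)
        if voxel == goal:
            return d
        if d > dist[voxel]:
            continue
        x, y, z = voxel
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < cost.shape[i] for i in range(3)):
                nd = d + 0.5 * (cost[voxel] + cost[n])
                if nd < dist[n]:
                    dist[n] = nd
                    heapq.heappush(heap, (nd, n))
    return np.inf


# Toy usage: uniform cost everywhere, so the result is just the path length (27).
cost = np.ones((10, 10, 10))
print(grid_shortest_path_cost(cost, (0, 0, 0), (9, 9, 9)))
```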

Although the NA-MIC Wiki contains information on who is using the NA-MIC Kit and what they are using it for, the next annual report should either summarize this information or provide a link to the information. (Tina Kapur)

We will provide a link to this information in the next annual report.


The next annual report should include a link and reference to the User Manual for the NA-MIC Kit. (Will Schroeder)

Currently, most of the documentation related to the NA-MIC Kit is distributed across the separate projects. As the reviewers correctly point out, the documentation, including an overview of the various components of the NA-MIC Kit, could be improved by centralizing this information in the form of a unified User Manual. Our plan is to first place links to the documentation for the kit components onto a single Wiki page. We will then write a higher-level user manual that refers to these links for detailed component information, while providing an overview of the various components and how they may be used to solve problems in biomedical computing. For example, the manual will explain how to go about writing Slicer3 modules using ITK, but leave detailed knowledge of ITK to the existing documentation resources. Our view is that the User Manual will serve as a roadmap to the NA-MIC Kit, providing an efficient way for new users to become oriented to the kit, its components, and how they are used to address various classes of research challenges.

The beginnings of this approach are currently available on the NA-MIC Kit resource page. We are actively rewriting these web pages to better represent the current state of NA-MIC after the first three years of development. A direct reference to the User Manual for the NA-MIC Kit will be placed there when it is completed.


What is the rationale for choosing a particular method (tool) for solving a particular problem (DBP)? Why was a particular method (tool) chosen for development? Is there a listing of which tool might be helpful for which family of problems? Please provide more specific details to these questions as they have been asked previously by the Center Team. (Ross Whitaker)




The rationale for choosing a particular image analysis tool is multifold, and the motivations and justifications include a wide range of considerations. Here we give only a brief overview of the kind of thinking that goes into these decisions.

On the whole, image analysis is much like many other aspects of biomedical engineering: people use tools that have been demonstrated to work. Thus, there are examples of software tools in neuroimaging that are in standard use and continue to remain useful because they are, through experience, effective at solving certain kinds of problems. This philosophy is undoubtedly part of NA-MIC; a primary motivation and measure of success in tool development is solving clinical problems.

Medical image analysis is, traditionally, an empirical discipline. The successful techniques are either imagined, often from first principles, and then tested and refined, or derived by incremental improvements to existing methods. Either way, the emphasis is on experimentation, and NA-MIC offers a unique opportunity to improve this process for a large segment of the field by producing a reliable, diverse software infrastructure and establishing publicly available data sets and results.

In deciding which tools get developed and deployed, the Core 1 researchers in NA-MIC also rely on an extensive body of experience in the field (empirical data over many projects and applications) and an adherence to first principles. That is, we are predisposed toward solutions that are justifiable in terms of their adherence to basic principles of statistics, geometry, signal analysis, etc., because such methods are more apt (based on experience) to be general and reusable.

NA-MIC is, perhaps, somewhat different from other projects in image analysis, because it is an alliance. Our choice of approaches and tools does not reflect the interests or experiences of a single investigator or research group; instead, it relies on the experiences and talents of a strong collection of independent research groups who interact regularly to discuss a common set of problems and a common collection of software and tools. Thus, the answer to the question "How should I process my data?" will not be delivered in a top-down fashion, but will emerge as our collaborators and the field at large examine the methods and gain experience.

We can nurture this process by having regular interactions, holding workshops (such as the "Tractography Measures" example), and providing venues to vet methods, experiences, and results, as well as incentives for our participants to do so. Of course, we are also learning as we go the best ways to do this, and we are quite confident that the work we are doing now in this regard will bear fruit. Thus, in the long run we hope that there will be a collection of examples of processing pipelines, or recipes, that people can go to and that demonstrate different kinds of clinical results.

A clinical project between Toronto and BWH is still in the recruitment phase of planning a DTI and genetic study of psychosis. What would be the genetic component? (Martha Shenton)

Drs. Martha Shenton (BWH) and James Kennedy (University of Toronto) are beginning a collaboration based on mutual interests, although the specific goals have yet to be worked out. More specifically, Dr. Shenton is very much interested in developing further expertise in her laboratory in the area of genetics, particularly in the area of white matter genes and their association with white matter fiber tract abnormalities evaluated using DTI in schizophrenia. Dr. Shenton has an instructor in Psychiatry in her laboratory who will be visiting Dr. Kennedy’s laboratory for a one week period in August of 2007, to be followed by several later visits, in order to learn state-of-the-art techniques used for evaluating white matter genes and their role in schizophrenia.

In parallel, Dr. Kennedy is very much interested in developing further expertise in his laboratory in the area of neuroimaging, particularly in the area of MR morphometry and DTI measures of white matter in schizophrenia, which he would like to correlate with genetic data involving white matter genes. Following up on this interest, Dr. Kennedy has a 4th year resident in psychiatry at the University of Toronto School of Medicine who works in his laboratory and who is visiting Dr. Shenton’s laboratory from July 1, 2007 to December 31, 2007, in order to learn state-of-the-art neuroimaging techniques, including DTI and its application to understanding white matter pathology in schizophrenia.

The common thread with respect to the genetic component is thus a focus on white matter genes that are relevant to schizophrenia. At this point it is too early to determine where this collaborative effort will go, although it is clear that there is a tremendous amount of interest on both Dr. Shenton’s and Dr. Kennedy’s part, and the hope is that these early efforts will come to fruition in a more extensive collaboration as well as grant funding that supports this collaborative endeavor.


The visualization tool allows the overlay of spherical, vector and ellipsoid data onto surfaces via versatile color maps. Is this extensible to other data, such as genetic or molecular data? (Steve Pieper)

The NA-MIC Kit is a set of compatible tools including utilities, libraries, and applications. At the application level, there are many promising areas of genetic or molecular research to which 3D Slicer has not been applied. 3D Slicer is extensible, though, with current active projects and pending collaboration grant proposals to adapt and enhance the application to process microscopy data. For example, Drs. Bryan Smith and Mark Ellisman of UCSD are working on this topic through a supplement via the NCBC program. In addition, Drs. Machiraju, Pieper, Aylward, and Davis of Ohio State, Isomics, and Kitware have jointly applied for a NA-MIC collaboration grant (currently in review) with the goal of implementing advanced image analysis algorithms that are well adapted to detecting cellular structures. Dr. Gouaillard of Caltech is also collaborating with NA-MIC to adapt tools from the Center of Excellence in Genomic Science (CEGS) for their studies of zebrafish embryogenesis. Beyond these specific examples, a wide range of research applications, from surgery planning to astronomy, have been enabled by the software. As the Slicer3 platform matures, an even larger range of applications is anticipated. At the library and utility levels, an even greater diversity of applications is possible, as demonstrated by the range of applications using VTK and the applications developed on ITK.
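To illustrate why this extensibility holds at the library level, the following minimal VTK sketch (with made-up data) attaches an arbitrary per-vertex scalar, which could just as well be a gene-expression or molecular measurement, to a surface and renders it through a color lookup table. This is the general mechanism VTK provides for mapping data onto geometry.

```python
# Minimal sketch using VTK's Python wrapping: attach an arbitrary per-vertex
# scalar to a surface and render it through a color lookup table. The surface
# and the "measurement" values are made up for illustration.
import vtk

# A stand-in surface; in practice this would be an anatomical or other model.
sphere = vtk.vtkSphereSource()
sphere.SetThetaResolution(60)
sphere.SetPhiResolution(60)
sphere.Update()
surface = sphere.GetOutput()

# Hypothetical per-vertex measurement (it could just as well be a genetic or
# molecular value); here we simply use the z coordinate of each point.
values = vtk.vtkFloatArray()
values.SetName("measurement")
for i in range(surface.GetNumberOfPoints()):
    values.InsertNextValue(surface.GetPoint(i)[2])
surface.GetPointData().SetScalars(values)

# Map the scalar range through a color lookup table onto the surface.
lut = vtk.vtkLookupTable()
lut.Build()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputData(surface)
mapper.SetLookupTable(lut)
value_range = values.GetRange()
mapper.SetScalarRange(value_range[0], value_range[1])

actor = vtk.vtkActor()
actor.SetMapper(mapper)
renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
window.Render()
```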

Our approach to extending our software into new fields, such as the wider range of genetic or molecular images mentioned in the question, is to identify collaborators who need new image computing solutions of the type NA-MIC is providing. These collaborations often start through technical points of contact; programmers often research open source tools and begin 'tinkering' to see what can be re-used in a new application. If there is sufficient interest, these experiments can grow into collaborations in new fields. For example, the collaboration with the University of Iowa on Finite Element Meshing applies the software in a new direction that other NA-MIC developers had not been exploring.