Non Rigid Registration: Ongoing Discussion

Ongoing Discussion (most recent on top)

  • Guido Gerig (08-23-06) The requirement of "one tool that fits most" is probably too constraining; I rather think NAMIC needs a set of tools/methods that smoothly work together as part of an integration/interoperability effort, e.g. via Slicer 3. Issues include: linear versus nonlinear registration, or a combination of both using the same tool (which is most common in brain mapping); choice of image match metrics to calculate the deformation field; choice of the degrees of freedom (affine versus polynomial-based versus high-dimensional, etc.); the need to cascade several transformations and convert them into one resulting transformation; the choice of directed, non-invertible versus diffeomorphic, fully invertible transformations; and a generic tool that applies the deformation matrix/field to an image or sets of images, with a choice of interpolation and specification of target voxel spacing (world versus image). A minimal sketch of such a transform-cascading/resampling step is given after this comment.
    • At UNC, we have had excellent experience for over 7 years with Daniel Rueckert's Rview/cisgvtk tool (freely available), which combines linear and nonlinear registration using a polynomial approach with a choice of grid spacing, a choice of 6 different image match metrics such as MI, NMI, cross-correlation, etc., an excellent design of GUI/visualization/ROI selection/parameter settings, command-line execution as batch jobs, a very nice design of cascading deformation fields, and the ability to run the tool separately for calculating the deformation and for applying the deformation field. This tool is embedded into all our brain processing pipelines, and we have experience with thousands of intra-patient, inter-patient, and inter-modality registrations and with atlas building. This method appears to be reimplemented in ITK, but my students could never really make it work as robustly as the original Rview version, and it is not clear to me whether the ITK version is a re-implementation or a complete redesign independent of the original developers. For our automatic EMS brain segmentation, our PhD students even went so far as to reimplement MI linear/nonlinear registration a la Rueckert in ITK (Prastawa et al.). The Rueckert tool is not invertible and does not mathematically guarantee that there isn't folding of space.
    • For diffeomorphic high-dimensional transformations, we use the high-dimensional fluid deformation developed by Miller/Christensen/Joshi, which is also a central part of the population-based unbiased atlas building developed by Joshi et al. Since this method is diffeomorphic, and was extended to provide a symmetric transformation between pairs and populations of datasets, it can be used for transforming to an average/template but also to go backwards, mapping atlas segmentations back to the individual cases for automatic segmentation and statistical analysis. The fluid transformation is not part of ITK, and there are speed issues with the Fourier transforms/back-transforms if integrated in ITK, but my programmer colleagues might know better whether an ITK-based fluid deformation is available. Speed was a concern a few years ago, but current versions take 30 to 60 minutes on standard cheap PCs, which is good enough for automatic batch processing.
    • I strongly suggest that NAMIC lay out specifications/categorizations (as I listed above and as initiated by Ron and others below) to provide a plan for a versatile, generic set of interoperable tools, i.e. by first discussing specifications for functionality and type of function. E.g. for DTI, our group provides a new DTI tensor deformation/interpolation tool for NAMIC (Casey Goodlett, MICCAI'06) that uses an existing deformation field, calculated from a new image metric derived from DTI-FA maps, but it could use any deformation field provided by other tools. As my colleagues point out below, there seem to be several tools already available with excellent track records, published and rigorously tested, which could become part of a NAMIC registration toolkit. There is also HAMMER from the Davatzikos group, which builds in a highly advanced image match metric based on a rich set of local features. A key issue for NAMIC could be interoperability via standardization of matrix definitions and deformation field formats.
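    • As a concrete illustration of the transform-cascading and generic resampling steps mentioned above, here is a minimal sketch written against SimpleITK (the Python wrapping of ITK). The file names, spacing value, and the two input transforms are placeholders/assumptions for illustration, not outputs of Rview or of any specific NAMIC tool.

      import SimpleITK as sitk

      # Two previously computed transforms, e.g. a linear stage followed by a
      # nonlinear stage (hypothetical file names).
      affine = sitk.ReadTransform("affine_stage.tfm")
      nonrigid = sitk.ReadTransform("nonrigid_stage.tfm")

      # Cascade the two stages into one composite transform; the order of
      # composition matters, so check which convention your pipeline uses.
      composite = sitk.CompositeTransform(3)
      composite.AddTransform(affine)
      composite.AddTransform(nonrigid)

      reference = sitk.ReadImage("atlas_T1.nii.gz")    # defines the target space
      moving = sitk.ReadImage("subject_T1.nii.gz")

      # Explicit target voxel spacing in world (mm) coordinates (example value).
      out_spacing = (1.0, 1.0, 1.0)
      out_size = [int(round(sz * sp / osp)) for sz, sp, osp
                  in zip(reference.GetSize(), reference.GetSpacing(), out_spacing)]

      resampler = sitk.ResampleImageFilter()
      resampler.SetTransform(composite)                # single pass through the cascade
      resampler.SetInterpolator(sitk.sitkBSpline)      # or sitkNearestNeighbor for label maps
      resampler.SetOutputSpacing(out_spacing)
      resampler.SetSize(out_size)
      resampler.SetOutputOrigin(reference.GetOrigin())
      resampler.SetOutputDirection(reference.GetDirection())
      resampler.SetDefaultPixelValue(0)

      warped = resampler.Execute(moving)
      sitk.WriteImage(warped, "subject_T1_in_atlas_space.nii.gz")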
  • Bruce Fischl We have one that is part of our segmentation procedure that meets all these needs except the "relatively fast" one :) . It's quite robust (we've run it on hundreds of AD, schizophrenia, etc. cases), but it is also quite slow (15 hours or so). Gheorghe's NA-MIC project is also on non-rigid registration; although it's not published, we're hoping to write it up soon.
    • Kilian Pohl
      Just to make sure that we are all in the same boat, I have a couple of questions:
      What type of registration do you use as part of your segmentation?
      Is there a paper that describes the registration in detail?
      Does the registration rely on tissue classification?
      If so, can you register brains with pathologies, such as MS lesions or meningiomas?
      • Bruce Fischl
        Yes - the linear part was described in our Neuron 2002 paper (pages 9-10), and the nonlinear extension in the IPAM thing that was published in NeuroImage (pages 5-7). It works fine with MS and white matter damage; not sure about tumors and such, as we haven't really tried it. It is designed to be part of our segmentation (the MRF stuff), and I doubt it's optimal for functional alignment, but it works quite well for classification. It doesn't require classification - the classification requires it.
    • Ron Kikinis
      Excellent. Does it have the potential to be parallelized?
      • Bruce Fischl
        As for parallelization, I don't see any reason why not. I think Anders may have been experimenting with parallelizing it, but I'm not sure.
  • Stephen Aylward
    Perhaps the first focus is defining use-cases/requirements? Other requirements to consider:
    • Multi-scale
      • insensitive to moving/fixed images being at different resolutions
      • e.g., DTI/fMRI vs 3D T1
    • Diffeomorphic for use in atlases
    • Handles large deformations
      • Should it handle, or be part of, the registration process for patient-atlas registration in the presence of large tumors or resections?
      • Should it support the use of "don't care" regions across which the deformation is smoothly interpolated?
    • The registration discussion should probably consider the transform, optimizer, and metric as separate considerations. For example, my favorite class of registration metrics is the class of feature-image registration metrics. The metric used in HAMMER registration is one example from this class; however, I am not promoting that particular metric - it is simply the best-known method from that class (a toy sketch of a sparse feature-based metric follows this list).
      • Features used in the metric could be tuned for the modalities/scales involved.
      • Using sparse, pre-selected features makes the metric very fast.
      • Parameters/features are limited by domain knowledge (e.g., presets for T1/fMRI registration).
      • The ITK image-image and feature-image registration frameworks support such metrics, in general - improvements are clearly possible/needed.
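      • To make the "sparse, pre-selected features" point concrete, here is a toy sketch, written in SimpleITK/Python rather than the ITK C++ frameworks referenced above; the feature coordinates and the nearest-voxel probing are illustrative assumptions only, not HAMMER's attribute vectors or any other published metric.

        import SimpleITK as sitk

        # Hypothetical pre-selected feature locations (physical coordinates in
        # fixed-image space), e.g. from a detector run once before optimization.
        features = [(10.0, 22.5, 31.0), (45.2, 18.7, 29.3), (60.1, 55.4, 40.0)]

        def sparse_feature_metric(fixed, moving, transform, points):
            # Mean squared intensity difference evaluated only at the feature points,
            # so one metric evaluation costs O(number of features), not O(number of
            # voxels). Nearest-voxel probing, no bounds checking, for brevity.
            err = 0.0
            for p in points:
                q = transform.TransformPoint(p)   # map fixed-space point into moving space
                f = fixed.GetPixel(fixed.TransformPhysicalPointToIndex(p))
                m = moving.GetPixel(moving.TransformPhysicalPointToIndex(q))
                err += (f - m) ** 2
            return err / len(points)

        # Usage (fixed, moving: sitk.Image volumes; candidate: any sitk.Transform):
        # value = sparse_feature_metric(fixed, moving, candidate, features)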
  • Allen Tannenbaum We could try the optimal transport one.
  • Jim Miller
    • How fast is fast? Seconds per volume? Minutes per volume?
    • The temptation is to develop a new algorithm, rather than pick a published technique.
    • I am currently fond of the idea of using a network of low-order transformations to model a complicated deformation.
  • Sandy Wells
    • One general-purpose method that is used pretty widely in neuroimaging is Daniel Rueckert's combination of the MI objective function with a B-spline mesh. My impression is that both of those things are in ITK already... do they play well? (A minimal sketch combining them is at the bottom of this page.)
    • While the MI objective function can be nice to use, because it does not require domain knowledge, sometimes additional robustness can be gained by using objective functions that do, such as recent contributions by Chung and Zollei.
      • Albert Chung's KLD approach has been shown experimentally to be substantially more robust than MI on MRI/CT registration on the Vanderbilt data set, though somewhat less accurate, i.e., there is a "bias vs capture" tradeoff.
      • Lilla Zollei's "Dirichlet" approach provides a natural generalization of the entropy approach that incorporates prior knowledge in a more controlled way, and it is better for EPI/MRI than MI is. It appeared in her thesis, and in a recent WBIR paper (see her web page for those... just google "Lilla Zollei").
    • On choice of optimizers -- my work, and related bits of ITK, use variants of stochastic gradient descent (SGD), and I have found it, empirically, to be an effective (i.e., fast) choice. There was an interesting paper at the WBIR conference by one of Josien Pluim's students that evaluated a collection of optimizers on the Rueckert-style MI + B-spline approach. This paper showed winning performance by SGD.
    • I have some concerns about the application of general-purpose non-rigid registration approaches to the EPI/MRI registration problem. While such approaches may produce pairs of EPI/MRI that "look better", I would be cautious about expecting that approach to be accurate. My feeling is that robust solutions to this problem will require some of the physics of the problem to be built into the solution, either by way of field maps, or physics simulations. I feel that this applies even more to Echo-Planar DTI data.
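    • On the "do they play well?" question above: the two pieces can at least be wired together through ITK's registration framework. Below is a minimal sketch using SimpleITK (the Python wrapping of ITK); file names, grid size, sampling percentage, and optimizer settings are assumptions chosen for illustration, and random metric sampling with plain gradient descent is only an approximation of the SGD idea, not the specific optimizer evaluated in the WBIR paper.

      import SimpleITK as sitk

      fixed = sitk.ReadImage("fixed_T1.nii.gz", sitk.sitkFloat32)    # placeholder file names
      moving = sitk.ReadImage("moving_T1.nii.gz", sitk.sitkFloat32)

      # Coarse B-spline control-point grid over the fixed-image domain (mesh size is a guess).
      tx = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])

      reg = sitk.ImageRegistrationMethod()
      reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # MI objective function
      reg.SetMetricSamplingStrategy(reg.RANDOM)                         # stochastic voxel sampling
      reg.SetMetricSamplingPercentage(0.05)
      reg.SetInterpolator(sitk.sitkLinear)
      # Plain gradient descent; the stochastic flavor comes only from the random
      # metric sampling above.
      reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                        numberOfIterations=200,
                                        convergenceMinimumValue=1e-6,
                                        convergenceWindowSize=10)
      reg.SetOptimizerScalesFromPhysicalShift()
      reg.SetInitialTransform(tx, inPlace=True)
      reg.SetShrinkFactorsPerLevel([4, 2, 1])        # simple multi-resolution schedule
      reg.SetSmoothingSigmasPerLevel([2, 1, 0])

      out_tx = reg.Execute(fixed, moving)
      warped = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
      sitk.WriteImage(warped, "moving_registered.nii.gz")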