Revision as of 20:42, 16 May 2008


Back to NA-MIC Collaborations, MIT Algorithms


Model-Based Segmentation of Hippocampal Subfields

Recent developments in MR data acquisition technology are starting to yield images that show anatomical features of the hippocampal formation at an unprecedented level of detail, providing the basis for hippocampal subfield measurement. Because of the role of the hippocampus in human memory and its implication in a variety of disorders and conditions, the ability to reliably and efficiently quantify its subfields through in vivo neuroimaging is of great interest to both basic neuroscience and clinical research. The aim of this project is to develop and validate a fully automated method for segmenting the hippocampal subfields in ultra-high-resolution MRI data.

We use a Bayesian modeling approach, in which we first build an explicit computational model of how an MRI image around the hippocampal area is generated, and subsequently use this model to obtain fully automated segmentations. The model incorporates a prior distribution that makes predictions about where neuroanatomical labels typically occur throughout the image, and is based on a generalization of probabilistic atlases that uses a deformable, compact tetrahedral mesh representation. The model also includes a likelihood distribution that predicts how a label image, where each voxel is assigned a unique neuroanatomical label, translates into an MRI image, where each voxel has an intensity.
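To make the prior/likelihood decomposition concrete, the sketch below shows the simplest form of this kind of Bayesian labeling: each voxel receives the label that maximizes the posterior, i.e., the product of an atlas prior probability and a Gaussian intensity likelihood. This is an illustrative toy only, not the project's implementation (which uses a deformable tetrahedral mesh atlas rather than a fixed voxel-wise prior); all function names, parameters, and values here are hypothetical.

```python
import numpy as np

def map_segment(intensities, prior, means, variances):
    """Per-voxel MAP labeling.

    intensities: (N,) voxel intensities
    prior:       (N, K) atlas prior probability of each of K labels per voxel
    means, variances: (K,) Gaussian intensity-likelihood parameters per label
    Returns an (N,) array of MAP label indices.
    """
    # Gaussian log-likelihood log p(y_i | l_i = k) for every voxel i, label k
    diff = intensities[:, None] - means[None, :]
    log_lik = -0.5 * (np.log(2.0 * np.pi * variances)[None, :]
                      + diff ** 2 / variances[None, :])
    # Posterior is proportional to prior * likelihood; work in log domain
    log_post = np.log(prior + 1e-12) + log_lik
    return np.argmax(log_post, axis=1)

# Toy example: two labels whose typical intensities are 0.0 and 1.0
y = np.array([0.1, 0.9, 0.5])
prior = np.array([[0.6, 0.4],
                  [0.3, 0.7],
                  [0.5, 0.5]])
labels = map_segment(y, prior,
                     means=np.array([0.0, 1.0]),
                     variances=np.array([0.1, 0.1]))
print(labels)  # dark voxel -> label 0, bright voxel -> label 1
```

In the actual model, the prior is not a fixed per-voxel table but is generated by deforming a compact tetrahedral mesh atlas to the image, and the likelihood parameters are estimated jointly with the segmentation.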

Results

Key Investigators

  • MIT Algorithms: Koen Van Leemput, Polina Golland

Publications