Projects:RegistrationLibrary:RegLib C10
From NAMIC Wiki
Input
Target brain (fixed) | Probabilistic atlas (moving)
Modules
- Slicer 3.6.1 recommended modules: BRAINSFit, Skull Stripping, Expert Automated Registration, Fast Deformable B-spline Registration
Slicer Registration Library Case #10: Co-registration of a probabilistic tissue atlas for subsequent EM segmentation
Objective / Background
This is an example of sparse atlas co-registration. Not all atlases have an associated reference image that can be used for registration. Because the atlas represents a map of a particular tissue class probability, its contrast differs significantly from the target image.
Keywords
MRI, brain, head, inter-subject, probabilistic atlas, atlas-based segmentation
Input Data
- fixed : T1w, 0.9375 x 0.9375 x 1.5 mm axial, 256 x 256 x 124
- moving: probabilistic tissue atlas, 0.9375 x 0.9375 x 1.5 mm axial, 256 x 256 x 124
Methods
- Version 1: BRAINSFit without masking
- open the BRAINSFit module
- fixed image: fixed; moving image: moving
- Registration phases: check: Rigid, ScaleVersor3D, Affine, BSpline
- Output: under Slicer BSpline Transform, select "create new" and rename to "Xf3_BFit_unmasked"
- Output: under Output Image Volume, select "create new" and rename to "moving_Xf3"
- Output Image Pixel Type: check box for "short"
- Registration Parameters: increase Number of Samples to 200,000
- Number of Grid Subdivisions: 5,5,5
- leave rest at defaults
- click: Apply
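As a sketch, the Version 1 GUI settings above map onto an equivalent BRAINSFit command-line invocation. The flag names below follow the BRAINSFit CLI; the file names are hypothetical placeholders, and the command is only assembled here, not executed:

```python
# Sketch: Version 1 settings expressed as a BRAINSFit command line.
# Flag names follow the BRAINSFit CLI; file names are hypothetical placeholders.

def brainsfit_unmasked_cmd(fixed="fixed.nrrd", moving="moving.nrrd"):
    """Build the argument list for the unmasked multi-phase registration."""
    return [
        "BRAINSFit",
        "--fixedVolume", fixed,
        "--movingVolume", moving,
        # Registration phases: Rigid -> ScaleVersor3D -> Affine -> BSpline
        "--useRigid", "--useScaleVersor3D", "--useAffine", "--useBSpline",
        "--bsplineTransform", "Xf3_BFit_unmasked.tfm",   # output transform
        "--outputVolume", "moving_Xf3.nrrd",             # resampled moving image
        "--outputVolumePixelType", "short",
        "--numberOfSamples", "200000",                   # increased sampling
        "--splineGridSize", "5,5,5",                     # 5x5x5 grid subdivisions
    ]

cmd = brainsfit_unmasked_cmd()
# To actually run it (requires a Slicer/BRAINSTools installation on the PATH):
# import subprocess; subprocess.run(cmd, check=True)
print(" ".join(cmd))
```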
- Version 2: Expert Automated Registration + Fast Deformable B-spline modules, including masking
- build brain mask for the fixed image only:
- open Skull Stripping module.
- Settings: 100 iterations, 20 subdivisions.
- Output: create new volume, rename to "fixed_mask"
- Click: Apply
- manually edit the brain mask with the Editor module; manual fixes were required at the frontal and occipital lobes
- open the Expert Automated Registration module
- Settings: Fixed Image: fixed, moving image: moving
- Save Transform: Xf1_Affine_wmsk
- Initialization: Centers of Mass, Registration: PipelineAffine
- Expected offset: 10 , Expected Rotation: 0.2
- Expected Scale: 0.1 , Expected Skew: 0.05
- Fixed Image Mask: "fixed_mask"
- Affine Max Iteration: 80 , Affine Sampling Ratio: 0.05
- (alternatively, the Register Images Multires module also produces a good automated affine registration)
- run Fast Deformable B-spline Registration module. Settings:
- Iterations: 20 , Grid Size: 9
- fixed image: fixed; moving image: moving
- Histogram Bins: 50, Spatial Samples: 50000
- initial transform: "Xf1_Affine_wmsk"
- Output Transform: Xf2_BSpline1
- Output Volume: create new, rename to "moving_Xf2"
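The "Centers of Mass" initialization used in the affine step computes the translation that moves the intensity centroid of the moving image onto that of the fixed image. A minimal NumPy sketch with synthetic data (the arrays and blob positions are illustrative, not from this case):

```python
import numpy as np

def center_of_mass(img):
    """Intensity-weighted centroid of a 3-D image, in voxel coordinates."""
    idx = np.indices(img.shape).reshape(3, -1)   # (3, N) voxel coordinates
    w = img.ravel().astype(float)
    return idx @ w / w.sum()

# Synthetic example: the same bright blob, shifted between the two images.
fixed = np.zeros((32, 32, 32));  fixed[10:14, 16:20, 8:12] = 1.0
moving = np.zeros((32, 32, 32)); moving[14:18, 14:18, 11:15] = 1.0

# Initial translation: move the moving centroid onto the fixed centroid.
t = center_of_mass(fixed) - center_of_mass(moving)
print(t)   # [-4.  2. -3.]
```

This kind of initialization is robust to the atlas's blurred contrast because it uses only gross intensity distribution, not local structure.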
Registration Results
Download
- Data
- Presets
- Documentation
Discussion: Registration Challenges
- Because the atlas represents a probabilistic image (i.e. contains blurring from combining multiple subjects), its contrast differs significantly from the target image.
- The atlas has strong rotational misalignment that can cause difficulty for automated affine registration.
- The two images represent different anatomies (different subjects), so non-rigid registration is required.
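The last point can be made concrete: a non-rigid stage adds many local degrees of freedom beyond the 12 of an affine transform. Assuming ITK's cubic B-spline convention (a mesh of size m per axis carries m+3 control points per axis), the grids used in the two methods above correspond to:

```python
# Degrees of freedom of a cubic B-spline transform (ITK convention:
# a mesh of size m per axis carries (m + spline_order) control points per axis).
def bspline_dof(mesh_size, spline_order=3, dim=3):
    n_points = 1
    for m in mesh_size:
        n_points *= m + spline_order
    return n_points * dim   # one dim-D displacement per control point

print(bspline_dof([5, 5, 5]))   # BRAINSFit 5,5,5 subdivisions: 3 * 8^3  = 1536
print(bspline_dof([9, 9, 9]))   # Fast BSpline grid size 9:    3 * 12^3 = 5184
```

Thousands of local parameters are what allow the inter-subject shape differences to be corrected, and also why a good affine initialization and generous sampling matter for stability.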
Discussion: Key Strategies
- Because of the strong differences in image contrast, Mutual Information is recommended as the most robust metric.
- Masking (skull stripping) is highly recommended to obtain a good initial affine alignment. For the second-stage BSpline, use the full image (i.e. do NOT use the masked version) unless high-quality masks are available for both the fixed and moving image; reusing the crude mask created for the initial alignment will likely destabilize the BSpline registration.
- Because speed is not critical here, we increase the sampling rate for both the affine and BSpline registrations.
- We also expect larger differences in scale and distortion than with regular structural scans, so we significantly (2x-3x) increase the expected values for scale and skew from the defaults.
- A good affine alignment is important before proceeding to non-rigid alignment to further correct for distortions.
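The first strategy can be illustrated numerically: mutual information (MI) is insensitive to how intensities map between the two images, whereas a sum-of-squared-differences metric is not. A small sketch with NumPy histograms on synthetic data (an image versus its contrast-inverted copy):

```python
import numpy as np

def mutual_information(a, b, bins=50):
    """Mutual information from a joint histogram, in nats."""
    pxy, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
inverted = 1.0 - img            # same structure, opposite contrast

# SSD treats the contrast-inverted copy as maximally different...
ssd = float(((img - inverted) ** 2).mean())
# ...but MI recognizes that each image fully predicts the other.
mi_aligned = mutual_information(img, inverted)
mi_shuffled = mutual_information(img, rng.permutation(inverted.ravel()))

print(ssd, mi_aligned, mi_shuffled)   # MI aligned >> MI shuffled
```

This is why MI remains high at the correct alignment even though the probabilistic atlas and the T1w target have very different contrasts, while intensity-difference metrics would be misled.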
Acknowledgments
- dataset provided by Kilian Pohl, Ph.D.