NA-MIC/Projects/Collaboration/MGH RadOnc

__NOTOC__

===Adaptive RT for Head and Neck===
  
This first example shows the kind of anatomic changes that can occur in head and neck cancer.  The pre-treatment scan is in green, and the mid-treatment scan is in red.  The image on the left shows the rigid registration; the image on the right shows the deformable registration.
  
{|
|[[Image:RadOnc_HN2_rigid.png|thumb|380px|Rigid registration]]
|[[Image:RadOnc_HN2_deformable.png|thumb|380px|Deformable registration]]
|}
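
Overlays like the two above can be produced by resampling the mid-treatment scan onto the pre-treatment grid and painting one study into the green channel and the other into the red channel, so residual misalignment shows up as color fringes.  Below is a minimal sketch using SimpleITK and NumPy; it is not the tool used to make these figures, and the window/level values and the identity-transform placeholder are assumptions.

<pre>
import SimpleITK as sitk
import numpy as np

def fusion_overlay(fixed_path, moving_path, slice_index, window=(-150, 250)):
    """Paint the fixed (pre-treatment) image green and the moving
    (mid-treatment) image red, so misalignment shows up as color fringes."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    # Resample the moving image onto the fixed grid.  The identity transform
    # here is a placeholder; substitute the rigid or deformable transform
    # being evaluated.
    identity = sitk.Transform(3, sitk.sitkIdentity)
    moving = sitk.Resample(moving, fixed, identity, sitk.sitkLinear, -1000.0)

    def to_uint8(img):
        lo, hi = window
        arr = sitk.GetArrayFromImage(img)[slice_index]
        return np.clip((arr - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)

    green, red = to_uint8(fixed), to_uint8(moving)
    rgb = np.zeros(green.shape + (3,), dtype=np.uint8)
    rgb[..., 1] = green   # green channel: pre-treatment
    rgb[..., 0] = red     # red channel: mid-treatment
    return rgb
</pre>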
  
Here is another pertinent example for head and neck.  In axial view, there appears to be some weight loss.  Note the change in positioning of the mandible, and also the twisting of the cervical spine between scans.  Also note the strong CT artifacts caused by dental fillings.  In both examples, registration of the soft palate is worse using deformable registration than rigid registration, probably due to these artifacts.

{|
|[[Image:RadOnc_HN5_rigid.png|thumb|380px|Rigid registration]]
|[[Image:RadOnc_HN5_deformable.jpeg|thumb|380px|Deformable registration]]
|}
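
As an illustration of the rigid-then-deformable workflow compared above (and not the actual pipeline used to produce these figures), here is a hedged sketch of a rigid stage followed by a B-spline deformable stage using SimpleITK with Mattes mutual information.  Every parameter value is a placeholder and would need the kind of tuning discussed under "General Discussion of Registration" below.

<pre>
import SimpleITK as sitk

def register_rigid_then_bspline(fixed, moving):
    """Two-stage alignment: a rigid stage, then a B-spline deformable stage
    run on the rigidly aligned image.  Both use Mattes mutual information."""
    # --- Stage 1: rigid (Euler 3D), initialized at the geometric centers ---
    rigid_reg = sitk.ImageRegistrationMethod()
    rigid_reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    rigid_reg.SetMetricSamplingStrategy(rigid_reg.RANDOM)
    rigid_reg.SetMetricSamplingPercentage(0.1)
    rigid_reg.SetInterpolator(sitk.sitkLinear)
    rigid_reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    rigid_reg.SetOptimizerScalesFromPhysicalShift()
    rigid_reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY),
        inPlace=False)
    rigid = rigid_reg.Execute(fixed, moving)
    moving_rigid = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, -1000.0)

    # --- Stage 2: B-spline deformable, run on the rigidly aligned image ---
    bspline_reg = sitk.ImageRegistrationMethod()
    bspline_reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    bspline_reg.SetInterpolator(sitk.sitkLinear)
    bspline_reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                                     numberOfIterations=100)
    bspline_reg.SetInitialTransform(
        sitk.BSplineTransformInitializer(fixed,
                                         transformDomainMeshSize=[10, 10, 10]),
        inPlace=True)
    deformable = bspline_reg.Execute(fixed, moving_rigid)

    warped = sitk.Resample(moving_rigid, fixed, deformable, sitk.sitkLinear, -1000.0)
    return warped, rigid, deformable
</pre>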
  
  
===Adaptive RT for Thorax===
  
This example shows anatomic change in the thorax.  The patient has a collapsed left lower lobe in the pre-treatment scan (top), which has recovered in the mid-treatment scan (bottom).  Notice there is some kind of fluid accumulation below the collapsed lung.
  
{|
|[[Image:RadOnc_LU1.png|thumb|380px|Lung cancer 1]]
|}
  
Here is another example, showing the magnitude of tumor regression that can occur during treatment.  (Sorry, low-resolution image.)
  
 
{|
|[[Image:RadOnc_LU2.png|thumb|380px|Lung cancer 2]]
|}

Thorax is a special case.  Patient images are acquired using 4D-CT, and the radiation treatment plan can be evaluated at each breathing phase.  The volumes are aligned using deformable registration, and the radiation dose from each phase is accumulated into a reference phase (e.g. exhale).  This process is called "4D treatment planning."
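
To make the dose bookkeeping concrete, here is a minimal sketch of the accumulation step, assuming one dose grid per breathing phase and one deformable transform mapping reference-phase points into that phase.  SimpleITK is used for illustration only; this is not the planning-system implementation.

<pre>
import SimpleITK as sitk

def accumulate_dose(reference_dose, phase_doses, phase_transforms):
    """Accumulate per-phase dose grids onto the reference phase (e.g. exhale).

    reference_dose:   dose grid already defined on the reference-phase geometry.
    phase_doses:      dose images computed on the other breathing phases.
    phase_transforms: deformable transforms that map a point in the reference
                      phase to the corresponding point in each phase.
    """
    total = sitk.Cast(reference_dose, sitk.sitkFloat32)
    for dose, transform in zip(phase_doses, phase_transforms):
        # Pull each phase dose back onto the reference-phase grid.
        warped = sitk.Resample(dose, reference_dose, transform,
                               sitk.sitkLinear, 0.0, sitk.sitkFloat32)
        # Equal weighting is assumed here; in practice each phase would be
        # weighted by its share of the breathing cycle.
        total = sitk.Add(total, warped)
    return total
</pre>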

Deformable registration of the phases within a 4D-CT is considered "easy".  The reasons for this are:

# Single-session imaging, so patient is already co-registered
# Single-session imaging, so no anatomic change
# High contrast of vessels against lung parenchyma

However, the sliding of the lungs against the chest wall is difficult to model.  We sometimes segment the images at the pleural boundary.  This allows us to separate the moving set of organs from the non-moving set, which are registered separately.  Ideally we would always do this, but segmentation is manual and therefore we usually skip this step.  (A sketch of the masked registration follows the images below.)
  
{|
|[[Image:RadOnc_LU3.png|thumb|300px|Segmentation at pleural boundary]]
|[[Image:RadOnc_LU4_before.png|thumb|300px|4D-CT phases]]
|[[Image:RadOnc_LU4_after.png|thumb|300px|Registration of 4D-CT phases]]
|}
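
As a rough illustration of registering the moving and non-moving organ sets separately, here is a minimal sketch that assumes the pleural-boundary segmentation is available as a binary mask; it simply restricts the registration metric to the voxels inside (and then outside) the mask.  This is one way to phrase the idea with SimpleITK, not the implementation actually used here.

<pre>
import SimpleITK as sitk

def register_with_mask(fixed, moving, metric_mask):
    """Deformable (B-spline) registration with the similarity metric restricted
    to the voxels inside metric_mask (a binary image in the fixed geometry)."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()            # CT-to-CT within one session
    reg.SetMetricFixedMask(metric_mask)     # only score voxels inside the mask
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInitialTransform(
        sitk.BSplineTransformInitializer(fixed,
                                         transformDomainMeshSize=[10, 10, 10]),
        inPlace=True)
    return reg.Execute(fixed, moving)

def register_lungs_and_body_separately(fixed, moving, lung_mask):
    """One registration for the moving organs (inside the pleural boundary) and
    one for the surrounding anatomy (outside), so sliding is not penalized."""
    inside = register_with_mask(fixed, moving, lung_mask)
    outside = register_with_mask(fixed, moving, sitk.Not(lung_mask))
    return inside, outside
</pre>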
  
===General Discussion of Registration===
  
Deformable registration is still not as reliable as it should be.  Here are some of my complaints:

# Image acquisition has residual artifacts which cause unrealistic deformations (dental artifacts in H&N, motion artifacts in thorax).
# Registration algorithms are not always robust, and require experimentation and tuning.
# Validation of registration results is not easy, since there are inadequate tools.
# In the thorax, temporal regularization is generally not done, because of slow algorithms and large memory footprints.
# For cone-beam CT (and sometimes MR), intensity values are not globally stable.  The most common suggestion is adaptive histogram equalization (a minimal sketch appears after this list), but isn't there a better way?
# Interactive tools to repair (or reinitialize) registration are virtually non-existent.
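
For point 5, this is roughly what the adaptive histogram equalization workaround looks like as a pre-processing step, sketched with SimpleITK; the radius, alpha, and beta values are placeholders that would need tuning per scanner.

<pre>
import SimpleITK as sitk

def normalize_cbct_for_registration(cbct, radius=(10, 10, 10), alpha=0.3, beta=0.3):
    """Adaptive histogram equalization applied to a cone-beam CT (whose
    intensities are not globally stable) before registering it against the
    planning CT.  alpha steers between classical histogram equalization (0)
    and an unsharp mask (1); beta steers between an unsharp mask (0) and the
    unmodified image (1)."""
    img = sitk.Cast(cbct, sitk.sitkFloat32)
    ahe = sitk.AdaptiveHistogramEqualizationImageFilter()
    ahe.SetRadius(radius)
    ahe.SetAlpha(alpha)
    ahe.SetBeta(beta)
    return ahe.Execute(img)
</pre>

Even after equalization, an intensity-robust metric such as mutual information is typically still used, since the CBCT values do not become truly CT-like.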
  
===Segmentation===
  
Rad Onc departments use interactive segmentation every day for both research and patient care.
Prior to treatment planning, the target and critical structures are delineated in CT.  The current state of the art is manual segmentation in axial view.  An outline tool, used to delineate the boundary, is generally preferred over a paintbrush tool that fills pixels.  Commercial products generally support some subset of the following tools to assist the operator.

# contour interpolation between slices
# boundary editing
# mixed axial/coronal/sagittal drawing
# livewire or intelligent scissors (see the sketch after this list)
# drawing constraints (e.g. constraints on volume overlap/distance)
# post-processing tools to nudge or smooth the boundary
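
Here is the livewire / intelligent scissors sketch referenced in item 4: the operator clicks two points and the tool snaps the boundary to strong edges in between, computed as a Dijkstra shortest path over a gradient-based cost image.  The cost function below is a deliberately simple placeholder; real implementations use richer edge features and on-the-fly training.

<pre>
import heapq
import numpy as np

def livewire_path(gradient_magnitude, seed, target):
    """Minimum-cost path between two user clicks on a 2D slice.  The cost is
    low along strong edges, so the path snaps to the structure boundary.
    seed and target are (row, col) tuples."""
    g = gradient_magnitude.astype(float)
    cost = 1.0 - g / (g.max() + 1e-9)       # strong edges -> cheap to traverse
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == target:
            break
        if d > dist[r, c]:
            continue                         # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                nd = d + np.hypot(dr, dc) * cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the target to recover the boundary segment.
    path, node = [], target
    while node != seed:
        path.append(node)
        node = prev[node]
    path.append(seed)
    return path[::-1]
</pre>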
  
There are many opportunities for using computer vision algorithms to improve interactive segmentation, for example, using prior models of shape and intensity to improve interpolation.  Automatic segmentation also exists in several commercial products, each with impressive demos.  We have GE Adv Sim software, which does model-based segmentation of structures such as the spinal cord, lens & optic nerve, and liver.  For H&N, atlas-based segmentation is the most popular approach.
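
As a concrete example of the interpolation that prior models of shape and intensity would improve upon, here is a minimal sketch of plain shape-based interpolation between two delineated slices using signed distance maps; the scipy implementation is illustrative, not what any commercial system does.

<pre>
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed Euclidean distance map: negative inside the contour, positive outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def interpolate_contour(mask_below, mask_above, fraction):
    """Shape-based interpolation of a structure between two delineated axial
    slices.  fraction is the position of the new slice between them (0..1)."""
    d0 = signed_distance(mask_below)
    d1 = signed_distance(mask_above)
    d = (1.0 - fraction) * d0 + fraction * d1   # blend the distance maps
    return d <= 0.0                             # zero level set = new contour
</pre>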
  
Below are examples of segmentations of targets and critical structures for head and neck, and thorax.  These structures (or similar) would be manually delineated for every patient who gets 3D planning.
  
{|
|[[Image:RadOnc_HN_seg.png|thumb|380px|Head and Neck segmentation]]
|
{| class="wikitable"
|-
! Structure !! Color
|-
| optic chiasm || med green
|-
| brain stem || dim green
|-
| spinal cord || bright green
|-
| left parotid || violet
|-
| right parotid || dim blue
|-
| oral cavity || cyan
|-
| mandible || pink
|-
| larynx || bright blue
|-
| esophagus || orange
|-
| (target) || red
|-
| (target) || yellow
|}
|}
  
{|
|[[Image:RadOnc_LU_seg.png|thumb|380px|Thorax segmentation]]
|
{| class="wikitable"
|-
! Structure !! Color
|-
| left lung || dark red
|-
| right lung || green
|-
| esophagus || violet
|-
| heart || cyan
|-
| cord || yellow
|-
| (target) || light red
|}
|}
