NA-MIC/Projects/Collaboration/MGH RadOnc

From NAMIC Wiki
Revision as of 22:10, 29 January 2009 by Gregsharp (talk | contribs)


Adaptive RT for Head and Neck

This first example shows the kind of anatomic changes that can occur in head and neck cancer. The pre-treatment scan is in green, and the mid-treatment scan is in red. The image on the left shows rigid registration; the image on the right shows deformable registration.

Rigid registration
Deformable registration

Here is another pertinent example for head and neck. In axial view, there appears to be some weight loss. Note the change in positioning of the mandible, and also the twisting of the cervical spine between scans. Also note the strong CT artifacts caused by dental fillings. In both examples, registration of the soft palate is worse using deformable registration than rigid registration, probably due to these artifacts.

Rigid registration
Deformable registration



Adaptive RT for Thorax

This example shows anatomic change in the thorax. The patient has a collapsed left lower lobe in the pre-treatment scan (top), which has recovered in the mid-treatment scan (bottom). Notice there is some kind of fluid accumulation below the collapsed lung.

Lung cancer 1

Here is another example, showing the magnitude of tumor regression that can occur during treatment. (Sorry, low resolution image).

Lung cancer 2

The thorax is a special case. Patient images are acquired using 4D-CT, and the radiation treatment plan can be evaluated at each breathing phase. The volumes are aligned using deformable registration, and the radiation dose from each phase is accumulated into a reference phase (e.g. exhale). This process is called "4D treatment planning."
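The dose-accumulation step can be sketched as follows. This is a minimal numpy/scipy illustration, not the actual planning code: it assumes per-phase dose grids and displacement fields (mapping reference-phase voxels into each phase) have already been computed, and the function and argument names are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(phase_doses, phase_fields):
    """Sum per-phase dose onto the reference phase (e.g. exhale).

    phase_doses  : list of 3-D dose arrays, one per breathing phase
    phase_fields : list of displacement fields, shape (3, nz, ny, nx),
                   mapping reference-phase voxels to phase voxels
    """
    shape = phase_doses[0].shape
    grid = np.indices(shape).astype(float)     # reference voxel coordinates
    total = np.zeros(shape)
    for dose, field in zip(phase_doses, phase_fields):
        coords = grid + field                  # pull-back sampling positions
        total += map_coordinates(dose, coords, order=1, mode="nearest")
    return total
```

Each phase's dose is pulled back through its deformation onto the reference grid and summed; in practice the fields come from the deformable registration described above.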

Deformable registration of the phases within a 4D-CT is considered "easy". The reasons for this are:

  1. Single-session imaging, so patient is already co-registered
  2. Single-session imaging, so no anatomic change
  3. High contrast of vessels against lung parenchyma

However, the sliding of the lungs against the chest wall is difficult to model. We sometimes segment the images at the pleural boundary. This allows us to separate the moving set of organs from the non-moving set, which are then registered separately. Ideally we would always do this, but the segmentation is manual, so we usually skip this step.
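The mask-based split described above can be sketched in a few lines. This is an illustrative numpy sketch, not the actual pipeline: it assumes the lung and chest-wall regions were registered separately and produced two displacement fields, which are then merged along the segmented pleural boundary (function name hypothetical).

```python
import numpy as np

def merge_fields(field_lung, field_wall, lung_mask):
    """Combine two displacement fields along the pleural boundary.

    field_lung, field_wall : (3, nz, ny, nx) displacement fields from
                             the separate lung / chest-wall registrations
    lung_mask              : boolean volume, True inside the pleura
    """
    # Select the lung field inside the mask, the chest-wall field outside.
    return np.where(lung_mask[None], field_lung, field_wall)
```

Because the two fields are estimated independently, the merged field can be discontinuous across the boundary, which is exactly what the sliding motion requires.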

Segmentation at pleural boundary
4D-CT phases
Registration of 4D-CT phases

General Discussion of Registration

Deformable registration is still not as reliable as it should be. Here are some of my complaints:

  1. Image acquisition has residual artifacts which cause unrealistic deformations (dental artifacts in H&N, motion artifacts in thorax).
  2. Registration algorithms are not always robust, and require experimentation and tuning.
  3. Validation of registration results is not easy, since adequate tools are lacking.
  4. In thorax, temporal regularization is generally not done, because of slow algorithms and large memory footprints.
  5. For cone-beam CT (and sometimes MR), intensity values are not globally stable. The most common suggestion is adaptive histogram equalization, but isn't there a better way?
  6. Interactive tools to repair (or reinitialize) registration are virtually non-existent.
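On the intensity-instability point above (item 5), one alternative to adaptive histogram equalization is global quantile (histogram) matching of the cone-beam CT against the planning CT. A minimal numpy sketch, with a hypothetical function name; whether this is actually "better" is an open question:

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities (e.g. cone-beam CT) onto the intensity
    distribution of a reference image (e.g. planning CT)."""
    # Unique source values, their positions, and their counts.
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference intensity.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)
```

Unlike adaptive equalization this is a global, monotone remapping, so it cannot repair spatially varying shading artifacts, but it also cannot invent local contrast that registration might chase.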

Segmentation

Rad Onc departments use interactive segmentation every day for both research and patient care. Prior to treatment planning, the target and critical structures are delineated in CT. The current state of the art is manual segmentation in axial view. An outline tool, used to delineate the boundary, is generally preferred over a paintbrush tool that fills pixels. Commercial products generally support some subset of the following tools to assist the operator.

  1. contour interpolation between slices
  2. boundary editing
  3. mixed axial/coronal/sagittal drawing
  4. livewire or intelligent scissors
  5. drawing constraints (e.g. constraints on volume overlap/distance)
  6. post-processing tools to nudge or smooth the boundary
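Several of the tools above, notably contour interpolation between slices (item 1), can be built from signed distance maps. A minimal shape-based interpolation sketch using scipy; function names are illustrative, not from any of the commercial products mentioned:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: negative inside the contour, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Shape-based interpolation of a binary contour between two axial
    slices: blend the signed distance maps, then threshold at zero."""
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d < 0
```

Blending distance maps rather than pixel masks produces an intermediate contour whose shape morphs smoothly between the two drawn slices; prior models of shape and intensity could refine this further.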

There are many opportunities for using computer vision algorithms to improve interactive segmentation, for example, using prior models of shape and intensity to improve interpolation. Automatic segmentation also exists in several commercial products, each with impressive demos. We have GE Adv Sim software, which does model-based segmentation of structures such as the spinal cord, lens and optic nerve, and liver. For H&N, atlas-based segmentation is the most popular approach.
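At its core, atlas-based segmentation propagates atlas labels through a deformable registration. A minimal sketch, assuming a displacement field from an atlas-to-patient registration is already available; names are hypothetical:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_labels(atlas_labels, field):
    """Propagate atlas labels to the patient through a registration.

    atlas_labels : integer label volume defined on the atlas
    field        : (3, nz, ny, nx) displacement field mapping patient
                   voxels into atlas space
    """
    coords = np.indices(atlas_labels.shape).astype(float) + field
    # Nearest-neighbour sampling (order=0) keeps labels integral.
    return map_coordinates(atlas_labels, coords, order=0, mode="nearest")
```

The quality of the result is bounded by the quality of the registration, which is why the registration complaints listed earlier matter for segmentation too.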

Below are examples of segmentations of targets and critical structures for head and neck, and thorax. These structures (or similar) would be manually delineated for every patient who gets 3D planning.

Head and Neck segmentation
  optic chiasm: medium green
  brain stem: dim green
  spinal cord: bright green
  left parotid: violet
  right parotid: dim blue
  oral cavity: cyan
  mandible: pink
  larynx: bright blue
  esophagus: orange
  (target): red
  (target): yellow

Thorax segmentation
  left lung: dark red
  right lung: green
  esophagus: violet
  heart: cyan
  cord: yellow
  (target): light red