
Back to Georgia Tech Algorithms

Prostate Segmentation

The objective is to extract the prostate from a 3D ultrasound data set.

Description

Two approaches are employed to attack the problem. The first uses Random Walks for prostate segmentation, while the second uses a shape-based method. The details are given below.

Random Walks for prostate segmentation

Overview

The algorithm starts with a few initial seeds marking object and background. Then, to decide the category of each of the other (non-seed) pixels, a random walker is released and walks on the lattice of the image. The difficulty of walking from one pixel position to another is inversely related to the difference between the intensities of the two pixels. Given these settings, the probability of the random walker first hitting each kind of seed is calculated, and the pixel is assigned to the category with the greatest first-hit probability.

Ideally, through this process the category of each pixel in the image could be decided.

Algorithm detail

Although the philosophy of random walk segmentation is stated above, it is practically impossible to implement it that way (say, by releasing many random walkers at each pixel and estimating the first-hit probabilities with a Monte Carlo method).

An analogy between the first-hit probability of random walks and the electrical potential distribution on a circuit network was established in the middle of the last century. The resistance (conductance) in the circuit network is analogous to the crossing difficulty in the random walking scenario. Through this analogy the problem can be converted to a Dirichlet problem: [math]\triangle u = 0[/math] with boundary conditions [math]u = 1[/math] on object seeds and [math]u = 0[/math] on background seeds.

In the above equation, [math]\triangle[/math] is a Laplace-Beltrami operator in which the spatially varying resistance is taken to be the metric of the space.
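
For concreteness, here is a minimal Python sketch of this Dirichlet formulation on a 2D image lattice, with Gaussian edge weights standing in for the intensity-dependent crossing difficulty; the function name, the weighting function, and the parameter beta are illustrative assumptions rather than details taken from this page.

<pre>
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def random_walker_prob(img, obj_seeds, bg_seeds, beta=90.0):
    """Probability that a walker started at each pixel first hits an object
    seed: the potential u with u=1 on object seeds, u=0 on background seeds,
    and Lu=0 on the remaining (free) pixels. Seeds are flat pixel indices."""
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    flat = img.astype(float).ravel()

    # 4-connected lattice edges; the conductance of each edge is inversely
    # related to the intensity difference across it (Gaussian weighting).
    edges, weights = [], []
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        a, b = a.ravel(), b.ravel()
        edges.append(np.stack([a, b], axis=1))
        weights.append(np.exp(-beta * (flat[a] - flat[b]) ** 2))
    edges = np.vstack(edges)
    weights = np.concatenate(weights)

    # Graph Laplacian L = D - W built from the edge conductances.
    W = sp.coo_matrix((weights, (edges[:, 0], edges[:, 1])), shape=(n, n))
    W = W + W.T
    L = (sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()

    seeds = np.concatenate([obj_seeds, bg_seeds])
    vals = np.concatenate([np.ones(len(obj_seeds)), np.zeros(len(bg_seeds))])
    free = np.setdiff1d(np.arange(n), seeds)

    # Dirichlet problem: solve for the potential on the free (non-seed) nodes.
    u = np.zeros(n)
    u[seeds] = vals
    rhs = -L[free][:, seeds] @ vals
    u[free] = spla.spsolve(L[free][:, free].tocsc(), rhs)
    return u.reshape(h, w)

# Pixels with probability greater than 0.5 are assigned to the object.
</pre>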


Shape-based segmentation

The shape-based method constrains the resulting segmentation within the learned variance of the shape. The algorithm consists of two stages: learning the shapes and segmenting a new case.

Shape learning

The training set consists of 29 data sets, each of which is manually segmented. Not only the prostate but also the rectum is segmented; the reason for segmenting the rectum is given later. The segmented results are saved in a label map where 0 indicates background, 1 the prostate, and 2 the rectum.

To learn the shape of the prostate, we first need to register the shapes to remove the variance due to rigid motion and isotropic scaling.

There is large shape variance among prostates. For ease of description, we use an ellipsoid analogy for the prostate. Some prostates have their long axis posed horizontally, some vertically, and some are roughly spherical, with no prominent axis in any direction. So if we were to register the prostates alone, a "horizontal prostate" could rotate 90 degrees to best fit a "vertical prostate", which may cause wrong point correspondences. This brings the necessity of having another object to "hold the frame"; thus the rectum is also segmented.

Having multiple objects to register at once, we employ the label space representation, a vectorial representation for multiple objects in an image. A vector image registration, based on the L2 measure between two label space images, is then carried out, and 7 parameters (3 translations, 3 rotations, and 1 isotropic scale) are registered.
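
As an illustration, the following sketch builds a vectorial label-space image from a label map (0 background, 1 prostate, 2 rectum) and evaluates the L2 measure between two such images; the simplex-vertex mapping and the function names are illustrative assumptions, not necessarily the exact representation used in the project.

<pre>
import numpy as np

# Map each label to a vertex of an equilateral triangle so that all
# labels are equidistant in label space (an illustrative choice).
LABEL_VERTICES = {
    0: np.array([0.0, 0.0]),                    # background
    1: np.array([1.0, 0.0]),                    # prostate
    2: np.array([0.5, np.sqrt(3.0) / 2.0]),     # rectum
}

def to_label_space(label_map):
    """Turn an integer label map into a vector-valued (label space) image."""
    out = np.zeros(label_map.shape + (2,))
    for lab, v in LABEL_VERTICES.items():
        out[label_map == lab] = v
    return out

def l2_measure(label_map_a, label_map_b):
    """L2 distance between two label-space images; this is the quantity a
    7-parameter similarity registration (3 translations, 3 rotations,
    1 isotropic scale) would minimize."""
    a, b = to_label_space(label_map_a), to_label_space(label_map_b)
    return np.sqrt(np.sum((a - b) ** 2))
</pre>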

Registration eliminates the similarity transform among the shapes; next we learn the shape of the prostate. The signed distance function (SDF) of an object has been used for shape learning by many researchers. In our case, we observed that two similar shapes may differ greatly in the far field of their corresponding SDFs. However, the shapes are embedded as the zero contour of the SDF, so the large far-field difference introduces unnecessary variance.

To cure this, instead of learning the SDFs, we learn the tanh (hyperbolic tangent) of the SDF. The tanh function preserves the zero contour of the SDF while eliminating the far-field variance. To sum up, we use tanh(SDF) to represent the shape.
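
A minimal sketch of the tanh(SDF) representation, assuming a binary prostate mask as input; the distance-transform construction of the SDF and the scale parameter alpha are illustrative choices.

<pre>
import numpy as np
from scipy.ndimage import distance_transform_edt

def tanh_sdf(mask, alpha=1.0):
    """Signed distance function of a binary mask (negative inside, positive
    outside), squashed by tanh so that the far field saturates while the
    zero level set (the shape boundary) is preserved."""
    dist_outside = distance_transform_edt(mask == 0)  # distance for outside voxels
    dist_inside = distance_transform_edt(mask != 0)   # distance for inside voxels
    sdf = dist_outside - dist_inside
    return np.tanh(alpha * sdf)
</pre>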

Principal component analysis is carried out to learn the above representations of the shapes, and the eigenvectors with the highest eigenvalues are extracted as the basis of the shape space. Next, we segment a new image within this learned shape space, denoted as S.
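
The following sketch illustrates how such a shape space could be built by PCA over the registered tanh(SDF) maps; the SVD-based computation and the number of retained modes are illustrative assumptions.

<pre>
import numpy as np

def learn_shape_space(shape_maps, n_modes=5):
    """shape_maps: list of tanh(SDF) volumes, all registered and on the same
    grid. Returns the mean shape and the eigenvectors (modes) with the
    largest eigenvalues, which span the learned shape space S."""
    X = np.stack([s.ravel() for s in shape_maps], axis=0)  # (n_shapes, n_voxels)
    mean = X.mean(axis=0)
    Xc = X - mean
    # With far fewer shapes than voxels, the SVD of the centered data matrix
    # gives the principal modes directly (rows of Vt).
    _, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:n_modes]                                   # (n_modes, n_voxels)
    eigvals = svals[:n_modes] ** 2 / (len(shape_maps) - 1)
    return mean, modes, eigvals

# A new shape in S is mean + coeffs @ modes for a small coefficient vector,
# and its zero level set is the corresponding prostate surface.
</pre>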

Segment new shape

The new image to be segmented is first preprocessed to highlight the region of the prostate. Then, within the shape space S, we look for the shape that, after a similarity transformation, best fits the preprocessed image. The fitness is defined by the energy proposed by Yezzi et al. in A Statistical Approach to Snakes for Bimodal and Trimodal Imagery. Next we detail the pre-processing and the segmentation.
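
As an illustration, the sketch below evaluates a bimodal fitness term in the spirit of the Yezzi et al. energy (the separation of the mean intensities inside and outside a candidate shape) for a shape reconstructed from the learned shape space; the reconstruction, the function names, and the omitted similarity-transform handling are simplifying assumptions.

<pre>
import numpy as np

def bimodal_energy(image, inside_mask):
    """Negative half squared separation of the mean intensities inside and
    outside the candidate shape; minimizing this drives the two means apart."""
    u = image[inside_mask].mean()       # mean intensity inside the shape
    v = image[~inside_mask].mean()      # mean intensity outside the shape
    return -0.5 * (u - v) ** 2

def candidate_mask(mean_shape, modes, coeffs, grid_shape):
    """Reconstruct a tanh(SDF) map from shape-space coefficients and take its
    negative region as the candidate prostate. In the full method a
    7-parameter similarity transform of this mask would also be optimized."""
    rec = (mean_shape + coeffs @ modes).reshape(grid_shape)
    return rec < 0.0

# The segmentation searches over the shape coefficients (and the similarity
# parameters) for the candidate that minimizes bimodal_energy on the
# preprocessed image.
</pre>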

Pre-processing

Segmentation of the pre-processed image

Key Investigators

  • Georgia Tech Algorithms: Yi Gao, Allen Tannenbaum

Publications