Projects:MGH-HeadAndNeck-PtSetReg

Back to Boston University Algorithms
Semi-Automatic Image Registration
Often, the difference between the failure of a fully automatic image registration approach and the success of a semi-automatic method is a small amount of user input. The goal of this work is to register two CT volumes of different patients that are related by a large deformation. The user sets two thresholds for each image: one defining a bone mask and another defining a soft-tissue (flesh) mask. This step is not time consuming, yet it dramatically simplifies the task for the automatic registration algorithm.
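
As a rough sketch of this preprocessing step, the code below thresholds a CT volume (in Hounsfield units) into a bone mask and a soft-tissue mask and converts a mask into a point set for subsequent registration. The threshold values, voxel spacing, and function names are illustrative assumptions, not the values or interfaces used in this project.

  import numpy as np

  def make_masks(ct_volume, bone_threshold=300.0, flesh_threshold=-200.0):
      """Split a CT volume (Hounsfield units) into bone and soft-tissue masks.

      The two cutoffs stand in for the values a user would pick
      interactively; they are illustrative, not project defaults.
      """
      bone_mask = ct_volume >= bone_threshold
      flesh_mask = (ct_volume >= flesh_threshold) & ~bone_mask
      return bone_mask, flesh_mask

  def mask_to_points(mask, spacing=(1.0, 1.0, 1.0)):
      """Turn a binary mask into an N x 3 point set in physical units."""
      return np.argwhere(mask) * np.asarray(spacing)

  # Toy example with a synthetic volume standing in for a real CT scan.
  fixed_ct = np.random.normal(loc=0.0, scale=500.0, size=(32, 32, 32))
  bone, flesh = make_masks(fixed_ct)
  bone_points = mask_to_points(bone, spacing=(0.98, 0.98, 2.5))  # hypothetical spacing
  print(bone_points.shape)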
Description
In this work, interactive segmentation is integrated with an active contour model, and segmentation is posed as a human-supervisory control problem. User input is tightly coupled with an automatic segmentation algorithm, leveraging the user's high-level anatomical knowledge together with the automated method's speed. Real-time visualization enables the user to quickly identify and correct the result in any sub-domain where the variational model's statistical assumptions disagree with the user's expert knowledge. The methods developed in this work are applied to magnetic resonance imaging (MRI) volumes as part of a population study of human skeletal development. Segmentation time for large bone structures is reduced approximately five-fold compared with similarly accurate manual segmentation.
Figure: Flowchart for the interactive segmentation approach. Notice the user's pivotal role in the process.
Figure: Timeline of user input into the system. Note that user input is sparse, has local effect only, and decreases in frequency and magnitude over time.
Figure: Result of the segmentation.
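
To make the coupling between sparse, local user input and the automatic update concrete, the following is a minimal sketch of a supervisory segmentation loop. The simple intensity-based region growth stands in for the active contour model used in this work, and all names, thresholds, and the scripted "correction" are illustrative assumptions.

  import numpy as np
  from scipy.ndimage import binary_dilation

  def automatic_step(image, labels, frozen, tolerance=0.15):
      """One automatic pass: grow the labeled region into neighboring pixels
      whose intensity is close to the current region mean.  A simplified
      stand-in for one iteration of the active contour update."""
      region_mean = image[labels].mean()
      candidates = binary_dilation(labels) & ~labels & ~frozen
      grow = candidates & (np.abs(image - region_mean) < tolerance)
      return labels | grow

  def apply_user_correction(labels, frozen, center, radius, value):
      """Apply a local user correction and freeze the touched pixels so the
      automatic update cannot undo the expert's decision."""
      yy, xx = np.ogrid[:labels.shape[0], :labels.shape[1]]
      disk = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
      corrected = labels.copy()
      corrected[disk] = value
      return corrected, frozen | disk

  # Supervisory loop on a toy image: the automatic update runs continuously,
  # while sparse user corrections (scripted here) intervene only locally.
  image = np.zeros((64, 64)); image[20:44, 20:44] = 1.0
  labels = np.zeros_like(image, dtype=bool); labels[30:34, 30:34] = True
  frozen = np.zeros_like(labels)
  corrections = {5: ((22, 22), 3, True)}   # iteration -> (center, radius, label)

  for it in range(30):
      labels = automatic_step(image, labels, frozen)
      if it in corrections:
          labels, frozen = apply_user_correction(labels, frozen, *corrections[it])

  print(labels.sum(), "pixels segmented")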
Current State of Work
The described algorithm is implemented in C++ and has been delivered to physicians. We have begun to analyze the data they created by segmenting the knee with our tool. Future work will incorporate a shape prior into the segmentation and improve the user interaction based on the feedback the physicians provide us.
Key Investigators
- Georgia Tech: Ivan Kolesov, Patricio Vela
- Boston University: Jehoon Lee, Allen Tannenbaum
- MGH: Gregory Sharp
Publications
In Press
I. Kolesov, P. Karasev, G. Muller, K. Chudy, J. Xerogeanes, and A. Tannenbaum. Human Supervisory Control Framework for Interactive Medical Image Segmentation. MICCAI Workshop on Computational Biomechanics for Medicine, 2011.
P. Karasev, I. Kolesov, K. Chudy, G. Muller, J. Xerogeanes, and A. Tannenbaum. Interactive MRI Segmentation with Controlled Active Vision. IEEE CDC-ECC, 2011.