Configurable fiducial-based device to image registration
From NAMIC Wiki
Latest revision as of 17:11, 10 July 2017
Key Investigators
- Junichi Tokuda, Brigham and Women's Hospital
- Nobuhiko Hata, Brigham and Women's Hospital
Project Description
Objective
- Background
- Any device for guiding needle insertion under image guidance has to be registered to the image coordinate system. Fiducial markers are widely used to localize the mechanical structure of the device on the image and to find the spatial correlation between the physical space and the image space. However, detection of the fiducial markers often requires some user interaction, e.g. pointing at the fiducial markers on the image, or the use of an active tracking method rather than a simple marker that creates a bright spot on the image.
- Objective
- The objective of this project is to develop an image processing method for general-purpose fiducial detection. Specifically, the method can:
- automatically detect the fiducial markers attached to the mechanical structure without user interaction
- automatically find the correspondence between the points detected on the image and the points in the physical space (defined as part of the mechanical design)
- compute the linear transformation that defines the location and orientation of the device in the image coordinate system
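The last step, computing the linear transformation from matched point pairs, can be sketched with the standard SVD-based least-squares fit (the Kabsch/Horn method). This is an illustrative sketch, not the project's actual implementation, and the function name is hypothetical:

```python
# Sketch: estimate the rigid transformation (R, t) that maps marker
# coordinates from the mechanical design onto the marker positions
# detected in the image, minimizing the sum of squared distances.
import numpy as np

def fit_rigid_transform(design_pts, image_pts):
    """Return R (3x3) and t (3,) minimizing sum ||R @ p + t - q||^2."""
    P = np.asarray(design_pts, float)
    Q = np.asarray(image_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given noise-free correspondences, the fit recovers the true device pose exactly; with noisy marker detections it returns the least-squares pose.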
Approach, Plan
- Approach
- We will use tube-shaped fiducial markers that can be automatically segmented by a Hessian filter.
- Deliverable
- CLI module.
- The source code is available from: https://github.com/SNRLab/LineMarkerRegistration
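The Hessian-based segmentation of tube-shaped markers can be illustrated with the general eigenvalue-of-Hessian idea: for a bright tube, one eigenvalue is near zero (along the tube axis) while the other two are strongly negative (across it). The following is a minimal NumPy/SciPy sketch of that idea, not the module's actual code, and the function name and score formula are assumptions:

```python
# Sketch: per-voxel "tubeness" score from the eigenvalues of the
# Gaussian-smoothed image Hessian (the principle behind Hessian-based
# tube/vessel filters).
import numpy as np
from scipy import ndimage

def tube_response(volume, sigma=2.0):
    """Return a per-voxel tubeness score for a 3D array."""
    v = ndimage.gaussian_filter(np.asarray(volume, float), sigma)
    # Hessian from repeated finite differences; H[..., a, b] = d2v/dxa dxb.
    grads = np.gradient(v)
    H = np.stack([np.stack(np.gradient(g), axis=-1) for g in grads], axis=-1)
    H = 0.5 * (H + np.swapaxes(H, -1, -2))   # symmetrize numerically
    eigs = np.linalg.eigvalsh(H)
    # Sort eigenvalues by magnitude: |l1| <= |l2| <= |l3|.
    order = np.argsort(np.abs(eigs), axis=-1)
    s = np.take_along_axis(eigs, order, axis=-1)
    l1, l2, l3 = s[..., 0], s[..., 1], s[..., 2]
    # Bright tube: l2, l3 strongly negative, |l1| small.
    return np.where((l2 < 0) & (l3 < 0),
                    np.abs(l2 * l3) / (1.0 + np.abs(l1)), 0.0)
```

The response peaks on bright line-like structures and stays near zero in flat background, which is what makes the markers segmentable without user interaction.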
Progress
- Implemented the following features:
- Line-to-line distance metric to match the model to the detected markers.
- CSV parser to read marker configuration file.
- Tested with Z-frame data from the MRI-guided prostate biopsy program at Brigham and Women's Hospital.
- Slicer Documentation page
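A line-to-line distance such as the metric listed above can be sketched as the minimum distance between two 3D lines, each given as a point and a direction. This is an illustrative Python version under assumed names; the actual module is a Slicer CLI and may define its metric differently:

```python
# Sketch: minimum distance between two infinite lines in 3D, the kind of
# quantity a line-to-line metric can minimize when matching the marker
# model to the detected tube markers.
import numpy as np

def line_to_line_distance(p1, d1, p2, d2):
    """Minimum distance between lines p1 + s*d1 and p2 + u*d2."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w = np.asarray(p2, float) - np.asarray(p1, float)
    n = np.cross(d1, d2)
    n_norm = np.linalg.norm(n)
    if n_norm < 1e-12:
        # Parallel lines: distance from p2 to the first line.
        return np.linalg.norm(w - np.dot(w, d1) * d1)
    # Skew or intersecting lines: project the offset onto the common normal.
    return abs(np.dot(w, n)) / n_norm
```

Summing this distance over all model/detected line pairs gives a registration cost that is zero exactly when the marker model is aligned with the detected markers.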