3D Ultrasound Module in Slicer3
Key Investigators
- SPL: Junichi Tokuda, Haiying Liu, Benjamin Grauer, Noby Hata
- SJTU, Shanghai: Ziying Jiang, Qingfeng Jin, Tingting Xi, Shuqin Ding, Yan Sun, Lixu Gu
- Queen's: Jonathan Boisvert, David Gobbi, Siddharth Vikal, Purang Abolmaesumi
- Robarts: Danielle Pace, Terry Peters
- Present at the NA-MIC programming week: Junichi, Haiying, Noby, David, Siddharth, Danielle
Objective
We are developing a 3D ultrasound module for Slicer3, with the goal of a module that can:
- reconstruct 3D ultrasound volumes from multiple tracked 2D images
- reconstruct 4D ultrasound volumes from multiple tracked 2D images with ECG-gating
- reconstruct panoramic 3D ultrasound volumes from multiple tracked 3D images, on a wide variety of ultrasound scanners
Approach, Plan
3D ultrasound volumes can be created from 2D ultrasound images by acquiring multiple 2D images while tracking the probe. The tracking information is used to insert the images in the correct position and orientation within the 3D volume. A time series of 3D ultrasound volumes (4D US) can be created by incorporating ECG-gating: the 2D ultrasound images are inserted into the correct volume using the ECG information. Finally, panoramic 3D ultrasound volumes can also be built up from smaller 3D ultrasound volumes by simply tracking the probe.
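To illustrate the pasting step described above, here is a minimal, hypothetical C++ sketch (not the actual Slicer/VTK reconstruction code): each pixel of a tracked 2D frame is mapped through an assumed 4x4 image-to-volume matrix, combining the tracker pose, the probe calibration, and the volume axes, and written into the nearest voxel. A real reconstructor would also interpolate and fill holes between sparsely spaced frames.

// Minimal sketch, assuming "imageToVolume" maps pixel indices to voxel indices.
#include <array>
#include <cstdint>
#include <vector>

using Matrix4x4 = std::array<std::array<double, 4>, 4>;

struct Volume3D {
    int nx, ny, nz;
    std::vector<uint8_t> voxels;          // nx*ny*nz scalars, x fastest
    Volume3D(int x, int y, int z) : nx(x), ny(y), nz(z), voxels(x * y * z, 0) {}
    void set(int i, int j, int k, uint8_t v) {
        if (i >= 0 && i < nx && j >= 0 && j < ny && k >= 0 && k < nz)
            voxels[(k * ny + j) * nx + i] = v;
    }
};

// Paste one 2D frame (width x height pixels) into the volume.
void insertFrame(const uint8_t* frame, int width, int height,
                 const Matrix4x4& imageToVolume, Volume3D& vol)
{
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            // Homogeneous pixel coordinate (u, v, 0, 1) -> voxel coordinate.
            double p[4] = {double(u), double(v), 0.0, 1.0};
            double q[3];
            for (int r = 0; r < 3; ++r)
                q[r] = imageToVolume[r][0] * p[0] + imageToVolume[r][1] * p[1]
                     + imageToVolume[r][2] * p[2] + imageToVolume[r][3];
            // Nearest-neighbor splat into the output volume.
            vol.set(int(q[0] + 0.5), int(q[1] + 0.5), int(q[2] + 0.5),
                    frame[v * width + u]);
        }
    }
}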
During the project week:
- The Queen's group will interface their SonixRP ultrasound acquisition pipeline to Slicer3 via OpenIGTLink, with the help of Junichi and Noby (a rough sketch of such a connection follows this list).
- Danielle will work with David to fix the remaining bugs in the freehand 3D reconstruction code and to add the ECG-gating functionality.
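The details of the SonixRP connection are still to be worked out; the following is only a rough sketch based on the standard OpenIGTLink client examples. The frame size, pixel spacing, device name, and port are placeholders, and exact class or method names may differ between OpenIGTLink versions.

// Hypothetical sketch of the planned SonixRP -> Slicer3 connection.
#include <cstring>
#include "igtlClientSocket.h"
#include "igtlImageMessage.h"
#include "igtlMath.h"

// Send one tracked B-mode frame to a Slicer3 OpenIGTLink server.
void sendTrackedFrame(igtl::ClientSocket::Pointer& socket,
                      const unsigned char* pixels, int width, int height,
                      igtl::Matrix4x4& probePose)
{
    igtl::ImageMessage::Pointer msg = igtl::ImageMessage::New();
    msg->SetDeviceName("SonixRP");                       // placeholder device name
    msg->SetDimensions(width, height, 1);                // single 2D slice
    msg->SetSpacing(0.2f, 0.2f, 1.0f);                   // placeholder spacing (mm)
    msg->SetScalarType(igtl::ImageMessage::TYPE_UINT8);
    msg->AllocateScalars();
    std::memcpy(msg->GetScalarPointer(), pixels, width * height);
    msg->SetMatrix(probePose);                           // tracker transform
    msg->Pack();
    socket->Send(msg->GetPackPointer(), msg->GetPackSize());
}

int main()
{
    igtl::ClientSocket::Pointer socket = igtl::ClientSocket::New();
    if (socket->ConnectToServer("localhost", 18944) != 0)   // default IGTLink port
        return 1;
    // ... grab frames and poses from the scanner, then call sendTrackedFrame() per frame.
    return 0;
}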
Progress
- For accurate ECG-gating, the patient's heart rate must not change during a single cardiac cycle, because phase detection uses only the patient's heart rate and the time elapsed since the beginning of the cycle. We plan to ensure that the heart rate measurement is valid by "buffering" the incoming images so that they are inserted into the volume 1-2 seconds after being read from the video source. However, we discovered that the current vtkVideoSource mechanism does not allow "seeking" and "recording" at the same time.
- This week, we have been restructuring the vtkVideoSource class so that its frame buffer and frames are stored in separate classes. This restructuring will allow us to access old frames while the video source is still recording (a rough sketch of this buffering and gating scheme follows this list).
- Progress this week:
  - the vtkVideoSource, vtkVideoBuffer, and vtkVideoFrame classes are complete but not yet tested
  - we outlined the mechanism for integrating cardiac gating into the current reconstruction framework, but it has not been coded yet
  - we discussed with Junichi the use of OpenIGTLink to push tracked ultrasound images to Slicer
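Since none of the gating code is written yet, the following is only a rough C++ sketch of the two ideas above; the class and function names are made up for illustration and are not the vtkVideoBuffer/vtkVideoFrame API. Frames are time-stamped into a buffer that is owned separately from the grab loop, and each buffered frame's cardiac phase is computed from the preceding R-wave and the R-R interval once that interval is fully known.

// Hypothetical sketch of buffered ECG gating (not the actual VTK classes).
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

struct TimedFrame {
    double timestamp;                 // seconds, same clock as the ECG trigger
    std::vector<uint8_t> pixels;      // copy of the 2D frame
};

// Buffer owned outside the video source: the grabber only appends, so a
// consumer can read frames that are 1-2 s old while recording continues.
class FrameBuffer {
public:
    explicit FrameBuffer(size_t capacity) : capacity_(capacity) {}
    void push(TimedFrame f) {
        frames_.push_back(std::move(f));
        if (frames_.size() > capacity_) frames_.pop_front();   // ring behavior
    }
    // Frames older than 'delay' seconds are safe to gate and insert, because
    // by then the R-R interval covering them is fully known.
    std::vector<TimedFrame> framesOlderThan(double now, double delay) const {
        std::vector<TimedFrame> out;
        for (const auto& f : frames_)
            if (now - f.timestamp >= delay) out.push_back(f);
        return out;
    }
private:
    size_t capacity_;
    std::deque<TimedFrame> frames_;
};

// Cardiac phase in [0,1): time since the last R-wave divided by the R-R
// interval containing the frame. This is the "heart rate plus time since the
// beginning of the cycle" rule; it is only valid if the rate is steady over
// the cycle, which the buffering delay lets us verify retrospectively.
double cardiacPhase(double frameTime, double lastRWave, double rrInterval) {
    return (frameTime - lastRWave) / rrInterval;
}

// Map a phase to one of nPhases output volumes (4D reconstruction).
int phaseBin(double phase, int nPhases) {
    int bin = static_cast<int>(phase * nPhases);
    return bin < nPhases ? bin : nPhases - 1;
}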
References
See the 3D ultrasound module wiki page at http://www.na-mic.org/Wiki/index.php/3D_Ultrasound_Module_in_Slicer_3