Integration of stereo video into Slicer3

Revision as of 11:47, 26 June 2009


Key Investigators

  • Robarts Research Institute / University of Western Ontario: Mehdi Esteghamatian


Objective

The objective of this study is to grab and visualize video images in Slicer3 as soon as they are acquired. For my project, the video source is a laparoscope or an ultrasound scanner; in general, however, it can be any modality capable of streaming video. Specifically, I plan to integrate laparoscope images and intra-operative ultrasound with a pre-operative MR image. Moreover, in order to present the video in the correct position with respect to the pre-operative MR, we need to track the laparoscope camera and/or the ultrasound transducer. Therefore, camera calibration and ultrasound calibration should be added to Slicer in the long run.


Approach, Plan

Real-time video grabbing and visualization have been implemented previously for ultrasound in the Atamai viewer by my colleague Danielle Pace. This time, however, I am trying to do the same in Slicer3. So far, I have studied two possible approaches to video grabbing in Slicer3. The first is to use the IGSTK VideoImager classes. However, that code was developed only recently and is currently under review by Andinet Enquobahrie, who believes it is not yet mature enough to be used in Slicer3.

The second alternative is to use vtkVideoSource and extend it for the target modality. For instance, vtkMILVideoSource provides an interface to Matrox Meteor, MeteorII and Corona video digitizers through the Matrox Imaging Library, and vtkWin32VideoSource grabs frames or streaming video from a Video for Windows compatible device on the Win32 platform.
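Whichever subclass is used, vtkVideoSource-style grabbers share one pattern: a background thread copies incoming frames into a ring buffer while the application thread reads only the most recent one, so display never blocks acquisition. A minimal stand-in for that pattern in plain Python (no VTK; all names here are illustrative, not Slicer or VTK API):

```python
import threading
import time
from collections import deque


class ThreadedFrameGrabber:
    """Toy model of a vtkVideoSource-style grabber: a background thread
    fills a fixed-size ring buffer while the caller reads the newest frame."""

    def __init__(self, frame_source, buffer_size=30):
        self._frame_source = frame_source      # callable returning one frame
        self._buffer = deque(maxlen=buffer_size)  # ring buffer of frames
        self._lock = threading.Lock()
        self._running = False
        self._thread = None

    def start(self):
        """Begin grabbing on a separate thread (like Record in VTK)."""
        self._running = True
        self._thread = threading.Thread(target=self._grab_loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._running = False
        if self._thread is not None:
            self._thread.join()

    def _grab_loop(self):
        while self._running:
            frame = self._frame_source()
            with self._lock:
                self._buffer.append(frame)     # oldest frame drops off
            time.sleep(0.001)                  # simulated grab interval

    def latest_frame(self):
        """Return the most recently grabbed frame, or None if empty."""
        with self._lock:
            return self._buffer[-1] if self._buffer else None


if __name__ == "__main__":
    grabber = ThreadedFrameGrabber(lambda: time.time())
    grabber.start()
    time.sleep(0.05)
    grabber.stop()
    print("got a frame:", grabber.latest_frame() is not None)
```

The ring buffer decouples the grab rate from the render rate: the renderer can poll `latest_frame()` at whatever frame rate the GUI sustains.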

I started with a simple GUI that shows a 3D volume in the 3D view of Slicer3. I worked through the 'Gradient Anisotropic Diffusion' module and modified the method associated with its 'Apply' button, reusing the image reading and display code from the 'Volumes' module. However, the time this method takes to visualize an image is too long to achieve an acceptable frame rate.
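One likely reason for the poor frame rate is that the Volumes-module path creates and wires up a fresh volume for every frame. A common remedy is to allocate the image buffer once and overwrite its pixels in place each frame, so only a "modified" notification reaches the renderer. A toy illustration of the buffer-reuse idea (plain Python, no VTK; the helper name is hypothetical):

```python
def update_in_place(buffer, frame_bytes):
    """Overwrite a pre-allocated frame buffer instead of creating a new one.

    Reusing one buffer keeps any downstream reference (e.g. a texture or
    image actor bound to this memory) valid across frames."""
    buffer[:] = frame_bytes
    return buffer


if __name__ == "__main__":
    # allocate once at startup, sized for one 4x4 grayscale test frame
    persistent = bytearray(16)

    # each new frame is copied into the same object
    frame = bytes([7] * 16)
    result = update_in_place(persistent, frame)

    print(result is persistent)      # same object: no reallocation
    print(bytes(result) == frame)    # pixel data was updated
```

In Slicer terms, this corresponds to updating the pixel data of one existing node rather than re-running the full load-and-display path per frame.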

To reduce the visualization time, Steve pointed me to the vtkSlicerSliceLogic::CreateSliceModel method. I read through the code but have not understood it in depth yet.

I also talked with Alexander Yarmarkovich. He has implemented a module that communicates with a tracking machine over network sockets, using OpenIGTLink to transfer the tracking data, and uses this information to display the tracked instrument in the correct position in the 3D view of Slicer3. His code visualizes the instrument quickly by turning off unnecessary events during the visualization step.
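For reference, OpenIGTLink frames each message with a fixed binary header followed by a typed body; a TRANSFORM message carries a 3x4 matrix of 32-bit floats. The sketch below packs and unpacks such a message with Python's struct module. The field layout is modeled on the version 1 header (58 bytes, big-endian) but should be checked against the OpenIGTLink specification before use; the CRC field is left at zero here rather than computed.

```python
import struct

# Header layout modeled on the OpenIGTLink v1 header (big-endian):
# uint16 version, char[12] message type, char[20] device name,
# uint64 timestamp, uint64 body size, uint64 crc  -> 58 bytes total
HEADER_FMT = ">H12s20sQQQ"
HEADER_SIZE = struct.calcsize(HEADER_FMT)


def pack_transform(device_name, matrix12):
    """Pack a 3x4 tracking matrix (12 floats, row-major) as a
    TRANSFORM-style message. CRC is left as 0 for simplicity."""
    body = struct.pack(">12f", *matrix12)
    header = struct.pack(HEADER_FMT, 1, b"TRANSFORM",
                         device_name.encode("ascii"), 0, len(body), 0)
    return header + body


def unpack_transform(message):
    """Split a message back into (device name, 12-float matrix)."""
    version, mtype, name, ts, body_size, crc = struct.unpack(
        HEADER_FMT, message[:HEADER_SIZE])
    if mtype.rstrip(b"\x00") != b"TRANSFORM":
        raise ValueError("not a TRANSFORM message")
    floats = struct.unpack(">12f", message[HEADER_SIZE:HEADER_SIZE + body_size])
    return name.rstrip(b"\x00").decode("ascii"), list(floats)


if __name__ == "__main__":
    matrix = [1.0, 0.0, 0.0, 5.0,
              0.0, 1.0, 0.0, 10.0,
              0.0, 0.0, 1.0, 15.0]
    msg = pack_transform("Tracker", matrix)
    name, out = unpack_transform(msg)
    print(name, out == matrix)
```

On the receiving side, a module would read 58 header bytes from the socket, then `body_size` more bytes, and apply the resulting matrix to the instrument's transform node.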


Progress

I have studied everything but implemented nothing. :-(