2011 Winter Project Week: Breakout Multi-Image Engineering

Agenda breakout session: Multi-Image Engineering in Slicer

Wednesday 8-10am

Session Leaders: Jim Miller, Steve Pieper, Alex Yarmakovich, Junichi Tokuda, Demian Wasserman

Background

Increasingly, we have data in Slicer that are "multi" along some dimension. Examples include:

  • multi-channel (T1, T2, FLAIR, dual echo, etc.)
  • DWI
  • RGB and other multi-channel images
  • time series

Current State

We have partial support for handling such multi-dimensional volumes, mostly in the DWI infrastructure:

  • DICOM-to-NRRD conversion, plus the slider in the Volumes module that lets us look at any of the individual volumes inside a DWI (see the sketch after this list).
  • Junichi's 4D MRML node takes a different approach but ultimately also allows viewing one time point at a time through a slider.
  • BRAINSFit can register such multi data sets to each other, and GTRACT has a first solution that is engineered for DWI.
  • Compare view can show multi data in compare viewers. When we have only a few volumes, we can display all of them; with a large number of volumes, we can display only a few at a time.
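
The slider behavior described above comes down to extracting a single component from a multi-component image. Below is a minimal sketch of that idea in plain VTK (assuming the VTK 6+ Python bindings); it is not the actual Volumes-module code, and the helper functions are invented for illustration.

```python
import vtk

def make_multi_component_image(num_components=7, dims=(16, 16, 16)):
    """Create a dummy multi-component vtkImageData standing in for a DWI."""
    image = vtk.vtkImageData()
    image.SetDimensions(*dims)
    image.AllocateScalars(vtk.VTK_SHORT, num_components)  # one component per gradient/time point
    return image

def extract_volume(image, index):
    """Return a single-component image for the selected gradient/time point."""
    extract = vtk.vtkImageExtractComponents()
    extract.SetInputData(image)
    extract.SetComponents(index)   # the value a slider widget would provide
    extract.Update()
    return extract.GetOutput()

dwi = make_multi_component_image()
baseline = extract_volume(dwi, 0)                 # e.g. the b=0 volume
print(baseline.GetNumberOfScalarComponents())     # -> 1
```

A Slicer module would presumably feed the extracted component into a scalar volume node for display rather than printing its component count.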

Multi volumes can be processed in specialized ways. Existing examples are listed below; a small sketch of one such parametric fit follows the list:

  • Parametric analysis:
    • DTI estimation from DWI
    • Tofts parameter estimation from DCE
  • Segmentation:
    • EM segmentation from multi-channel morphology data
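
As a concrete illustration of the parametric-analysis case, here is a minimal numpy sketch of a per-voxel log-linear diffusion tensor fit from DWI signals, assuming the single-shell model S_i = S0 * exp(-b * g_i^T D g_i). It leaves out everything a real DTI estimation module has to handle (masking, constraints, noise, multiple b-values).

```python
import numpy as np

def fit_tensor(signals, s0, bvalue, gradients):
    """Log-linear least-squares fit of a 3x3 diffusion tensor at one voxel.

    signals: (N,) DWI values; gradients: (N, 3) unit gradient directions."""
    g = np.asarray(gradients, dtype=float)
    # Design matrix for the 6 unique tensor elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    B = bvalue * np.column_stack([
        g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])
    y = -np.log(np.asarray(signals, dtype=float) / s0)  # -ln(S_i/S0) = b * g_i^T D g_i
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])
```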

The Need

All of the current examples are special cases. If we could generalize them into a single architecture for multi data, there would be significant potential for cross-benefits.

I/O

  • We should have a single module to organize the data. DICOM-to-NRRD conversion is a good start, but we also need to handle separate T1 and T2 acquisitions, as well as non-DICOM data. Perhaps something like: load all data into Slicer, associate them inside Slicer in a special module, and write them out as a single NRRD file (a rough sketch follows below).
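
A rough sketch of the "associate in a special module and write a single NRRD file" idea, assuming the volumes are already co-registered and resampled to the same grid and assuming the pynrrd package for writing (a real module would work with MRML volume nodes instead of raw arrays). The MultiVolume.ChannelNames key is a hypothetical custom field used only for illustration.

```python
import numpy as np
import nrrd  # pynrrd

def bundle_channels(volumes, channel_names, out_path="multichannel.nrrd"):
    """Stack same-shape 3D arrays (e.g. T1, T2, FLAIR) along a new channel axis and save."""
    data = np.stack([np.asarray(v) for v in volumes], axis=0)
    header = {
        # First axis enumerates channels; the remaining three are spatial.
        "kinds": ["list", "domain", "domain", "domain"],
        # Hypothetical custom key recording which acquisition each channel came from.
        "MultiVolume.ChannelNames": ",".join(channel_names),
    }
    nrrd.write(out_path, data, header)
    return out_path

# Dummy arrays standing in for separately acquired, already-registered T1 and T2 volumes:
t1 = np.zeros((64, 64, 32), dtype=np.int16)
t2 = np.ones((64, 64, 32), dtype=np.int16)
bundle_channels([t1, t2], ["T1", "T2"])
```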

Display

  • We should create a single visualization infrastructure to handle multi data: compare viewers, RGB channels, time-series movies, and equivalent slice viewers and 3D viewers (a toy compare-view sketch follows below).
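
To make the compare-view point concrete, here is a toy matplotlib sketch that shows the same slice from several volumes side by side, displaying all of them when there are few and only a capped number when there are many. The real infrastructure would of course use Slicer's slice and 3D viewers rather than matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_view(multi, slice_index, max_viewers=4):
    """multi: (C, X, Y, Z) array; show one axial slice of up to max_viewers channels."""
    num = min(multi.shape[0], max_viewers)   # with many volumes, show only a subset
    fig, axes = plt.subplots(1, num, figsize=(3 * num, 3))
    for i, ax in enumerate(np.atleast_1d(axes)):
        ax.imshow(multi[i, :, :, slice_index].T, cmap="gray", origin="lower")
        ax.set_title("volume %d" % i)
        ax.axis("off")
    plt.show()

compare_view(np.random.rand(6, 64, 64, 32), slice_index=16)
```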

Processing

  • A common API for processing: EM segmentation, pharmacokinetic models, DWI filtering, and tensor estimation should all plug into the data in the same way (one possible shape of such an API is sketched below).
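
One possible shape of such a common API, sketched as hypothetical Python classes (none of these names exist in Slicer; they only illustrate the idea): every algorithm consumes the same multi-volume object carrying the frames plus per-frame metadata, and returns a derived map.

```python
from abc import ABC, abstractmethod
import numpy as np

class MultiVolume:
    """A 4D array plus per-frame metadata (b-values, echo times, time stamps, ...)."""
    def __init__(self, frames, frame_metadata):
        self.frames = np.asarray(frames)       # shape (N, X, Y, Z)
        self.frame_metadata = frame_metadata   # list of dicts, one entry per frame

class MultiVolumeProcessor(ABC):
    """Common entry point that EM segmentation, pharmacokinetic modeling,
    DWI filtering, tensor estimation, etc. would all implement."""
    @abstractmethod
    def process(self, multivolume):
        """Return a derived volume (label map, Ktrans map, FA map, ...)."""

class MeanOverFrames(MultiVolumeProcessor):
    """Trivial example processor: voxel-wise mean across all frames."""
    def process(self, multivolume):
        return multivolume.frames.mean(axis=0)
```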