[[Event:2011_Registration_Retreat| Back to Registration Brainstorming 2011]]
==Some relevant links==
* Survey of the available data from NCI: https://wiki.nci.nih.gov/display/CIP/CIP+Survey+of+Biomedical+Imaging+Archives
* FDA guidance RFC on getting a drug development device through the regulatory approval process: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM230597.pdf
+ | |||
+ | =Wednesday registration topics= | ||
+ | |||
+ | ==Multi-modal registration (A. Fedorov)== | ||
The task of multi-parametric MRI prostate registration is an example of a problem where clinical case numbers are low, subject positioning is highly variable, and image resolution is also quite low. One of the biggest challenges at present is the lack of a validation mechanism. What kinds of measures and techniques could we introduce to quantify performance on such a complex problem? (A minimal sketch of two candidate measures appears below.)
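
As a starting point for that discussion, here is a minimal sketch of two commonly used surrogate measures: target registration error (TRE) over expert-placed landmark pairs, and Dice overlap between propagated and reference segmentations. The function names, array shapes, and toy values are illustrative assumptions, not an existing pipeline.

<pre>
# Illustrative validation measures (assumed inputs: corresponding landmarks
# as (N, 3) arrays in mm, segmentations as boolean masks on a shared grid).
import numpy as np

def target_registration_error(fixed_pts, warped_moving_pts):
    """Per-landmark Euclidean distance after registration (mm)."""
    return np.linalg.norm(fixed_pts - warped_moving_pts, axis=1)

def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two boolean masks (1.0 = perfect overlap)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Toy example:
fixed_pts = np.array([[10.0, 20.0, 30.0], [12.0, 25.0, 28.0]])
warped_pts = fixed_pts + np.array([[0.5, -0.3, 0.2], [1.0, 0.0, -0.4]])
tre = target_registration_error(fixed_pts, warped_pts)
print("TRE per landmark (mm):", tre, " mean:", tre.mean())
</pre>

With so few cases, reporting the per-case TRE distribution rather than a single pooled mean is probably the more honest summary.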
+ | |||
+ | Diverging from this discussion a bit, what guidelines / principles should standard validation pipelines all follow? | ||
+ | |||
* Could multi-center studies be helpful in non-brain-centered image analysis? How could we catalyze their collection?
* Should the final goal be deployment in an FDA-approved system?
* We should probably separate scenarios where working on thousands of data sets is feasible from those where a solution must be offered for unique problems and challenging interventions, where large numbers are not available.
* Should we also work on a "proper experimental design" paper so that data collection and collaboration with the clinical side would go more smoothly?
* http://nipy.sourceforge.net/ (a wrapper around tools to test and compare registration techniques; see the harness sketch after this list)
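
On the last point: a shared harness that runs several candidate techniques over the same cases and tabulates one agreed-upon measure is one concrete way to make comparisons routine. The sketch below is hypothetical; the registration callables and the evaluate metric are placeholders for whatever implementations a site actually wraps, not a real nipy API.

<pre>
# Hypothetical comparison harness; all callables are placeholders.
def run_comparison(cases, techniques, evaluate):
    """cases: list of (fixed, moving, reference) tuples.
    techniques: dict mapping name -> register(fixed, moving) -> transform.
    evaluate: (transform, reference) -> float (e.g. mean TRE in mm)."""
    results = {}
    for name, register in techniques.items():
        scores = []
        for fixed, moving, reference in cases:
            transform = register(fixed, moving)
            scores.append(evaluate(transform, reference))
        results[name] = scores
    return results
</pre>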
+ | |||
+ | ==Poly-affine / multi-affine registration (CF Westin)== | ||
+ | Why hasn't the poly-/multi-affine techniques caught up more over the past ~10 years? It seems we mostly focus on the two extremes: rigid/affine and dense / diffeomorphic warps. | ||
+ | |||
+ | * direct displacement averaging | ||
+ | * infinitessimal displacement averaging | ||
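
A minimal numerical sketch of the distinction, assuming two local affine transforms given as 4x4 homogeneous matrices with scalar weights at a point: direct averaging blends the displacements (simple, but the fused map need not be invertible), while the infinitesimal / log-Euclidean variant averages matrix logarithms and exponentiates, in the spirit of the poly-affine framework.

<pre>
import numpy as np
from scipy.linalg import expm, logm

M1 = np.eye(4)
M1[:3, 3] = [5.0, 0.0, 0.0]           # local transform 1: translation in x

c, s = np.cos(0.3), np.sin(0.3)
M2 = np.eye(4)
M2[:2, :2] = [[c, -s], [s, c]]        # local transform 2: rotation about z

x = np.array([10.0, 5.0, 0.0, 1.0])   # point in homogeneous coordinates
w1, w2 = 0.6, 0.4                     # spatial weights at x, summing to 1

# (1) Direct displacement averaging: blend displacements u_i = M_i x - x.
x_direct = x + w1 * (M1 @ x - x) + w2 * (M2 @ x - x)

# (2) Infinitesimal (log-Euclidean) averaging: average the matrix logs and
# exponentiate; the fused transform remains invertible.
L = np.real(w1 * logm(M1) + w2 * logm(M2))
x_log = expm(L) @ x

print("direct:        ", x_direct[:3])
print("log-Euclidean: ", x_log[:3])
</pre>

In an actual poly-affine registration the weights w_i(x) vary smoothly over space, which is what yields a transform that is locally affine yet globally smooth.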
+ | |||
+ | ==Wrap up discussions== | ||
+ | |||
+ | === White paper details === | ||
+ | |||
# Target audience
## NIH
## clinicians, RAs, end users
## tech developers looking for gaps in development (PhD candidates, junior scientists, ...)

# Contents
## What works at present
## What should be funded
## What open questions are out there
+ | |||
+ | Can we advance new and existing techniques through setting up a framework? | ||
+ | What to do about non-brain areas? | ||
+ | Appropriate inappropriate users | ||
+ | |||
+ | # Paper details and contents | ||
+ | ## define "registration" and its scope | ||
+ | ## educate clinicians about "it is not magic" | ||
+ | ## describe a single or a couple of successful case studies (ADNI, OASIS, MNI pediatric, AFIB, cardiac JHU, ...) so that investigators can plan ahead of time and data become more usable and consistent | ||
+ | ## more annotated data sets are needed! | ||
+ | ## how quickly could we get new data sets if an announcement went out? | ||
+ | ## list / reference of available data sets (indicated above) | ||
+ | ## require data sets being accompanied by problem definition and data parameters | ||
+ | ## promote the idea of papers being accompanied by data and code; they seem to be more useful and more heavily cited / downloaded currently. We should set an example doing that more, but it difficult to require it from all conf / journal submissions because of IRB issues. | ||
+ | ## identify environments in which comparing results and techniques is easier (eg.: NIREP http://www.nirep.org/index.php?id=22) | ||
+ | ## promote the aim for graceful failure and detailed feedback to the user about performance | ||
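
One possible shape for "graceful failure": a registration call returns not just a transform but a structured report with convergence diagnostics, so a clinician-facing tool can warn instead of silently presenting a bad alignment. The sketch below is illustrative; the fields and thresholds are assumptions, not an existing tool's API.

<pre>
# Hypothetical result object for graceful failure / user feedback.
from dataclasses import dataclass, field

@dataclass
class RegistrationReport:
    converged: bool
    final_similarity: float          # assumed convention: higher is better
    iterations: int
    warnings: list = field(default_factory=list)

def check_result(report, min_similarity=0.5, max_iterations=500):
    """Attach human-readable warnings instead of failing silently."""
    if not report.converged:
        report.warnings.append("optimizer did not converge; review alignment manually")
    if report.iterations >= max_iterations:
        report.warnings.append("iteration budget exhausted")
    if report.final_similarity < min_similarity:
        report.warnings.append("similarity below the expected range for this protocol")
    return report
</pre>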
+ | |||
+ | # Misc | ||
+ | ## should we include extensive terminology in the paper? | ||
+ | ## could more extensive clinical and engineering partnership to move things forward? | ||
+ | ## we need to establish "bridges" between the two collaborating sides | ||
+ | ## "cost centers" could be an example workable model | ||
+ | ## new challenges: new acquisition techniques, longitudinal data sets | ||
+ | ## it is important to also describe when and how things fail | ||
+ | ## "computational steering": there is a need for interactive rigid and non-rigid registration tools; visualization is very important besides registration technique; can we borrow examples from the gaming industry? (can we validate certain results / outcomes with real-time feedback?) | ||
+ | |||
+ | ===What is next? === | ||
+ | People sign up for working on sections; smaller group will come up with an introduction and conclusion draft |