Projects:ARRA:miAnnotation
Aim
Medical images often contain a wealth of information, such as anatomy and pathology, that is not explicitly accessible. One way to address this issue is via image annotation and markup. We propose to create a comprehensive framework for annotation and markup within 3D Slicer, enabling users to capture structured information easily. Furthermore, we will develop schemas for saving this information to and recovering it from XNAT, allowing queries across larger data sets of medical scans. This tool will provide clinicians with a relatively simple way to capture information latent in medical scans, and also to select micro-cohorts of medical scans for studying diseases.
Research Plan
3D Slicer currently provides only very basic technology for annotating images. This limits users in their ability to properly capture the semantic information contained in images and data sets. We propose to address this issue by expanding Slicer's markup and annotation capabilities. New features will include:
- a rich set of geometric objects for improved visual differentiation between annotations
- markers for measuring anatomical characteristics, such as the volume of an annotated region, to provide patient-specific information that is difficult to extract by visual inspection
- entry fields beyond free text, such as graphics and external data, to capture comprehensive information and to support emerging domain-specific ontologies, and
- full integration of these capabilities with the MRML tree to support scene snapshots as well as loading and saving both to disk and to XNAT (see the sketch below).
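As a rough illustration of the last feature, the sketch below shows how a single annotation entry might be written to and read back from an MRML-style XML scene file. The element and attribute names are assumptions made for this example, not Slicer's actual node schema.

```python
# Minimal sketch, not the actual MRML schema: element and attribute names below
# are illustrative assumptions chosen for this example.
import xml.etree.ElementTree as ET

def annotation_to_xml(annotation):
    """Serialize a dict describing one annotation into an XML element."""
    node = ET.Element("AnnotationNode", id=annotation["id"], label=annotation["label"])
    for name, value in annotation["tags"].items():
        ET.SubElement(node, "Tag", name=name, value=str(value))
    return node

def xml_to_annotation(node):
    """Recover the annotation dict from an XML element (tag values come back as strings)."""
    return {
        "id": node.get("id"),
        "label": node.get("label"),
        "tags": {t.get("name"): t.get("value") for t in node.findall("Tag")},
    }

# Round trip: write a scene snapshot to disk, then reload and list its annotations.
scene = ET.Element("MRML")
scene.append(annotation_to_xml({
    "id": "ann_001",
    "label": "left hippocampus",
    "tags": {"structure": "hippocampus", "volume_ml": 3.2},
}))
ET.ElementTree(scene).write("scene_snapshot.mrml")

reloaded = ET.parse("scene_snapshot.mrml").getroot()
print([xml_to_annotation(n) for n in reloaded.findall("AnnotationNode")])
```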
We will implement these features by developing two different modules. The first, the Marker Module, creates different types of markers based on current ITK technology. The user defines the appearance of a marker by specifying its color, size, and shape, such as points and 3D boxes. The user also labels each marker with tags and specifies its function, such as measuring the volume of a region.
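As a rough illustration, the following sketch groups the user-specified marker properties into a single structure. The class and field names are assumptions made for this example, not the module's actual API.

```python
# Illustrative sketch of the information a user-defined marker could carry.
# Class and field names are assumptions for exposition, not the module's API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MarkerSpec:
    shape: str                                 # e.g. "point" or "3d-box"
    color: tuple = (1.0, 0.0, 0.0)             # RGB components in [0, 1]
    size_mm: float = 5.0                       # characteristic size in millimeters
    tags: dict = field(default_factory=dict)   # semantic labels, e.g. {"region": "tumor"}
    measures: Optional[str] = None             # optional measurement role, e.g. "volume"

# A green 3D box marker tagged for a tumor region that also reports volume.
tumor_box = MarkerSpec(
    shape="3d-box",
    color=(0.0, 1.0, 0.0),
    size_mm=20.0,
    tags={"region": "tumor", "rater": "pohl"},
    measures="volume",
)
print(tumor_box)
```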
The second module, the Annotation Module, provides the interface for annotating images with these markers. Users place the markers on the image and further specify semantic information through free text, plots, and references to ontologies and internet resources. The annotations are shown in both the 3D and 2D viewers. The module also allows annotating entire scenes by linking annotations across images, as well as within an image. All annotations are stored in XNAT, a database targeted at medical imaging. The structure of the database is automatically defined by the tags of the markers, so users can query across large image data sets by searching for specific tag values.
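The sketch below illustrates the idea of tag-based querying over a collection of stored annotations. In the proposed system the records would be stored in and retrieved from XNAT; a plain in-memory list stands in here so the example runs on its own, and all field names are assumptions.

```python
# Self-contained sketch of querying annotations by tag value. In the proposed
# system the records would live in XNAT; an in-memory list stands in here.
annotations = [
    {"scan": "subj01_mri", "tags": {"region": "tumor", "volume_ml": 14.2}},
    {"scan": "subj02_mri", "tags": {"region": "hippocampus", "volume_ml": 3.1}},
    {"scan": "subj03_mri", "tags": {"region": "tumor", "volume_ml": 8.7}},
]

def query_by_tag(records, tag, value):
    """Return every annotation whose tag matches the requested value."""
    return [r for r in records if r["tags"].get(tag) == value]

# Select the micro-cohort of scans annotated as containing a tumor.
tumor_cohort = query_by_tag(annotations, "region", "tumor")
print([r["scan"] for r in tumor_cohort])
```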
Both modules are accompanied by training materials and documentation to ensure usability.
Design of Module
Key Personnel
60% Kilian Pohl
95% Yong Zhang
Progress
(reverse chronological order, most recent on top)
- 12/18/09 Held workshop to integrate AIM into 3D Slicer. Hired Yong Zhang to implement AIM in Slicer.
- 12/11/09 Summarized tools demoed at RSNA
- 12/04/09 Visited RSNA to review annotation tools by GE, Siemens, and Philips. Connected with the caBIG AIM project to see how we can make use of their data schema
- 11/29/09 Created GUI for Markup module
- 11/20/09 Designed MRML structure of Annotation and Markup modules
- 11/13/09 Organized annotation brainstorming session
- 11/06/09 Designed user interface; met with Julien Finet and Jean-Christophe Fillion-Robin from Kitware to discuss integration of Qt in 3D Slicer
- 10/30/09 Participated in Qt TCon, interviewed candidate at Almaden, and coordinated efforts with Nicole Aucoin
- 10/23/09 Organized onsite interview, got in contact with Steve Pieper to discuss next steps, and installed Slicer3
- 10/17/09 Started interviewing postdoc candidates and resolved several HR issues for hiring personnel