CTSC IGT, BWH

Back to CTSC Imaging Informatics Initiative


Mission

Mark Anderson at the Surgical Planning and Channing labs currently manages data for many investigators, pulling data from PACS into the research environment. There is interest in setting up a parallel channel by which the data are also enrolled into an XNAT database and accessed from client machines, and in comparing its ease of use with the existing infrastructure. To explore XNAT as a possible long-term informatics solution for the NCIGT project, Mark will be uploading retrospective data for a number of NCIGT efforts (and PIs):

  • NCIGT_Brain_Function (SS/AG)
    • Key Investigators:
    • Brief Description:
  • NCIGT_Tumor_Resection (HK/AG)
    • Key Investigators:
    • Brief Description:
  • NCIGT_Prostate (HE/CT)
    • Key Investigators:
    • Brief Description:
  • NCIGT_Prostate_Fully_Segmented (HE/CT)
    • Key Investigators:
    • Brief Description:
  • NCIGT_Brain_Biopsy (FT)
    • Key Investigators:
    • Brief Description:

Use-Case Goals

Step 1. Data Management

  • Anonymize, apply DICOM metadata and upload retrospective datasets; confirm appropriate organization and naming scheme via web GUI.

Step 2. Query & Retrieval

  • Make specific queries using XNAT web services
  • Download data conforming to a specific naming convention and directory structure, using XNAT web services (a sketch of both follows this list)
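
As a concrete illustration, such a query and such a download might look like the following sketch. It assumes the standard XNAT REST listing and file-download endpoints and reuses the $XNE_Srv, $ProjectID and $JSessionID conventions from the Option B workflow further down; 'S0001' and 'FirstExperiment' are placeholder names.

  # List all subjects of a project (format may be csv, xml or json)
  curl "$XNE_Srv/REST/projects/$ProjectID/subjects?format=csv" --cookie JSESSIONID=$JSessionID

  # Download all files of an experiment's scans as a single zip archive
  curl -o S0001_FirstExperiment.zip --cookie JSESSIONID=$JSessionID \
    "$XNE_Srv/REST/projects/$ProjectID/subjects/S0001/experiments/FirstExperiment/scans/ALL/files?format=zip"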

Each effort listed above will have different requirements for being able to query, retrieve, and use data collections. How the retrospective data will be used within NCIGT is briefly described below:

  • NCIGT_Brain_Function:
  • NCIGT_Tumor_Resection:
  • NCIGT_Prostate:
  • NCIGT_Prostate_Fully_Segmented:
  • NCIGT_Brain_Biopsy:

Step 3. Disseminating & Sharing

  • In addition to the NCIGT mandate to share data, each effort listed above will have different requirements for making data available to collaborating and other interested groups.

Outcome Metrics

Step 1. Data Management

Step 2. Query & Retrieval

Step 3. Dissemination & Sharing

Fundamental Requirements

Participants

  • Mark Anderson
  • Tina Kapur

Data

Workflows

Current Data Management Process

Target Data Management Process (Step 1.) Option A.

  • Create new project using web GUI
  • Manage project: Configure settings to automatically place data into the archive (no pre-archive)
  • Create a subject template
  • Create a spreadsheet conforming to subject template
  • Upload spreadsheet to create subjects
  • Run CLI Tool for batch anonymization (See here for HowTo: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)
  • Need a pointer to a script that does batch upload & applies DICOM metadata (a sketch follows this list).
  • Confirm data is uploaded & represented properly with web GUI
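
One hypothetical form of the batch anonymization and upload script mentioned above is sketched below. It assumes the DicomRemap command-line tool from the DicomBrowser package (check the linked HowTo for the authoritative -d/-o usage) and reuses the curl/REST conventions of Option B for the upload; $RawDir, $AnonDir, $XNE_Srv, $XNE_UserName, $XNE_Password, $ProjectID and the subject/experiment names are placeholders.

  # Batch-anonymize every DICOM file under $RawDir into $AnonDir,
  # applying the mappings defined in anonymize.das (see the HowTo above)
  DicomRemap -d anonymize.das -o "$AnonDir" "$RawDir"

  # Authenticate once and keep the session ID for the upload loop
  JSessionID=$(curl -s -X POST "$XNE_Srv/REST/JSESSION" -u "$XNE_UserName:$XNE_Password")

  # Hypothetical batch upload: PUT each anonymized file into a DICOM resource of an
  # already-created subject/experiment (endpoint layout assumed from XNAT REST conventions)
  for f in "$AnonDir"/*.dcm; do
    curl -X PUT --data-binary @"$f" --cookie JSESSIONID=$JSessionID \
      "$XNE_Srv/REST/projects/$ProjectID/subjects/S0001/experiments/FirstExperiment/scans/1/resources/DICOM/files/$(basename "$f")?inbody=true"
  done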

Target Data Management Process (Step 1.) Option B. (web services API) CURRENTLY BEING TESTED!

  • Run CLI Tool for batch anonymization (See here for HowTo: http://nrg.wustl.edu/projects/DICOM/DicomBrowser/batch-anon.html)
  • Create new project using web GUI
  • Manage project: Configure settings to automatically place data into the archive (no pre-archive)
  • Write a script that uses curl or XNATRestClient to:
    • Authenticate with server and create new session
curl -X POST $XNE_Srv/REST/JSESSION -u $XNE_UserName:$XNE_Password
    • The response from this request is a session ID that should be stored in $JSessionID; that ID can then be passed with all subsequent requests so that authentication does not have to be repeated for each transaction (one way to capture it is shown in the sketch after this list).
    • Create subject
curl -X PUT $XNE_Srv/REST/projects/$ProjectID/subjects/S0001 --cookie JSESSIONID=$JSessionID (This would create a subject called 'S0001' within the project $ProjectID)
    • Create an experiment (a collection of related scans; scans may be MR, CT, PET, etc.)
curl -X PUT $XNE_Srv/REST/projects/$ProjectID/subjects/S0001/experiments/FirstExperiment --cookie JSESSIONID=$JSessionID (This would create an experiment called 'FirstExperiment' within the project $ProjectID for subject S0001)
    • Upload scan data (one possible form is shown in the sketch after this list)
  • Confirm data is uploaded & represented properly with web GUI
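
Two details the list above leaves open are how to capture the session ID into $JSessionID and what the scan-data upload call might look like. A minimal sketch follows; the resources/DICOM/files endpoint and the inbody=true parameter are assumptions based on the XNAT REST API and should be verified against the XNAT documentation.

  # Store the session ID returned by the JSESSION call for reuse in later requests
  JSessionID=$(curl -s -X POST "$XNE_Srv/REST/JSESSION" -u "$XNE_UserName:$XNE_Password")

  # Upload one anonymized DICOM file into a DICOM resource of scan 1 of FirstExperiment
  curl -X PUT --data-binary @image001.dcm --cookie JSESSIONID=$JSessionID \
    "$XNE_Srv/REST/projects/$ProjectID/subjects/S0001/experiments/FirstExperiment/scans/1/resources/DICOM/files/image001.dcm?inbody=true"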

Target Query Formulation (Step 2.)

Target Processing Workflow (Step 3.)

Other Information