Core 2 Timeline Notes

From NAMIC
Revision as of 09:27, 18 December 2006 by Andy (talk | contribs) (Update from Wiki)

Core 2 Timeline Notes from Site PIs

GE

1) Define Software Architecture

We are well ahead of schedule regarding the new Slicer3 software architecture. The original schedule showed completion after year 3. Over the past year we have worked with all Core 2 participants to define an architecture that adapts to existing toolkits, applications, and data access software. The new software includes mechanisms to adapt algorithms developed by Core 1 and will permit focused applications defined by Core 3. As we described in the proposal, the software architecture accommodates and interfaces with the existing inventory of the NA-MIC members.

2) Create Software Engineering Process

This Aim supports and promotes the use of Extreme Programming throughout the Center. The Programmer/Project Weeks bring together Core 2 engineers with the algorithm developers in Core 1. These weeks of intense software development also include training for Core 1 participants on the NA-MIC software process. Core 2 meets periodically in Boston to review designs and code. The process is in place and on schedule. Code refactoring is also an ongoing process. Of note, this year we refactored the statistical distribution support required for fMRI processing, replacing the GPL-licensed GNU Scientific Library with newly designed ITK classes.
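The statistical distribution classes mentioned above supply the probability computations that fMRI activation analysis needs. As a rough illustration (not the ITK API itself), a hedged Python sketch of the kind of calculation involved, converting a z-statistic into a one-sided p-value via the error function:

```python
import math

def normal_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal,
    computed from the error function (math.erf)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_value_from_z(z: float) -> float:
    """One-sided p-value for a z-statistic, the kind of quantity used
    when thresholding fMRI activation maps."""
    return 1.0 - normal_cdf(z)

print(round(normal_cdf(0.0), 6))        # 0.5, by symmetry
print(round(p_value_from_z(1.96), 3))   # 0.025
```

The ITK classes replacing the GSL routines provide distributions beyond the normal (e.g. t and chi-square); this sketch only shows the simplest case.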

3) Create an automated quality system

Following a successful deployment of DART Classic in the first year, DART 2, release 1, is complete. DART is the distributed dashboard reporting system developed at GE. Dart 2 uses a database to maintain persistent build/test results. This Aim is ahead of schedule for its initial Dart 2 release, which allows more time to integrate feedback from the NA-MIC and international communities.
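The key change in Dart 2 is persistence: build and test results live in a database rather than static dashboard pages. A minimal sketch of that idea using SQLite — the table layout, site names, and test names here are hypothetical and do not reproduce Dart 2's actual schema:

```python
import sqlite3

# In-memory database standing in for Dart 2's persistent result store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_result (
        site TEXT, build_stamp TEXT, test_name TEXT, status TEXT
    )
""")

# Hypothetical nightly submissions from one client site.
rows = [
    ("ge.dart2demo", "20061218-nightly", "itkStatisticsTest", "passed"),
    ("ge.dart2demo", "20061218-nightly", "fMRIEngineTest", "failed"),
]
conn.executemany("INSERT INTO test_result VALUES (?, ?, ?, ?)", rows)

# Because results persist, a dashboard can query history, not just
# render the latest submission.
failed = conn.execute(
    "SELECT test_name FROM test_result WHERE status = 'failed'"
).fetchall()
print(failed)  # [('fMRIEngineTest',)]
```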

Kitware

Core 2: Our timeline included the following elements:

1) Cross-platform development

• Deploy environment (CMake, CTest)

• DART (Testing) Integration

• Documentation tools

2) Integration tools

• File formats; IO facilities

• CableSWIG deployment

• Establish XML Schema

3) Technology Delivery

• Deploy applications

• Establish plug-in repository

• CPack

The three bulleted items in task #1 have been completed.

For the three bullets of item #2: file formats and IO facilities are deployed, including support for NRRD. Note that new formats may require additional work to accommodate them. CableSWIG is deployed and in use in ITK, and we are introducing wrapping technology to Slicer 3.0. Work on the integration schema has just started (earlier than the schedule calls for), in the form of the Slicer 3.0 MRML schema.
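The MRML schema describes a Slicer scene as an XML document whose child elements are scene nodes. A hedged sketch of reading such a scene — the element and attribute names below are illustrative only, not the actual MRML schema:

```python
import xml.etree.ElementTree as ET

# Illustrative scene file; element and attribute names are hypothetical
# stand-ins for the real Slicer 3.0 MRML schema.
scene_xml = """
<MRML>
  <Volume name="T1" fileName="t1.nrrd"/>
  <Model name="cortex" fileName="cortex.vtk"/>
</MRML>
"""

root = ET.fromstring(scene_xml)

# Each child element is one scene node; its tag identifies the node
# type and its attributes carry the node's properties.
nodes = [(child.tag, child.get("name")) for child in root]
print(nodes)  # [('Volume', 'T1'), ('Model', 'cortex')]
```

An XML schema of this shape is what lets external tools validate and exchange scenes without embedding Slicer itself.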

For the three bullets of item #3: we have deployed the initial applications (in the NA-MIC Kit). This is an ongoing activity; when Slicer 3.0 is available we will deploy it as well. We are just beginning to assemble (plug-in) modules for Slicer 3.0, so we are behind the original schedule here, waiting to some extent on solidification of the Slicer 3.0 architecture. We are well ahead on CPack, which is nearly done (roughly 85% complete).

Harvard - Brigham

Aim 1 – End User Environment

- Slicer2.5 and Slicer2.6, incorporating NA-MIC software engineering methodology improvements and ITK-based algorithms, were released to the NA-MIC community and used as the basis for training and dissemination activities for the NA-MIC community and the larger medical image computing community.

- Specific enhancements to the image Editor, EMBrainAtlasClassifier, DTMRI, and FMRIEngine modules in Slicer were added in response to the needs of the NA-MIC DBPs to support schizophrenia research.

Aim 2 – Integrate Algorithms

- The vtkITK set of C++ classes provides a mechanism for integrating ITK-based algorithms directly into VTK-based interactive visualization software. The NAMICSandbox is a source code repository for NA-MIC researchers to develop and test new algorithm implementations. Both of these are now part of the standard Slicer build and deployment system for cross-platform delivery to end users.

- A 'Slicer Daemon' was released as part of Slicer2.6; it allows external programs to read and write volume data sets that exist in a running instance of Slicer. This mechanism supports interoperability with the LONI Pipeline.

- In collaboration with GE, new statistical routines were added to the NAMICSandbox and used as the basis for the fMRI calculations in Slicer2.6. Significant work on the representation of DWI and DTI volumetric data using the new NRRD file format was released with Slicer2.6.

- Initial discussions with possible next-round DBP groups have provided the input needed to prioritize our infrastructure development.
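The NRRD format mentioned above stores a short plain-text header (fields such as type, dimension, and sizes) ahead of the raw voxel data, which is what makes it easy for multiple tools to exchange DWI/DTI volumes. A minimal, hedged sketch of reading such a header — it handles only simple `field: value` lines, not the full format:

```python
def parse_nrrd_header(text: str) -> dict:
    """Parse the text header of a NRRD file into a field dictionary.
    Minimal sketch: handles 'field: value' lines only and stops at the
    blank line that separates the header from the raw data."""
    lines = text.splitlines()
    assert lines[0].startswith("NRRD"), "not a NRRD header"
    fields = {}
    for line in lines[1:]:
        if not line.strip():
            break  # blank line marks the header/data boundary
        if line.startswith("#"):
            continue  # comment line
        key, _, value = line.partition(": ")
        fields[key] = value.strip()
    return fields

# A tiny example header for a 256x256x128 short-valued volume.
header = "NRRD0004\n# example\ntype: short\ndimension: 3\nsizes: 256 256 128\n\n"
info = parse_nrrd_header(header)
print(info["sizes"])  # 256 256 128
```

A full reader would also handle `key := value` pairs, detached headers, and the various encodings the format allows; this only illustrates the header-first layout.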

Aim 3 – Quality System Integration

- The Slicer project has a na-mic.org-hosted dashboard powered by the Dart system. Slicer2.6 has the hooks to support the increasing number of test scripts for module and base functionality. The use of CTest will allow a transition to the Dart 2 system when it is deployed.

UCSD

1) Grid Computing Environment

a) Deploy base environment: Completed - the configuration of infrastructure and provisioning of additional infrastructure was completed. This included NA-MIC-specific nodes that are used as a gateway to the BIRN environment, configuration of cluster resources for NA-MIC, and configuration of the Portal environment for use by NA-MIC in sharing data. In addition, support of this infrastructure for NA-MIC is provided to ensure high availability of these resources.

b) Grid-enable selected NA-MIC algorithms: In progress (years 2-3) - this is our work on the Slicer 3 grid interface (http://www.na-mic.org/Wiki/index.php/Slicer3:Grid_Interface) and our contribution to the execution model work (http://www.na-mic.org/Wiki/index.php/Slicer3:Execution_Model).

c) Provide testing infrastructure for algorithms, extending DART: This is a year 3 and 4 activity. However, the groundwork for accomplishing it is currently being laid through work on the Slicer 3 execution model and the Slicer 3 grid interface (http://www.na-mic.org/Wiki/index.php/Slicer3:Grid_Interface).

d) Grid-enable select NA-MIC algorithms from new driving projects: N/A - projected start at 48 months.

e) Support grid-enabling of NA-MIC environments (e.g. LONI Pipeline): Even though the projected start for this item was at 36 months, work has already been undertaken to provide more seamless access to the data grid and integration of grid security with the LONI Pipeline. As a result, the LONI Pipeline has been updated to access the data grid, so that collaborative NA-MIC data sets can be accessed directly.

2) Data Grid

a) Compatibility of data grid with NA-MIC: Completed - the data grid has been used as a primary space for the collaborative sharing of data between Core 3 and Core 1 (http://www.na-mic.org/Wiki/index.php/DataRepository). Part of the work also included developing, testing, and deploying the infrastructure to allow data grid access from highly restricted firewall environments.

b) Enable Slicer to directly access the data grid: In progress (year 2; months 18-24). The main integration with Slicer is the ongoing Slicer 3 architecture discussion described in (1) and (3), i.e. the Slicer execution model. However, in BIRN/NA-MIC work, the ability for users to launch Slicer as a visualization tool for data-grid data accessible via the Portal was enhanced. Work on Portal integration is also continuing - integration of functionality with the new Portal environment (based on the Java JSR-168 portlet standard) that will be available with the next Portal release.

3) Data Mediation Environment

Databases were not brought into NA-MIC, and work was therefore focused on other core NA-MIC requirements. This included extending the efforts from (1) and (2) to include contributions to the Slicer 3 architecture (mainly the Slicer 3 execution model and its interface with distributed environments) and a newer project to begin integrating the NA-MIC toolkit with the automated provisioning software used in BIRN (i.e. Rocks; http://www.rocksclusters.org), so that any research group can easily build a compute environment that contains the complete NA-MIC Kit.

In addition, requirements from NA-MIC's use of the project space within the BIRN Portal have helped drive the refinement of this infrastructure and have led to a much improved collaborative workspace in the next release of the Portal environment, due in the next month.

UCLA

1) Data Debabeler

Data Debabeler functionality has been extended to cover new special cases of existing data formats and to improve the data de-identification protocols. There is considerable progress (a Year 2-3 activity) on the mediating framework for communication between the LONI Pipeline and Slicer.

2) SLIPIE Interpretation (Layers 1 and 2)

A slim networking library has been added to the LONI Pipeline Server to enable external applications like Slicer to trigger and manage the LONI Pipeline remotely. Discussions around the Slicer 3 Execution Model led to the design and implementation of a command-line version of the LONI Pipeline. In addition, components that allow the LONI Pipeline to invoke and launch other external applications have been tested and integrated into the Pipeline.

3) ITK Tools Integration

Steady progress is being made as more ITK-based segmentation and registration tools are integrated into the LONI Pipeline. These modules will be available in the standard library shipped with the LONI Pipeline.

4) LONI Grid Integration/ LONI Pipeline Server

Enable LONI Grid: This was a Year-2 activity. The LONI Pipeline Server has been successfully integrated with the core; it allows all jobs submitted by the client to run on the Pipeline server. The interface to the LONI grid has also been completed: all jobs submitted to the server can be successfully routed to the Sun Grid Engine to exploit the hardware and software parallelism inherently present in the Pipeline modules.
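Routing a Pipeline module to the Sun Grid Engine ultimately means composing a `qsub` submission for it. A hedged sketch of that step — the flags shown (`-N` job name, `-o` stdout path, `-cwd`) are standard SGE options, but the script and job names are hypothetical, and the actual Pipeline server submits through its own interface:

```python
import shlex

def build_qsub_command(script: str, job_name: str, stdout_log: str) -> str:
    """Compose a Sun Grid Engine submission command for one Pipeline
    module: -N names the job, -o directs its stdout to a log file,
    and -cwd runs it in the current working directory."""
    args = ["qsub", "-N", job_name, "-o", stdout_log, "-cwd", script]
    return " ".join(shlex.quote(a) for a in args)

# Hypothetical module wrapped as a shell script for submission.
cmd = build_qsub_command("run_segmentation.sh", "pipeline_seg_01", "seg.log")
print(cmd)  # qsub -N pipeline_seg_01 -o seg.log -cwd run_segmentation.sh
```

Because independent modules in a Pipeline graph become independent SGE jobs, the scheduler can run them concurrently, which is the parallelism the paragraph above refers to.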

5) Integrating Remote Filesystems (SRB)

Remote filesystems like SRB have been successfully integrated with the LONI Pipeline. MDAS-based SRB access has been implemented so that the Pipeline can download and upload files to the remote filesystem. GSI-based secure SRB access was also completed in Year 2, which allows basic SRB-based file uploads and downloads with Grid Security Infrastructure certificates.

6) Integrating homegrown image databases (IDA)

The LONI IDA (Image Data Archive), an archiving alternative with built-in data-conversion capabilities, is being integrated with the LONI Pipeline (a Year 2-3 activity). This will allow uploading and downloading images in specific formats, enabling us to compose different portions of the pipelines with varying image-processing capabilities.

7) Integration of external viewers

The Pipeline now supports using any external visualization application to view data bound to pipelines. Different viewers can be configured inside the Pipeline to handle different file and image types, allowing users to tailor the application to their specific needs. This ties in with SLIPIE Layer 2 (a Year 2-3 activity), which allows the Pipeline to launch Slicer as an external viewer for specific image types.
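Configuring viewers per file type amounts to a lookup from file extension to viewer command. A minimal, hedged sketch of such a registry — the extension-to-viewer mapping and the application names here are illustrative, not the Pipeline's actual configuration format:

```python
import os

# Hypothetical viewer registry: maps file extensions to the external
# application configured to display them (Slicer for volume data here).
VIEWERS = {
    ".nrrd": "slicer",
    ".nii": "slicer",
    ".jpg": "default_image_viewer",
}

def pick_viewer(path: str, fallback: str = "default_image_viewer") -> str:
    """Return the configured viewer for a file, chosen by extension."""
    _, ext = os.path.splitext(path)
    return VIEWERS.get(ext.lower(), fallback)

print(pick_viewer("subject01_dti.nrrd"))  # slicer
```

The real Pipeline would then launch the chosen application on the bound data file; the fallback entry covers types with no dedicated viewer configured.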