
From NAMIC Wiki
Revision as of 14:04, 25 September 2007 by Will (talk | contribs)


Engineering


The Center’s success depends on how well the techniques developed in Core 1 can be used to solve the biological problems posed by the DBPs of Core 3. Core 2 is the link between the innovative techniques of Core 1 and the biological questions of the Core 3 end-user practitioners. To build this link, Core 2 establishes software architectures and software processes that empower the Core 1 algorithm developers to create robust, well-designed software and interfaces. In addition to software from Core 1, Core 2 builds interfaces to existing software and data sources. The Core 3 end-users define the requirements for the applications developed by Core 2.

These Core 2 activities require innovative solutions from our experienced team of biomedical software engineers and researchers.

Defining Software Architectures and Frameworks

Software architecture defines the overall structure of a system, including its partition into subsystems and their interactions. Frameworks define internal software structures that provide uniform approaches to complex control sequences and algorithms. The Core 2 architecture adapts to existing toolkits, applications, and data access software, to which new algorithms are continually being added. The architecture supports distributed execution and scales according to hardware resources. The new software includes algorithms developed by Core 1 and applications defined by Core 3. The software architecture accommodates and interfaces to the existing software inventory of NA-MIC members. In addition, Core 2 defines and provides open interfaces that accept software and data from resources outside of this alliance.
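As a sketch of the kind of open interface such an architecture can expose, the following Python fragment shows a shared registry through which algorithm modules — including ones contributed from outside the alliance — become discoverable to applications without compile-time linkage. The `Filter` interface, the registry, and the `threshold` example are illustrative assumptions, not actual NA-MIC APIs.

```python
from abc import ABC, abstractmethod

class Filter(ABC):
    """Common interface every algorithm module implements.
    (Hypothetical; the real NA-MIC interfaces live in its toolkits.)"""
    @abstractmethod
    def execute(self, image):
        ...

REGISTRY = {}

def register(name):
    """Decorator adding a Filter subclass to the shared registry, so
    applications discover algorithms by name instead of linking to them."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@register("threshold")
class Threshold(Filter):
    def __init__(self, level):
        self.level = level
    def execute(self, image):
        # Binarize: 1 where the voxel value reaches the level, else 0.
        return [1 if v >= self.level else 0 for v in image]

# An application looks the algorithm up by name -- the open interface
# through which outside software could contribute new filters.
result = REGISTRY["threshold"](level=5).execute([2, 5, 9])  # -> [0, 1, 1]
```

The registry is the architectural point: applications depend only on the `Filter` contract, so subsystems stay decoupled and new algorithms slot in without changes to existing code.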

Creating a Software Process

A well-defined process brings discipline and control to software development. Core 2 defines and provides tools to support a lightweight software engineering process that combines object-oriented design techniques with test-driven development (TDD) and Extreme Programming methodologies. A pragmatic rather than dogmatic approach encourages Core 1 algorithm developers to adhere to the process principles without restricting innovation or productivity. Core 2 enhances the Core 1 software by generalizing designs and implementations, and throughout the process revisits designs to improve existing code by refactoring its internal structure.

Software engineering processes can place a seemingly bureaucratic burden on developers: although guidelines and common architectures produce long-term benefits, applying these requirements manually can lead developers to ignore or bypass the procedures. To minimize the burden of the software process, Core 2 develops software tools to automate software quality assurance, support cross-platform software development, and manage cross-platform software distribution. These tools build on existing software that Core 2 has developed under other funding.
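The test-driven part of the process can be illustrated with a toy example; the `resample` routine and its test are hypothetical stand-ins, not NA-MIC code. Under TDD the failing test is written first, and the implementation is then filled in to make it pass:

```python
import unittest

def resample(values, factor):
    """Toy down-sampling routine -- a hypothetical stand-in for a
    Core 1 algorithm. Under TDD this body is written only after the
    test below exists and fails."""
    return values[::factor]

class TestResample(unittest.TestCase):
    # Written first: this test defines the required behavior.
    def test_keeps_every_other_sample(self):
        self.assertEqual(resample([1, 2, 3, 4], 2), [1, 3])

# Run the suite programmatically -- roughly what automated
# quality-assurance tooling would do nightly across platforms.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestResample)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the test encodes the contract before any code exists, the automated tooling described above can rerun it on every platform and flag regressions without manual effort.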

Software and Data Integration

Most of the neuroimaging and neuroinformatics software developed in the past decade relies on compiled languages (C, C++, and Java), scripting languages (Python and Tcl), and a multitude of visualization libraries (VTK and Java 3D). To date, there is no direct way of integrating, merging, or linking neuroimaging tools developed in different languages or against different libraries. Many software and library interfaces have been developed for the transfer, interactive manipulation, automatic processing, tracking, visualization, and archiving of neuroimaging data; only a limited suite of tools, however, exists at any one geographic location, research group, or computer architecture. We need to augment the modeling and software development efforts with new integration initiatives that bridge the gaps between investigators in hardware, software, and neuroscientific expertise.

The full potential of neuroimaging and neurological clinical data to describe brain structure and function can be realized only if powerful neuroimaging, neuroengineering, neuroinformatics, database, and visualization tools are integrated in a functional, extensible, and portable graphical environment. Our goal is to create an infrastructure that ensures efficient data management, reliable processing, and interactive data visualization for large-scale brain-mapping and neuroimaging studies, substantially increasing our ability to perform large, state-of-the-art imaging investigations efficiently.
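One common bridging tactic — sketched here with purely illustrative field names, not any real NA-MIC format — is a neutral in-memory representation: each tool supplies an adapter to and from one shared schema, so N tools need N adapters rather than N² pairwise converters:

```python
# Neutral in-memory record bridging tools written against different
# languages and libraries. Each tool ships one adapter pair.
# (All field names here are illustrative assumptions.)

def from_tool_a(raw):
    """Adapter in: 'Tool A' exports image dimensions as one
    comma-separated string, which we normalize into the shared record."""
    width, height = raw["dims"].split(",")
    return {"width": int(width), "height": int(height),
            "voxels": raw["data"]}

def to_tool_b(record):
    """Adapter out: 'Tool B' expects a flat list preceded by the
    total voxel count."""
    return [record["width"] * record["height"]] + record["voxels"]

record = from_tool_a({"dims": "2,2", "data": [0, 1, 1, 0]})
payload = to_tool_b(record)   # -> [4, 0, 1, 1, 0]
```

The shared record is the integration point: adding a tool means writing one adapter, not revisiting every existing pipeline.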

The Grid

Grid computing is rapidly gaining acceptance as a routine way to transparently access distributed computational resources for biological and biomedical informatics applications, in academia as well as industry. Enabling biomedical code to run in such an environment, however, requires "Grid-enabling" that code along with its associated data repositories — a process of bringing software programs and data resources together that generally involves developing a layer of software. This grid-enabling of codes and databases will leverage emerging Grid practices and the experience gained within the BIRN Coordinating Center.
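The software layer in question can be sketched as a shim that turns a local program invocation into a declarative job description a scheduler could consume; the field names and the `segment` tool below are assumptions for illustration, not an actual BIRN or Grid interface:

```python
import shlex

def grid_job(executable, inputs, outputs, args=()):
    """Hypothetical grid-enabling shim: describes a batch job --
    which files to stage in and out, and what command to run -- in a
    neutral form a batch scheduler could consume."""
    return {
        "stage_in": list(inputs),      # files copied to the compute node
        "stage_out": list(outputs),    # results copied back afterwards
        "command": " ".join([executable, *map(shlex.quote, args)]),
    }

job = grid_job("segment",
               inputs=["brain.nii"], outputs=["labels.nii"],
               args=["--input", "brain.nii", "--output", "labels.nii"])
```

Keeping the job description declarative is the point of the layer: the same biomedical code runs unchanged, while the shim handles staging and submission details that differ per grid.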

Applications

A robust, well-designed application environment, with a logical user interface and a set of complementary image analysis and data manipulation tools, is a critical component of the overall effort's success. The application must also provide a framework that lets researchers plug in their algorithms and user interfaces with a minimal learning curve and minimal disruption to the existing application tools. In addition, the end-user application environment must be flexible enough to support a variety of interaction styles, including interactive and batch processing modes, and to run on different operating systems and machine architectures. Because application development requires a significant long-term commitment, the applications developed here will leverage existing, proven applications and toolkits. One of the major goals of Core 2 is to assist Core 1 in integrating their algorithms into the NA-MIC application and toolkit suite.
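A minimal sketch of such a plug-in contract — with hypothetical names, not the actual NA-MIC application API — might pair a declared parameter set with an execute entry point, so the same plugin serves both interactive and batch modes:

```python
def run(plugin, image, interactive=False):
    """Dispatch a plugged-in algorithm in either interaction style.
    The plugin dict and its keys are illustrative assumptions."""
    params = dict(plugin["defaults"])
    if interactive:
        # A real GUI would collect parameter values here; this sketch's
        # batch path simply uses the plugin's declared defaults.
        pass
    return plugin["execute"](image, **params)

def _smooth(img, radius):
    """Simple moving-average smoother standing in for a Core 1 algorithm."""
    out = []
    for i in range(len(img)):
        window = img[max(0, i - radius):i + radius + 1]
        out.append(round(sum(window) / len(window), 2))
    return out

# The plugin declares its defaults once; the host derives both the
# batch interface and (in a real application) the parameter UI from them.
smooth = {"name": "smooth", "defaults": {"radius": 1}, "execute": _smooth}

batch_result = run(smooth, [0, 10, 0])   # -> [5.0, 3.33, 5.0]
```

Because the host application only ever touches the declared defaults and the execute entry point, a researcher's algorithm plugs in without modifying the existing application tools.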