Event:Journal Club at BWH


Back to Events

General information

Unless otherwise indicated, all SPL Journal Club presentations will take place in the Hollenberg Conference Room, on the 3rd floor of the Thorn Research Building, between 1pm and 2pm, on Wednesdays.

We are always looking for speakers. Please email Andriy Fedorov (fedorov at bwh dot harvard dot edu) if you would like to schedule a talk.

Suggested topics include, but are not limited to:

  • Accomplished work
  • Work-in-progress
  • Brainstorming ideas
  • Review of literature
  • Update of research activities

Upcoming Talks

Past Talks

2009

Dec 18: Satrajit Ghosh, RLE MIT and Harvard HST

Slides: Media:SatraGhosh_20091218JCTalkSlides.pdf

Time: 2:00 PM - 3:00 PM
Location: Boston -- 1249 Boylston St, 2nd floor conference room


Title: Nipype - A Python framework for neuroimaging

Abstract

Nipype is a project under the umbrella of Nipy, an effort to develop open-source, community-developed neuroimaging tools in Python. The goals of Nipype are two-fold: 1) to provide a uniform interface to existing neuroimaging software packages; and 2) to provide a pipelined environment for efficient batch-processing that can tie together different neuroimaging data analysis algorithms.

The interface component of nipype provides access to command-line, MATLAB-mediated, and pure-Python algorithms from packages such as FSL, SPM, AFNI and FreeSurfer, along with the growing number of algorithms being developed in Python. The uniform calling convention of the nipype interface across all these packages reduces the learning curve associated with understanding the algorithms, the API and the user interface of each separate package.

The interface component extends easily to a rich pipeline environment that can interchange processing steps between different packages and iterate over sets of parameters, while providing automated provenance tracking. The structure of the pipeline allows the user to easily add data and change parameters, and the pipeline will rerun only the steps affected by the new data or analysis parameters. Because it is written in Python, the pipeline can also take advantage of standard Python packages for future integration with a variety of database systems for storing processed data and metadata.

By exposing a consistent interface to the external packages, researchers are able to explore a wide range of imaging algorithms, configure the analysis pipeline that best fits their data and research objectives, and perform their analysis in a highly structured environment. The nipype framework is accessible to the wide range of programming expertise often found in neuroimaging, allowing both easy-to-use high-level scripting and low-level algorithm development for unlimited customization. We will explain the software architecture and the challenges in interfacing the external packages, and demonstrate the flexibility of nipype in performing an analysis.

This work is partially supported by NIH grant R03 EB008673 (NIBIB; PIs: Ghosh, Whitfield-Gabrieli).
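
For illustration, here is a minimal sketch of the uniform calling convention and workflow model described above, written against nipype's later public Python API (the exact interface in 2009 may have differed, and the input file name is hypothetical):

  # Minimal nipype sketch: uniform interfaces plus a two-node workflow.
  from nipype.interfaces import fsl, spm           # wrappers for FSL and SPM
  from nipype.pipeline.engine import Workflow, Node

  # Interfaces from different packages are configured the same way.
  skullstrip = Node(fsl.BET(frac=0.5, mask=True), name='skullstrip')
  smooth = Node(spm.Smooth(fwhm=[8, 8, 8]), name='smooth')

  wf = Workflow(name='preproc', base_dir='/tmp/nipype_demo')
  wf.connect(skullstrip, 'out_file', smooth, 'in_files')

  # skullstrip.inputs.in_file = 'sub01_T1.nii'  # hypothetical input volume
  # wf.run()  # reruns only nodes whose inputs or parameters changed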


Bio

Satrajit Ghosh is a Research Scientist in the Research Laboratory of Electronics at MIT. Dr. Ghosh received his undergraduate degree in Computer Science, specializing in artificial intelligence, and his graduate degree in Computational and Cognitive Neuroscience, specializing in functional imaging and computational modeling of speech motor control. His current research focuses on developing software related to nipype, relating macro-neuroanatomy to function, and understanding mechanisms of speech production and perception.

Dec 9: Pádraig Cantillon-Murphy, MIT and BWH

TITLE: Magnetic Self-Assembly in Minimally-Invasive Gastrointestinal Surgery

ABSTRACT:

The development of Single-Incision Laparoscopic Surgery (SILS) and, more recently, Natural Orifice Transluminal Endoscopic Surgery (NOTES) as minimally-invasive alternatives to traditional laparoscopy has generated enormous interest in the surgical community in recent years. However, the creation of secure translumenal ports for access to target organs (e.g., accessing the small bowel or gallbladder through the gastric wall) is still a major challenge to the widespread adoption of these advanced techniques. I have recently designed and tested a self-assembling magnetic microsystem which solves this shortcoming. The microsystem, currently undergoing animal trials, will serve as the platform for the transformation of some of the most common laparoscopic procedures (e.g., gallbladder removal, gastrojejunal fistula formation) into minimally-invasive endoscopic procedures. Hypothesized results include shorter hospitalization, reduced anesthesia and significantly decreased healthcare costs.


BIOGRAPHY:

Pádraig Cantillon-Murphy is a post-doctoral research fellow at the Laboratory for Electromagnetic and Electronic Systems (LEES) at the Massachusetts Institute of Technology and a research fellow of Harvard Medical School at Brigham and Women's Hospital, Boston. He graduated with a bachelor's degree in Electrical and Electronic Engineering from University College Cork, Ireland, in 2003 before joining the Master's program in the Department of Electrical Engineering at MIT. He graduated with a Master of Science in Electrical Engineering in 2005 and joined the MRI group in the Research Laboratory of Electronics, also at MIT. He received his Ph.D. in Electrical Engineering from MIT in June 2008; his doctoral thesis examined the dynamic behavior of magnetic nanoparticles in MRI. His current work investigates the role that magnetic self-assembly and smart systems can play in minimally-invasive surgery, with an emphasis on common gastrointestinal procedures.

June 10, 2009: Ronilda Lacson, TBA

April 15th, 2009: Arkadiusz Sitek, Brigham and Women's Hospital

"Image-Guided Spinal Interventions Using Virtual Fluoroscopy and Tomography"

The goal of this work is to create a virtual radiology workstation (VRW) for use in fluoroscopy-guided spinal interventions. The VRW creates a virtual reality environment in which the surgeon can visualize the locations of critical spine structures with respect to surgical tools (needles used for spinal injections, or other instruments). Registration between pre-operative imaging data (CT, MRI) and intra-operative fluoroscopy is performed first, based on novel algorithms for photon propagation. Using stereoscopic visualization and a head-tracking device, we create a virtual reality system in which spine structures and surgical tools are accurately visualized for the interventional radiologist. The virtual reality environment showing the spine and surgical tools is thus built from preoperative volumetric data coregistered with intra-operative fluoroscopy. We plan to extend the VRW to other areas of spinal surgery. The VRW should increase the accuracy of these procedures (minimizing patient risk), increase surgeon confidence during interventions, and reduce patient and surgeon radiation doses.
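
The abstract does not spell out the photon-propagation algorithm, but the 2D/3D registration step it rests on can be sketched generically: simulate a projection of the preoperative volume, compare it to the fluoroscopic image, and optimize the pose. The toy below uses a crude parallel-projection stand-in for the simulated radiograph and a single rotation parameter; it is illustrative only, not the VRW's actual method.

  import numpy as np
  from scipy.ndimage import rotate
  from scipy.optimize import minimize

  def drr(volume, angle_deg):
      # Crude parallel-projection "radiograph": rotate the volume, sum one axis.
      rotated = rotate(volume, angle_deg, axes=(0, 2), reshape=False, order=1)
      return rotated.sum(axis=0)

  def neg_ncc(params, volume, fluoro):
      # Negative normalized cross-correlation of simulated vs. measured image.
      sim = drr(volume, params[0])
      a, b = sim - sim.mean(), fluoro - fluoro.mean()
      return -(a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

  # Toy data: a synthetic "CT" and a fluoro image simulated at 7 degrees.
  ct = np.zeros((64, 64, 64))
  ct[20:44, 28:36, 20:44] = 1.0
  fluoro = drr(ct, 7.0)
  res = minimize(neg_ncc, x0=[0.0], args=(ct, fluoro), method='Powell')
  print('recovered angle:', res.x)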

March 25th, 2009: Yaroslav Tencer, PhD Candidate, Mechatronics in Medicine Lab, Imperial College London, UK

"Improving the Realism of Haptic Perceptions in Virtual Arthroscopy Training"

Human tactile sensing is essential for perceiving the environment, which is why many virtual reality simulators offer force feedback (haptics) alongside realistic visual rendering. Haptic feedback is especially important in applications where forces carry useful information. I will elaborate on four subtopics. First, the OrthoForce, an arthroscopy training simulator built around a custom haptic device with 4 haptic degrees of freedom, which appears to possess the highest “stiffness to size” ratio among devices of its kind to date. Second, the “Haptic Noise” method, a novel approach for quantitative evaluation of a haptic system, which we developed to help overcome the inherent problems of evaluating and comparing haptic devices. Third, vibrotactile feedback, where measured high-frequency vibrations are reproduced on the OrthoForce handle to improve haptic realism; vibrotactile feedback has been shown to improve haptic perception in general, although its application to surgical training simulators has not yet been demonstrated. Last, a novel programmable brake will be presented, developed to improve the stability and stiffness of haptic devices at minimum cost.

2008

December 17, 2008: Andriy Fedorov

December 10th, 2008: James Balter, Professor, Department of Radiation Oncology, University of Michigan

Abstract: While dramatic improvements in imaging access and speed have occurred in the image-guided (radiation) therapy arena, the cost (in terms of time and possibly radiation dose) of acquiring very high-fidelity volume information is prohibitive for real-time monitoring for (re-)adjustment of therapy to account for patient changes between modeled and treated states. To overcome such limitations, surrogates of various forms are introduced into the process. Surrogates provide a reduced representation of the patient and typically present information in a format that is conveniently analyzed. Furthermore, extraction of surrogate information is typically faster than the initial targeting process used in planning therapy. Difficulties exist in selecting appropriate surrogates and in optimizing their relationship to the actual patient state. This talk explores these issues, including typical sources of uncertainty, the error budget involved in an image guidance process (for Radiation Oncology), and the potential impact of advanced models and use of prior information to maximize the value of surrogate measurements.
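
As a toy illustration of the surrogate idea (entirely synthetic data, not from the talk): a correspondence model can be fit from paired observations of a surrogate signal and the internal target position, and its residual is one term in the error budget mentioned above.

  import numpy as np

  # Hypothetical training data: an external surrogate (e.g., abdominal height)
  # sampled together with internal target position from imaging.
  rng = np.random.default_rng(0)
  t = np.linspace(0, 10, 200)
  surrogate = np.sin(2 * np.pi * 0.25 * t)                      # breathing trace
  target = 8.0 * surrogate + 1.5 + rng.normal(0, 0.4, t.size)   # internal motion, mm

  # Fit target = a * surrogate + b by least squares (linear correspondence model).
  A = np.column_stack([surrogate, np.ones_like(surrogate)])
  (a, b), *_ = np.linalg.lstsq(A, target, rcond=None)

  residual = target - (a * surrogate + b)
  print(f'gain={a:.2f} offset={b:.2f} residual SD={residual.std():.2f} mm')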

July 23rd, 2008: Matt Toews

"Modeling Appearance via the Object Class Invariant"

Abstract: As humans, we are able to identify, localize, describe and classify a wide range of object classes, such as faces, cars or the human brain, by their appearance in images. Designing a general computational model of appearance with similar capabilities remains a long-standing goal in the research community. A major challenge is effectively coping with the many sources of variability that determine image appearance: illumination, noise, unrelated clutter, occlusion, sensor geometry, natural intra-class variation and abnormal variation due to pathology, to name a few. Explicitly modeling sources of variability can be computationally expensive, can lead to domain-specific solutions, and may be unnecessary, as they may be ultimately unrelated to the computational tasks at hand.

In this talk, I will show how appearance can instead be modeled in a manner invariant to nuisance variations, or sources of variability unrelated to the tasks at hand. This is done by relating spatially localized image features to an object class invariant (OCI), a reference frame which remains geometrically consistent with the underlying object class despite nuisance variations. The resulting OCI model is a probabilistic collage of local image patterns that can be automatically learned from sets of images and robustly fit to new images, with little or no manual supervision. Due to its general nature, the OCI model can be used to address a variety of difficult, open problems in the contexts of computer vision and medical image analysis. I will show how the model can be used both as a viewpoint-invariant model of 3D object classes in photographic imagery and as a robust anatomical atlas of the brain in magnetic resonance imagery.
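
One way to picture fitting a collage-of-local-features model like the OCI is Hough-style geometric voting: each matched feature predicts the object reference frame through its learned offset, and dense agreement wins. The 2D NumPy toy below is a hedged illustration of that idea, not the actual fitting procedure from the talk.

  import numpy as np

  rng = np.random.default_rng(1)
  true_frame = np.array([120.0, 80.0])             # unknown object frame origin
  offsets = rng.uniform(-40, 40, size=(30, 2))     # learned feature->frame offsets
  detections = true_frame - offsets + rng.normal(0, 1.0, (30, 2))  # matched features
  clutter = rng.uniform(0, 200, size=(10, 2))      # spurious matches

  votes = np.vstack([detections + offsets, clutter])  # each vote predicts the frame
  # Score each vote by how many others fall within a small radius (mode seeking).
  d = np.linalg.norm(votes[:, None, :] - votes[None, :, :], axis=-1)
  support = (d < 5.0).sum(axis=1)
  print('estimated frame:', votes[support.argmax()])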

July 23rd, 2008: Xenios Papademetris, Departments of Diagnostic Radiology and Biomedical Engineering, Yale University

  • *** TO BE GIVEN AT 2:30 PM AT 1249 BOYLSTON, NOT IN THE HOLLENBERG CONFERENCE ROOM ***

"Development of a Research Interface for Image Guided Navigation"

Abstract: In this talk, I will describe the development and application of techniques to integrate research image analysis methods and software with a commercial image-guided surgery navigation system (the BrainLAB VectorVision Cranial system). The integration was achieved using a custom-designed client/server architecture termed VectorVision Link (VVLink), which extends functionality from the Visualization Toolkit (VTK). VVLink enables bi-directional transfer of data such as images, visualizations and tool positions in real time. I will describe both the design and the application programming interface of VVLink, and show an example VVLink client in operation. The resulting interface provides a practical and versatile link for bringing image analysis research techniques into the operating room (OR). I will present examples from the successful use of this research interface in both phantom experiments and real neurosurgeries.

I will also briefly present some more recent work, begun at the NAMIC all-hands meeting last month, to establish a communication link between Slicer and the VectorVision Cranial system using a combination of OpenIGTLink and VVLink.
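
For a flavor of what such a link involves at the wire level, here is a hedged sketch of packing an OpenIGTLink version 1 TRANSFORM message (a 58-byte big-endian header followed by twelve 32-bit floats). The mandatory CRC-64 is left at zero, so a compliant receiver would reject the message as-is; the device name and address are illustrative.

  import socket, struct
  import numpy as np

  def igtl_transform_message(device_name, matrix):
      # Pack a 4x4 transform as an OpenIGTLink v1 TRANSFORM message.
      # NOTE: real clients must compute the protocol's CRC-64 (omitted here).
      m = np.asarray(matrix, dtype='>f4')
      body = m[:3, :3].T.tobytes() + m[:3, 3].astype('>f4').tobytes()  # R columns, then T
      header = struct.pack('>H12s20sQQQ',
                           1,                          # protocol version
                           b'TRANSFORM',               # message type
                           device_name.encode()[:20],  # device name
                           0,                          # timestamp (0 = unset)
                           len(body),                  # body size
                           0)                          # CRC-64 (omitted in sketch)
      return header + body

  # Hypothetical use: stream a tool pose to a navigation server on port 18944.
  # with socket.create_connection(('127.0.0.1', 18944)) as s:
  #     s.sendall(igtl_transform_message('Tool', np.eye(4)))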

May 14th, 2008: Anna Custo, MIT

"Purely Optical Tomography: Atlas-Based Reconstruction of Brain Activation"

Abstract:

Diffuse Optical Tomography (DOT) is a relatively new method used to image blood volume and oxygen saturation in vivo. Because of its relatively poor spatial resolution (typically no better than 1-2 cm), DOT is increasingly combined with other imaging techniques, such as MRI, fMRI and CT, which provide high-resolution structural information to guide the characterization of the unique physiological information offered by DOT. This work aims at improving DOT by offering new strategies for a more accurate, efficient, and faster image processor. Specifically, after investigating the influence of cerebrospinal fluid (CSF) properties on the optical measurements, we propose using a realistic segmented head model that includes a novel CSF segmentation approach for a more accurate solution of the DOT forward problem. Moreover, we outline the benefits and applicability of a faster, diffusion-approximation-based forward model solver such as the one proposed by Barnett. We also describe a new registration algorithm based on superficial landmarks, which is an essential tool for the purely optical tomographic imaging process proposed here.

A purely optical tomography of the brain during neural activity would greatly enhance DOT's applicability, in the sense that DOT's low cost, portability and non-invasive nature would be fully exploited without the compromises introduced by the role of MRI in the DOT forward imaging process. We achieve a purely optical tomography by using a generalized head model (or atlas) in place of the subject-specific anatomical MRI. We validate the proposed imaging protocol by comparing measurements derived from the DOT forward problem solution obtained using the subject-specific anatomical model versus those acquired using the atlas registered to the subject, and we show the results over a database of 31 healthy human subjects, focusing on a set of 12 functional regions of interest per hemisphere. We conclude by presenting 3 experimental subjects with acquired measurements of the absorption coefficient perturbation during motor cortex activation. We apply our purely optical tomography protocol to the 3 subjects and analyze the observations derived from both the DOT forward and inverse solutions. The experimental results demonstrate that it is possible to guide the DOT forward problem with a general anatomical model in place of the subject's specific head geometry to localize the macroanatomical structures of neural activity.
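
The abstract does not detail the landmark-based registration algorithm, but its basic building block, a least-squares rigid fit of paired landmarks (the Kabsch/Procrustes solution), can be sketched in a few lines; all points below are invented for illustration.

  import numpy as np

  def rigid_register(src, dst):
      # Least-squares rigid alignment of paired 3D landmarks (Kabsch).
      src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
      U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
      D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
      R = (U @ D @ Vt).T
      t = dst.mean(0) - R @ src.mean(0)
      return R, t

  # Toy check with invented scalp-landmark coordinates (mm).
  rng = np.random.default_rng(2)
  atlas_pts = rng.uniform(-80, 80, (6, 3))
  theta = np.deg2rad(12)
  R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 1]])
  subject_pts = atlas_pts @ R_true.T + np.array([3.0, -2.0, 5.0])
  R, t = rigid_register(atlas_pts, subject_pts)
  print(np.allclose(R, R_true, atol=1e-6), t)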

February 27th, 2008: Haytham Elhawary, Mechatronics in Medicine Lab, Imperial College London, United Kingdom

Hosted by Noby Hata.

Abstract:

Mechatronic systems compatible with magnetic resonance imaging (MRI) promise interventional robots guided by real-time MRI, as well as efficient tools for clinical diagnosis in internal medicine. We have designed two MR-compatible systems for use inside high-field closed-bore scanners. The first is a device designed to perform trans-rectal prostate biopsy under real-time image guidance. The 5-DOF robot is actuated by piezoceramic motors located very close to the isocentre, and its position is registered both by MR-compatible optical encoders and by passive fiducial markers embedded in an endorectal probe. By tracking the markers in the probe, the scan plane orientation can be updated to always include the needle axis. A force sensor along the needle-driving DOF allows the generation of a force profile as the needle is inserted, quantifying tissue rigidity. Future work includes adding haptic feedback to the system. The second system provides limb-positioning capabilities to exploit the so-called “magic angle” effect, the increase in signal shown in tendinous tissue when oriented at 55 degrees to B0. The system uses specifically developed pneumatic rotary actuators to provide the large torques required, and compatible encoders for position feedback. With 3 DOF, a limb can be positioned at the required angle at a minimum distance from the isocentre while assuring patient comfort. Once initial registration is complete, the system can provide the location of the scan planes as the limb is oriented at a specified angle. Preliminary trials imaging the Achilles tendon of healthy volunteers demonstrate the functionality of the device.
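
The 55-degree figure comes from the dipolar coupling term, which scales as (3cos²θ − 1)/2 and vanishes at the magic angle, lengthening T2 (and hence increasing signal) in highly ordered tissue such as tendon; a few lines of Python confirm the number:

  import numpy as np

  theta_deg = np.arange(0, 91)
  dipolar = 0.5 * (3 * np.cos(np.deg2rad(theta_deg)) ** 2 - 1)
  print('zero crossing near', theta_deg[np.abs(dipolar).argmin()], 'degrees')
  print('exact magic angle:', np.rad2deg(np.arccos(1 / np.sqrt(3))))  # ~54.74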

Professor Akihito Sano, Department of Engineering Physics, Electronics and Mechanics, Nagoya Institute of Technology, Japan

Hosted by Noby Hata.

Title: Toward intuitive teleoperation and touch enhancing

Abstract: Master-slave systems take advantage of human cognitive and sensorimotor skills. In applications of such systems, intuitive teleoperation in a natural and instinctive manner is strongly desired, so that the operator can draw on his or her full set of everyday experiences. With medical applications in mind, we have developed a master console with a compact spherical stereoscopic display, named Micro Dome. Accurate visual/haptic registration has been realized within a mixed-reality framework. We have also recently developed a multi-finger telemanipulation system. One of its key devices is a biomimetic soft finger with a tactile sensor that can instantaneously detect whether an object is rough or slippery; another is a tactile display based on the squeeze effect, using an ultrasonic vibrator to control the frictional coefficient. Further, we will show a touch-enhancing tool that operates not only as a disturbance filter, but also, supposedly, as a magnifier of surface undulation.

Bio: Dr. Sano is a Professor at Nagoya Institute of Technology (Japan). He received his M.S. degree in Precision Engineering from Gifu University in 1987 and his Ph.D. from Nagoya University in 1992. He received the JSME Award for Young Engineers in 1992, the JSME Robotics and Mechatronics Achievement Award in 1996, the ASME-ISCIE Japan-USA Symposium on Flexible Automation Best Paper Award (Ford Motor Company) in 2000, the 1st IEEE Technical Exhibition Based Conference on Robotics and Automation Best Technical Exhibition Award in 2004, and the SICE Transaction Best Paper Award in 2005. He is a fellow of the Japan Society of Mechanical Engineers. The main research interests in his laboratory are human-centered robotics, haptics, and passive walking. He was a Visiting Scholar in Mechanical Engineering at Stanford University.

February 6th, 2008: Ender Konukoglu, INRIA

Hosted by Kilian Pohl

Title: Patient-Specific Tumor Growth Modeling

Abstract: Patient-specific tumor growth models combine mathematical descriptions of tumor growth dynamics with clinical data. Mathematically general models are adapted to each patient to provide the clinician with information about the growth of the tumor and its state. In this talk, we present our work on tumor growth modeling, on adapting these models to specific patient cases using MR images, and on applications of these models to radiotherapy planning.
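
The abstract does not give the model equations, but reaction-diffusion models of the Fisher-Kolmogorov type are the standard family in this line of work: du/dt = div(D grad u) + rho·u·(1 − u), with u the normalized tumor cell density. A toy 2D forward simulation with invented parameters (constant D, explicit Euler) looks like this:

  import numpy as np

  def step(u, D=0.1, rho=0.05, dt=0.1):
      # 5-point Laplacian (dx = 1), then one explicit Euler update.
      lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
             np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
      return np.clip(u + dt * (D * lap + rho * u * (1 - u)), 0.0, 1.0)

  u = np.zeros((128, 128))
  u[60:68, 60:68] = 0.5                      # seed lesion
  for _ in range(2000):
      u = step(u)
  print('infiltrated pixels (u > 0.1):', int((u > 0.1).sum()))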

January 16th, 2008: Marco Riboldi, MGH

Title: 4D Targeting Error Analysis in Image-Guided Radiotherapy

Abstract:

Image-guided therapy (IGT) involves the acquisition and processing of biomedical images to actively guide medical interventions. This field has been rapidly evolving in recent years, resulting in commercially available IGT solutions. The proliferation of IGT technologies has been particularly significant in image-guided external beam irradiation (image-guided radiotherapy, IGRT), as a way to increase targeting accuracy. When IGRT is applied to the treatment of moving tumors, such as lung or liver lesions, image guidance becomes challenging, as intra-fraction motion leads to increased uncertainty in tumor localization. Different strategies, such as respiratory gating or tumor tracking, may be applied to mitigate the effects of motion. Each technique entails a different technological effort and a different level of complexity in treatment planning and delivery. We postulate that the advantages of IGRT should be formalized in a mathematical description, so that different motion mitigation strategies can be compared objectively. This allows different approaches to the compensation of inter- and intra-fractional motion to be compared in terms of the residual uncertainties in tumor targeting, as detected by IGRT technologies. Quantifying targeting error requires extending it to a 4D space, where the 3D tumor trajectory as a function of time is taken explicitly into account. This extension makes possible the evaluation of the accuracy of the delivered treatment (4D targeting error analysis, 4DTEA). 4DTEA can be represented by a motion probability density function, describing the statistical fluctuations of tumor position over time. We illustrate the application of 4DTEA through examples, including: daily variations in tumor trajectory as detected by 4DCT, respiratory-gated irradiation via external surrogates, and real-time tumor tracking by means of stereoscopic X-ray imaging.
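
As a toy illustration of a motion probability density function and the residual targeting uncertainty under one mitigation strategy (all data synthetic, not from the talk):

  import numpy as np

  # Synthetic 1D tumor trajectory (breathing-like, mm) sampled for one minute.
  t = np.linspace(0, 60, 6000)
  z = 10.0 * np.abs(np.sin(2 * np.pi * 0.25 * t)) ** 3

  # Motion probability density function over tumor position.
  pdf, edges = np.histogram(z, bins=40, density=True)
  print('PDF mode near z =', edges[pdf.argmax()], 'mm')

  # Residual motion with respiratory gating on a 30% end-exhale window.
  gate = z < np.percentile(z, 30)
  print(f'ungated SD {z.std():.2f} mm, gated SD {z[gate].std():.2f} mm, '
        f'duty cycle {gate.mean():.0%}')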

2007

December 5th, 2007: Arkadiusz Sitek, Physicist, Nuclear Medicine, Brigham and Women's Hospital; Assistant Professor, Harvard Medical School

Title: "Programming and Medical Applications Using Graphics Hardware"

This will be an introduction to programming and applications of the graphics processing unit (GPU) in medical imaging. Driven by the computer game industry, graphics hardware has experienced tremendous growth in recent years. Owing to its parallel computational architecture, as well as hardware implementations of geometrical functions used frequently in data analysis and reconstruction, the GPU offers a readily available, fast computational resource for medical imaging applications. The GPU programming model is substantially different from the standard von Neumann architecture used to program CPUs. I will introduce the computational model of the GPU in the context of basic computer graphics and general-purpose computing. Examples of GPU implementations in the areas of medical data visualization and reconstruction for Nuclear Medicine, X-ray CT, and MRI data will be given.
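
The shift in programming model can be illustrated in a few lines: a GPU program amounts to a small per-element kernel applied to all elements independently, which in Python prototyping corresponds to replacing explicit loops with whole-array operations (illustrative only; GPU work of that era used shading languages or early CUDA):

  import numpy as np

  # The "kernel" here is a gamma-style intensity remap applied to every pixel.
  def cpu_style(image):
      out = np.empty_like(image)
      for i in range(image.shape[0]):        # explicit serial loops, CPU style
          for j in range(image.shape[1]):
              out[i, j] = 255.0 * (image[i, j] / 255.0) ** 0.5
      return out

  def gpu_style(image):
      # Data-parallel formulation: the same per-pixel kernel expressed as one
      # whole-array operation, the form that maps onto GPU fragment programs.
      return 255.0 * (image / 255.0) ** 0.5

  img = np.random.default_rng(3).uniform(0, 255, (256, 256))
  assert np.allclose(cpu_style(img), gpu_style(img))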

May 30th, 2007: Padma Sundaram

"A Geometry Processing Approach to Colon Polyp Detection."

May 23rd, 2007: Martin Reuter

"Laplace-Beltrami Spectra for Global Shape Analysis of 3D Medical Data." Slides: MartinReuter-JC-shapedna-handout.pdf

May 16th, 2007: Mert Rory Sabuncu

"Inter-subject Image Registration." Slides: MertSabuncu_JC_BrighamTalk.pdf

May 9th, 2007: Eigil Samset

"Image-guided navigation software and novel applications." Slides: Eigils_last_JC_talk.pdf