The MIDAS Journal


Component-based software for dynamic configuration and control of computer assisted intervention systems

Balicki, Marcin, Deguet, Anton, Jung, Min Yang, Kazanzides, Peter, Taylor, Russell, Vagvolgyi, Balazs
Johns Hopkins University

Published in The MIDAS Journal - MICCAI 2011 Workshop: Systems and Architectures for Computer Assisted Interventions.
Submitted by Peter Kazanzides on 2011-08-01 03:04:37.

This paper presents the rationale for using a component-based architecture for computer-assisted intervention (CAI) systems, including the ability to reuse components and to easily develop distributed systems. We introduce, however, three additional capabilities that we believe are especially important for research and development of CAI systems. The first is the ability to deploy components among different processes (as conventionally done) or within the same process (for optimal real-time performance), without requiring source-level modifications to the components. This is particularly relevant for real-time video processing, where the use of multiple processes could cause perceptible delays in the video stream. The second key feature is the ability to dynamically reconfigure the system. In a system composed of multiple processes on multiple computers, this allows one process to be restarted (e.g., after correcting a problem) and reconnected to the rest of the system, which is more convenient than restarting the entire distributed application and enables better fault recovery. The third key feature is the availability of run-time tools for data collection, interactive control, and introspection, and of offline tools for data analysis and playback. These features are provided by the open-source cisst software package, which forms the basis for the Surgical Assistant Workstation (SAW) framework. A complex computer-assisted intervention system for retinal microsurgery is presented as an example that relies on these features. This system integrates robotics, stereo microscopy, force sensing, and optical coherence tomography (OCT) imaging to transcend the current limitations of vitreoretinal surgery.