IMHOTEP: virtual reality framework for surgical applications

Figure: 3D rendering of liver CT data

Pfeiffer, M., Kenngott, H., Preukschas, A., et al.: IMHOTEP: virtual reality framework for surgical applications. Int J CARS (2018).

Abstract

Purpose
The data available to surgeons before, during, and after surgery is steadily increasing in both quantity and diversity. When planning a patient’s treatment, this large amount of information can be difficult to interpret. To aid in processing the information, new methods are needed to present multimodal patient data, ideally combining textual, image, temporal, and 3D data in a holistic, context-aware system.

Methods
We present an open-source framework for handling patient data in a virtual reality (VR) environment. By using VR technology, the workspace available to the surgeon is maximized, and 3D patient data is rendered in stereo, which increases depth perception. The framework organizes the data into workspaces and provides tools that let users control, manipulate, and enhance the data. Thanks to its modular design, the framework can easily be adapted and extended for various clinical applications.
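
The paper describes this modular, event-driven organization but does not include code. As a purely illustrative sketch (IMHOTEP itself is written in C# on top of Unity3D, and the event name below is invented), decoupled modules can communicate through a minimal publish/subscribe hub:

```python
# Illustrative only: a minimal publish/subscribe hub in the spirit of the
# framework's modular design. Not IMHOTEP's actual C#/Unity3D code.
from collections import defaultdict

class EventBus:
    """Lets tools and workspaces react to data changes without direct coupling."""
    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, event_name, callback):
        self._listeners[event_name].append(callback)

    def fire(self, event_name, payload=None):
        for callback in self._listeners[event_name]:
            callback(payload)

bus = EventBus()
# A hypothetical annotation tool reacts whenever a new patient dataset is loaded.
bus.subscribe("patient_loaded", lambda name: print("Annotation tool ready for", name))
bus.fire("patient_loaded", "Patient 001")
```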

Results
The framework was evaluated by clinical personnel (77 participants). The majority of the group stated that a complex surgical situation is easier to comprehend when using the framework, and that it is very well suited for education. Furthermore, the application to various clinical scenarios, including the simulation of excitation propagation in the human atrium, demonstrated the framework’s adaptability. As a feasibility study, the framework was used during the planning phase of the surgical removal of a large central carcinoma from a patient’s liver.

Conclusion
The clinical evaluation showed great potential and high acceptance of the VR environment in a medical context. The various applications confirmed that the framework is easily extended and can be used for real-time simulation as well as for the manipulation of complex anatomical structures.

IMHOTEP architecture overview. The framework builds on the Unity3D engine. It uses ITK to parse DICOM data, reads Blender3D surface meshes, and communicates with VR hardware through OpenVR. The various data types can be manipulated by tools and visualized in multiple ways. Demanding tasks can be outsourced to separate threads, and asynchronous tasks can communicate across modules by firing events. Tested with Unity3D Version 2017.2, SimpleITK Version 1.0.1, OpenVR/SteamVR Version 1.2.1, and Blender3D Version 2.78.
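
The DICOM parsing step can be illustrated with SimpleITK, the ITK binding the authors list above. The sketch below (the directory path is a placeholder, and IMHOTEP performs the equivalent step inside Unity3D rather than in Python) reads a DICOM series into a single 3D volume:

```python
# Minimal sketch of DICOM series loading with SimpleITK; the path is a placeholder.
import SimpleITK as sitk

def load_dicom_series(directory):
    """Read all slices of a DICOM series in `directory` into one 3D volume."""
    reader = sitk.ImageSeriesReader()
    file_names = reader.GetGDCMSeriesFileNames(directory)  # sorted slice files
    reader.SetFileNames(file_names)
    return reader.Execute()

volume = load_dicom_series("/path/to/dicom")  # placeholder path
print(volume.GetSize(), volume.GetSpacing())  # voxel counts and physical spacing in mm
```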

Combining intraoperative ultrasound brain shift correction and augmented reality visualizations

Figure: augmented reality view in brain tumor surgery

Combining intraoperative ultrasound brain shift correction and augmented reality visualizations: a pilot study of eight cases, by Gerard et al., Journal of Medical Imaging, 5(2), 021210 (2018).

NOTE: You can read or download the paper at ResearchGate.

Abstract:

We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction and augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. We combine two imaging technologies for image-guided brain tumor neurosurgery. Throughout surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3-D) patient-specific models of the patient’s cortex, vasculature, and lesion. Ultrasound imaging was acquired intraoperatively, and preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrates the feasible combination of these two technologies and their complementary features. In each case, the AR visualizations enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and AR were reduced in all cases. These results demonstrate the potential of combining ultrasound-based registration with AR to become a useful tool for neurosurgeons to improve intraoperative patient-specific planning by improving the understanding of complex 3-D medical imaging data and by prolonging the reliable use of IGNS.

Flowchart of the intraoperative workflow and how surgical tasks are related to IGNS tasks. A: Patient-to-image registration. After the patient’s head is immobilized, a tracking reference is attached to the clamp and eight facial landmarks are chosen that correspond to identical landmarks on the preoperative images, creating a mapping between the two spaces. B: Augmented reality visualization on the skull is qualitatively assessed by comparing the tumor contour as defined by the preoperative guidance images with the overlay of the augmented image. C: Once the craniotomy has been performed, a series of US images is acquired on the dura and then reconstructed and registered with the preoperative MR images using the gradient orientation alignment algorithm. D: Augmented reality visualization on the cortex showing the location of the tumor (green) and a vessel of interest (blue). E: The AR accuracy is quantitatively evaluated by having the surgeon choose an identifiable landmark on the physical patient, record its coordinates, then choose the corresponding landmark on the augmented image, record its coordinates, and measure the two-dimensional distance between the two.
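
Step A is a classic paired-point rigid registration problem, and step E reduces to a distance between corresponding picked points. The paper does not publish its solver, so the sketch below uses the standard closed-form Kabsch/Umeyama solution as a stand-in; the function and variable names are illustrative:

```python
# Hedged illustration of step A (landmark registration) and step E (AR accuracy).
# The Kabsch/Umeyama solution shown here is a standard stand-in, not the
# authors' implementation.
import numpy as np

def rigid_landmark_registration(patient_pts, image_pts):
    """Least-squares rigid transform (R, t) with R @ p + t ~= q for paired points.

    patient_pts, image_pts: (N, 3) arrays of corresponding landmarks,
    e.g. the eight facial landmarks from step A.
    """
    p_mean, q_mean = patient_pts.mean(axis=0), image_pts.mean(axis=0)
    P, Q = patient_pts - p_mean, image_pts - q_mean
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

def landmark_error(picked_physical, picked_augmented):
    """Euclidean distance between corresponding picked points (step E's metric)."""
    return float(np.linalg.norm(np.asarray(picked_physical) - np.asarray(picked_augmented)))
```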

AR system lets doctors see under patients’ skin without the scalpel

New technology is bringing the power of augmented reality into clinical practice.

The system, called ProjectDR, allows medical images such as CT scans and MRI data to be displayed directly on a patient’s body in a way that moves as the patient does.

“We wanted to create a system that would show clinicians a patient’s internal anatomy within the context of the body,” explained Ian Watts, a computing science graduate student and the developer of ProjectDR.

The technology includes a motion-tracking system using infrared cameras and markers on the patient’s body, as well as a projector to display the images. But the really difficult part, Watts explained, is having the image track properly on the patient’s body even as the patient shifts and moves. The solution: custom software, written by Watts, that gets all of the components working together.
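
The actual ProjectDR code is not public, so the following is only a rough sketch of the core alignment task: each frame, the tracked marker positions drive a warp that re-maps the medical image into projector coordinates so it stays registered to the body. The four-marker layout, the OpenCV homography, and the projector resolution are all assumptions:

```python
# Rough, assumption-laden sketch of keeping a projected image locked to
# tracked markers; not the actual ProjectDR implementation.
import cv2
import numpy as np

PROJECTOR_SIZE = (1920, 1080)  # assumed projector resolution (width, height)

def warp_to_markers(medical_image, reference_px, tracked_px):
    """Warp `medical_image` so its reference points follow the tracked markers.

    reference_px: 4x2 marker positions in the image's own pixel coordinates.
    tracked_px:   4x2 current marker positions in projector pixel coordinates,
                  updated every frame by the infrared tracking system.
    """
    H, _ = cv2.findHomography(np.float32(reference_px), np.float32(tracked_px))
    return cv2.warpPerspective(medical_image, H, PROJECTOR_SIZE)
```

Re-running this warp at tracking rate is what would make the projected anatomy appear to move with the patient.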

ProjectDR was presented in November 2017 at the ACM Symposium on Virtual Reality Software and Technology (VRST) in Gothenburg, Sweden.

Read more at University of Alberta.