IMHOTEP: virtual reality framework for surgical applications

[Image: liver CT 3D rendering]

IMHOTEP: virtual reality framework for surgical applications, Pfeiffer, M., Kenngott, H., Preukschas, A. et al. Int J CARS (2018).

Abstract

Purpose
The data which is available to surgeons before, during and after surgery is steadily increasing in quantity as well as diversity. When planning a patient’s treatment, this large amount of information can be difficult to interpret. To aid in processing the information, new methods need to be found to present multimodal patient data, ideally combining textual, imagery, temporal and 3D data in a holistic and context-aware system.

Methods
We present an open-source framework which allows handling of patient data in a virtual reality (VR) environment. By using VR technology, the workspace available to the surgeon is maximized and 3D patient data is rendered in stereo, which increases depth perception. The framework organizes the data into workspaces and contains tools which allow users to control, manipulate and enhance the data. Due to the framework’s modular design, it can easily be adapted and extended for various clinical applications.
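
To make the "workspaces plus tools" idea concrete, here is a minimal Python sketch of such a plugin-style layout. The class and method names are purely illustrative and are not the actual IMHOTEP API (the framework itself is written on top of Unity3D); they only show how new clinical applications could extend such a modular design.

```python
# Minimal sketch of a modular "workspace + tool" layout, loosely inspired by the
# description above. All names here are hypothetical, NOT the IMHOTEP API.

class Tool:
    """A tool operates on the data shown in a workspace (e.g. annotate, measure)."""
    name = "base-tool"

    def activate(self, workspace):
        raise NotImplementedError


class Workspace:
    """Groups related patient data (3D organs, DICOM slices, text) and its tools."""

    def __init__(self, name):
        self.name = name
        self.data = {}          # e.g. {"liver_mesh": ..., "ct_volume": ...}
        self.tools = {}

    def register_tool(self, tool):
        # New clinical applications extend the framework simply by adding tools.
        self.tools[tool.name] = tool

    def use(self, tool_name):
        self.tools[tool_name].activate(self)


class MeasurementTool(Tool):
    name = "measure"

    def activate(self, workspace):
        print(f"Measuring structures in workspace '{workspace.name}'")


ws = Workspace("liver-resection-planning")
ws.register_tool(MeasurementTool())
ws.use("measure")
```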

Results
The framework was evaluated by clinical personnel (77 participants). The majority of the group stated that a complex surgical situation is easier to comprehend by using the framework, and that it is very well suited for education. Furthermore, the application to various clinical scenarios—including the simulation of excitation propagation in the human atrium—demonstrated the framework’s adaptability. As a feasibility study, the framework was used during the planning phase of the surgical removal of a large central carcinoma from a patient’s liver.

Conclusion
The clinical evaluation showed a large potential and high acceptance for the VR environment in a medical context. The various applications confirmed that the framework is easily extended and can be used in real-time simulation as well as for the manipulation of complex anatomical structures.

[Image: IMHOTEP architecture]
IMHOTEP architecture overview. The framework builds on the Unity3D engine. It uses ITK to parse DICOM data, reads Blender3D surface meshes and communicates with VR hardware through OpenVR. The various data types can be manipulated by tools and visualized using various methods. Demanding tasks can be outsourced to separate threads. Asynchronous tasks can communicate across modules by firing events. Tested with Unity3D Version 2017.2, SimpleITK Version 1.0.1, OpenVR/SteamVR Version 1.2.1 and Blender3D Version 2.78
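
As a side note, the DICOM-parsing step the caption mentions is easy to reproduce with SimpleITK's Python bindings. The snippet below is only a minimal loading sketch (the directory path is a placeholder), not code from the framework itself, which calls SimpleITK from within Unity3D.

```python
# Minimal sketch: loading a DICOM series with SimpleITK, the library named in the
# caption for DICOM parsing. The path is a placeholder.
import SimpleITK as sitk

dicom_dir = "/path/to/dicom/series"     # placeholder directory

reader = sitk.ImageSeriesReader()
file_names = reader.GetGDCMSeriesFileNames(dicom_dir)  # sorted slice files
reader.SetFileNames(file_names)
volume = reader.Execute()               # 3D image with spacing/origin metadata

print("size (voxels):", volume.GetSize())
print("spacing (mm): ", volume.GetSpacing())

# Convert to a NumPy array if the voxel data should be processed further.
array = sitk.GetArrayFromImage(volume)  # indexed as [slice, row, column]
```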

Virtual surgery simulation versus traditional approaches in training of residents in cervical pedicle screw placement

[Image: man-machine interactive interface of the VSTS]

Virtual surgery simulation versus traditional approaches in training of residents in cervical pedicle screw placement, by Hou et al. Arch Orthop Trauma Surg (2018).

Abstract:

Introduction
Cervical pedicle screw placement is one of the most difficult procedures in spine surgery; it typically requires a long period of repeated practice and can cause screw placement-related complications. We performed this cadaver study to investigate the effectiveness of a virtual surgical training system (VSTS) for training residents in cervical pedicle screw instrumentation.

Materials and methods
A total of ten novice residents were randomly assigned to two groups: a simulation training (ST) group (n = 5) and a control group (n = 5). The ST group received surgical training in cervical pedicle screw placement on the VSTS, while the control group was given an introductory teaching session before the cadaver test. Ten fresh adult spine specimens (6 male, 4 female) were collected and randomly allocated to the two groups. Bilateral C3–C6 pedicle screw instrumentation was then performed on the specimens of each group. After instrumentation, screw positions in the two groups were evaluated by imaging.

Results
There was a statistically significant difference in screw penetration rates between the ST group (10%) and the control group (62.5%, P < 0.05). The acceptable screw rates were 100% and 50% in the ST and control groups, respectively, also a significant difference (P < 0.05). In addition, the average screw penetration distance in the ST group (1.12 ± 0.47 mm) was significantly lower than in the control group (2.08 ± 0.39 mm, P < 0.05).

Conclusions
This study demonstrated that the VSTS, as an advanced training tool, showed promising effects in improving the performance of novice residents in cervical pedicle screw placement compared with traditional teaching methods.
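
For readers who want to sanity-check comparisons of this kind, the sketch below reproduces them with SciPy. The abstract does not state the exact tests or the raw screw counts, so the 40 screws per group (5 specimens x 8 screws for bilateral C3–C6) and the choice of Fisher's exact test and an unpaired t-test from summary statistics are assumptions, not the authors' actual analysis.

```python
# Rough reconstruction of the reported group comparisons. The screw counts and the
# choice of tests are assumptions; the abstract does not state the methods used.
from scipy import stats

n_screws = 40                                   # assumed screws per group

# Penetration rates: 10% (ST) vs 62.5% (control)
st_penetrated, ctrl_penetrated = 4, 25          # 10% and 62.5% of 40
table = [[st_penetrated, n_screws - st_penetrated],
         [ctrl_penetrated, n_screws - ctrl_penetrated]]
_, p_rate = stats.fisher_exact(table)
print(f"penetration rate comparison: p = {p_rate:.4f}")

# Penetration distance: 1.12 +/- 0.47 mm (ST) vs 2.08 +/- 0.39 mm (control).
# Using all screws per group as sample size is also an assumption.
t, p_dist = stats.ttest_ind_from_stats(mean1=1.12, std1=0.47, nobs1=n_screws,
                                        mean2=2.08, std2=0.39, nobs2=n_screws)
print(f"penetration distance comparison: p = {p_dist:.4g}")
```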

Direct volume rendering in virtual reality

[Image: stereoscopic image pairs]

Book chapter: Scholl I., Suder S., Schiffer S. (2018) Direct Volume Rendering in Virtual Reality. In: Maier A., Deserno T., Handels H., Maier-Hein K., Palm C., Tolxdorff T. (eds) Bildverarbeitung für die Medizin 2018. Informatik aktuell. Springer Vieweg, Berlin, Heidelberg.

Abstract:

Direct Volume Rendering (DVR) techniques are used to visualize surfaces from 3D volume data sets without computing a 3D geometry. Several surfaces can be classified using a transfer function by assigning optical properties like color and opacity (RGBα) to the voxel data. Finding a good transfer function that separates specific structures from the volume data set is, in general, a manual and time-consuming procedure and requires detailed knowledge of the data and the image acquisition technique. In this paper, we present a new Virtual Reality (VR) application based on the HTC Vive headset. One-dimensional transfer functions can be designed in VR while the stereoscopic image pair is continuously rendered through massively parallel GPU-based ray casting shader techniques. The usability of the VR application is evaluated.
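
The core loop behind this kind of renderer is compact. Below is a plain NumPy sketch of it: sample the volume along a viewing ray, push each sample through a one-dimensional RGBA transfer function, and composite front-to-back. The paper's application runs this per pixel in GPU shaders; the toy volume, transfer function and step parameters here are placeholders.

```python
# CPU sketch of a DVR ray-casting loop with a 1D transfer function and
# front-to-back compositing. Toy data only; the real renderer runs on the GPU.
import numpy as np

# Toy volume: a bright sphere inside a dim background.
grid = np.linspace(-1, 1, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
volume = np.where(x**2 + y**2 + z**2 < 0.4, 0.9, 0.1)

def transfer_function(value):
    """1D transfer function: scalar value -> (r, g, b), alpha."""
    alpha = np.clip((value - 0.5) * 2.0, 0.0, 1.0)   # hide low intensities
    return np.array([1.0, 0.8, 0.6]) * value, alpha  # warm tint

def cast_ray(volume, start, direction, step=0.01, n_steps=300):
    color = np.zeros(3)
    alpha = 0.0
    pos = np.array(start, dtype=float)
    d = np.array(direction, dtype=float)
    for _ in range(n_steps):
        idx = np.clip(((pos + 1) / 2 * (volume.shape[0] - 1)).astype(int),
                      0, volume.shape[0] - 1)
        c, a = transfer_function(volume[tuple(idx)])
        a *= step * 50                      # opacity correction for step size
        color += (1.0 - alpha) * a * c      # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                    # early ray termination
            break
        pos += d * step
    return color, alpha

print(cast_ray(volume, start=(-1, 0, 0), direction=(1, 0, 0)))
```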

Apple’s smart AR/VR glasses: Optical system now patent-pending

[Image: Apple AR glasses]

Patently Apple reports on a new optical system for head-mounted displays related to virtual reality.

The patent application number is PCT/US2017/044247.

Abstract:

A head-mounted display may include a display system and an optical system in a housing. The display system may have a pixel array that produces light associated with images. The display system may also have a linear polarizer through which light from the pixel array passes and a quarter wave plate through which the light passes after passing through the linear polarizer. The optical system may be a catadioptric optical system having one or more lens elements. The lens elements may include a plano-convex lens and a plano-concave lens. A partially reflective mirror may be formed on a convex surface of the plano-convex lens. A reflective polarizer may be formed on the planar surface of the plano-convex lens or the concave surface of the plano-concave lens. An additional quarter wave plate may be located between the reflective polarizer and the partially reflective mirror.
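
The role of the wave plates and the reflective polarizer in folding the optical path (the usual "pancake" lens trick) can be illustrated with a small Jones-calculus sketch. The element orientations below (polarizer transmitting horizontal light, wave plates at 45 degrees) are assumptions rather than values from the patent, reflections are modeled as identities for simplicity, and the roughly 25% intensity efficiency it prints is just the textbook consequence of two passes at a 50/50 mirror.

```python
# Jones-calculus sketch of the folded light path: linear polarizer -> quarter wave
# plate -> partially reflective mirror -> quarter wave plate -> reflective
# polarizer. Orientations are assumed; reflection handedness/sign conventions are
# deliberately ignored. Illustration only, not taken from the patent.
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def quarter_wave_plate(theta):
    """Jones matrix of a quarter wave plate with fast axis at angle theta."""
    d = np.diag([np.exp(-1j * np.pi / 4), np.exp(1j * np.pi / 4)])
    return rot(theta) @ d @ rot(-theta)

H = np.array([1.0, 0.0])                 # horizontal linear polarization
V = np.array([0.0, 1.0])
QWP = quarter_wave_plate(np.radians(45))
half_mirror = 1 / np.sqrt(2)             # 50% intensity -> 1/sqrt(2) amplitude

def fraction(state, axis):
    """Fraction of the intensity along a given linear polarization axis."""
    return abs(np.vdot(axis, state)) ** 2 / abs(np.vdot(state, state))

# Display light after the linear polarizer, then the first quarter wave plate.
E = QWP @ H

# Transmit through the partially reflective mirror and the second wave plate.
E = QWP @ (half_mirror * E)
print("at reflective polarizer, vertical fraction:", round(fraction(E, V), 3))
# ~1.0 -> a reflective polarizer transmitting horizontal light reflects it back.

# Reflected by the polarizer, back through the wave plate, reflected by the
# mirror, and through the wave plate once more (reflections modeled as identity).
E = QWP @ (half_mirror * (QWP @ E))
print("after second mirror pass, horizontal fraction:", round(fraction(E, H), 3))
# ~1.0 -> the reflective polarizer now transmits the light toward the eye.
print("overall intensity efficiency:", round(abs(np.vdot(E, E)), 3))  # ~0.25
```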

AR system lets doctors see under patients’ skin without the scalpel

New technology is bringing the power of augmented reality into clinical practice.

The system, called ProjectDR, allows medical images such as CT scans and MRI data to be displayed directly on a patient’s body in a way that moves as the patient does.

“We wanted to create a system that would show clinicians a patient’s internal anatomy within the context of the body,” explained Ian Watts, a computing science graduate student and the developer of ProjectDR.

The technology includes a motion-tracking system using infrared cameras and markers on the patient’s body, as well as a projector to display the images. But the really difficult part, Watts explained, is having the image track properly on the patient’s body even as they shift and move. The solution: custom software written by Watts that gets all of the components working together.
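
The article does not describe the software internals, but the registration math such a system needs is easy to sketch: transform the pre-operative model by the patient pose estimated from the tracked markers, then map it into projector pixels with a calibrated projection matrix. Everything in the snippet below, from the matrices to the point values, is a hypothetical placeholder, not ProjectDR code.

```python
# Illustration of projector-based AR registration (NOT ProjectDR's actual code):
# CT/MRI model points follow the tracked patient pose and are mapped to projector
# pixels with a precalibrated projection matrix. All values are placeholders.
import numpy as np

# 3x4 projector matrix (intrinsics x extrinsics) from a one-time calibration.
P = np.array([[1500.0, 0.0,    640.0, 0.0],
              [0.0,    1500.0, 400.0, 0.0],
              [0.0,    0.0,    1.0,   0.0]])

def pose_from_tracker(rotation, translation):
    """Build a 4x4 rigid transform from the marker pose reported by the tracker."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def project_points(points_model, patient_pose):
    """Map 3D model points (N x 3, patient/CT coordinates) to projector pixels."""
    n = points_model.shape[0]
    homogeneous = np.hstack([points_model, np.ones((n, 1))])   # N x 4
    in_world = patient_pose @ homogeneous.T                    # follow the patient
    pixels = P @ in_world                                      # 3 x N
    return (pixels[:2] / pixels[2]).T                          # N x 2 pixel coords

# Example: identity rotation, patient lying 1.5 m in front of the projector.
pose = pose_from_tracker(np.eye(3), np.array([0.0, 0.0, 1.5]))
organ_points = np.array([[0.00, 0.00, 0.00],
                         [0.05, 0.02, 0.01]])                  # metres, CT frame
print(project_points(organ_points, pose))
```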

ProjectDR was presented last November at the Virtual Reality Software and Technology Symposium in Gothenburg, Sweden.

Read more at University of Alberta.