The data available to surgeons before, during, and after surgery is steadily increasing in both quantity and diversity. When planning a patient’s treatment, this large amount of information can be difficult to interpret. To aid in processing the information, new methods are needed to present multimodal patient data, ideally combining textual, imagery, temporal and 3D data in a holistic, context-aware system.
We present an open-source framework which allows handling of patient data in a virtual reality (VR) environment. By using VR technology, the workspace available to the surgeon is maximized and 3D patient data is rendered in stereo, which increases depth perception. The framework organizes the data into workspaces and contains tools which allow users to control, manipulate and enhance the data. Due to the framework’s modular design, it can easily be adapted and extended for various clinical applications.
The framework was evaluated by clinical personnel (77 participants). The majority of the group stated that a complex surgical situation is easier to comprehend by using the framework, and that it is very well suited for education. Furthermore, the application to various clinical scenarios—including the simulation of excitation propagation in the human atrium—demonstrated the framework’s adaptability. As a feasibility study, the framework was used during the planning phase of the surgical removal of a large central carcinoma from a patient’s liver.
The clinical evaluation showed a large potential and high acceptance for the VR environment in a medical context. The various applications confirmed that the framework is easily extended and can be used in real-time simulation as well as for the manipulation of complex anatomical structures.
Cervical pedicle screw placement is one of the most difficult procedures in spine surgery; it often requires a long period of repeated practice and can cause screw placement-related complications. We performed this cadaver study to investigate the effectiveness of a virtual surgical training system (VSTS) for teaching cervical pedicle screw instrumentation to residents.
Materials and methods
A total of ten novice residents were randomly assigned to two groups: the simulation training (ST) group (n = 5) and the control group (n = 5). The ST group received surgical training in cervical pedicle screw placement on the VSTS, while the control group was given an introductory teaching session before the cadaver test. Ten fresh adult spine specimens (6 male, 4 female) were collected and randomly allocated to the two groups. Bilateral C3–C6 pedicle screw instrumentation was then performed in the specimens of each group. After instrumentation, screw positions in the two groups were evaluated by imaging.
There was a statistically significant difference in screw penetration rates between the ST group (10%) and the control group (62.5%; P < 0.05). The acceptable screw rates were 100% and 50% in the ST and control groups, respectively, also a significant difference (P < 0.05). In addition, the average screw penetration distance in the ST group (1.12 ± 0.47 mm) was significantly lower than in the control group (2.08 ± 0.39 mm; P < 0.05).
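The abstract does not name the statistical test used; for small-sample comparisons of rates in a 2×2 table such as these, Fisher's exact test is a standard choice. The sketch below implements it in pure Python, with the cell counts inferred rather than taken from the abstract (8 bilateral C3–C6 screws per specimen × 5 specimens = 40 screws per group, which matches the reported 10% and 62.5% rates):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def pmf(x):
        # Probability of x in the top-left cell given fixed margins.
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = pmf(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(pmf(x) for x in range(lo, hi + 1)
               if pmf(x) <= p_obs * (1 + 1e-9))

# Inferred counts: 4 penetrations / 36 acceptable (ST group) versus
# 25 penetrations / 15 acceptable (control group).
p = fisher_exact_two_sided(4, 36, 25, 15)
print(f"p = {p:.2e}")  # far below the 0.05 threshold
```

With these counts the resulting p-value is orders of magnitude below 0.05, consistent with the significance reported in the abstract.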
This study demonstrated that the VSTS, as an advanced training tool, shows promise for improving the performance of novice residents in cervical pedicle screw placement compared with traditional teaching methods.
Book chapter Scholl I., Suder S., Schiffer S. (2018) Direct Volume Rendering in Virtual Reality. In: Maier A., Deserno T., Handels H., Maier-Hein K., Palm C., Tolxdorff T. (eds) Bildverarbeitung für die Medizin 2018. Informatik aktuell. Springer Vieweg, Berlin, Heidelberg.
Direct Volume Rendering (DVR) techniques are used to visualize surfaces from 3D volume data sets without computing a 3D geometry. Several surfaces can be classified using a transfer function by assigning optical properties like color and opacity (RGBα) to the voxel data. Finding a good transfer function that separates specific structures from the volume data set is, in general, a manual and time-consuming procedure, and requires detailed knowledge of the data and the image acquisition technique. In this paper, we present a new Virtual Reality (VR) application based on the HTC Vive headset. One-dimensional transfer functions can be designed in VR while continuously rendering the stereoscopic image pair through massively parallel GPU-based ray casting shader techniques. The usability of the VR application is evaluated.
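The transfer-function approach described above can be sketched as a lookup table mapping scalar voxel values to RGBα, combined with front-to-back alpha compositing along each viewing ray. The table values below are hypothetical examples, and a real DVR implementation would run this per pixel in a GPU shader rather than in Python:

```python
import numpy as np

# 1-D transfer function: map each scalar voxel value (0..255) to RGBA.
# Hypothetical classification: mid-range values ("soft tissue") become
# faint translucent red, high values ("bone") become opaque white.
tf = np.zeros((256, 4), dtype=np.float32)
tf[80:160] = [0.8, 0.2, 0.2, 0.05]   # soft tissue
tf[200:256] = [1.0, 1.0, 1.0, 0.9]   # bone

def composite_ray(samples, tf):
    """Front-to-back alpha compositing of scalar samples along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = tf[int(s)]
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:          # early ray termination
            break
    return color, alpha

# A ray that first crosses soft tissue, then hits bone:
color, alpha = composite_ray([0, 120, 120, 220, 220], tf)
```

Designing the transfer function interactively, as the paper proposes, amounts to editing the `tf` table while the compositing loop re-renders the stereo pair continuously.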
Radiologic evaluation of images from computed tomography (CT) or magnetic resonance imaging for diagnostic purposes is based on the analysis of single slices, occasionally supplemented by 3D reconstructions as well as surface- or volume-rendered images. However, due to the complexity of anatomical and pathological structures in biomedical imaging, innovative visualization techniques are required to display morphological characteristics three-dimensionally. Virtual reality is a modern tool for representing visual data: the observer has the impression of being “inside” a virtual surrounding, which is referred to as immersive imaging. Such techniques are currently used in technical applications, e.g. in the automobile industry. Our aim is to introduce a workflow, realized within one simple program, which processes common image stacks from CT, produces 3D volume and surface reconstruction and rendering, and finally loads the data into a virtual reality device equipped with a head-motion-tracking cave automatic virtual environment (CAVE) system. Such techniques have the potential to augment the possibilities of non-invasive medical imaging, e.g. for surgical planning or educational purposes, adding another dimension for an advanced understanding of complex anatomical and pathological structures. To this end, the reconstructions are based on advanced mathematical techniques, and the grids we can export are intended to form the basis for simulations of mathematical models of the pathogenesis of different diseases.
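The first steps of such a workflow, stacking CT slices into a volume and segmenting a structure for later reconstruction, can be sketched as follows. The synthetic data, threshold value, and array sizes are illustrative stand-ins; a real pipeline would read a DICOM series and pass the mask to a surface-extraction step (e.g. marching cubes) before export to the VR device:

```python
import numpy as np

# Hypothetical stand-in for a loaded CT series: 40 slices of 64x64
# pixels with intensities in Hounsfield units (HU).
rng = np.random.default_rng(0)
slices = [rng.integers(-1000, 2000, size=(64, 64)) for _ in range(40)]

# Step 1: stack the 2-D slices into a single 3-D volume (z, y, x).
volume = np.stack(slices, axis=0)

# Step 2: segment bone with a simple HU threshold (~300 HU); surface
# extraction and mesh export for the CAVE system would operate on
# this binary mask.
bone_mask = volume > 300
print(volume.shape, bone_mask.mean())
```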
A head-mounted display may include a display system and an optical system in a housing. The display system may have a pixel array that produces light associated with images. The display system may also have a linear polarizer through which light from the pixel array passes and a quarter wave plate through which the light passes after passing through the linear polarizer. The optical system may be a catadioptric optical system having one or more lens elements. The lens elements may include a plano-convex lens and a plano-concave lens. A partially reflective mirror may be formed on the convex surface of the plano-convex lens. A reflective polarizer may be formed on the planar surface of the plano-convex lens or the concave surface of the plano-concave lens. An additional quarter wave plate may be located between the reflective polarizer and the partially reflective mirror.
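The arrangement described above (partially reflective mirror, quarter wave plates, reflective polarizer) matches what is commonly called a folded or "pancake" catadioptric design: light makes an extra round trip between the partial mirror and the reflective polarizer before exiting. One well-known consequence, under idealized assumptions (a lossless 50/50 partial mirror and ideal polarizers and wave plates), is that at most a quarter of the already-polarized display light reaches the eye. A minimal arithmetic sketch of that intensity budget:

```python
# Idealized intensity budget for the folded light path, assuming a
# lossless 50/50 partially reflective mirror and ideal polarizers
# and quarter wave plates.
display_intensity = 1.0

# Pass 1: the light transmits through the 50/50 mirror.
after_first_pass = display_intensity * 0.5
# The quarter wave plates leave the light polarized so that the
# reflective polarizer reflects it back toward the mirror (lossless).
# Pass 2: the 50/50 mirror reflects half of it back toward the eye.
after_fold = after_first_pass * 0.5
# The wave plates have now rotated the polarization 90 degrees, so
# the reflective polarizer transmits the light to the eye.
eye_intensity = after_fold
print(eye_intensity)  # 0.25: the fold costs ~75% of the light
```

This inherent loss is why pancake-lens headsets typically pair such optics with high-brightness displays.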