This article presents a hardware/software simulation environment suitable for anytime/anywhere surgical skills training. It blends the advantages of physical hardware and task analogs with the flexibility of virtual environments, further enhanced by a web-based implementation of training feedback accessible to both trainees and trainers. Our training system provides a self-paced, interactive means of attaining proficiency in basic tasks and could potentially be applied across a spectrum of trainees, from first-responder field medical personnel to physicians. The result is a powerful training tool for surgical skills acquisition relevant to the care of injured warfighters.
Purpose
Glenoid reaming is a technically challenging step during shoulder arthroplasty that could potentially be learned through simulation training. Creating a realistic simulation with vibration feedback in this context is innovative. Our study focused on the development and internal validation of a novel glenoid reaming simulator for potential use as a training tool.
Methods
Vibration and force profiles associated with glenoid reaming were quantified during a cadaveric experiment. Subsequently, a simulator was fabricated utilizing a haptic vibration transducer with high- and low-fidelity amplifiers; system calibration was performed by matching vibration peak-to-peak values for both amplifiers. Eight experts performed simulated reaming trials and were asked to identify isolated layer profiles produced by the simulator. Additionally, the experts' ability to successfully perform a simulated glenoid ream based solely on vibration feedback was recorded.
Results
Cadaveric experimental cartilage reaming produced lower vibrations than subchondral and cancellous bone (p ≤ 0.03). Gain calibration of a lower-fidelity (3.5 g_pk-pk, 0.36 g_RMS) and a higher-fidelity (3.4 g_pk-pk, 0.33 g_RMS) amplifier resulted in values similar to the cadaveric experimental benchmark (3.5 g_pk-pk, 0.30 g_RMS). When identifying random tissue-layer samples, experts were correct 52 ± 9% of the time, and the success rate varied with tissue type (p = 0.003). During simulated reaming, the experts stopped at the targeted subchondral bone with a success rate of 78 ± 24%. Simulation fidelity had no effect on accuracy, applied force, or reaming time (p > 0.05); however, the applied force tended to increase with trial number (p = 0.047).
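To make the calibration metrics concrete, the following minimal Python sketch shows how peak-to-peak (g_pk-pk) and RMS (g_RMS) amplitudes could be computed from a sampled acceleration signal and used to derive an amplifier gain correction toward the cadaveric benchmark. The signal, function names, and gain-matching rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vibration_metrics(accel_g):
    """Peak-to-peak and RMS amplitude of an acceleration signal in g
    (hypothetical helper, not the authors' code)."""
    a = np.asarray(accel_g, dtype=float)
    a = a - a.mean()                        # remove DC offset
    g_pk_pk = a.max() - a.min()             # peak-to-peak amplitude
    g_rms = np.sqrt(np.mean(a ** 2))        # root-mean-square amplitude
    return g_pk_pk, g_rms

# Scale the amplifier gain so the simulator's peak-to-peak amplitude
# matches the cadaveric benchmark (3.5 g_pk-pk reported above).
benchmark_pk_pk = 3.5
signal = np.random.randn(10_000) * 0.3      # stand-in accelerometer trace
pk_pk, rms = vibration_metrics(signal)
gain_correction = benchmark_pk_pk / pk_pk
print(f"pk-pk={pk_pk:.2f} g, rms={rms:.2f} g, gain x{gain_correction:.2f}")
```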
Conclusions
Development of the glenoid reaming simulator, coupled with expert evaluation furthered our understanding of the role of haptic vibration feedback during glenoid reaming. This study was the first to (1) propose, develop and examine simulated glenoid reaming, and (2) explore the use of haptic vibration feedback in the realm of shoulder arthroplasty.
Deconstructed view of the components that constitute the glenoid reaming simulator (GRS). A haptic vibration transducer produced vibrations at the glenoid–reamer interface. A load cell was instrumented between the transducer's housing unit and its base to detect the force applied by the user while interacting with the GRS. The surgical reamer was fitted with a non-fluted, non-wearing reaming tip to eliminate the reamer's material-removal function. Collectively, these components allow the user to 'ream' the rapid-prototyped glenoid component while the haptic vibration transducer generates specific vibration profiles dependent on the force applied and the layer being reamed.
Purpose
The data available to surgeons before, during, and after surgery are steadily increasing in both quantity and diversity. When planning a patient's treatment, this large amount of information can be difficult to interpret. To aid in processing it, new methods are needed to present multimodal patient data, ideally combining textual, image, temporal, and 3D data in a holistic, context-aware system.
Methods
We present an open-source framework which allows handling of patient data in a virtual reality (VR) environment. By using VR technology, the workspace available to the surgeon is maximized and 3D patient data is rendered in stereo, which increases depth perception. The framework organizes the data into workspaces and contains tools which allow users to control, manipulate and enhance the data. Due to the framework’s modular design, it can easily be adapted and extended for various clinical applications.
Results
The framework was evaluated by clinical personnel (77 participants). The majority of the group stated that a complex surgical situation is easier to comprehend by using the framework, and that it is very well suited for education. Furthermore, the application to various clinical scenarios—including the simulation of excitation propagation in the human atrium—demonstrated the framework’s adaptability. As a feasibility study, the framework was used during the planning phase of the surgical removal of a large central carcinoma from a patient’s liver.
Conclusion
The clinical evaluation showed large potential and high acceptance of the VR environment in a medical context. The various applications confirmed that the framework is easily extended and can be used for real-time simulation as well as for the manipulation of complex anatomical structures.
IMHOTEP architecture overview. The framework builds on the Unity3D engine. It uses ITK to parse DICOM data, reads Blender3D surface meshes, and communicates with VR hardware through OpenVR. The various data types can be manipulated by tools and visualized using various methods. Demanding tasks can be outsourced to separate threads. Asynchronous tasks can communicate across modules by firing events. Tested with Unity3D Version 2017.2, SimpleITK Version 1.0.1, OpenVR/SteamVR Version 1.2.1, and Blender3D Version 2.78
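As an illustration of the DICOM-parsing step the caption attributes to (Simple)ITK, here is a minimal Python sketch that loads a DICOM series into a single 3D volume with SimpleITK. The directory path and helper name are hypothetical, and IMHOTEP itself performs this step inside the Unity3D engine rather than in Python.

```python
import SimpleITK as sitk

def load_dicom_series(directory):
    """Load the first DICOM series found in `directory` as a 3D volume
    (generic sketch of the parsing step, not IMHOTEP's code)."""
    reader = sitk.ImageSeriesReader()
    series_ids = reader.GetGDCMSeriesIDs(directory)
    if not series_ids:
        raise ValueError(f"No DICOM series found in {directory!r}")
    file_names = reader.GetGDCMSeriesFileNames(directory, series_ids[0])
    reader.SetFileNames(file_names)
    return reader.Execute()                   # sitk.Image, 3D

volume = load_dicom_series("/path/to/dicom")  # hypothetical path
print(volume.GetSize(), volume.GetSpacing())  # voxel grid and spacing
```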
Introduction
Cervical pedicle screw placement is one of the most difficult procedures in spine surgery; it often requires a long period of repeated practice and can cause screw placement-related complications. We performed this cadaver study to investigate the effectiveness of a virtual surgical training system (VSTS) for teaching cervical pedicle screw instrumentation to residents.
Materials and methods
A total of ten novice residents were randomly assigned to two groups: a simulation training (ST) group (n = 5) and a control group (n = 5). The ST group received surgical training in cervical pedicle screw placement on the VSTS, while the control group was given an introductory teaching session before the cadaver test. Ten fresh adult spine specimens (6 male, 4 female) were collected and randomly allocated to the two groups. Bilateral C3–C6 pedicle screw instrumentation was performed on the specimens of each group, and screw positions were then evaluated by imaging.
Results
There was a statistically significant difference in screw penetration rates between the ST group (10%) and the control group (62.5%; P < 0.05). The acceptable screw rates were 100% and 50% in the ST and control groups, respectively, a significant difference (P < 0.05). In addition, the average screw penetration distance in the ST group (1.12 ± 0.47 mm) was significantly lower than in the control group (2.08 ± 0.39 mm; P < 0.05).
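As a sketch of the kind of group comparison reported above, the following Python snippet runs a two-sample t-test on illustrative penetration-distance data. The arrays are random stand-ins generated from the reported means and SDs, not the study data, and the study's exact statistical procedure is not specified here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative stand-ins drawn from the reported group statistics
# (1.12 +/- 0.47 mm vs 2.08 +/- 0.39 mm); not the actual measurements.
st_group = rng.normal(1.12, 0.47, size=40)
control_group = rng.normal(2.08, 0.39, size=40)

t_stat, p_value = stats.ttest_ind(st_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")   # expect p < 0.05
```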
Conclusions
This study demonstrated that the VSTS, as an advanced training tool, shows promise for improving the performance of novice residents in cervical pedicle screw placement compared with traditional teaching methods.
Book chapter Scholl I., Suder S., Schiffer S. (2018) Direct Volume Rendering in Virtual Reality. In: Maier A., Deserno T., Handels H., Maier-Hein K., Palm C., Tolxdorff T. (eds) Bildverarbeitung für die Medizin 2018. Informatik aktuell. Springer Vieweg, Berlin, Heidelberg.
Abstract:
Direct Volume Rendering (DVR) techniques are used to visualize surfaces from 3D volume data sets without computing a 3D geometry. Several surfaces can be classified using a transfer function that assigns optical properties such as color and opacity (RGBα) to the voxel data. Finding a good transfer function that separates specific structures from the volume data set is in general a manual, time-consuming procedure and requires detailed knowledge of the data and the image acquisition technique. In this paper, we present a new Virtual Reality (VR) application based on the HTC Vive headset. One-dimensional transfer functions can be designed in VR while the stereoscopic image pair is continuously rendered through massively parallel GPU-based ray-casting shader techniques. The usability of the VR application is evaluated.
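To illustrate what a one-dimensional transfer function does, here is a minimal numpy sketch of a piecewise-linear lookup mapping voxel intensities to RGBα. The control points and intensity values are illustrative assumptions; the paper's actual implementation runs as GPU ray-casting shaders, not Python.

```python
import numpy as np

def apply_transfer_function(intensity, control_points):
    """Map scalar voxel intensities to RGBA via a piecewise-linear 1D
    transfer function. `control_points` is a list of
    (intensity, r, g, b, alpha) tuples sorted by intensity.
    Schematic sketch only, not the paper's shader code."""
    pts = np.asarray(control_points, dtype=float)
    xs = pts[:, 0]
    # Interpolate each of the four RGBA channels independently.
    return np.stack(
        [np.interp(intensity, xs, pts[:, c]) for c in range(1, 5)],
        axis=-1,
    )

# Example: semi-transparent red soft tissue, opaque white bone
# (HU-like intensities, purely illustrative).
tf = [
    (0.0,    0, 0, 0, 0.0),
    (300.0,  1, 0, 0, 0.1),
    (1000.0, 1, 1, 1, 1.0),
]
voxels = np.array([50.0, 400.0, 1200.0])
print(apply_transfer_function(voxels, tf))   # one RGBA row per voxel
```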
Book chapter Maier et al. (2018) Force-feedback-assisted Bone Drilling Simulation Based on CT Data, pp. 291-296. In: Maier A., Deserno T., Handels H., Maier-Hein K., Palm C., Tolxdorff T. (eds) Bildverarbeitung für die Medizin 2018. Informatik aktuell. Springer Vieweg, Berlin, Heidelberg.
Abstract:
To fix a fracture using minimally invasive surgical approaches, surgeons drill complex, tiny bones with a two-dimensional X-ray as the single imaging modality in the operating room. Our novel haptic force-feedback and visually assisted training system can potentially help hand surgeons learn the drilling procedure in a realistic visual environment. Within the simulation, collision detection and the interaction between the virtual drill, bone voxels, and surfaces are important. In this work, the CHAI3D collision detection and force calculation algorithms are combined with a physics engine to simulate the bone-drilling process. The chosen Bullet physics engine provides a stable simulation of rigid bodies if the collision model of the drill and tool holder is generated as a compound shape. Three haptic points are added to the K-wire tip for removing single voxels from the bone. For the drilling process, three modes are proposed to emulate the different phases of drilling by restricting the movement of the haptic device.
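As a rough illustration of the voxel-removal idea (haptic points at the K-wire tip carving single voxels from an occupancy grid), here is a minimal Python sketch. The data structures and function names are assumptions; the actual system is built on CHAI3D and the Bullet physics engine.

```python
import numpy as np

def drill_step(bone, tip_points_idx):
    """Remove bone voxels touched by the drill tip's haptic points.
    `bone` is a boolean occupancy grid; `tip_points_idx` holds integer
    (i, j, k) indices of the (here, three) haptic points after mapping
    tool coordinates into the voxel grid. Sketch only."""
    removed = 0
    for i, j, k in tip_points_idx:
        inside = (0 <= i < bone.shape[0] and
                  0 <= j < bone.shape[1] and
                  0 <= k < bone.shape[2])
        if inside and bone[i, j, k]:
            bone[i, j, k] = False            # carve out this voxel
            removed += 1
    return removed                           # could drive feedback force

bone = np.ones((64, 64, 64), dtype=bool)     # toy bone volume
print(drill_step(bone, [(32, 32, 10), (32, 33, 10), (33, 32, 10)]))
```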
Radiologic evaluation of images from computed tomography (CT) or magnetic resonance imaging for diagnostic purposes is based on the analysis of single slices, occasionally supplementing this information with 3D reconstructions as well as surface- or volume-rendered images. However, due to the complexity of anatomical and pathological structures in biomedical imaging, innovative visualization techniques are required to display morphological characteristics three-dimensionally. Virtual reality is a modern way of representing visual data: the observer has the impression of being "inside" a virtual surrounding, which is referred to as immersive imaging. Such techniques are currently used in technical applications, e.g. in the automobile industry. Our aim is to introduce a workflow, realized within one simple program, that processes common image stacks from CT, produces 3D volume and surface reconstruction and rendering, and finally loads the data into a virtual reality device equipped with a motion-head-tracking cave automatic virtual environment system. Such techniques have the potential to augment the possibilities of non-invasive medical imaging, e.g. for surgical planning or educational purposes, adding another dimension for advanced understanding of complex anatomical and pathological structures. To this end, the reconstructions are based on advanced mathematical techniques, and the corresponding grids which we can export are intended to form the basis for simulations of mathematical models of the pathogenesis of different diseases.
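For the surface-reconstruction step of such a workflow, marching cubes is one standard technique; below is a minimal Python sketch using scikit-image, with a random stand-in volume and an arbitrary iso-value. This is a generic illustration under those assumptions, not the specific program described above.

```python
import numpy as np
from skimage import measure

# Stand-in CT stack: a (slices, rows, cols) array of intensities.
ct = np.random.rand(64, 128, 128)

# Extract an iso-surface mesh at a chosen threshold (a bone-like
# iso-value in practice; 0.5 here only because the data are random).
verts, faces, normals, values = measure.marching_cubes(ct, level=0.5)
print(verts.shape, faces.shape)   # mesh ready for rendering or export
```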
Background Application of surgical navigation for pelvic bone cancer surgery may prove useful, but in addition to the fact that research supporting its adoption remains relatively preliminary, the actual navigation devices are physically large, occupying considerable space in already crowded operating rooms. To address this issue, we developed and tested a navigation system for pelvic bone cancer surgery assimilating augmented reality (AR) technology to simplify the system by embedding the navigation software into a tablet personal computer (PC).
Questions/purposes Using simulated tumors and resections in a pig pelvic model, we asked: Can AR-assisted resection reduce errors in terms of planned bone cuts and improve ability to achieve the planned margin around a tumor in pelvic bone cancer surgery?
Methods We developed an AR-based navigation system for pelvic bone tumor surgery, which could be operated on a tablet PC. We created 36 bone tumor models for simulation of tumor resection in pig pelves and assigned 18 each to the AR-assisted resection group and conventional resection group. To simulate a bone tumor, bone cement was inserted into the acetabular dome of the pig pelvis. Tumor resection was simulated in two scenarios. The first was AR-assisted resection by an orthopaedic resident and the second was resection using conventional methods by an orthopaedic oncologist. For both groups, resection was planned with a 1-cm safety margin around the bone cement. Resection margins were evaluated by an independent orthopaedic surgeon who was blinded as to the type of resection. All specimens were sectioned twice: first through a plane parallel to the medial wall of the acetabulum and second through a plane perpendicular to the first. The distance from the resection margin to the bone cement was measured at four different locations for each plane. The largest of the four errors on a plane was adopted for evaluation. Therefore, each specimen had two values of error, which were collected from two perpendicular planes. The resection errors were classified into four grades: ≤ 3 mm; 3 to 6 mm; 6 to 9 mm; and > 9 mm or any tumor violation. Student’s t-test was used for statistical comparison of the mean resection errors of the two groups.
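The measurement and grading rule described above can be summarized in a short sketch. The function names are hypothetical, and interpreting "error" as the deviation of each measured margin from the planned 1-cm margin is an assumption based on the study design.

```python
def resection_error(distances_mm, planned_margin_mm=10.0):
    """Largest deviation of the four measured margin distances from the
    planned 1-cm margin on one sectioning plane (sketch of the rule)."""
    return max(abs(d - planned_margin_mm) for d in distances_mm)

def grade(error_mm, tumor_violation=False):
    """Classify a plane's error into the four grades used in the study."""
    if tumor_violation or error_mm > 9:
        return "> 9 mm or tumor violation"
    if error_mm > 6:
        return "6-9 mm"
    if error_mm > 3:
        return "3-6 mm"
    return "<= 3 mm"

# Example: one plane with margins measured at four locations (mm).
print(grade(resection_error([9.2, 10.5, 11.8, 8.6])))   # "<= 3 mm"
```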
Results The mean of 36 resection errors of 18 pelves in the AR-assisted resection group was 1.59 mm (SD, 4.13 mm; 95% confidence interval [CI], 0.24-2.94 mm) and the mean error of the conventional resection group was 4.55 mm (SD, 9.7 mm; 95% CI, 1.38-7.72 mm; p < 0.001). All specimens in the AR-assisted resection group had errors < 6 mm, whereas 78% (28 of 36) of errors in the conventional group were < 6 mm.
Conclusions In this in vitro simulated tumor model, we demonstrated that AR assistance could help to achieve the planned margin. Our model was designed as a proof of concept; although our findings do not justify a clinical trial in humans, they do support continued investigation of this system in a live animal model, which will be our next experiment.
Clinical Relevance The AR-based navigation system provides additional information of the tumor extent and may help surgeons during pelvic bone cancer surgery without the need for more complex and cumbersome conventional navigation systems.
We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction with augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. Throughout surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3-D) patient-specific models of the patient's cortex, vasculature, and lesion. Ultrasound imaging was acquired intraoperatively, and preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrates the feasibility of combining these two technologies and their complementary features. In each case, the AR visualizations enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and the AR overlay were reduced in all cases. These results demonstrate the potential of combining ultrasound-based registration with AR to become a useful tool for neurosurgeons, improving intraoperative patient-specific planning by aiding the understanding of complex 3-D medical imaging data and prolonging the reliable use of IGNS.
Flowchart of the intraoperative workflow and how surgical tasks relate to IGNS tasks. A: Patient-to-image registration. After the patient's head is immobilized, a tracking reference is attached to the clamp and 8 facial landmarks are chosen that correspond to identical landmarks on the preoperative images, creating a mapping between the two spaces. B: Augmented reality visualization on the skull is qualitatively assessed by comparing the tumor contour defined on the preoperative guidance images with the overlay in the augmented image. C: Once the craniotomy has been performed, a series of US images is acquired on the dura, then reconstructed and registered to the preoperative MRI images using the gradient orientation alignment algorithm. D: Augmented reality visualization on the cortex showing the location of the tumor (green) and a vessel of interest (blue). E: AR accuracy is quantitatively evaluated by having the surgeon choose an identifiable landmark on the physical patient, record its coordinates, choose the corresponding landmark on the augmented image, record its coordinates, and measure the two-dimensional distance between the two.
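Step A's landmark-based patient-to-image mapping is, in generic form, a least-squares rigid registration over paired fiducials. Below is a minimal Python sketch using the SVD-based (Kabsch/Horn) solution, with an illustrative fiducial-registration-error helper; the names and details are assumptions, not the IGNS system's actual implementation.

```python
import numpy as np

def rigid_landmark_registration(patient_pts, image_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    patient-space landmarks onto image-space landmarks (Kabsch/Horn)."""
    P = np.asarray(patient_pts, float)            # (N, 3) patient space
    Q = np.asarray(image_pts, float)              # (N, 3) image space
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)         # center both point sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    # Correct for a possible reflection in the SVD solution.
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def fre(patient_pts, image_pts, R, t):
    """Fiducial registration error: RMS residual over the landmarks."""
    residuals = np.asarray(image_pts) - (R @ np.asarray(patient_pts).T).T - t
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

# Synthetic check: a known rotation/translation should be recovered.
rng = np.random.default_rng(1)
P = rng.normal(size=(8, 3))                       # 8 facial landmarks
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = (R_true @ P.T).T + np.array([5.0, -2.0, 1.0])
R, t = rigid_landmark_registration(P, Q)
print(round(fre(P, Q, R, t), 6))                  # ~0 for noise-free data
```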
Workflow diagram showing the processes involved in AR content production.
Abstract:
Precision and planning are key to reconstructive surgery. Augmented reality (AR) can bring the information within preoperative computed tomography angiography (CTA) imaging to life, allowing the surgeon to ‘see through’ the patient’s skin and appreciate the underlying anatomy without making a single incision. This work has demonstrated that AR can assist the accurate identification, dissection and execution of vascular pedunculated flaps during reconstructive surgery. Separate volumes of osseous, vascular, skin, soft tissue structures and relevant vascular perforators were delineated from preoperative CTA scans to generate three-dimensional images using two complementary segmentation software packages. These were converted to polygonal models and rendered by means of a custom application within the HoloLens™ stereo head-mounted display. Intraoperatively, the models were registered manually to their respective subjects by the operating surgeon using a combination of tracked hand gestures and voice commands; AR was used to aid navigation and accurate dissection. Identification of the subsurface location of vascular perforators through AR overlay was compared to the positions obtained by audible Doppler ultrasound. Through a preliminary HoloLens-assisted case series, the operating surgeon was able to demonstrate precise and efficient localisation of perforating vessels.
a. Case 5 CTA imaging showing the location of perforating arteries (yellow arrows). b. Case 2: an example HoloLens rendering of segmented polygonal models.
See also the detailed news story at Imperial College London.