Glenoid reaming is a technically challenging step in shoulder arthroplasty that could be learned through simulation training. Creating a realistic simulation that uses vibration feedback in this context is innovative. Our study focused on the development and internal validation of a novel glenoid reaming simulator for potential use as a training tool.
Vibration and force profiles associated with glenoid reaming were quantified in a cadaveric experiment. A simulator was then fabricated using a haptic vibration transducer driven by high- and low-fidelity amplifiers; system calibration matched vibration peak-to-peak values for both amplifiers. Eight experts performed simulated reaming trials. The experts were asked to identify isolated tissue-layer profiles produced by the simulator. Additionally, each expert's ability to successfully perform a simulated glenoid ream based solely on vibration feedback was recorded.
Cadaveric cartilage reaming produced lower vibration amplitudes than reaming of subchondral and cancellous bone (p ≤ 0.03). Gain calibration of the lower-fidelity (3.5 g pk–pk, 0.36 g RMS) and higher-fidelity (3.4 g pk–pk, 0.33 g RMS) amplifiers yielded values similar to the cadaveric experimental benchmark (3.5 g pk–pk, 0.30 g RMS). When identifying randomly presented tissue-layer samples, experts were correct 52 ± 9% of the time, and success rate varied with tissue type (p = 0.003). During simulated reaming, experts stopped at the targeted subchondral bone with a success rate of 78 ± 24%. Simulation fidelity had no effect on accuracy, applied force, or reaming time (p > 0.05); however, applied force increased with trial number (p = 0.047).
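The calibration above matches two standard vibration metrics between the amplifiers and the cadaveric benchmark. As a minimal sketch (not the authors' code), the peak-to-peak and RMS acceleration of a recorded trace, in units of g, could be computed as follows; the sinusoidal test signal is purely illustrative:

```python
import numpy as np

def vibration_metrics(accel_g: np.ndarray) -> tuple[float, float]:
    """Return (g pk-pk, g RMS) for an acceleration trace in units of g."""
    pk_pk = float(accel_g.max() - accel_g.min())          # peak-to-peak amplitude
    rms = float(np.sqrt(np.mean(np.square(accel_g))))     # root-mean-square amplitude
    return pk_pk, rms

# Illustrative input: a zero-mean 100 Hz sinusoid of amplitude 1.75 g,
# sampled at 10 kHz for 1 s (an integer number of cycles).
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
signal = 1.75 * np.sin(2 * np.pi * 100 * t)
pk_pk, rms = vibration_metrics(signal)
# pk_pk = 3.5 g; rms = 1.75 / sqrt(2) ≈ 1.24 g
```

Note that the reported benchmark (3.5 g pk–pk but only 0.30 g RMS) has a much higher crest factor than a pure sinusoid of the same peak-to-peak amplitude, consistent with an impulsive, broadband reaming signal rather than a steady tone.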
Development of the glenoid reaming simulator, coupled with expert evaluation, furthered our understanding of the role of haptic vibration feedback during glenoid reaming. This study was the first to (1) propose, develop, and examine simulated glenoid reaming, and (2) explore the use of haptic vibration feedback in the realm of shoulder arthroplasty.
The data available to surgeons before, during, and after surgery are steadily increasing in both quantity and diversity. When planning a patient's treatment, this large amount of information can be difficult to interpret. To aid in processing it, new methods are needed to present multimodal patient data, ideally combining textual, imaging, temporal, and 3D data in a holistic, context-aware system.
We present an open-source framework for handling patient data in a virtual reality (VR) environment. Using VR technology maximizes the workspace available to the surgeon, and 3D patient data are rendered in stereo, which increases depth perception. The framework organizes the data into workspaces and provides tools that allow users to control, manipulate, and enhance the data. Owing to its modular design, the framework can easily be adapted and extended for various clinical applications.
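The workspace-and-tools organization described above can be illustrated with a small sketch. All names here are hypothetical, chosen only to convey the modular pattern; they are not the framework's actual API:

```python
# Hypothetical sketch of the modular design: patient data are grouped into
# workspaces, and tools plug into a registry without changes to the core.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Workspace:
    """Groups related patient data (e.g. imaging, text, 3D meshes)."""
    name: str
    items: Dict[str, object] = field(default_factory=dict)


class ToolRegistry:
    """Extension point: new clinical tools are registered by name and
    operate on a workspace's data."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[Workspace], None]] = {}

    def register(self, name: str, tool: Callable[[Workspace], None]) -> None:
        self._tools[name] = tool

    def apply(self, name: str, ws: Workspace) -> None:
        self._tools[name](ws)


# Example: a clipping-plane tool added as a plugin (illustrative only).
registry = ToolRegistry()
registry.register("clip", lambda ws: ws.items.setdefault("clip_plane", (0, 0, 1)))

liver_ws = Workspace("liver-resection")
registry.apply("clip", liver_ws)
# liver_ws.items now holds the hypothetical "clip_plane" entry
```

The point of this pattern, under the stated assumptions, is that adapting the system to a new clinical scenario means registering new tools and populating new workspaces, not modifying the framework core.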
The framework was evaluated by clinical personnel (77 participants). The majority stated that a complex surgical situation is easier to comprehend using the framework and that it is very well suited for education. Furthermore, its application to various clinical scenarios, including the simulation of excitation propagation in the human atrium, demonstrated the framework's adaptability. As a feasibility study, the framework was used during the planning phase of the surgical removal of a large central carcinoma from a patient's liver.
The clinical evaluation showed great potential and high acceptance of the VR environment in a medical context. The various applications confirmed that the framework is easily extended and can be used for real-time simulation as well as for the manipulation of complex anatomical structures.