Influence of the quality of intraoperative fluoroscopic images on the spatial positioning accuracy of a CAOS system

Influence of the quality of intraoperative fluoroscopic images on the spatial positioning accuracy of a CAOS system, by Wang et al., MRCAS (2018), e1898.

Abstract:

Spatial positioning accuracy is a key issue in a computer-assisted orthopaedic surgery (CAOS) system. Since intraoperative fluoroscopic images are among the most important input data to a CAOS system, their quality should have a significant influence on the system's accuracy. However, the regularities and mechanisms of this influence have yet to be studied. Two typical spatial positioning methods – a C-arm calibration-based method and a bi-planar positioning method – are used to study the influence of different image quality parameters, such as resolution, distortion, contrast, and signal-to-noise ratio, on positioning accuracy. The error propagation rules of image error in the different spatial positioning methods are analyzed by the Monte Carlo method. Correlation analysis showed that resolution and distortion had a significant influence on spatial positioning accuracy. In addition, the C-arm calibration-based method was more sensitive to image distortion, while the bi-planar positioning method was more susceptible to image resolution. Image contrast and signal-to-noise ratio had no significant influence on spatial positioning accuracy. The Monte Carlo analysis showed that, in general, the bi-planar positioning method was more sensitive to image quality than the C-arm calibration-based method. The quality of intraoperative fluoroscopic images is therefore a key factor in the spatial positioning accuracy of a CAOS system. Although the two positioning methods have very similar mathematical principles, they showed different sensitivities to different image quality parameters. The results of this research may help to establish a realistic standard for intraoperative fluoroscopic images in CAOS systems.
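The Monte Carlo analysis described above can be sketched in a few lines: perturb the 2-D image coordinates of a known 3-D point with Gaussian noise on the order of the pixel resolution, re-triangulate the point from two views (the essence of bi-planar positioning), and observe the spread of the 3-D error. This is a minimal illustration, not the paper's implementation – the projection geometry, noise level, and trial count below are all assumed values for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical calibrated views (the geometry is an assumption,
# not the paper's actual C-arm setup). Units are mm / pixels.
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # view 1 at the origin
R2 = np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]])        # view 2 rotated 90 degrees
t2 = np.array([[-500.0], [0.0], [500.0]])
P2 = K @ np.hstack([R2, t2])

def project(P, X):
    """Project a 3-D point X through the 3x4 projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

X_true = np.array([50.0, -30.0, 800.0])     # a 3-D point visible in both views
x1, x2 = project(P1, X_true), project(P2, X_true)

sigma_px = 1.0                               # image noise of about one pixel
errors = []
for _ in range(2000):
    n1 = x1 + rng.normal(0, sigma_px, 2)
    n2 = x2 + rng.normal(0, sigma_px, 2)
    errors.append(np.linalg.norm(triangulate(P1, n1, P2, n2) - X_true))

print(f"mean 3-D positioning error: {np.mean(errors):.3f} mm")
```

Re-running the loop with a larger `sigma_px` (i.e., a coarser image resolution) directly shows how image quality propagates into 3-D positioning error, which is the kind of sensitivity the paper quantifies.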

Combining intraoperative ultrasound brain shift correction and augmented reality visualizations


Combining intraoperative ultrasound brain shift correction and augmented reality visualizations: a pilot study of eight cases by Gerard et al., J. of Medical Imaging, 5(2), 021210 (2018).

NOTE: You can read or download the paper at ResearchGate.

Abstract:

We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction and augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. We combine two imaging technologies for image-guided brain tumor neurosurgery. Throughout surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3-D) patient-specific models of the patient’s cortex, vasculature, and lesion. Ultrasound imaging was acquired intraoperatively, and preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrates the feasibility of combining these two technologies and their complementary features. In each case, the AR visualizations enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and AR were reduced in all cases. These results demonstrate the potential of combining ultrasound-based registration with AR to become a useful tool for neurosurgeons to improve intraoperative patient-specific planning by improving the understanding of complex 3-D medical imaging data and prolonging the reliable use of IGNS.

Flowchart of the intraoperative workflow and how surgical tasks relate to IGNS tasks. A: Patient-to-image registration. After the patient’s head is immobilized, a tracking reference is attached to the clamp and eight facial landmarks are chosen that correspond to identical landmarks on the preoperative images, creating a mapping between the two spaces. B: Augmented reality visualization on the skull is qualitatively assessed by comparing the tumor contour defined on the preoperative guidance images with the overlay of the augmented image. C: Once the craniotomy has been performed, a series of US images is acquired on the dura, then reconstructed and registered with the preoperative MRI images using the gradient orientation alignment algorithm. D: Augmented reality visualization on the cortex showing the location of the tumor (green) and a vessel of interest (blue). E: The AR accuracy is quantitatively evaluated by having the surgeon choose an identifiable landmark on the physical patient and record its coordinates, then choose the corresponding landmark on the augmented image, record its coordinates, and measure the two-dimensional distance between the two.
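Step A, the landmark-based patient-to-image registration, is typically computed as a least-squares rigid transform between the paired point sets (the standard Kabsch/Umeyama solution). The sketch below assumes eight synthetic landmarks, a made-up patient pose, and roughly 1 mm pointer localization noise – all illustrative values, not the study's data – and reports the fiducial registration error (FRE) of the fit.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst, no scaling (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(1)
# Eight hypothetical facial landmarks in image (MRI) space, in mm.
img_pts = rng.uniform(-80, 80, size=(8, 3))

# An assumed ground-truth pose of the patient relative to the images.
theta = np.deg2rad(25.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 30.0])

# The tracked pointer picks the same landmarks on the patient, with ~1 mm noise.
pat_pts = img_pts @ R_true.T + t_true + rng.normal(0, 1.0, size=(8, 3))

R, t = rigid_register(img_pts, pat_pts)
residual = pat_pts - (img_pts @ R.T + t)
fre = np.sqrt((residual ** 2).sum(axis=1).mean())       # fiducial registration error (RMS)
print(f"FRE: {fre:.2f} mm")
```

The same distance computation underlies the quantitative AR evaluation of step E: the residual between a landmark picked on the physical patient and its counterpart on the augmented image is exactly the kind of point-pair distance measured per row of `residual` here.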