Designing volume visualizations that depict various structures of interest is critical to the exploratory analysis of volumetric data. The last few years have witnessed dramatic advances in the use of convolutional neural networks for identifying objects in large image collections. Although such machine learning methods have shown superior performance in a number of applications, their direct use in volume visualization has not yet been explored. In this paper, we present a deep-learning-assisted volume visualization approach to depict complex structures that are otherwise challenging for conventional methods. A significant challenge in designing volume visualizations based on high-dimensional deep features lies in efficiently handling the immense amount of information that deep-learning methods produce. We present a new technique that uses spectral methods to facilitate user interaction with high-dimensional features, as well as a new deep-learning-assisted technique for hierarchically exploring a volumetric dataset. We validated our approach on two electron microscopy volumes and one magnetic resonance imaging dataset.
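The abstract does not specify the exact spectral formulation, but the general recipe for projecting high-dimensional deep features into a low-dimensional space for user interaction can be sketched with a standard spectral embedding: build a Gaussian affinity graph over the feature vectors, form the symmetric normalized Laplacian, and use its low-order eigenvectors as coordinates. The function below is a minimal illustrative sketch, not the paper's actual method; the feature dimensionality and parameters are arbitrary.

```python
import numpy as np

def spectral_embedding(features, k=2, sigma=1.0):
    """Embed high-dimensional feature vectors into k dimensions using the
    eigenvectors of a normalized graph Laplacian built from a Gaussian
    affinity matrix (a generic spectral-methods recipe, used here only
    to illustrate the idea)."""
    # Pairwise squared Euclidean distances between feature vectors.
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.maximum(d2, 0.0, out=d2)
    # Gaussian affinity (similarity) matrix, zero self-affinity.
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # The smallest non-trivial eigenvectors give low-dimensional coordinates
    # suitable for, e.g., a 2-D scatterplot the user can brush and select.
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:k + 1]

# Toy example: hypothetical 512-dimensional "deep features" for 100 voxels.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 512))
coords = spectral_embedding(feats, k=2)
print(coords.shape)  # (100, 2)
```

In an interactive setting, selections made in such a 2-D embedding can be mapped back to the voxels that produced the features, which is one plausible way spectral methods make high-dimensional deep features tractable for visualization design.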
X-ray-based reconstruction methods using anterior-posterior (AP) and lateral (LAT) images assume that the angle between the AP and LAT views is exactly perpendicular. In practice, however, it is difficult to maintain a perfectly perpendicular angle when the two images are taken sequentially. In this study, the robustness of a three-dimensional (3D) whole-spine reconstruction method using AP and LAT planar X-ray images was analyzed by investigating cases in which the AP and LAT images were not taken perpendicularly. Patient-specific 3D spine models of five subjects were reconstructed from AP and LAT X-ray images and 3D template models of the C1 to L5 vertebrae using B-spline free-form deformation (FFD). The shape error, projected area error, and relative error in length were quantified by comparing the reconstructed model (FFD model) to the reference model (CT model). The results indicated that the reconstruction method can be considered robust to small angular errors, such as 5°, between the AP and LAT images. This study suggests that simple technical guidelines for obtaining perpendicular AP and LAT images in clinical practice can improve the accuracy of 3D spine reconstruction.
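The perpendicularity assumption can be illustrated with a toy parallel-projection model: rotating the lateral view axis away from 90° changes where each 3D point lands on the detector plane. The sketch below, which is not the study's pipeline, quantifies the mean projected displacement that a 5° angular error induces for a synthetic point cloud; the point cloud and its dimensions are invented for illustration.

```python
import numpy as np

def lat_projection(points, angle_deg):
    """Project 3D points onto a 'lateral' detector plane whose view axis is
    rotated angle_deg about the vertical (z) axis relative to the AP view,
    using a simple parallel-projection approximation."""
    a = np.radians(angle_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    rotated = points @ Rz.T
    return rotated[:, [0, 2]]  # keep (x, z) on the detector plane

# Synthetic vertebra-sized point cloud (dimensions in mm, illustrative only).
rng = np.random.default_rng(1)
vertebra = rng.normal(scale=[20.0, 10.0, 15.0], size=(500, 3))

ideal = lat_projection(vertebra, 90.0)   # perfectly perpendicular LAT view
skewed = lat_projection(vertebra, 85.0)  # 5 degree angular error
shift = np.mean(np.linalg.norm(ideal - skewed, axis=1))
print(f"mean projected displacement for 5 deg error: {shift:.2f} mm")
```

Displacements of this kind propagate into the FFD fitting as shape and projected-area errors, which is what the study quantifies against CT reference models.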
A method for X-ray image-guided robotic instrument positioning is reported and evaluated in preclinical studies of spinal pedicle screw placement, with the aim of improving delivery of transpedicle K-wires and screws. The known-component (KC) registration algorithm was used to register the three-dimensional patient CT and drill guide surface model to intraoperative two-dimensional radiographs. The resulting transformations, combined with offline hand–eye calibration, drive the robotically held drill guide to target trajectories defined in the preoperative CT. The method was assessed in comparison with a more conventional tracker-based approach, and robustness to clinically realistic errors was tested in phantom and cadaver. Deviations from planned trajectories were analyzed in terms of target registration error (TRE) at the tooltip (mm) and approach angle (deg). In phantom studies, the KC approach resulted in TRE = 1.51 ± 0.51 mm and 1.01 deg ± 0.92 deg, comparable to the accuracy of the tracker-based approach. In cadaver studies with realistic anatomical deformation, the KC approach yielded TRE = 2.31 ± 1.05 mm and 0.66 deg ± 0.62 deg, a statistically significant improvement over the tracker-based approach (TRE = 6.09 ± 1.22 mm and 1.06 deg ± 0.90 deg). The robustness to deformation is attributed to the relatively local rigidity of anatomy in radiographic views. X-ray guidance offered accurate robotic positioning and could fit naturally within the clinical workflow of fluoroscopically guided procedures.
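The two reported accuracy metrics, tooltip TRE and approach angle, have standard definitions that can be computed from an estimated and a ground-truth rigid transform. The sketch below shows those generic computations on 4×4 homogeneous transforms; the tool axis choice and the toy transforms are assumptions for illustration, not values from the study.

```python
import numpy as np

def tre_mm(T_est, T_true, tooltip):
    """Target registration error at the tooltip: distance (mm) between the
    tooltip position mapped by the estimated and ground-truth 4x4 rigid
    transforms."""
    p = np.append(tooltip, 1.0)  # homogeneous coordinates
    return float(np.linalg.norm((T_est @ p)[:3] - (T_true @ p)[:3]))

def approach_angle_deg(T_est, T_true, axis=np.array([0.0, 0.0, 1.0])):
    """Angle (deg) between the tool axis under the two transforms
    (here the z axis is assumed to be the tool axis)."""
    a = T_est[:3, :3] @ axis
    b = T_true[:3, :3] @ axis
    c = np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    return float(np.degrees(np.arccos(c)))

# Toy check: a pure 2 mm translation error along x gives TRE = 2 mm, 0 deg.
T_true = np.eye(4)
T_est = np.eye(4)
T_est[0, 3] = 2.0
print(tre_mm(T_est, T_true, np.zeros(3)))  # 2.0
print(approach_angle_deg(T_est, T_true))   # 0.0
```

Separating translational (TRE) from rotational (approach angle) error is useful here because pedicle screw safety depends on both the entry point and the trajectory through the pedicle.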
We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction with augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. Throughout the surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3-D) patient-specific models of the patient's cortex, vasculature, and lesion. Ultrasound images were acquired intraoperatively, and the preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrated the feasibility of combining these two technologies and showed their complementary features. In each case, the AR visualizations enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and AR were reduced in all cases. These results demonstrate the potential of combining ultrasound-based registration with AR as a useful tool for neurosurgeons, improving intraoperative patient-specific planning by aiding the understanding of complex 3-D medical imaging data and prolonging the reliable use of IGNS.
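The abstract's core computational step, registering preoperative models to intraoperative ultrasound, is in practice an image-based pipeline, but its simplest rigid building block can be sketched with the classical SVD-based least-squares alignment of corresponding landmarks (the Arun/Horn solution). The code below is a minimal sketch under that simplification; the landmark sets and the simulated brain-shift transform are invented for illustration.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) aligning source landmarks to
    destination landmarks via SVD (classical Arun/Horn solution; the
    study's actual ultrasound-based registration is more involved)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Simulate intraoperative landmarks as a rotated + shifted copy of
# preoperative ones (a crude stand-in for rigid brain shift).
rng = np.random.default_rng(2)
preop = rng.normal(size=(6, 3))
theta = np.radians(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
intraop = preop @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = rigid_register(preop, intraop)
residual = np.max(np.linalg.norm(preop @ R.T + t - intraop, axis=1))
print(f"max residual after registration: {residual:.2e}")
```

A recovered transform of this kind is what allows the preoperative cortex, vasculature, and lesion models to be re-aligned with the patient and rendered in the AR view after brain shift has occurred.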