Deep-learning-assisted volume visualization

visualization-semi-automatic

Deep-learning-assisted Volume Visualization, by Cheng, Cardone, Jain, et al., IEEE TVCG (2018), published ahead of print.

Abstract:

Designing volume visualizations that show various structures of interest is critical to the exploratory analysis of volumetric data. The last few years have witnessed dramatic advances in the use of convolutional neural networks for identifying objects in large image collections. While such machine learning methods have shown superior performance in a number of applications, their direct use in volume visualization has not yet been explored. In this paper, we present a deep-learning-assisted volume visualization to depict complex structures that are otherwise challenging for conventional approaches. A significant challenge in designing volume visualizations based on high-dimensional deep features lies in efficiently handling the immense amount of information that deep-learning methods provide. We present a new technique that uses spectral methods to facilitate user interaction with these high-dimensional features, along with a new deep-learning-assisted technique for hierarchically exploring a volumetric dataset. We have validated our approach on two electron microscopy volumes and one magnetic resonance imaging dataset.
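
The abstract doesn't spell out the spectral interaction pipeline, but the general idea of applying spectral methods to deep features can be illustrated. Below is a minimal, hypothetical sketch assuming per-voxel CNN feature vectors (the `features` array is a stand-in): a spectral embedding reduces them to two dimensions, where a user selection could drive the volume's transfer function.

```python
# Hypothetical sketch: projecting per-voxel deep features to 2-D with a
# spectral embedding so a user can select structures in a scatter plot.
# The paper's exact pipeline is not reproduced here; the `features` array
# stands in for activations from a pretrained CNN.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 256))  # placeholder per-voxel CNN features

embedding = SpectralEmbedding(n_components=2, n_neighbors=10)
coords = embedding.fit_transform(features)  # (5000, 2) points for interaction

# A user selection in the 2-D embedding maps back to voxels, which could
# then drive opacity/color in the volume renderer's transfer function.
selected = np.flatnonzero((coords[:, 0] > 0) & (coords[:, 1] > 0))
print(f"{selected.size} voxels selected")
```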

Robustness of whole spine 3D reconstruction using 2D biplanar X-ray images

2d-3d-spine-3

Robustness of Whole Spine Reconstruction using Anterior-Posterior and Lateral Planar X-ray Images, by Kim, K., Jargalsuren, S., Khuyagbaatar, B. et al. Int. J. Precis. Eng. Manuf. (2018) 19: 281.

Abstract:

X-ray-based reconstruction methods using anterior-posterior (AP) and lateral (LAT) images assume that the AP and LAT views are perpendicular to each other. However, it is difficult to maintain a perfectly perpendicular angle between the AP and LAT images when taking the two images sequentially in real situations. In this study, the robustness of a three-dimensional (3D) whole spine reconstruction method using AP and LAT planar X-ray images was analyzed by investigating cases in which the AP and LAT images were not taken perpendicularly. Patient-specific 3D spine models for five subjects were reconstructed from AP and LAT X-ray images and 3D template models of the C1 to L5 vertebrae using a B-spline free-form deformation (FFD) technique. The shape error, projected area error, and relative error in length were quantified by comparing the reconstructed model (FFD model) to the reference model (CT model). The results indicated that the reconstruction method can be considered robust when there is only a small angular error, such as 5°, between the AP and LAT images. This study suggests that simple technical guidelines for obtaining perpendicular AP and LAT images in real situations can improve the accuracy of 3D spine reconstruction.
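
To get some intuition for the size of the error a 5° angular deviation introduces, consider a toy parallel-projection example; this is an illustrative assumption, not the paper's B-spline FFD pipeline. A point is reconstructed from AP and LAT views while the LAT view is rotated by 5°, and the error from naively assuming perpendicular views is measured.

```python
# A toy numeric check (not the paper's method): how much does a naive
# perpendicular-view reconstruction err when the LAT view is off by 5 degrees?
import numpy as np

theta = np.deg2rad(5.0)            # angular error between AP and LAT views
p = np.array([30.0, 40.0, 120.0])  # a point on a vertebra, in mm

# AP view: parallel projection along y -> measures (x, z)
x_ap, z_ap = p[0], p[2]

# LAT view rotated by theta about the vertical (z) axis: the measured
# horizontal coordinate is the component perpendicular to the view direction.
y_lat = -p[0] * np.sin(theta) + p[1] * np.cos(theta)

# Naive reconstruction that assumes the views were exactly perpendicular
p_hat = np.array([x_ap, y_lat, z_ap])
print(f"in-plane error: {np.linalg.norm(p - p_hat):.2f} mm")  # ~2.77 mm here
```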

2d-3d-spine-2
Reconstruction process of a patient-specific 3D model

Robotic drill guide positioning in spine surgery using known-component 3D–2D image registration

Robotic drill guide positioning using known-component 3D–2D image registration, by Yi, Ramchandran, Siewerdsen, and Uneri, J. of Medical Imaging (2018), 5(2):021212.

Abstract:

A method for x-ray image-guided robotic instrument positioning is reported and evaluated in preclinical studies of spinal pedicle screw placement, with the aim of improving delivery of transpedicle K-wires and screws. The known-component (KC) registration algorithm was used to register the three-dimensional patient CT and a drill guide surface model to intraoperative two-dimensional radiographs. The resulting transformations, combined with an offline hand–eye calibration, drive the robotically held drill guide to target trajectories defined in the preoperative CT. The method was assessed against a more conventional tracker-based approach, and robustness to clinically realistic errors was tested in phantom and cadaver studies. Deviations from planned trajectories were analyzed in terms of target registration error (TRE) at the tooltip (mm) and approach angle (deg). In phantom studies, the KC approach resulted in TRE = 1.51 ± 0.51 mm and 1.01 deg ± 0.92 deg, comparable to the accuracy of the tracker-based approach. In cadaver studies with realistic anatomical deformation, the KC approach yielded TRE = 2.31 ± 1.05 mm and 0.66 deg ± 0.62 deg, a statistically significant improvement over the tracker (TRE = 6.09 ± 1.22 mm and 1.06 deg ± 0.90 deg). The robustness to deformation is attributed to the relatively local rigidity of the anatomy in the radiographic views. X-ray guidance offered accurate robotic positioning and could fit naturally within the clinical workflow of fluoroscopically guided procedures.
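
The two reported metrics are straightforward to compute once the planned and delivered trajectories are known. The following is a minimal sketch with assumed definitions (not code from the paper): TRE as the Euclidean distance between planned and actual tooltip positions, and the approach-angle deviation as the angle between trajectory directions.

```python
# Minimal sketch of the two reported accuracy metrics (assumed definitions).
import numpy as np

def tre_and_angle(planned_tip, planned_dir, actual_tip, actual_dir):
    """TRE (mm) at the tooltip and approach-angle deviation (deg)."""
    tre = np.linalg.norm(np.asarray(actual_tip) - np.asarray(planned_tip))
    u = np.asarray(planned_dir) / np.linalg.norm(planned_dir)
    v = np.asarray(actual_dir) / np.linalg.norm(actual_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return tre, angle

# Hypothetical planned vs. delivered trajectory, in mm / unit vectors
tre, ang = tre_and_angle([0, 0, 0], [0, 0, 1], [1.2, 0.8, 0.3], [0.02, 0.01, 1.0])
print(f"TRE = {tre:.2f} mm, approach angle = {ang:.2f} deg")
```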

Combining intraoperative ultrasound brain shift correction and augmented reality visualizations

brain-AR

Combining intraoperative ultrasound brain shift correction and augmented reality visualizations: a pilot study of eight cases, by Gerard et al., J. of Medical Imaging, 5(2), 021210 (2018).

NOTE: You can read or download the paper at ResearchGate.

Abstract:

We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction and augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. We combine two imaging technologies for image-guided brain tumor neurosurgery. Throughout surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3-D) patient-specific models of the patient’s cortex, vasculature, and lesion. Ultrasound imaging was acquired intraoperatively, and preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrates the feasibility of combining these two technologies and their complementary features. In each case, the AR visualizations enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and AR were reduced in all cases. These results demonstrate the potential of ultrasound-based registration combined with AR to become a useful tool for neurosurgeons, improving intraoperative patient-specific planning through a better understanding of complex 3-D medical imaging data and prolonging the reliable use of IGNS.
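
The registration step relies on aligning image gradients between the preoperative MRI and the intraoperative ultrasound (the figure caption below names a gradient orientation alignment algorithm). A hedged sketch of a gradient-orientation similarity term of this kind follows; the exact formulation used in the paper is not reproduced here, so the squared-cosine form is an assumption.

```python
# Hedged sketch of a gradient-orientation similarity term of the kind used
# for MRI-to-ultrasound registration; the squared-cosine form is an
# assumption, not the paper's exact formulation.
import numpy as np

def gradient_orientation_similarity(fixed, moving, eps=1e-8):
    """Mean squared cosine between image gradients; 1.0 = perfectly aligned."""
    gf = np.stack(np.gradient(fixed), axis=-1)
    gm = np.stack(np.gradient(moving), axis=-1)
    dot = np.sum(gf * gm, axis=-1)
    norms = np.linalg.norm(gf, axis=-1) * np.linalg.norm(gm, axis=-1) + eps
    return float(np.mean((dot / norms) ** 2))

rng = np.random.default_rng(1)
vol = rng.normal(size=(32, 32, 32))
print(gradient_orientation_similarity(vol, vol))  # ~1.0 for identical volumes
```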

brain-AR-flowchart
Flowchart of the intraoperative workflow and how surgical tasks relate to IGNS tasks. A: Patient-to-image registration. After the patient’s head is immobilized, a tracking reference is attached to the clamp and eight facial landmarks are chosen that correspond to identical landmarks on the preoperative images, creating a mapping between the two spaces. B: The augmented reality visualization on the skull is qualitatively assessed by comparing the tumor contour defined on the preoperative guidance images with the overlay in the augmented image. C: Once the craniotomy has been performed, a series of US images is acquired on the dura, then reconstructed and registered with the preoperative MRI images using the gradient orientation alignment algorithm. D: Augmented reality visualization on the cortex showing the location of the tumor (green) and a vessel of interest (blue). E: The AR accuracy is quantitatively evaluated by having the surgeon choose an identifiable landmark on the physical patient, then the corresponding landmark on the augmented image, and measuring the two-dimensional distance between the two recorded coordinates.
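
For panel E, the 2-D accuracy measurement can be sketched as follows, assuming a simple pinhole camera model; the intrinsics `K` and all coordinates below are made-up values for illustration, not data from the study.

```python
# Sketch of the panel-E accuracy check under an assumed pinhole camera model:
# project the 3-D landmark picked on the patient into the camera image and
# measure its 2-D distance to the landmark picked on the augmented view.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed camera intrinsics (pixels)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

landmark_3d = np.array([10.0, -5.0, 250.0])  # patient landmark, camera frame (mm)
picked_2d = np.array([353.0, 224.0])         # landmark picked on AR image (px)

proj = K @ landmark_3d
projected_2d = proj[:2] / proj[2]
print(f"AR overlay error: {np.linalg.norm(projected_2d - picked_2d):.1f} px")
```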