Unsupervised learning model for deformable medical image registration

Preprint at arXiv: "An Unsupervised Learning Model for Deformable Medical Image Registration," by Balakrishnan, Zhao, Sabuncu, Guttag, and Dalca.

Abstract:

We present an efficient learning-based algorithm for deformable, pairwise 3D medical image registration. Current registration methods optimize an energy function independently for each pair of images, which can be time-consuming for large data. We define registration as a parametric function, and optimize its parameters given a set of images from a collection of interest. Given a new pair of scans, we can quickly compute a registration field by directly evaluating the function using the learned parameters. We model this function using a CNN, and use a spatial transform layer to reconstruct one image from another while imposing smoothness constraints on the registration field. The proposed method does not require supervised information such as ground truth registration fields or anatomical landmarks. We demonstrate registration accuracy comparable to state-of-the-art 3D image registration, while operating orders of magnitude faster in practice. Our method promises to significantly speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is available online.
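The unsupervised training signal the abstract describes boils down to two terms: how well the warped moving image matches the fixed image, and how smooth the predicted displacement field is. Below is a minimal NumPy sketch of that loss for the 2D case, with a hand-rolled bilinear warp standing in for the paper's differentiable spatial transform layer; the function names and the choice of MSE similarity are ours for illustration, not the paper's exact implementation.

```python
import numpy as np

def warp(moving, flow):
    """Resample a 2D image at grid + displacement (bilinear interpolation).

    flow has shape (2, H, W): flow[0] is the y-displacement, flow[1] the x.
    This plays the role of the spatial transform layer in the paper.
    """
    H, W = moving.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    y = np.clip(ys + flow[0], 0, H - 1)
    x = np.clip(xs + flow[1], 0, W - 1)
    y0 = np.floor(y).astype(int)
    x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    top = moving[y0, x0] * (1 - wx) + moving[y0, x1] * wx
    bot = moving[y1, x0] * (1 - wx) + moving[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def registration_loss(fixed, moving, flow, lam=0.01):
    """Unsupervised loss: image similarity + smoothness of the flow.

    Similarity is mean squared error between the warped moving image and
    the fixed image; smoothness penalizes spatial gradients of the flow.
    lam trades off the two terms (a hyperparameter, value illustrative).
    """
    warped = warp(moving, flow)
    similarity = np.mean((warped - fixed) ** 2)
    gy = np.diff(flow, axis=1)  # gradients along y
    gx = np.diff(flow, axis=2)  # gradients along x
    smoothness = np.mean(gy ** 2) + np.mean(gx ** 2)
    return similarity + lam * smoothness
```

In the paper this loss is minimized over the parameters of a CNN that predicts `flow` from an image pair, so at test time registration is a single forward pass rather than a per-pair optimization.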

Combining intraoperative ultrasound brain shift correction and augmented reality visualizations

brain-AR

Combining intraoperative ultrasound brain shift correction and augmented reality visualizations: a pilot study of eight cases, by Gerard et al., Journal of Medical Imaging, 5(2), 021210 (2018).

NOTE: You can read or download the paper at ResearchGate.

Abstract:

We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction and augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. We combine two imaging technologies for image-guided brain tumor neurosurgery. Throughout surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3-D) patient-specific models of the patient’s cortex, vasculature, and lesion. Ultrasound imaging was acquired intraoperatively, and preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrates the feasibility of combining these two technologies and highlights their complementary features. In each case, the AR visualizations enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and AR were reduced in all cases. These results demonstrate the potential of combining ultrasound-based registration with AR as a useful tool for neurosurgeons, improving intraoperative patient-specific planning through a better understanding of complex 3-D medical imaging data and prolonging the reliable use of IGNS.

brain-AR-flowchart
Flowchart of the intraoperative workflow and how surgical tasks relate to IGNS tasks. (A) Patient-to-image registration: after the patient’s head is immobilized, a tracking reference is attached to the clamp and 8 facial landmarks are chosen that correspond to identical landmarks on the preoperative images, creating a mapping between the two spaces. (B) Augmented reality visualization on the skull, qualitatively assessed by comparing the tumor contour defined on the preoperative guidance images with the overlay in the augmented image. (C) A series of US images is acquired on the dura once the craniotomy has been performed, then reconstructed and registered to the preoperative MRI images using the gradient orientation alignment algorithm. (D) Augmented reality visualization on the cortex showing the location of the tumor (green) and a vessel of interest (blue). (E) The AR accuracy is quantitatively evaluated by having the surgeon choose an identifiable landmark on the physical patient, record its coordinates, then choose the corresponding landmark on the augmented image, record its coordinates, and measure the two-dimensional distance between the two.
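The patient-to-image mapping in panel A, built from paired facial landmarks, is in its simplest rigid form the classic least-squares landmark fit (Kabsch/Umeyama, solved via an SVD). The sketch below shows that fit along with the fiducial registration error (RMS landmark mismatch) commonly used to check it; the function names are ours, and the paper's actual IGNS system may use a different solver or error metric.

```python
import numpy as np

def rigid_landmark_registration(patient_pts, image_pts):
    """Least-squares rigid transform (R, t) mapping patient-space landmarks
    onto image-space landmarks, via the Kabsch/SVD method.

    patient_pts, image_pts: (N, 3) arrays of paired landmark coordinates.
    Returns rotation R (3x3) and translation t (3,) such that
    image_pt ≈ R @ patient_pt + t.
    """
    p_bar = patient_pts.mean(axis=0)
    q_bar = image_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (patient_pts - p_bar).T @ (image_pts - q_bar)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

def fiducial_registration_error(patient_pts, image_pts, R, t):
    """RMS distance between mapped patient landmarks and image landmarks."""
    mapped = patient_pts @ R.T + t
    return np.sqrt(np.mean(np.sum((mapped - image_pts) ** 2, axis=1)))
```

With 8 landmark pairs, as in panel A, the system is well over-determined, and the residual error gives the surgeon an immediate sanity check on the registration, analogous to the per-landmark distance check described in panel E.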