Automated muscle segmentation from CT images of the hip and thigh using a hierarchical multi-atlas method

Automated muscle segmentation from CT images of the hip and thigh using a hierarchical multi-atlas method, Yokota, F., Otake, Y., Takao, M. et al. Int J CARS (2018).

Abstract:

Purpose
Patient-specific quantitative assessments of muscle mass and biomechanical musculoskeletal simulations require segmentation of the muscles from medical images. The objective of this work is to automate muscle segmentation from CT data of the hip and thigh.

Method
We propose a hierarchical multi-atlas method in which each level of the hierarchy includes spatial normalization using simpler pre-segmented structures, in order to reduce the inter-patient variability of the more complex target structures.
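
The hierarchical idea, reduced to code: register each atlas to the target on a simple pre-segmented structure first, use that result to initialize a finer intensity-based registration, then fuse the warped atlas labels. The sketch below is a minimal illustration using SimpleITK and majority voting, not the authors' pipeline; the file layout, parameter values, and the two-level hierarchy are assumptions for illustration.

```python
import numpy as np
import SimpleITK as sitk

# Hypothetical atlas layout: each atlas provides a CT, a bone mask, and muscle labels.
ATLASES = [(f"atlas{i:02d}_ct.nii.gz",
            f"atlas{i:02d}_bone.nii.gz",
            f"atlas{i:02d}_muscles.nii.gz") for i in range(1, 6)]

def register(fixed, moving, initial_tx, iterations):
    """Generic intensity-based registration, initialized externally."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                      numberOfIterations=iterations)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(initial_tx, inPlace=False)
    return reg.Execute(fixed, moving)

def segment_muscles(target_ct, target_bone):
    warped = []
    for ct_f, bone_f, muscles_f in ATLASES:
        atlas_ct = sitk.ReadImage(ct_f, sitk.sitkFloat32)
        atlas_bone = sitk.ReadImage(bone_f, sitk.sitkFloat32)
        atlas_muscles = sitk.ReadImage(muscles_f)
        # Level 1: spatial normalization on the simpler, pre-segmented structure.
        init = sitk.CenteredTransformInitializer(
            target_bone, atlas_bone, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        tx_bone = register(target_bone, atlas_bone, init, iterations=50)
        # Level 2: refine on CT intensities, initialized by the level-1 result.
        tx = register(target_ct, atlas_ct, tx_bone, iterations=200)
        warped.append(sitk.Resample(atlas_muscles, target_ct, tx,
                                    sitk.sitkNearestNeighbor, 0))
    # Majority-vote label fusion across the warped atlas segmentations.
    stack = np.stack([sitk.GetArrayFromImage(w) for w in warped])
    labels = np.unique(stack)
    votes = np.stack([(stack == l).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]
```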

Results
The proposed hierarchical method was evaluated with 19 muscles from 20 CT images of the hip and thigh using the manual segmentation by expert orthopedic surgeons as ground truth. The average symmetric surface distance was significantly reduced in the proposed method (1.53 mm) in comparison with the conventional method (2.65 mm).
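
For reference, the average symmetric surface distance reported here can be computed from two extracted surface point sets (e.g., obtained by marching cubes, with coordinates in millimeters). A minimal sketch with SciPy:

```python
import numpy as np
from scipy.spatial import cKDTree

def assd(surface_a, surface_b):
    """Average symmetric surface distance between two surfaces given as
    (N, 3) and (M, 3) point arrays in millimeters."""
    d_ab = cKDTree(surface_b).query(surface_a)[0]  # each A point -> nearest B
    d_ba = cKDTree(surface_a).query(surface_b)[0]  # each B point -> nearest A
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
```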

Conclusion
We demonstrated that the proposed hierarchical multi-atlas method improved the accuracy of muscle segmentation from CT images, which involve large inter-patient variability and insufficient contrast.

Registration of 3D freehand ultrasound to a bone model for orthopedic procedures of the forearm

Registration of 3D freehand ultrasound to a bone model for orthopedic procedures of the forearm, Ciganovic, M., Ozdemir, F., Pean, F. et al. Int J CARS (2018).

Abstract

Purpose
For guidance of orthopedic surgery, the registration of preoperative images and corresponding surgical plans with the surgical setting can be of great value. Ultrasound (US) is an ideal modality for surgical guidance, as it is non-ionizing, real time, easy to use, and subject to minimal (magnetic/radiation) safety limitations. By extracting bone surfaces from 3D freehand US and registering these to preoperative bone models, complementary information from these modalities can be fused and presented in the surgical realm.

Methods
A partial bone surface is extracted from US using phase symmetry and a factor graph-based approach. This is registered to the detailed 3D bone model, conventionally generated for preoperative planning, based on a proposed multi-initialization and surface-based scheme robust to partial surfaces.
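
The phase-symmetry and factor-graph extraction is beyond a short snippet, but the alignment stage can be illustrated with a generic multi-start, trimmed point-to-point ICP (Kabsch solution per iteration). Everything below, including the z-axis initializations and the 80% trimming fraction that discards likely false-positive US points, is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, model, R, t, iters=50, trim=0.8):
    """Trimmed ICP of a partial US bone surface onto a dense bone model."""
    tree = cKDTree(model)
    for _ in range(iters):
        d, idx = tree.query(src @ R.T + t)
        keep = d <= np.quantile(d, trim)     # drop worst matches (outliers)
        R, t = kabsch(src[keep], model[idx[keep]])
    rms = np.sqrt((tree.query(src @ R.T + t)[0] ** 2).mean())
    return R, t, rms

def multi_init_icp(src, model, n_starts=12):
    """Run ICP from several rotational initializations; keep the best fit."""
    best = None
    for a in np.linspace(0, 2 * np.pi, n_starts, endpoint=False):
        c, s = np.cos(a), np.sin(a)
        R0 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        t0 = model.mean(0) - R0 @ src.mean(0)   # align centroids initially
        cand = icp(src, model, R0, t0)
        if best is None or cand[2] < best[2]:
            best = cand
    return best
```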

Results
Thirty-six forearm US volumes acquired using a tracked US probe were independently registered to a 3D model of the radius, manually extracted from MRI. Given intraoperative time restrictions, a computationally efficient algorithm was determined based on a comparison of different approaches. For all 36 registrations, a mean (±SD) point-to-point surface distance of 0.57 (±0.08) mm was obtained from manual gold-standard US bone annotations (not used during the registration) to the 3D bone model.

Conclusions
A registration framework based on bone surface extraction from 3D freehand US, followed by a fast, automatic surface alignment robust to the single-sided view and large false-positive rates of US, was shown to achieve registration accuracy feasible for practical orthopedic scenarios, with a qualitative outcome indicating good visual image alignment.

radius-segmentation-ultrasound
(left) Example registration results with US, MRI, and overlaid slices of corresponding locations with the proposed ICP-based alignment. (right) A tendon insertion (pronator teres) visible in US (top) is projected onto the 3D model (below) using the proposed ICP-based alignment, e.g., to facilitate preoperative planning

A frame of 3D printing data generation method extracted from CT data

ct-data-volume-rendering-3d-print

A Frame of 3D Printing Data Generation Method Extracted from CT Data, Zhao, S., Zhang, W., Sheng, W. et al. Sens Imaging (2018) 19: 12. https://doi.org/10.1007/s11220-018-0197-8

Abstract

3D printing based on the CT data of a measured object can reproduce both the outer and internal surface structures of the object and can be used for reverse engineering. This paper presents a framework for generating 3D printing data directly from CT data. The framework includes three steps: first, surface mesh data with internal structure are extracted from the CT data. Second, a mesh model recognized by a 3D printer is obtained by generating topological information, removing isolated and non-manifold facets, filling holes, and smoothing the surface. Third, for a target surface mesh model with internal structure, or one exceeding the maximum size a 3D printer can print, the framework splits the mesh model into several parts and prints them separately. The proposed framework was evaluated with one simulated CT dataset and two real CT datasets. These experiments showed that the proposed framework effectively generates 3D printing data directly from CT data and preserves the shape of the original object model with high precision.
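
A rough sketch of such a pipeline with off-the-shelf tools (scikit-image for surface extraction, trimesh for cleanup and export); the isovalue, the repair steps, and the component-wise splitting are illustrative stand-ins for the paper's frame, and splitting oversized parts would additionally need plane cuts (e.g., trimesh.intersections.slice_mesh_plane).

```python
import trimesh
from skimage import measure

def ct_to_printable(volume, iso_level, spacing, out_prefix="part"):
    """Step 1: extract inner and outer surfaces from the CT volume at an isovalue."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=iso_level,
                                                spacing=spacing)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces)
    # Step 2: make the mesh printer-friendly: drop degenerate geometry and
    # unreferenced vertices, fill simple holes, and fix face orientations.
    mesh.update_faces(mesh.nondegenerate_faces())
    mesh.remove_unreferenced_vertices()
    trimesh.repair.fill_holes(mesh)
    trimesh.repair.fix_normals(mesh)
    # Step 3: export connected components separately so internal structures
    # can be printed as individual pieces.
    for i, part in enumerate(mesh.split(only_watertight=False)):
        part.export(f"{out_prefix}_{i}.stl")
```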

A deep learning approach for pose estimation from volumetric OCT data

oct-volume-marker

A deep learning approach for pose estimation from volumetric OCT data, by Gessert, Schlüter, & Schlaefer, Medical Image Analysis (2018) 46:162-179

Abstract

Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images’ 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label’s resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively.
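
The published Inception3D architecture is more elaborate, but its core design principle, parallel 3D convolutions with different receptive fields concatenated channel-wise and feeding a 6D regression head, can be sketched in PyTorch. All layer sizes below are illustrative guesses, not the paper's configuration.

```python
import torch
import torch.nn as nn

class InceptionBlock3d(nn.Module):
    """Parallel 1x1x1 / 3x3x3 / 5x5x5 convolutions, concatenated channel-wise,
    following the 2D Inception idea the paper transfers to 3D."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv3d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv3d(in_ch, branch_ch, 1),
                                nn.Conv3d(branch_ch, branch_ch, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv3d(in_ch, branch_ch, 1),
                                nn.Conv3d(branch_ch, branch_ch, 5, padding=2))
    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

class PoseNet3d(nn.Module):
    """Regresses a 6D pose (3 translations, 3 orientation angles) from an OCT volume."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            InceptionBlock3d(16, 16), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            InceptionBlock3d(48, 32), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(96, 6)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Multi-output regression with an L2 loss on robot-labeled poses:
model = PoseNet3d()
loss = nn.MSELoss()(model(torch.randn(2, 1, 32, 32, 32)), torch.randn(2, 6))
```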

deep-learning-pose-estimation-volumetric-oct
Graphical abstract: A deep learning-based method for pose estimation from OCT volumes is proposed. Learning from volumes with 3D CNNs outperforms depth-map approaches for tracking. 3D CNNs learn to exploit features inside of markers for pose estimation. Inception3D combines efficient design principles to achieve high accuracy.

Patient-specific and global convolutional neural networks for robust automatic liver tumor delineation in follow-up CT studies

Patient-specific and global convolutional neural networks for robust automatic liver tumor delineation in follow-up CT studies, Vivanti, R., Joskowicz, L., Lev-Cohain, N. et al. Med Biol Eng Comput (2018).

Abstract:

Radiological longitudinal follow-up of tumors in CT scans is essential for disease assessment and liver tumor therapy. Currently, most tumor size measurements follow the RECIST guidelines, which can be off by as much as 50%. True volumetric measurements are more accurate but require manual delineation, which is time-consuming and user-dependent. We present a convolutional neural network (CNN)-based method for robust automatic liver tumor delineation in longitudinal CT studies that uses both global and patient-specific CNNs trained on a small database of delineated images. The inputs are the baseline scan and its tumor delineation, a follow-up scan, and a global liver tumor CNN voxel classifier built from radiologist-validated liver tumor delineations. The outputs are the tumor delineations in the follow-up CT scan. The baseline scan tumor delineation serves as a high-quality prior for the tumor characterization in the follow-up scans. It is used to evaluate the global CNN performance on the new case and to reliably predict failures of the global CNN on the follow-up scan. High-scoring cases are segmented with the global CNN; low-scoring cases, which are predicted to be failures of the global CNN, are segmented with a patient-specific CNN built from the baseline scan. Our experimental results on 222 tumors from 31 patients yield an average overlap error of 17% (std = 11.2) and surface distance of 2.1 mm (std = 1.8), far better than stand-alone segmentation. Importantly, the robustness of our method improved from 67% for stand-alone global CNN segmentation to 100%. Unlike other medical imaging deep learning approaches, which require large annotated training datasets, our method exploits the follow-up framework to yield accurate tumor tracking and failure detection and correction with a small training dataset.
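
The failure-detection logic can be summarized in a few lines: score the global CNN on the baseline scan, where the radiologist's delineation is available, and fall back to a patient-specific CNN when that score is low. A minimal sketch, in which the Dice threshold and the callables global_cnn and train_patient_cnn are hypothetical placeholders:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def delineate_followup(global_cnn, baseline_scan, baseline_mask,
                       followup_scan, train_patient_cnn, threshold=0.7):
    """Use the baseline delineation as a quality prior: if the global CNN
    cannot reproduce it, fall back to a patient-specific CNN."""
    # Score the global CNN on the baseline case, where ground truth is known.
    score = dice(global_cnn(baseline_scan) > 0.5, baseline_mask)
    if score >= threshold:                    # predicted success: global CNN
        return global_cnn(followup_scan) > 0.5
    # Predicted failure: train a patient-specific CNN from the baseline pair.
    patient_cnn = train_patient_cnn(baseline_scan, baseline_mask)
    return patient_cnn(followup_scan) > 0.5
```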

liver-ct-tumor-segmentation
Flow diagram of the proposed method. In the offline mode, a global CNN is trained as a voxel classifier to segment liver tumors as in [31]. The online mode (blue) is used for each new case. The input is the baseline scan with delineation and the follow-up CT scan to be segmented (color figure online).

A new approach for photorealistic visualization of rendered CT images

photorealistic-aneurysm

A new approach for photorealistic visualization of rendered CT images, by Glemser et al. World Neurosurgery (2018)

Highlights

  • A new photorealistic CT rendering system for neurosurgical use is presented (CVRT).
  • Photorealism is created by complex light interactions and classical camera effects.
  • CVRT yields more informative 3D image impressions than SSD and VRT.
  • Applications include presurgical planning, teaching, and patient interaction.
photorealistic-ct-neck
Different developmental steps in 3D rendering over time: (A) SSD, (B) VRT, (C) CVRT of a contrast-enhanced CT of the head and neck.

Direct volume rendering in virtual reality

stereoscopic-image-pairs

Book chapter Scholl I., Suder S., Schiffer S. (2018) Direct Volume Rendering in Virtual Reality. In: Maier A., Deserno T., Handels H., Maier-Hein K., Palm C., Tolxdorff T. (eds) Bildverarbeitung für die Medizin 2018. Informatik aktuell. Springer Vieweg, Berlin, Heidelberg.

Abstract:

Direct Volume Rendering (DVR) techniques are used to visualize surfaces from 3D volume data sets without computing a 3D geometry. Several surfaces can be classified using a transfer function by assigning optical properties like color and opacity (RGBα) to the voxel data. Finding a good transfer function that separates specific structures from the volume data set is, in general, a manual and time-consuming procedure and requires detailed knowledge of the data and the image acquisition technique. In this paper, we present a new Virtual Reality (VR) application based on the HTC Vive headset. One-dimensional transfer functions can be designed in VR while continuously rendering the stereoscopic image pair through massively parallel GPU-based ray casting shader techniques. The usability of the VR application is evaluated.
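
The chapter's renderer is a massively parallel GPU shader producing stereoscopic pairs; a toy CPU version still shows the two essential ingredients, per-voxel classification through a 1D RGBA transfer function and front-to-back compositing. The orthographic z-axis rays and the example transfer function below are simplifications, not the authors' implementation.

```python
import numpy as np

def raycast(volume, tf):
    """Toy orthographic DVR along the z-axis.
    volume: (Z, Y, X) scalars normalized to [0, 1]; tf: (256, 4) RGBA table."""
    rgba = tf[np.clip((volume * 255).astype(int), 0, 255)]  # classify voxels
    image = np.zeros(volume.shape[1:] + (3,))
    transmittance = np.ones(volume.shape[1:])
    for z in range(volume.shape[0]):          # front-to-back compositing
        color, alpha = rgba[z, ..., :3], rgba[z, ..., 3]
        image += (transmittance * alpha)[..., None] * color
        transmittance *= 1.0 - alpha
    return image

# A one-dimensional transfer function: low intensities transparent,
# high intensities rendered as semi-opaque red.
tf = np.zeros((256, 4))
tf[128:] = [1.0, 0.2, 0.2, 0.05]
image = raycast(np.random.rand(64, 64, 64), tf)
```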

Digital diagnosis and treatment of mandibular condylar fractures based on Extensible Neuroimaging Archive Toolkit (XNAT)

condylar-measurement

Open access paper Digital diagnosis and treatment of mandibular condylar fractures based on Extensible Neuroimaging Archive Toolkit (XNAT), by Zhou et al. PLoS ONE (2018) 13(2): e0192831.

Abstract

Objectives
The treatment of condylar fractures has long been controversial. In this paper, we established a database for accurate measurement, storage, management and analysis of patients’ data, in order to help determine the best treatment plan.

Methods
First, the diagnosis and treatment database was established based on XNAT, including 339 cases of condylar fractures and their related information. Then, image segmentation, registration, and three-dimensional (3D) measurement were used to measure and analyze the condyle shapes. Statistical analysis was used to analyze the anatomical changes of the condyle and the surrounding tissues at different stages before and after treatment. The processes of condylar fracture reestablishment at different stages were also dynamically monitored. Finally, based on all this information, the digital diagnosis and treatment plans for condylar fractures were developed.
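
For readers unfamiliar with XNAT, its data can be scripted against, for instance through the pyxnat client; the server URL, credentials, and project ID below are placeholders, and the paper does not state how the authors accessed their database.

```python
from pyxnat import Interface

# Placeholder server, credentials, and project ID for illustration only.
xnat = Interface(server="https://xnat.example.org",
                 user="reader", password="secret")

project = xnat.select.project("CONDYLE_FX")
for subject in project.subjects():
    for experiment in subject.experiments():  # e.g., pre-/post-treatment CT sessions
        print(subject.label(), experiment.label())
```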

Results
For patients less than 18 years old with no significant dislocation, surgical treatment and conservative treatment were equally effective for intracapsular fractures, and showed no significant difference for neck and basal fractures. For patients above 18 years old, there was no significant difference between the two treatment methods for intracapsular fractures; but for condylar neck and basal fractures, surgical treatment was better than conservative treatment. When the condylar fracture shift angle was greater than 11 degrees and the mandibular ramus height reduction was greater than 4 mm, the patients felt the strongest pain and their mouth opening was severely restricted. There were 170 surgical cases with a condylar fracture shift angle greater than 11 degrees; 118 of them (69.4%) had good prognosis, and 52 (30.6%) had complications such as limited mouth opening. There were 173 surgical cases with mandibular ramus height reduction of more than 4 mm; 112 of them (64.7%) had good prognosis, and 61 (35.3%) had complications such as limited mouth opening.

Conclusions
The establishment of XNAT condylar fracture database is helpful for establishing a digital diagnosis and treatment workflow for mandibular condylar fractures, providing new theoretical foundation and application basis for diagnosis and treatment of condylar fractures.

condylar-fracture-registration
3D model registration process for one case of condylar fracture that received conservative treatment, before and after operation: (a) Procrustes registration using 3 anthropometric landmarks; (b) global registration based on the modified ICP algorithm.

Deep-learning-assisted volume visualization

visualization-semi-automatic

Deep-learning-assisted Volume Visualization, by Cheng, Cardone, Jain, et al., IEEE TVCG (2018), published ahead of print.

Abstract:

Designing volume visualizations showing various structures of interest is critical to the exploratory analysis of volumetric data. The last few years have witnessed dramatic advances in the use of convolutional neural networks for identification of objects in large image collections. Whereas such machine learning methods have shown superior performance in a number of applications, their direct use in volume visualization has not yet been explored. In this paper, we present a deep-learning-assisted volume visualization to depict complex structures, which are otherwise challenging for conventional approaches. A significant challenge in designing volume visualizations based on the high-dimensional deep features lies in efficiently handling the immense amount of information that deep-learning methods provide. In this paper, we present a new technique that uses spectral methods to facilitate user interactions with high-dimensional features. We also present a new deep-learning-assisted technique for hierarchically exploring a volumetric dataset. We have validated our approach on two electron microscopy volumes and one magnetic resonance imaging dataset.
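
The spectral interaction technique is described only at a high level in the abstract; the underlying idea, compressing per-voxel deep features into a few spectral coordinates that a user can map to color and opacity, can be sketched with scikit-learn. The feature dimensions and neighborhood size below are arbitrary placeholders.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Stand-in for deep features: one high-dimensional CNN vector per sampled voxel.
features = np.random.rand(5000, 256)

# A spectral embedding of the feature graph reduces 256 dimensions to a few
# coordinates that a transfer-function widget can expose to the user.
coords = SpectralEmbedding(n_components=2, n_neighbors=15).fit_transform(features)
print(coords.shape)   # (5000, 2)
```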

Robustness of whole spine 3D reconstruction using 2D biplanar X-ray images

2d-3d-spine-3

Robustness of Whole Spine Reconstruction using Anterior-Posterior and Lateral Planar X-ray Images, by Kim, K., Jargalsuren, S., Khuyagbaatar, B. et al. Int. J. Precis. Eng. Manuf. (2018) 19: 281.

Abstract:

X-ray-based reconstruction methods using anterior-posterior (AP) and lateral (LAT) images assume that the AP and LAT images are acquired perpendicular to each other. However, it is difficult to maintain a perfectly perpendicular angle between the AP and LAT images when taking the two images sequentially in real situations. In this study, the robustness of a three-dimensional (3D) whole spine reconstruction method using AP and LAT planar X-ray images was analyzed by investigating cases in which the AP and LAT images were not taken perpendicularly. 3D models of the patient-specific spine from five subjects were reconstructed using AP and LAT X-ray images and 3D template models of the C1 to L5 vertebrae based on the B-spline free-form deformation (FFD) technique. The shape error, projected area error, and relative error in length were quantified by comparing the reconstructed model (FFD model) to the reference model (CT model). The results indicated that the reconstruction method may be considered robust when there is only a small angular error, such as 5°, between the AP and LAT images. This study suggests that simple technical indications for obtaining perpendicular AP and LAT images in real situations can improve the accuracy of 3D spine reconstruction.
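
The deformation step can be pictured as follows: a template vertebra mesh is warped by displacing a sparse control lattice until its projections match the AP and LAT silhouettes. The sketch below interpolates lattice displacements with SciPy's cubic splines as a stand-in for the paper's B-spline FFD; the optimization loop that fits the lattice to the X-ray contours is omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ffd(points, lattice_disp, bbox_min, bbox_max):
    """Warp template vertices (N, 3) by a control lattice of displacements
    (L, M, K, 3) spanning the template's bounding box."""
    cells = np.array(lattice_disp.shape[:3]) - 1
    u = (points - bbox_min) / (bbox_max - bbox_min) * cells  # lattice coords
    disp = np.stack([map_coordinates(lattice_disp[..., k], u.T, order=3)
                     for k in range(3)], axis=1)
    return points + disp
```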

2d-3d-spine-2
Reconstruction process of a patient-specific 3D model