Automated muscle segmentation from CT images of the hip and thigh using a hierarchical multi-atlas method

Automated muscle segmentation from CT images of the hip and thigh using a hierarchical multi-atlas method, Yokota, F., Otake, Y., Takao, M. et al. Int J CARS (2018).

Abstract:

Purpose
Patient-specific quantitative assessments of muscle mass and biomechanical musculoskeletal simulations require segmentation of the muscles from medical images. The objective of this work is to automate muscle segmentation from CT data of the hip and thigh.

Method
We propose a hierarchical multi-atlas method in which each hierarchy includes spatial normalization using simpler pre-segmented structures in order to reduce the inter-patient variability of more complex target structures.
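
The per-level idea, registering a simpler pre-segmented structure first and reusing the resulting transform to initialize registration of the more variable target, can be sketched with SimpleITK. The file names, the bone-mask level, and the metric/optimizer choices below are illustrative assumptions, not the authors' implementation:

    import SimpleITK as sitk

    def register(fixed, moving, init, mask=None):
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(init, inPlace=False)
        if mask is not None:
            reg.SetMetricFixedMask(mask)  # restrict the metric to the simpler structure
        return reg.Execute(fixed, moving)

    patient = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)    # assumed path
    atlas = sitk.ReadImage("atlas_ct.nii.gz", sitk.sitkFloat32)        # assumed path
    bone_mask = sitk.ReadImage("patient_bone_mask.nii.gz", sitk.sitkUInt8)

    # Level 1: spatially normalize using an easily pre-segmented structure (bone).
    init = sitk.CenteredTransformInitializer(
        patient, atlas, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    level1 = register(patient, atlas, init, mask=bone_mask)

    # Level 2: start from the level-1 result for the more variable target region,
    # then propagate the atlas muscle labels with nearest-neighbor resampling.
    level2 = register(patient, atlas, level1)
    muscle_atlas = sitk.ReadImage("atlas_muscle_labels.nii.gz")
    warped_labels = sitk.Resample(muscle_atlas, patient, level2,
                                  sitk.sitkNearestNeighbor, 0)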

Results
The proposed hierarchical method was evaluated with 19 muscles from 20 CT images of the hip and thigh using the manual segmentation by expert orthopedic surgeons as ground truth. The average symmetric surface distance was significantly reduced in the proposed method (1.53 mm) in comparison with the conventional method (2.65 mm).
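
The average symmetric surface distance reported above can be computed from two binary label volumes; a minimal sketch with NumPy/SciPy, assuming seg and gt are boolean 3D arrays and the voxel spacing is passed explicitly:

    import numpy as np
    from scipy import ndimage

    def surface(mask):
        # Surface voxels: the mask minus its one-voxel erosion.
        return mask & ~ndimage.binary_erosion(mask)

    def assd(seg, gt, spacing=(1.0, 1.0, 1.0)):
        # Average symmetric surface distance between two binary volumes (mm).
        s_seg, s_gt = surface(seg), surface(gt)
        # Distance from every voxel to the nearest surface voxel of the other mask.
        d_to_gt = ndimage.distance_transform_edt(~s_gt, sampling=spacing)
        d_to_seg = ndimage.distance_transform_edt(~s_seg, sampling=spacing)
        return np.concatenate([d_to_gt[s_seg], d_to_seg[s_gt]]).mean()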

Conclusion
We demonstrated that the proposed hierarchical multi-atlas method improved the accuracy of muscle segmentation from CT images, which involve large inter-patient variability and insufficient contrast.

Registration of 3D freehand ultrasound to a bone model for orthopedic procedures of the forearm

Registration of 3D freehand ultrasound to a bone model for orthopedic procedures of the forearm, Ciganovic, M., Ozdemir, F., Pean, F. et al. Int J CARS (2018).

Abstract

Purpose
For guidance of orthopedic surgery, the registration of preoperative images and corresponding surgical plans with the surgical setting can be of great value. Ultrasound (US) is an ideal modality for surgical guidance, as it is non-ionizing, real time, easy to use, and imposes minimal (magnetic/radiation) safety limitations. By extracting bone surfaces from 3D freehand US and registering these to preoperative bone models, complementary information from these modalities can be fused and presented in the surgical realm.

Methods
A partial bone surface is extracted from US using phase symmetry and a factor graph-based approach. This is registered to the detailed 3D bone model, conventionally generated for preoperative planning, based on a proposed multi-initialization and surface-based scheme robust to partial surfaces.
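
The alignment step, running a surface-based rigid registration from many initializations and keeping the best result, can be approximated with point-to-plane ICP in Open3D. The single-axis rotation grid and the correspondence threshold below are illustrative choices, not the paper's exact scheme:

    import numpy as np
    import open3d as o3d

    def best_icp(source, target, n_init=8, max_dist=5.0):
        # source: partial bone surface from US; target: preoperative bone model.
        target.estimate_normals()  # normals are required for point-to-plane ICP
        best, best_score = None, None
        for k in range(n_init):
            # Initialize with a rotation about the (assumed) bone long axis.
            angle = 2.0 * np.pi * k / n_init
            init = np.eye(4)
            init[:3, :3] = o3d.geometry.get_rotation_matrix_from_axis_angle(
                np.array([0.0, 0.0, angle]))
            result = o3d.pipelines.registration.registration_icp(
                source, target, max_dist, init,
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            score = (result.fitness, -result.inlier_rmse)  # more inliers, lower RMSE
            if best is None or score > best_score:
                best, best_score = result, score
        return best.transformation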

Results
36 forearm US volumes acquired using a tracked US probe were independently registered to a 3D model of the radius, manually extracted from MRI. Given intraoperative time restrictions, a computationally efficient algorithm was determined based on a comparison of different approaches. For all 36 registrations, a mean (± SD) point-to-point surface distance of 0.57 (±0.08) mm was obtained from manual gold-standard US bone annotations (not used during the registration) to the 3D bone model.

Conclusions
A registration framework based on bone surface extraction from 3D freehand US, followed by a fast, automatic surface alignment robust to the single-sided views and large false-positive rates of US, was shown to achieve registration accuracy feasible for practical orthopedic scenarios, with a qualitative outcome indicating good visual image alignment.

radius-segmentation-ultrasound
(left) Example registration results with US, MRI, and overlaid slices of corresponding locations with the proposed ICP-based alignment. (right) A tendon insertion (pronator teres) visible in US (top) is projected onto the 3D model (below) using the proposed ICP-based alignment, e.g., to facilitate preoperative planning

A frame of 3D printing data generation method extracted from CT data

ct-data-volume-rendering-3d-print

A Frame of 3D Printing Data Generation Method Extracted from CT Data, Zhao, S., Zhang, W., Sheng, W. et al. Sens Imaging (2018) 19: 12. https://doi.org/10.1007/s11220-018-0197-8

Abstract

3D printing based on the CT data of a measured object can reproduce both the outer and internal surface structures of the object and can be used for reverse engineering. This paper presents a frame for generating 3D printing data directly from CT data. The frame includes three steps: firstly, surface mesh data with internal structure is extracted from the CT data. Secondly, a mesh model that a 3D printer can recognize is obtained by generating topological information, removing isolated and non-manifold facets, filling holes, and smoothing the surface. Thirdly, if the target surface mesh model contains internal structure or exceeds the maximum size of objects the 3D printer can print, the frame splits the mesh model into several parts that are printed separately. The proposed frame was evaluated with one simulated and two real CT datasets. These experiments showed that the proposed frame is effective in generating 3D printing data directly from CT data and preserves the shape of the original object model with high precision.
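
The three steps can be sketched with scikit-image and trimesh; the volume file, iso-value, and splitting plane are illustrative assumptions, and the repair calls stand in for the paper's own topology processing:

    import numpy as np
    import trimesh
    from skimage import measure

    ct = np.load("ct_volume.npy")  # assumed: the CT volume as a 3D array
    iso = 300.0                    # assumed iso-value separating object from background

    # Step 1: extract outer and internal surfaces with marching cubes.
    verts, faces, normals, _ = measure.marching_cubes(ct, level=iso)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces)

    # Step 2: make the mesh printer-readable: merge duplicate vertices and
    # drop degenerate facets, fill holes, and smooth lightly.
    mesh.process()
    trimesh.repair.fill_holes(mesh)
    trimesh.smoothing.filter_laplacian(mesh, iterations=5)

    # Step 3: split a model that exceeds the build volume into two parts
    # along a plane through the centroid, to be printed separately.
    part_a = mesh.slice_plane(mesh.centroid, [0.0, 0.0, 1.0], cap=True)
    part_b = mesh.slice_plane(mesh.centroid, [0.0, 0.0, -1.0], cap=True)
    part_a.export("part_a.stl")
    part_b.export("part_b.stl")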

A deep learning approach for pose estimation from volumetric OCT data

oct-volume-marker

A deep learning approach for pose estimation from volumetric OCT data, by Gessert, Schlüter, & Schlaefer, Medical Image Analysis (2018) 46:162-179

Abstract

Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer-range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images’ 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label’s resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively.
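
As a rough sketch of direct 6D pose regression from volumes with an Inception-style 3D CNN in PyTorch (layer widths, depths, and input size are illustrative, not the paper's Inception3D):

    import torch
    import torch.nn as nn

    class InceptionBlock3D(nn.Module):
        # Parallel 1x1x1, 3x3x3, 5x5x5, and pooled paths, concatenated.
        def __init__(self, c_in, c_out):
            super().__init__()
            c = c_out // 4
            self.b1 = nn.Conv3d(c_in, c, 1)
            self.b3 = nn.Sequential(nn.Conv3d(c_in, c, 1), nn.Conv3d(c, c, 3, padding=1))
            self.b5 = nn.Sequential(nn.Conv3d(c_in, c, 1), nn.Conv3d(c, c, 5, padding=2))
            self.bp = nn.Sequential(nn.MaxPool3d(3, 1, 1), nn.Conv3d(c_in, c, 1))
            self.act = nn.ReLU()

        def forward(self, x):
            return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], 1))

    class PoseNet3D(nn.Module):
        # Multi-output regression of a 6D pose: 3 translations + 3 rotations.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                InceptionBlock3D(16, 32), nn.MaxPool3d(2),
                InceptionBlock3D(32, 64), nn.AdaptiveAvgPool3d(1))
            self.head = nn.Linear(64, 6)

        def forward(self, vol):  # vol: (batch, 1, D, H, W) OCT volume
            return self.head(self.features(vol).flatten(1))

    # Training is plain MSE regression against the robot-provided pose labels.
    net = PoseNet3D()
    loss = nn.MSELoss()(net(torch.randn(2, 1, 32, 32, 32)), torch.randn(2, 6))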

deep-learning-pose-estimation-volumetric-oct
Graphical abstract: A deep learning-based method for pose estimation from OCT volumes is proposed. Learning from volumes with 3D CNNs outperforms depth-map approaches for tracking. 3D CNNs learn to exploit features inside of markers for pose estimation. Inception3D combines efficient design principles to achieve high accuracy.

A new approach for photorealistic visualization of rendered CT images

photorealistic-aneurysm

A new approach for photorealistic visualization of rendered CT images, by Glemser et al., World Neurosurgery (2018)

Highlights

  • A new photorealistic CT rendering system for neurosurgical use is presented (CVRT).
  • Photorealism is created by complex light interactions and classical camera effects.
  • CVRT can produce more valuable 3D image impressions than SSD and VRT.
  • Applications include presurgical planning, teaching, and patient interaction.
photorealistic-ct-neck
Different developmental steps in 3D rendering over time: (A) SSD, (B) VRT, (C) CVRT of a contrast-enhanced CT of the head and neck.
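
To contrast panels (B) and (C): classical VRT composites a transfer function front-to-back along each ray with purely local shading, whereas CVRT adds physically based light transport (scattering, shadows, camera effects). A minimal sketch of the VRT step, with an illustrative stand-in transfer function:

    import numpy as np

    def vrt_ray(samples, lut):
        # Front-to-back alpha compositing of CT samples along one ray.
        color, alpha = np.zeros(3), 0.0
        for hu in samples:          # samples ordered front to back
            c, a = lut(hu)          # emitted color and opacity in [0, 1]
            color += (1.0 - alpha) * a * c
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:        # early ray termination
                break
        return color

    # Illustrative transfer function: bone white, soft tissue faint red.
    lut = lambda hu: ((np.array([1.0, 1.0, 0.9]), 0.9) if hu > 300
                      else (np.array([0.8, 0.3, 0.3]), 0.02) if hu > 40
                      else (np.zeros(3), 0.0))
    pixel = vrt_ray(np.linspace(-100.0, 1000.0, 256), lut)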

Robustness of whole spine 3D reconstruction using 2D biplanar X-ray images

2d-3d-spine-3

Robustness of Whole Spine Reconstruction using Anterior-Posterior and Lateral Planar X-ray Images, by Kim, K., Jargalsuren, S., Khuyagbaatar, B. et al. Int. J. Precis. Eng. Manuf. (2018) 19: 281.

Abstract:

X-ray-based reconstruction methods using anterior-posterior (AP) and lateral (LAT) images assume that the AP and LAT images are perpendicular to each other. However, it is difficult to maintain a perfectly perpendicular angle between the AP and LAT images when taking the two images sequentially in real situations. In this study, the robustness of a three-dimensional (3D) whole spine reconstruction method using AP and LAT planar X-ray images was analyzed by investigating cases in which the AP and LAT images were not taken perpendicularly. 3D models of the patient-specific spine from five subjects were reconstructed using AP and LAT X-ray images and 3D template models of the C1 to L5 vertebrae based on B-spline free-form deformation (FFD). The shape error, projected area error, and relative error in length were quantified by comparing the reconstructed model (FFD model) to the reference model (CT model). The results indicated that the reconstruction method can be considered robust when the angular error between the AP and LAT images is small, such as 5°. This study suggests that simple technical guidelines for obtaining perpendicular AP and LAT images in real situations can improve the accuracy of 3D spine reconstruction.
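
The B-spline FFD step, deforming template vertebra vertices with a grid of control-point displacements that an optimizer would adjust until the template's AP/LAT projections match the X-ray contours, can be sketched with SimpleITK. The domain extents, control mesh size, and stand-in vertices are assumptions of this sketch:

    import SimpleITK as sitk

    # A cubic B-spline free-form deformation over the template's bounding box.
    ffd = sitk.BSplineTransform(3, 3)  # 3D, order-3 spline
    ffd.SetTransformDomainOrigin((0.0, 0.0, 0.0))
    ffd.SetTransformDomainPhysicalDimensions((50.0, 50.0, 40.0))  # mm, illustrative
    ffd.SetTransformDomainDirection((1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0))
    ffd.SetTransformDomainMeshSize((4, 4, 4))

    # An optimizer would tune these control-point displacements; zeros give
    # the identity deformation.
    ffd.SetParameters([0.0] * len(ffd.GetParameters()))

    # Warping the template mesh is then per-vertex point transformation.
    template_vertices = [(10.0, 12.0, 5.0), (11.0, 12.5, 5.2)]  # stand-in data
    deformed = [ffd.TransformPoint(p) for p in template_vertices]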

2d-3d-spine-2
Reconstruction process of a patient-specific 3D model