A frame of 3D printing data generation method extracted from CT data


A Frame of 3D Printing Data Generation Method Extracted from CT Data, Zhao, S., Zhang, W., Sheng, W. et al. Sens Imaging (2018) 19: 12. https://doi.org/10.1007/s11220-018-0197-8

Abstract

3D printing based on the CT data of a measured object can reproduce both the outer and internal surface structure of the object, and can be used for reverse engineering. This paper presents a frame for generating 3D printing data directly from CT data. The frame comprises three steps: first, surface mesh data with internal structure is extracted from the CT data. Second, a mesh model that a 3D printer can recognize is obtained by generating topological information, removing isolated and non-manifold facets, filling holes, and smoothing the surface. Third, if the target surface mesh model contains internal structure or exceeds the maximum object size the 3D printer can handle, the frame splits the mesh model into several parts and prints them separately. The proposed frame was evaluated on one simulated and two real CT datasets. These experiments showed that the proposed frame effectively generates 3D printing data directly from CT data while preserving the shape of the original object model with high precision.
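As a rough illustration of the first step (surface extraction), the sketch below pulls an isosurface out of a CT volume with marching cubes and writes a printable STL. This is a minimal sketch, not the authors' implementation: the iso-level, the synthetic hollow-sphere volume, and the use of scikit-image and numpy-stl are all assumptions standing in for the paper's pipeline.

```python
# Minimal sketch: CT volume -> isosurface mesh -> STL for printing.
# Assumes scikit-image and numpy-stl; the paper's own extraction and
# cleanup steps (topology repair, hole filling, smoothing) are omitted.
import numpy as np
from skimage import measure
from stl import mesh as stl_mesh

# Stand-in CT volume: a hollow sphere, mimicking an object with both
# an outer and an internal surface.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
r = np.sqrt(x**2 + y**2 + z**2)
volume = ((r < 25) & (r > 15)).astype(np.float32)

# Step 1: extract the surface mesh at a chosen iso-level.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

# Pack the triangles into an STL container that slicers accept.
solid = stl_mesh.Mesh(np.zeros(faces.shape[0], dtype=stl_mesh.Mesh.dtype))
for i, tri in enumerate(faces):
    solid.vectors[i] = verts[tri]
solid.save("ct_surface.stl")
```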

A new approach for photorealistic visualization of rendered CT images


A new approach for photorealistic visualization of rendered CT images, by Glemser et al., World Neurosurgery (2018)

Highlights

  • A new photorealistic CT rendering system for neurosurgical use is presented (CVRT).
  • Photorealism is created by complex light interactions and classical camera effects.
  • CVRT can extract more valuable 3D image impressions than SSD and VRT.
  • Application in presurgical planning, teaching and patient interaction.

Figure: Different developmental steps in 3D rendering over time: (A) SSD, (B) VRT, (C) CVRT of a contrast-enhanced CT of the head and neck.
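For contrast with CVRT's physically based light transport, the sketch below implements the classical VRT baseline the figure compares against: front-to-back emission-absorption compositing along parallel rays. It is a schematic illustration only; the transfer function, step size, and orthographic ray setup are assumptions, not details from the paper.

```python
# Minimal sketch of classical VRT: front-to-back emission-absorption
# compositing along parallel rays (here, straight down axis 0).
# CVRT-style photorealism would replace this with Monte Carlo path
# tracing of light through the volume plus camera effects.
import numpy as np

def transfer_function(v):
    """Map normalized intensity to (color, opacity). Assumed ramp TF."""
    color = v                                 # grayscale emission
    alpha = np.clip(v - 0.2, 0.0, 1.0) * 0.1  # soft opacity ramp
    return color, alpha

def render_vrt(volume):
    """Composite front-to-back along axis 0 of a normalized volume."""
    h, w = volume.shape[1:]
    img = np.zeros((h, w))
    trans = np.ones((h, w))        # accumulated transmittance per ray
    for z in range(volume.shape[0]):
        c, a = transfer_function(volume[z])
        img += trans * a * c       # emission attenuated by material in front
        trans *= 1.0 - a           # absorption reduces remaining transmittance
    return img

# Toy volume: a bright sphere inside a dim block of "tissue".
zz, yy, xx = np.mgrid[-32:32, -32:32, -32:32]
vol = 0.3 * np.ones((64, 64, 64))
vol[np.sqrt(xx**2 + yy**2 + zz**2) < 12] = 1.0
image = render_vrt(vol)
print(image.min(), image.max())
```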

Robotic drill guide positioning in spine surgery using known-component 3D–2D image registration

Robotic drill guide positioning using known-component 3D–2D image registration, by Yi, Ramchandran, Siewerdsen, and Uneri, J. of Medical Imaging (2018), 5(2):021212.

Abstract

A method for x-ray image-guided robotic instrument positioning is reported and evaluated in preclinical studies of spinal pedicle screw placement, with the aim of improving delivery of transpedicle K-wires and screws. The known-component (KC) registration algorithm was used to register the three-dimensional patient CT and a drill guide surface model to intraoperative two-dimensional radiographs. The resulting transformations, combined with offline hand–eye calibration, drive the robotically held drill guide to target trajectories defined in the preoperative CT. The method was assessed against a more conventional tracker-based approach, and robustness to clinically realistic errors was tested in phantom and cadaver studies. Deviations from planned trajectories were analyzed in terms of target registration error (TRE) at the tooltip (mm) and approach angle (deg). In phantom studies, the KC approach resulted in TRE = 1.51 ± 0.51 mm and 1.01 deg ± 0.92 deg, comparable to the accuracy of the tracker-based approach. In cadaver studies with realistic anatomical deformation, the KC approach yielded TRE = 2.31 ± 1.05 mm and 0.66 deg ± 0.62 deg, a statistically significant improvement over the tracker (TRE = 6.09 ± 1.22 mm and 1.06 deg ± 0.90 deg). The robustness to deformation is attributed to the relatively local rigidity of anatomy in the radiographic views. X-ray guidance offered accurate robotic positioning and could fit naturally within the clinical workflow of fluoroscopically guided procedures.
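As a small illustration of the reported metrics, the sketch below computes the two quantities the abstract evaluates: TRE at the tooltip (mm) and approach-angle deviation (deg) between a planned and a delivered trajectory. Representing a trajectory as a tip point plus a unit direction is an assumption for illustration, not the authors' code.

```python
# Minimal sketch: TRE at the tooltip and approach-angle deviation
# between planned and delivered trajectories. The (tip, direction)
# trajectory representation is an assumption for illustration.
import numpy as np

def trajectory_errors(tip_planned, dir_planned, tip_actual, dir_actual):
    """Return (TRE in mm, approach-angle deviation in degrees)."""
    tre = np.linalg.norm(np.asarray(tip_actual) - np.asarray(tip_planned))
    u = np.asarray(dir_planned) / np.linalg.norm(dir_planned)
    v = np.asarray(dir_actual) / np.linalg.norm(dir_actual)
    cos_theta = np.clip(np.dot(u, v), -1.0, 1.0)  # guard against rounding
    angle = np.degrees(np.arccos(cos_theta))
    return tre, angle

# Example: a 1.5 mm tip offset and a slightly tilted approach.
tre, angle = trajectory_errors(
    tip_planned=[0.0, 0.0, 0.0], dir_planned=[0.0, 0.0, 1.0],
    tip_actual=[1.2, 0.9, 0.0], dir_actual=[0.02, 0.0, 1.0],
)
print(f"TRE = {tre:.2f} mm, approach angle = {angle:.2f} deg")
```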

AR system lets doctors see under patients’ skin without the scalpel

New technology is bringing the power of augmented reality into clinical practice.

The system, called ProjectDR, allows medical images such as CT scans and MRI data to be displayed directly on a patient’s body in a way that moves as the patient does.

“We wanted to create a system that would show clinicians a patient’s internal anatomy within the context of the body,” explained Ian Watts, a computing science graduate student and the developer of ProjectDR.

The technology includes a motion-tracking system using infrared cameras and markers on the patient’s body, as well as a projector to display the images. But the really difficult part, Watts explained, is having the image track properly on the patient’s body even as they shift and move. The solution: custom software written by Watts that gets all of the components working together.
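A plausible core of such a system is per-frame rigid point-set registration: estimate the patient's current pose from the tracked infrared markers, then re-project the imagery under that pose. The sketch below shows the standard Kabsch/SVD solution for the rigid transform; it is a generic illustration of the technique, not ProjectDR's actual software.

```python
# Minimal sketch: per-frame rigid pose from tracked IR markers
# (Kabsch/SVD), as a projector-overlay system might use to keep the
# displayed image registered to a moving patient. Generic illustration
# only; not ProjectDR's implementation.
import numpy as np

def rigid_pose(ref_markers, cur_markers):
    """Least-squares R, t such that cur ≈ R @ ref + t (Kabsch)."""
    ref = np.asarray(ref_markers, float)
    cur = np.asarray(cur_markers, float)
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - ref_c).T @ (cur - cur_c)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t

# Example: marker positions at calibration vs. after the patient shifts
# (5 degree rotation about z plus a small translation, in mm).
ref = [[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]]
theta = np.radians(5)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
cur = (np.asarray(ref) @ Rz.T) + np.array([3.0, -2.0, 0.0])
R, t = rigid_pose(ref, cur)
print(np.round(R, 3), np.round(t, 3))  # recovers Rz and the translation
```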

ProjectDR was presented last November at the Virtual Reality Software and Technology Symposium in Gothenburg, Sweden.

Read more at University of Alberta.