A frame of 3D printing data generation method extracted from CT data

ct-data-volume-rendering-3d-print

A Frame of 3D Printing Data Generation Method Extracted from CT Data, Zhao, S., Zhang, W., Sheng, W. et al. Sens Imaging (2018) 19: 12. https://doi.org/10.1007/s11220-018-0197-8

Abstract

3D printing based on the CT data of a measured object can reproduce both the outer surface structure and the internal surface structure of the object and can be used for reverse engineering. This paper presents a frame for generating 3D printing data directly from CT data. The frame consists of three steps: first, surface mesh data including the internal structure is extracted from the CT data. Second, a mesh model that a 3D printer can recognize is obtained by generating topological information, removing isolated and non-manifold facets, filling holes and smoothing the surface. Third, if the target surface mesh model contains internal structure or exceeds the maximum object size the 3D printer can handle, the frame splits the mesh model into several parts that are printed separately. The proposed frame was evaluated with one simulated and two real CT data sets. These experiments showed that the proposed frame effectively generates 3D printing data directly from CT data and preserves the shape of the original object model with high precision.
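
The three-step pipeline described above maps fairly naturally onto off-the-shelf mesh tooling. The sketch below is only a minimal stand-in for the authors' frame, assuming scikit-image for isosurface extraction and trimesh for cleanup and export; it covers a simplified version of steps 1 and 2 and omits the model-splitting step.

```python
import numpy as np
from skimage import measure
import trimesh

def ct_to_printable_mesh(volume, iso_value, voxel_spacing=(1.0, 1.0, 1.0)):
    """Sketch of a CT-to-3D-printing pipeline (illustrative, not the paper's code).

    volume        : 3D numpy array of CT intensities (e.g. Hounsfield units)
    iso_value     : intensity threshold separating object from background
    voxel_spacing : physical voxel size in mm, so the printed model is to scale
    """
    # Step 1: extract a surface mesh (outer and internal surfaces) from the CT data.
    verts, faces, _, _ = measure.marching_cubes(volume, level=iso_value,
                                                spacing=voxel_spacing)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces)

    # Step 2: clean the mesh so a printer/slicer accepts it:
    # drop small isolated facet islands, fill holes, smooth the surface.
    components = mesh.split(only_watertight=False)
    mesh = max(components, key=lambda m: m.faces.shape[0])  # keep the largest part
    trimesh.repair.fill_holes(mesh)
    trimesh.smoothing.filter_laplacian(mesh, iterations=5)

    # Step 3 (splitting oversized models to fit the printer's build volume) is
    # omitted here; plane-based cutting in a slicer or mesh library can do it.
    return mesh

# Usage:
#   mesh = ct_to_printable_mesh(ct_volume, iso_value=300)
#   mesh.export("object.stl")   # STL is what most 3D printers expect
```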

A new approach for photorealistic visualization of rendered CT images

photorealistic-aneurysm

A new approach for photorealistic visualization of rendered CT images, by Glemser et al. World Neurosurgery (2018)

Highlights

  • A new photorealistic CT rendering system for neurosurgical use is presented (CVRT).
  • Photorealism is created by complex light interactions and classical camera effects.
  • CVRT can provide more valuable 3D image impressions than SSD and VRT.
  • Applications include presurgical planning, teaching and patient interaction.

photorealistic-ct-neck

Different developmental steps in 3D rendering over time: (A) SSD, (B) VRT, (C) CVRT of a contrast-enhanced CT of the head and neck.
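
The step from VRT (panel B) to CVRT (panel C) is essentially the light model: instead of shading each sample locally, a cinematic-style renderer simulates how light is absorbed and scattered by the surrounding tissue before it reaches the sample. The toy sketch below, written for this post and not taken from the paper or any vendor implementation, adds the simplest such interaction, a shadow ray toward the light source, to an otherwise standard sample shading term.

```python
import numpy as np

def transmittance(volume_alpha, start, light_dir, step=1.0, n_steps=64):
    """Fraction of light surviving a march from `start` toward the light source.
    volume_alpha is a 3D array of per-voxel opacities in [0, 1]."""
    t = 1.0
    pos = np.asarray(start, dtype=float)
    for _ in range(n_steps):
        pos = pos + step * np.asarray(light_dir, dtype=float)
        idx = np.round(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.array(volume_alpha.shape)):
            break  # left the volume: nothing more absorbs the light
        t *= 1.0 - volume_alpha[tuple(idx)]
    return t

def shade_sample(color, alpha, position, volume_alpha, light_dir):
    """Classical VRT would use `color * alpha` directly; attenuating each sample
    by the shadow-ray transmittance adds soft self-shadowing and depth cues."""
    shadow = transmittance(volume_alpha, position, light_dir)
    return color * alpha * shadow
```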

Direct volume rendering in virtual reality

stereoscopic-image-pairs

Book chapter: Scholl I., Suder S., Schiffer S. (2018) Direct Volume Rendering in Virtual Reality. In: Maier A., Deserno T., Handels H., Maier-Hein K., Palm C., Tolxdorff T. (eds) Bildverarbeitung für die Medizin 2018. Informatik aktuell. Springer Vieweg, Berlin, Heidelberg.

Abstract:

Direct Volume Rendering (DVR) techniques are used to visualize surfaces from 3D volume data sets without computing a 3D geometry. Several surfaces can be classified using a transfer function that assigns optical properties like color and opacity (RGBα) to the voxel data. Finding a good transfer function that separates specific structures from the volume data set is in general a manual and time-consuming procedure and requires detailed knowledge of the data and the image acquisition technique. In this paper, we present a new Virtual Reality (VR) application based on the HTC Vive headset. One-dimensional transfer functions can be designed in VR while the stereoscopic image pair is continuously rendered through massively parallel GPU-based ray casting shader techniques. The usability of the VR application is evaluated.
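
To give a rough picture of what the user is editing in VR, the sketch below builds a 1D RGBα transfer function as a lookup table and applies it in a front-to-back compositing loop along a single ray. The real application does this per pixel, for both eyes, in a GPU ray-casting shader; the names and control points here are illustrative and not taken from the paper.

```python
import numpy as np

def make_tf(control_points, n_bins=256):
    """1D transfer function: interpolate (intensity, R, G, B, alpha) control
    points into a lookup table indexed by normalized voxel intensity."""
    cps = np.asarray(control_points, dtype=float)
    x = np.linspace(0.0, 1.0, n_bins)
    return np.stack([np.interp(x, cps[:, 0], cps[:, 1 + c]) for c in range(4)], axis=1)

def cast_ray(samples, tf):
    """Front-to-back emission-absorption compositing of one ray.
    `samples` are normalized intensities along the ray, nearest sample first."""
    color = np.zeros(3)
    alpha = 0.0
    n_bins = tf.shape[0]
    for s in samples:
        rgba = tf[min(int(s * (n_bins - 1)), n_bins - 1)]
        a = rgba[3]
        color += (1.0 - alpha) * a * rgba[:3]
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:          # early ray termination
            break
    return color, alpha

# Example: a bone-like transfer function (air transparent, soft tissue faintly
# red and nearly transparent, bone opaque white).
tf = make_tf([(0.0, 0, 0, 0, 0.0),
              (0.4, 1, 0.2, 0.2, 0.05),
              (0.8, 1, 1, 1, 0.9),
              (1.0, 1, 1, 1, 1.0)])
```

For the stereoscopic pair, the same casting is simply performed twice, once per eye, from camera positions offset by the interpupillary distance.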

Virtual reality in advanced medical immersive imaging

renal-cell-cancer

Virtual reality in advanced medical immersive imaging: a workflow for introducing virtual reality as a supporting tool in medical imaging, by Knodel et al. Comput. Visual Sci. (2018).

Abstract:

Radiologic evaluation of images from computed tomography (CT) or magnetic resonance imaging for diagnostic purposes is based on the analysis of single slices, occasionally supplemented with 3D reconstructions as well as surface- or volume-rendered images. However, due to the complexity of anatomical or pathological structures in biomedical imaging, innovative visualization techniques are required to display morphological characteristics three-dimensionally. Virtual reality is a modern way of representing visual data: the observer has the impression of being “inside” a virtual surrounding, which is referred to as immersive imaging. Such techniques are currently used in technical applications, e.g. in the automobile industry. Our aim is to introduce a workflow realized within one simple program which processes common image stacks from CT, produces 3D volume and surface reconstruction and rendering, and finally feeds the data into a virtual reality device equipped with a motion head tracking cave automatic virtual environment system. Such techniques have the potential to augment the possibilities of non-invasive medical imaging, e.g. for surgical planning or educational purposes, adding another dimension for an advanced understanding of complex anatomical and pathological structures. To this end, the reconstructions are based on advanced mathematical techniques, and the corresponding grids that we can export are intended to form the basis for simulations of mathematical models of the pathogenesis of different diseases.
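
The front end of such a workflow, reading a CT image stack, assembling a volume, reconstructing a surface and exporting a grid that a VR viewer or a simulation code could ingest, can be approximated with standard Python tooling. The sketch below is a hedged stand-in for the authors' single program, assuming pydicom for the stack, scikit-image for the isosurface and meshio for the exported grid; file names and thresholds are placeholders.

```python
from pathlib import Path
import numpy as np
import pydicom
from skimage import measure
import meshio

def load_ct_stack(dicom_dir):
    """Read a directory of DICOM slices into a single 3D volume in Hounsfield units."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order along z
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    # Convert stored values to Hounsfield units using the DICOM rescale tags.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

def export_surface(volume, iso_value, out_file="surface.vtk"):
    """Reconstruct an isosurface and write it as a grid for VR viewing or simulation."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=iso_value)
    meshio.write_points_cells(out_file, verts, [("triangle", faces)])

# volume = load_ct_stack("ct_series/")
# export_surface(volume, iso_value=300)   # e.g. a bone-level threshold in HU
```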