3D printing based on the CT data of a measured object can reproduce both the outer and the internal surface structure of the object and can be used for reverse engineering. This paper presents a framework for generating 3D printing data directly from CT data. The framework comprises three steps: first, surface mesh data with internal structure is extracted from the CT data. Second, a mesh model recognizable by a 3D printer is obtained by generating topological information, removing isolated and non-manifold facets, filling holes, and smoothing the surface. Third, if the target surface mesh model contains internal structure or exceeds the maximum object size the 3D printer can print, the framework splits the mesh model into several parts that are printed separately. The proposed framework was evaluated with one simulated and two real CT data sets. These experiments showed that the proposed framework is effective for generating 3D printing data directly from CT data while preserving the shape of the original object model with high precision.
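The mesh-repair step above can be illustrated with a minimal sketch. The following code is not the paper's implementation; it assumes a simple indexed triangle-list representation and a deliberately simplified criterion: a facet that shares no edge with any other facet is treated as isolated, and a facet touching an edge used by more than two facets is treated as non-manifold.

```python
from collections import defaultdict

def edge_usage(triangles):
    """Count how many triangles share each undirected edge."""
    counts = defaultdict(int)
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1
    return counts

def classify_facets(triangles):
    """Split facets into kept, isolated, and non-manifold lists
    based on edge sharing (hypothetical simplified criterion)."""
    counts = edge_usage(triangles)
    kept, isolated, nonmanifold = [], [], []
    for a, b, c in triangles:
        uses = [counts[tuple(sorted(e))] for e in ((a, b), (b, c), (c, a))]
        if all(u == 1 for u in uses):        # shares no edge: isolated facet
            isolated.append((a, b, c))
        elif any(u > 2 for u in uses):       # edge shared by >2 facets
            nonmanifold.append((a, b, c))
        else:
            kept.append((a, b, c))
    return kept, isolated, nonmanifold
```

For example, in the mesh `[(0, 1, 2), (1, 2, 3), (4, 5, 6)]` the first two triangles share edge (1, 2) and are kept, while the third shares no edge and is flagged as isolated. A production pipeline would additionally fill the remaining boundary holes (edges used exactly once) before export.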
Book chapter: Scholl I., Suder S., Schiffer S. (2018) Direct Volume Rendering in Virtual Reality. In: Maier A., Deserno T., Handels H., Maier-Hein K., Palm C., Tolxdorff T. (eds) Bildverarbeitung für die Medizin 2018. Informatik aktuell. Springer Vieweg, Berlin, Heidelberg.
Direct Volume Rendering (DVR) techniques are used to visualize surfaces from 3D volume data sets without computing a 3D geometry. Several surfaces can be classified using a transfer function that assigns optical properties such as color and opacity (RGBα) to the voxel data. Finding a good transfer function that separates specific structures from the volume data set is in general a manual, time-consuming procedure and requires detailed knowledge of the data and the image acquisition technique. In this paper, we present a new Virtual Reality (VR) application based on the HTC Vive headset. One-dimensional transfer functions can be designed in VR while the stereoscopic image pair is continuously rendered through massively parallel GPU-based ray-casting shader techniques. The usability of the VR application is evaluated.
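A one-dimensional transfer function of the kind described above can be sketched as a piecewise-linear mapping from scalar voxel values to RGBα tuples. This is an illustrative stand-in, not the paper's code; the control points below (air transparent, soft tissue semi-transparent red, bone opaque white) are hypothetical, and a real DVR renderer would sample such a function into a lookup texture for the GPU ray-casting shader.

```python
def transfer_function(points):
    """Build a 1D transfer function from (value, (r, g, b, alpha))
    control points, interpolated piecewise-linearly between them."""
    pts = sorted(points)

    def lookup(v):
        if v <= pts[0][0]:
            return pts[0][1]
        if v >= pts[-1][0]:
            return pts[-1][1]
        for (x0, c0), (x1, c1) in zip(pts, pts[1:]):
            if x0 <= v <= x1:
                t = (v - x0) / (x1 - x0)
                return tuple(a + t * (b - a) for a, b in zip(c0, c1))

    return lookup

# Hypothetical control points on an 8-bit intensity range:
tf = transfer_function([(0, (0, 0, 0, 0)),      # air: fully transparent
                        (100, (1, 0, 0, 0.2)),  # soft tissue: faint red
                        (255, (1, 1, 1, 1))])   # bone: opaque white
```

Designing such a function interactively in VR amounts to moving these control points while the ray caster re-renders the stereo pair every frame.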
Radiologic evaluation of images from computed tomography (CT) or magnetic resonance imaging for diagnostic purposes is based on the analysis of single slices, occasionally supplemented by 3D reconstructions as well as surface- or volume-rendered images. However, due to the complexity of anatomical and pathological structures in biomedical imaging, innovative visualization techniques are required to display morphological characteristics three-dimensionally. Virtual reality is a modern tool for representing visual data: the observer has the impression of being “inside” a virtual surrounding, which is referred to as immersive imaging. Such techniques are currently used in technical applications, e.g. in the automobile industry. Our aim is to introduce a workflow, realized within one simple program, that processes common image stacks from CT, produces 3D volume and surface reconstruction and rendering, and finally transfers the data to a virtual reality device equipped with a motion-head-tracking cave automatic virtual environment (CAVE) system. Such techniques have the potential to augment the possibilities of non-invasive medical imaging, e.g. for surgical planning or educational purposes, adding another dimension for an advanced understanding of complex anatomical and pathological structures. To this end, the reconstructions are based on advanced mathematical techniques, and the grids that we can export are intended to form the basis for simulations of mathematical models of the pathogenesis of different diseases.
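The slice-stack-to-surface step of such a workflow can be sketched with a deliberately simple cuberille-style extraction: emit one quad per voxel face that separates "inside" (intensity at or above an iso-value) from "outside". This is an assumed simplification standing in for the advanced reconstruction techniques the abstract refers to (a real pipeline would typically use marching cubes on the CT volume).

```python
def extract_surface_faces(volume, iso):
    """Cuberille-style surface extraction from a stack of slices
    (volume[z][y][x]): return one (voxel, outward direction) pair
    per face separating inside (>= iso) from outside."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])

    def inside(z, y, x):
        return (0 <= z < nz and 0 <= y < ny and 0 <= x < nx
                and volume[z][y][x] >= iso)

    faces = []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not inside(z, y, x):
                    continue
                for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    if not inside(z + dz, y + dy, x + dx):
                        faces.append(((z, y, x), (dz, dy, dx)))
    return faces
```

A single above-threshold voxel yields 6 exposed faces; two adjacent voxels yield 10, since the shared face is interior and not emitted. The resulting face list is the kind of grid that could then be exported for rendering or downstream simulation.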