3D printing based on the CT data of a measured object can reproduce both the outer surface structure and the internal surface structure of the object and can be used for reverse engineering. This paper presents a framework for generating 3D printing data directly from CT data. The framework consists of three steps. First, a surface mesh with internal structure is extracted from the CT data. Second, a mesh model accepted by a 3D printer is obtained by generating topological information, removing isolated and non-manifold facets, filling holes, and smoothing the surface. Third, if the target surface mesh contains internal structure or exceeds the maximum build size of the 3D printer, the framework splits the mesh model into several parts and prints them separately. The proposed framework was evaluated on one simulated and two real CT datasets. These experiments showed that the framework is effective in generating 3D printing data directly from CT data and preserves the shape of the original object with high precision.
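The mesh-cleanup step above mentions removing isolated facets. As a minimal illustration of that idea (not the paper's actual code), the sketch below keeps only the largest edge-connected patch of a triangle mesh, treating smaller patches as isolated debris; faces are assumed to be given as vertex-index triples, and `remove_isolated_facets` is a hypothetical name.

```python
from collections import defaultdict

def remove_isolated_facets(faces):
    """Keep only the largest edge-connected component of a triangle mesh.

    faces: list of (i, j, k) vertex-index triples.
    Returns the faces of the largest connected patch; smaller patches
    are treated as isolated facets and dropped.
    """
    # Map each undirected edge to the faces that contain it.
    edge_to_faces = defaultdict(list)
    for f_idx, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_to_faces[frozenset((u, v))].append(f_idx)

    # Depth-first search over face adjacency (faces sharing an edge).
    seen = set()
    components = []
    for start in range(len(faces)):
        if start in seen:
            continue
        comp, stack = [], [start]
        seen.add(start)
        while stack:
            f = stack.pop()
            comp.append(f)
            a, b, c = faces[f]
            for u, v in ((a, b), (b, c), (c, a)):
                for g in edge_to_faces[frozenset((u, v))]:
                    if g not in seen:
                        seen.add(g)
                        stack.append(g)
        components.append(comp)

    largest = max(components, key=len)
    return [faces[f] for f in sorted(largest)]
```

In practice a mesh-processing library would also handle non-manifold facet removal and hole filling; this sketch shows only the connectivity test that distinguishes isolated facets from the main surface.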
Designing volume visualizations that show various structures of interest is critical to the exploratory analysis of volumetric data. The last few years have witnessed dramatic advances in the use of convolutional neural networks for identifying objects in large image collections. Although such machine learning methods have shown superior performance in a number of applications, their direct use in volume visualization has not yet been explored. In this paper, we present a deep-learning-assisted volume visualization approach to depict complex structures that are otherwise challenging for conventional approaches. A significant challenge in designing volume visualizations based on high-dimensional deep features lies in efficiently handling the immense amount of information that deep-learning methods provide. We present a new technique that uses spectral methods to facilitate user interaction with these high-dimensional features, as well as a deep-learning-assisted technique for hierarchically exploring a volumetric dataset. We validated our approach on two electron microscopy volumes and one magnetic resonance imaging dataset.
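The abstract above mentions using spectral methods to make high-dimensional deep features tractable for interaction. A generic way to do this (a sketch of the standard spectral-embedding recipe, not necessarily the authors' exact formulation) is to build an affinity graph over feature vectors and project onto the low eigenvectors of its normalized Laplacian:

```python
import numpy as np

def spectral_embedding(features, n_components=2, sigma=1.0):
    """Project high-dimensional per-sample deep features to a few
    spectral coordinates via eigenvectors of a graph Laplacian.

    features: (n_samples, n_dims) array of deep features.
    Returns an (n_samples, n_components) low-dimensional embedding.
    """
    # Dense Gaussian affinity between all pairs of feature vectors.
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))

    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

    # The smallest non-trivial eigenvectors give the embedding coordinates.
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:1 + n_components]
```

Samples with similar deep features land close together in the embedding, so a user can brush a 2D scatter of spectral coordinates instead of manipulating thousands of raw feature dimensions. For whole volumes, sparse affinities and iterative eigensolvers would replace the dense matrices used here.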
X-ray-based reconstruction methods using anterior-posterior (AP) and lateral (LAT) images assume that the AP and LAT imaging directions are perpendicular. However, it is difficult to maintain a perfectly perpendicular angle between the AP and LAT views when the two images are taken sequentially in real situations. In this study, the robustness of a three-dimensional (3D) whole-spine reconstruction method using AP and LAT planar X-ray images was analyzed by investigating cases in which the AP and LAT images were not taken perpendicularly. Patient-specific 3D spine models of five subjects were reconstructed from AP and LAT X-ray images and 3D template models of the C1 to L5 vertebrae using B-spline free-form deformation (FFD). The shape error, projected-area error, and relative error in length were quantified by comparing the reconstructed model (FFD model) with the reference model (CT model). The results indicated that the reconstruction method can be considered robust when the angular error between the AP and LAT images is small, such as 5°. This study suggests that simple technical guidance for obtaining perpendicular AP and LAT images in real situations can improve the accuracy of 3D spine reconstruction.
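The abstract above quantifies agreement between the FFD and CT models via a shape error and a relative error in length. A minimal sketch of such metrics is shown below; it uses a mean closest-point distance between vertex sets as a simple stand-in for the paper's shape error, and the function names are illustrative, not taken from the study.

```python
import numpy as np

def shape_error(ffd_pts, ct_pts):
    """Mean closest-point distance from reconstructed (FFD) vertices
    to reference (CT) vertices -- a simple surrogate for the
    point-to-surface shape error.

    ffd_pts, ct_pts: (n, 3) and (m, 3) arrays of vertex coordinates.
    """
    dists = np.linalg.norm(ffd_pts[:, None, :] - ct_pts[None, :, :], axis=-1)
    return dists.min(axis=1).mean()

def relative_length_error(ffd_len, ct_len):
    """Relative error of a reconstructed length measurement
    (e.g. a vertebral dimension) against the CT reference."""
    return abs(ffd_len - ct_len) / ct_len
```

For example, a reconstructed model rigidly offset from the reference by 0.1 mm yields a shape error of 0.1 mm, and a 1.05 mm measurement against a 1.0 mm reference yields a 5% relative length error.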