Computer-assisted surgery prevents complications during peri-acetabular osteotomy


Computer-assisted surgery prevents complications during peri-acetabular osteotomy, Hayashi, S., Hashimoto, S., Matsumoto, T. et al. International Orthopaedics (SICOT) (2018).


The aim of this study was to evaluate the accuracy of a navigation system during curved peri-acetabular osteotomy (CPO).

Forty-seven patients (53 hips) with hip dysplasia were enrolled and underwent CPO with or without navigation. Clinical and radiographic evaluations were performed post-operatively and compared between the navigation and non-navigation groups.

The clinical outcomes were not significantly different between the navigation and non-navigation groups. Furthermore, post-operative reorientation of the acetabular fragment was similar between the two groups. However, the discrepancy between the pre-operative planning line and the post-operative osteotomy line was significantly smaller in the navigation group than in the non-navigation group (p < 0.05). Further, the complication rate was significantly lower in the navigation group (p < 0.001). In conclusion, the accuracy of the osteotomy position was significantly improved by using navigation, and the use of navigation during peri-acetabular osteotomy can therefore help avoid complications.

The measurement of the distance between the 100-mm radius sphere line determined during pre-operative planning and the post-operative iliac bone surface (the error distance) on the (a) coronal and (b) axial planes. (a) The error distance outside the pelvis on the coronal plane (50 − 45.5 mm = 4.5 mm). (b) The error distance inside the pelvis on the axial plane (57.8 − 50 mm = 7.8 mm).

Total knee arthroplasties from the origin to navigation: history, rationale, indications

Review article Total knee arthroplasties from the origin to navigation: history, rationale, indications, Saragaglia, Rubens-Duval, Gaillot, et al. International Orthopaedics (SICOT) (2018).


Since the early 1970s, total knee arthroplasties have undergone many changes in both their design and their surgical instrumentation. It soon became apparent that to improve prosthesis durability, it was essential to have instruments which allowed them to be fitted reliably and consistently. Despite increasingly sophisticated surgical techniques, preoperative objectives were only met in 75% of cases, which led to the development, in the early 1990s, in Grenoble (France), of computer-assisted orthopaedic surgery for knee prosthesis implantation. In the early 2000s, many navigation systems emerged, some including pre-operative imagery (“CT-based”), others using intra-operative imagery (“fluoroscopy-based”), and yet others with no imagery at all (“imageless”), which soon became the navigation “gold standard”. They use an optoelectronic tracker, markers which are fixed solidly to the bones and instruments, and a navigation workstation (computer), with a control system (e.g. pedal). Despite numerous studies demonstrating the benefit of computer navigation in meeting preoperative objectives, such systems have not yet achieved the success they warrant, for various reasons we will be covering in this article. If the latest navigation systems prove to be as effective as the older systems, they should give this type of technology a well-deserved boost.

Distal femoral cutting guide, fitted with marker

Iatrogenic bone and soft tissue trauma in robotic-arm assisted TKA compared to conventional jig-based TKA

Iatrogenic bone and soft tissue trauma in robotic-arm assisted total knee arthroplasty compared to conventional jig-based total knee arthroplasty: A prospective cohort study and validation of a new classification system, by Kayani et al. Arthroplasty (2018) in press, accepted manuscript.


The objective of this study was to compare macroscopic bone and soft tissue injury between robotic-arm assisted total knee arthroplasty (RA-TKA) and conventional jig-based total knee arthroplasty (CJ-TKA), and create a validated classification system for reporting iatrogenic bone and periarticular soft tissue injury following TKA.

This study included 30 consecutive CJ-TKAs followed by 30 consecutive RA-TKAs performed by a single surgeon. Intraoperative photographs of the femur, tibia, and periarticular soft tissues were taken prior to implantation of the prostheses. Based on these assessments, a macroscopic soft tissue injury (MASTI) classification system was developed to grade iatrogenic soft tissue injuries. Inter- and intra-observer validity of the proposed classification system was assessed.

Patients undergoing RA-TKA had reduced medial soft tissue injury in both passively correctible (p<0.05) and non-correctible varus deformities (p<0.05); more pristine femoral (p<0.05) and tibial (p<0.05) bone resection cuts; and improved MASTI scores compared with CJ-TKA (p<0.05). There was high inter-observer (ICC 0.92 [95% CI: 0.88–0.96], p<0.05) and intra-observer agreement (ICC 0.94 [95% CI: 0.92–0.97], p<0.05) for the proposed MASTI classification system. In conclusion, there is reduced bone and periarticular soft tissue injury in patients undergoing RA-TKA compared with CJ-TKA. The proposed MASTI classification system is a reproducible grading scheme for describing iatrogenic bone and soft tissue injury in TKA.

A frame of 3D printing data generation method extracted from CT data


A Frame of 3D Printing Data Generation Method Extracted from CT Data, Zhao, S., Zhang, W., Sheng, W. et al. Sens Imaging (2018) 19: 12.


3D printing based on the CT data of a measured object can reproduce both the outer surface structure and the internal surface structure of the object, and can be used for reverse engineering. This paper presents a frame for generating 3D printing data directly from CT data. The frame comprises three steps: first, surface mesh data with internal structure is extracted from the CT data. Second, a mesh model recognizable by a 3D printer is obtained by generating topological information, removing isolated and non-manifold facets, filling holes, and smoothing the surface. Third, if the target surface mesh model has internal structure or exceeds the maximum object size the 3D printer can handle, the frame splits the mesh model into several parts and prints them separately. The proposed frame was evaluated with one simulated and two real CT datasets. These experiments showed that the frame effectively generates 3D printing data directly from CT data and preserves the shape of the original object with high precision.
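The mesh-cleanup step above hinges on edge–face topology: a facet sharing no edge with any other facet is isolated, and an edge used by more than two facets is non-manifold. A minimal sketch of these two checks on a triangle-list mesh follows; the function names and the index-triple mesh representation are illustrative, not the authors' implementation.

```python
from collections import defaultdict

def edge_face_map(faces):
    """Map each undirected edge (sorted vertex pair) to the faces using it."""
    e2f = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            e2f[tuple(sorted(e))].append(fi)
    return e2f

def clean_mesh(faces):
    """Drop isolated facets (no shared edge with another face) and
    report non-manifold edges (used by more than two faces)."""
    e2f = edge_face_map(faces)
    non_manifold = [e for e, fs in e2f.items() if len(fs) > 2]
    kept = []
    for a, b, c in faces:
        shared = any(
            len(e2f[tuple(sorted(e))]) > 1
            for e in ((a, b), (b, c), (c, a))
        )
        if shared:
            kept.append((a, b, c))
    return kept, non_manifold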

Accuracy analysis of CAS Stryker ADAPT® system for femoral trochanteric fracture using a fluoroscopic navigation system

Open access Accuracy analysis of computer-assisted surgery for femoral trochanteric fracture using a fluoroscopic navigation system: Stryker ADAPT® system, by Takai et al. Injury (2018).


ADAPT is a fluoroscopic computer-assisted surgery system that intraoperatively displays the tip-to-head-surface distance (TSD), the distance from the tip of the screw to the surface of the femoral head, and the tip-apex distance (TAD) advocated by Baumgaertner et al. This study evaluated the accuracy of ADAPT.

Patients and Methods
A total of 55 patients operated on with ADAPT between August 2016 and March 2017 were included as subjects. TSD and TAD were measured postoperatively using computed tomography (CT) and X-rays; the intraclass correlation coefficient (ICC) of these measurements was checked in advance. The error was defined as the difference between the postoperative measurements and the intraoperative ADAPT values. Summary statistics, root mean square errors (RMSEs), and correlations were evaluated.

ICC was 0.94 [95% CI: 0.90–0.96] for TSD and 0.99 [95% CI: 0.98–0.99] for TAD. The mean error was −0.35 mm (range, −1.83 to 1.12 mm) for TSD and +0.63 mm (range, −5.65 to 4.59 mm) for TAD. RMSE was 0.63 mm for TSD and 1.53 mm for TAD. Pearson's correlation coefficient was 0.79 [95% CI: 0.66–0.87] for TSD and 0.83 [95% CI: 0.72–0.89] for TAD. There were no adverse events with ADAPT use.
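The mean error and RMSE reported above summarize the same per-case error (postoperative value minus intraoperative ADAPT reading) in two ways: the mean captures systematic bias, the RMSE the overall spread. A minimal sketch with hypothetical measurement lists (not the study's data):

```python
import math

def accuracy_stats(intraop, postop):
    """Per-case error = postoperative (CT/X-ray) value minus the
    intraoperative ADAPT reading; summarized as mean error and RMSE."""
    errors = [post - intra for intra, post in zip(intraop, postop)]
    mean_err = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mean_err, rmse
```

Note that two symmetric errors (e.g. +1 mm and −1 mm) give a mean error of zero but a nonzero RMSE, which is why both are reported.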

ADAPT is highly accurate and useful in guiding surgeons in properly positioning the screws.

Development of a vibration haptic simulator for shoulder arthroplasty


Development of a vibration haptic simulator for shoulder arthroplasty, Kusins, J.R., Strelzow, J.A., LeBel, ME. et al. Int J CARS (2018).


Glenoid reaming is a technically challenging step during shoulder arthroplasty that could possibly be learned during simulation training. Creation of a realistic simulation using vibration feedback in this context is innovative. Our study focused on the development and internal validation of a novel glenoid reaming simulator for potential use as a training tool.

Vibration and force profiles associated with glenoid reaming were quantified during a cadaveric experiment. Subsequently, a simulator was fabricated utilizing a haptic vibration transducer with high- and low-fidelity amplifiers; system calibration was performed matching vibration peak–peak values for both amplifiers. Eight experts performed simulated reaming trials. The experts were asked to identify isolated layer profiles produced by the simulator. Additionally, experts’ efficiency to successfully perform a simulated glenoid ream based solely on vibration feedback was recorded.

Cadaveric experimental cartilage reaming produced lower vibrations than reaming of subchondral and cancellous bone (p ≤ 0.03). Gain calibration of a lower-fidelity (3.5 g pk–pk, 0.36 g RMS) and a higher-fidelity (3.4 g pk–pk, 0.33 g RMS) amplifier resulted in values similar to the cadaveric experimental benchmark (3.5 g pk–pk, 0.30 g RMS). When identifying random tissue layer samples, experts were correct 52 ± 9% of the time, and the success rate varied with tissue type (p = 0.003). During simulated reaming, the experts stopped at the targeted subchondral bone with a success rate of 78 ± 24%. The fidelity of the simulation had no effect on accuracy, applied force, or reaming time (p > 0.05); however, the applied force tended to increase with trial number (p = 0.047).
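The peak-to-peak and RMS amplitudes used for the amplifier calibration above are simple functions of a sampled vibration trace. A minimal sketch (the function name and the synthetic signal are illustrative, not the study's processing pipeline):

```python
import math

def vibration_metrics(signal):
    """Peak-to-peak and RMS amplitude of a sampled vibration trace
    (in units of g), the two quantities matched during gain calibration."""
    pk_pk = max(signal) - min(signal)
    rms = math.sqrt(sum(x * x for x in signal) / len(signal))
    return pk_pk, rms
```

For a pure sinusoid the RMS is the amplitude divided by the square root of two, so a 2 g pk–pk tone has an RMS of about 0.71 g; real reaming vibration is broadband, which is why both metrics were matched.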

Development of the glenoid reaming simulator, coupled with expert evaluation, furthered our understanding of the role of haptic vibration feedback during glenoid reaming. This study was the first to (1) propose, develop and examine simulated glenoid reaming, and (2) explore the use of haptic vibration feedback in the realm of shoulder arthroplasty.

A deconstructed view of the components that constitute the glenoid reaming simulator (GRS). A haptic vibration transducer produced vibrations at the glenoid–reamer interface. A load cell was instrumented between the transducer's housing unit and its base to detect the force applied by the user while interacting with the GRS. The surgical reamer was fitted with a non-fluted, non-wearing reaming tip to eliminate the reamer's material-removal functionality. Collectively, these components allow the user to 'ream' the rapid-prototyped glenoid component while the haptic vibration transducer generates specific vibration profiles dependent on the applied force and the layer being reamed.

IMHOTEP: virtual reality framework for surgical applications


IMHOTEP: virtual reality framework for surgical applications, Pfeiffer, M., Kenngott, H., Preukschas, A. et al. Int J CARS (2018).


The data which is available to surgeons before, during and after surgery is steadily increasing in quantity as well as diversity. When planning a patient's treatment, this large amount of information can be difficult to interpret. To aid in processing the information, new methods need to be found to present multimodal patient data, ideally combining textual, image, temporal and 3D data in a holistic and context-aware system.

We present an open-source framework which allows handling of patient data in a virtual reality (VR) environment. By using VR technology, the workspace available to the surgeon is maximized and 3D patient data is rendered in stereo, which increases depth perception. The framework organizes the data into workspaces and contains tools which allow users to control, manipulate and enhance the data. Due to the framework’s modular design, it can easily be adapted and extended for various clinical applications.

The framework was evaluated by clinical personnel (77 participants). The majority of the group stated that a complex surgical situation is easier to comprehend by using the framework, and that it is very well suited for education. Furthermore, the application to various clinical scenarios—including the simulation of excitation propagation in the human atrium—demonstrated the framework’s adaptability. As a feasibility study, the framework was used during the planning phase of the surgical removal of a large central carcinoma from a patient’s liver.

The clinical evaluation showed a large potential and high acceptance for the VR environment in a medical context. The various applications confirmed that the framework is easily extended and can be used in real-time simulation as well as for the manipulation of complex anatomical structures.

IMHOTEP architecture overview. The framework builds on the Unity3D engine. It uses ITK to parse DICOM data, reads Blender3D surface meshes and communicates with VR hardware through OpenVR. The various data types can be manipulated by tools and visualized using various methods. Demanding tasks can be outsourced to separate threads. Asynchronous tasks can communicate across modules by firing events. Tested with Unity3D version 2017.2, SimpleITK version 1.0.1, OpenVR/SteamVR version 1.2.1 and Blender3D version 2.78.

Smartphone-assisted minimally invasive neurosurgery


Open access paper Smartphone-assisted minimally invasive neurosurgery, by Mandel et al. Journal of Neurosurgery (2018)


Advances in video and fiber optics since the 1990s have led to the development of several commercially available high-definition neuroendoscopes. This technological improvement, however, has been surpassed by the smartphone revolution. With the increasing integration of smartphone technology into medical care, the introduction of these high-quality computerized communication devices with built-in digital cameras offers new possibilities in neuroendoscopy. The aim of this study was to investigate the usefulness of smartphone-endoscope integration in performing different types of minimally invasive neurosurgery.

The authors present a new surgical tool that integrates a smartphone with an endoscope by use of a specially designed adapter, thus eliminating the need for the video system customarily used for endoscopy. The authors used this novel combined system to perform minimally invasive surgery on patients with various neuropathological disorders, including cavernomas, cerebral aneurysms, hydrocephalus, subdural hematomas, contusional hematomas, and spontaneous intracerebral hematomas.

The new endoscopic system featuring smartphone-endoscope integration was used by the authors in the minimally invasive surgical treatment of 42 patients. All procedures were successfully performed, and no complications related to the use of the new method were observed. The quality of the images obtained with the smartphone was high enough to provide adequate information to the neurosurgeons, as smartphone cameras can record images in high definition or 4K resolution. Moreover, because the smartphone screen moves along with the endoscope, surgical mobility was enhanced with the use of this method, facilitating more intuitive use. In fact, this increased mobility was identified as the greatest benefit of the use of the smartphone-endoscope system compared with the use of the neuroendoscope with the standard video set.

Minimally invasive approaches are the new frontier in neurosurgery, and technological innovation and integration are crucial to ongoing progress in the application of these techniques. The use of smartphones with endoscopes is a safe and efficient new method of performing endoscope-assisted neurosurgery that may increase surgeon mobility and reduce equipment costs.

Intraventricular neuroendoscopy with the use of a smartphone for aqueductal stenosis in a 6-month-old male infant, in whom an abnormal progressive increase in head circumference had developed since birth. A: Photograph obtained during surgery with an MR image of the brain demonstrating hydrocephalus due to idiopathic aqueductal stenosis (red arrow in inset). The endoscopic procedure is performed in a standard manner except for the addition of the iPhone attached to the endoscope, which provided a viewing screen that allowed the surgeon’s gaze to remain on the surgical field. B–E: The iPhone provided the surgeon with endoscopic views of the foramen of Monro (B), an ideal spot for fenestration (C and D), and the final aspect of the field (E). The 4K camera resolution allows adequate visualization of perforating arteries, thus allowing the surgeon to avoid causing inadvertent injuries. After this surgery the patient progressed well with no need for additional procedures.

Guided pelvic resections in tumor surgery


Guided Pelvic Resections in Tumor Surgery, Alexander, John H.; Mayerson, Joel L.; Scharschmidt, Thomas J. Techniques in Orthopaedics (Publish Ahead of Print)


Primary bone sarcomas of the pelvis are among the more challenging pathologies treated by orthopedic oncologists. In particular, their anatomic complexity contributes to delays in diagnosis and high rates of positive margins, with associated high rates of local recurrence, all contributing to poor outcomes in this patient population. Computer-assisted surgery in the form of navigation and patient-specific instrumentation has shown promise in other fields of orthopedics. Intuitively, in an effort to improve tumor resections and oncologic outcomes, surgeons have been working to apply these advances to orthopedic oncology. Early studies have demonstrated benefits from guided pelvic resections, with improved resection accuracy, fewer positive margins and decreased rates of local recurrence. Although these techniques are promising and will likely become an essential tool for orthopedic oncologists, surgeons must understand the limitations and costs associated with each technology before adopting it blindly.

Preoperative computed tomography and magnetic resonance imaging were used to create a virtual resection plane and a representative 3-dimensional model (3D Systems, Rock Hill, SC). A, Preoperative virtual 3-dimensional model of the sacral-based chondrosarcoma with the associated resection planes and planned post-resection defect. B, Virtual representation of the 3D-printed resection guides and their intraoperative placement. C, Stereolithographic 3D-printed pelvic model (3D Systems, Rock Hill, SC) made from a photopolymer resin with the patient-specific sacral mass, vascular anatomy, and resection planes.

A deep learning approach for pose estimation from volumetric OCT data


A deep learning approach for pose estimation from volumetric OCT data, by Gessert, Schlüter, & Schlaefer, Medical Image Analysis (2018) 46:162-179


Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images’ 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label’s resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively.
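The position and orientation errors reported above compare a predicted 6D pose against the hexapod ground truth. A minimal sketch of such a comparison follows; the function name and the flat (x, y, z, rx, ry, rz) pose layout are illustrative, and treating orientation error as per-axis absolute angle differences is a simplification of a full rotation-composition metric, not the authors' evaluation code.

```python
import math

def pose_errors(pred, true):
    """Position error (Euclidean distance, same units as the input) and
    per-axis absolute orientation error (degrees) between a predicted
    and a ground-truth 6D pose given as (x, y, z, rx, ry, rz)."""
    position_error = math.dist(pred[:3], true[:3])
    orientation_errors = [abs(p - t) for p, t in zip(pred[3:], true[3:])]
    return position_error, orientation_errors
```

Averaging these errors over a held-out test set yields summary figures of the kind quoted in the abstract (mean position and orientation error).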

Graphical abstract: A deep learning-based method for pose estimation from OCT volumes is proposed. Learning from volumes with 3D CNNs outperforms depth-map approaches for tracking. 3D CNNs learn to exploit features inside of markers for pose estimation. Inception3D combines efficient design principles to achieve high accuracy.