Abstract:
Introduction: Photogrammetric surface scans provide a radiation-free option to assess and classify craniosynostosis. Due to the low prevalence of craniosynostosis and strict restrictions on patient data, clinical data are rare. Synthetic data could support or even replace clinical data for the classification of craniosynostosis, but this has never been studied systematically. Methods: We test combinations of three different synthetic data sources: a statistical shape model (SSM), a generative adversarial network (GAN), and image-based principal component analysis for a convolutional neural network (CNN)-based classification of craniosynostosis. The CNN is trained only on synthetic data but validated and tested on clinical data. Results: The combination of an SSM and a GAN achieved an accuracy of more than 0.96 and an F1-score of more than 0.95 on the unseen test set. The difference compared to training on clinical data was smaller than 0.01. Including a second image modality improved classification performance for all data sources. Conclusion: Without a single clinical training sample, a CNN was able to classify head deformities as accurately as if it had been trained on clinical data. Using multiple data sources was key for a good classification based on synthetic data alone. Synthetic data might play an important future role in the assessment of craniosynostosis.
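How an SSM can serve as a synthetic data source can be illustrated with a short sketch: new plausible shapes are drawn by perturbing the mean shape along the principal components, with weights scaled by the model's eigenvalues. The arrays and dimensions below are illustrative stand-ins, not the model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SSM: mean shape (n_vertices * 3) plus orthonormal components.
n_vertices, n_components = 1000, 10
mean_shape = rng.normal(size=n_vertices * 3)                      # stand-in mean geometry
components = np.linalg.qr(rng.normal(size=(n_vertices * 3, n_components)))[0]
eigenvalues = np.linspace(5.0, 0.5, n_components)                 # per-mode variances

def sample_shape():
    """Draw one synthetic shape: mean + sum_k w_k * sqrt(lambda_k) * pc_k."""
    weights = rng.normal(size=n_components) * np.sqrt(eigenvalues)
    return (mean_shape + components @ weights).reshape(n_vertices, 3)

# A synthetic training set drawn without any clinical sample.
synthetic_training_set = np.stack([sample_shape() for _ in range(32)])
print(synthetic_training_set.shape)  # (32, 1000, 3)
```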
Abstract:
PURPOSE: Synthetic realistic-looking bronchoscopic videos are needed to develop and evaluate depth estimation methods as part of investigating a vision-based bronchoscopic navigation system. To generate these synthetic videos under the circumstance where access to real bronchoscopic images/image sequences is limited, we need to create various realistic-looking image textures of the airway inner surface of large size from a small number of real bronchoscopic image texture patches. METHODS: A generative adversarial network (GAN)-based method is applied to create realistic-looking textures of the airway inner surface by learning from a limited number of small texture patches from real bronchoscopic images. By applying a purely convolutional architecture without any fully connected layers, this method allows the production of textures of arbitrary size. RESULTS: Authentic image textures of the airway inner surface are created. An example of the synthesized textures and two frames of the resulting bronchoscopic video are shown. The necessity and sufficiency of the generated textures as image features for further depth estimation methods are demonstrated. CONCLUSIONS: The method can generate textures of the airway inner surface that meet the requirements for the texture itself and for the resulting bronchoscopic videos, including "realistic-looking," "long-term temporal consistency," "sufficient image features for depth estimation," and "large size and variety of synthesized textures." In addition, the method benefits from the easy accessibility of the required data source. A further validation of this approach is planned by utilizing the realistic-looking bronchoscopic videos with textures generated by this method as training and test data for depth estimation networks.
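The key architectural point, that a purely convolutional generator without fully connected layers can emit textures of arbitrary size, can be demonstrated with a minimal sketch: a stack of "same"-padded convolutions maps a noise field of any spatial size to an output of the same size. The random kernels below are stand-ins for trained generator weights, not the trained network itself.

```python
import numpy as np

rng = np.random.default_rng(42)
kernels = [rng.normal(size=(3, 3)) for _ in range(3)]  # stand-in conv weights

def conv2d_same(image, kernel):
    """Naive 'same'-padded 2D convolution (loop-based, for clarity only)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,) * 2, (kw // 2,) * 2), mode="reflect")
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def generator(noise):
    """Purely convolutional stack: output size follows input size."""
    x = noise
    for k in kernels:
        x = np.tanh(conv2d_same(x, k))   # nonlinearity between conv layers
    return x

small = generator(rng.normal(size=(16, 16)))
large = generator(rng.normal(size=(64, 64)))   # same weights, larger texture
print(small.shape, large.shape)  # (16, 16) (64, 64)
```

Because no layer fixes the spatial dimensions, the same weights synthesize a texture of whatever size the input noise field has.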
Abstract:
The application of machine learning approaches in medical technology is gaining more and more attention. Due to the high restrictions for collecting intraoperative patient data, synthetic data is increasingly used to support the training of artificial neural networks. We present a pipeline to create a statistical shape model (SSM) using 28 segmented clinical liver CT scans. Our pipeline consists of four steps: data preprocessing, rigid alignment, template morphing, and statistical modeling. We compared two different template morphing approaches, Laplace-Beltrami-regularized projection (LBRP) and nonrigid iterative closest points translational (N-ICP-T), and evaluated both morphing approaches and their corresponding shape model performance using six metrics. LBRP achieved a smaller mean vertex-to-nearest-neighbor distance (2.486 ± 0.897 mm) than N-ICP-T (5.559 ± 2.413 mm). Generalization and specificity errors for LBRP were consistently lower than those of N-ICP-T. The first principal components of the SSM showed realistic anatomical variations. The performance of the SSM was comparable to a state-of-the-art model.
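The final statistical-modeling step can be sketched as a PCA over the morphed training shapes in vertex correspondence: stack each shape into one row vector, subtract the mean, and take the singular value decomposition. The random arrays below stand in for the 28 morphed liver meshes; vertex count and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for 28 morphed meshes in vertex correspondence (n_vertices x 3 each).
n_shapes, n_vertices = 28, 500
shapes = rng.normal(size=(n_shapes, n_vertices * 3))

# Statistical modeling: mean shape + principal components via SVD.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

eigenvalues = S**2 / (n_shapes - 1)   # variance captured by each mode
components = Vt                        # rows are principal components

# Any training shape is reconstructed exactly from the retained modes.
coeffs = centered @ components.T
recon = mean_shape + coeffs @ components
print(np.allclose(recon, shapes))  # True
```

The leading rows of `components`, scaled by their eigenvalues, correspond to the "first principal components" whose anatomical plausibility is evaluated in the abstract.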
Abstract:
Introduction: 3D surface scan-based diagnosis of craniosynostosis is a promising radiation-free alternative to traditional diagnosis using computed tomography. The cranial index (CI) and the cranial vault asymmetry index (CVAI) are well-established clinical parameters that are widely used. They also have the benefit of being easily adaptable for automatic diagnosis without the need for extensive preprocessing. Methods: We propose a multi-height-based classification approach that uses CI and CVAI in different height layers and compare it to the initial approach using only one layer. We use ten-fold cross-validation and test seven different classifiers. The dataset of 504 patients consists of three types of craniosynostosis and a control group consisting of healthy and non-synostotic subjects. Results: The multi-height-based approach improved classification for all classifiers. The k-nearest neighbors classifier scored best with a mean accuracy of 89 % and a mean F1-score of 0.75. Conclusion: Taking height into account is beneficial for the classification. Based on accepted and widely used clinical parameters, this might be a step towards an easy-to-understand and transparent classification approach for both physicians and patients.
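The feature construction can be sketched in a few lines. CI is conventionally the ratio of maximal head width to maximal head length in percent; CVAI relates the two oblique cranial diagonals (definitions vary in the literature, here the difference is divided by the shorter diagonal). The multi-height variant simply stacks these two indices over several height layers. All measurement values below are hypothetical.

```python
import numpy as np

def cranial_index(width_mm, length_mm):
    """CI: maximal head width over maximal head length, in percent."""
    return 100.0 * width_mm / length_mm

def cvai(diag_a_mm, diag_b_mm):
    """CVAI: relative difference of the two cranial diagonals, in percent.
    Definitions vary; here the difference is divided by the shorter diagonal."""
    longer, shorter = max(diag_a_mm, diag_b_mm), min(diag_a_mm, diag_b_mm)
    return 100.0 * (longer - shorter) / shorter

def multi_height_features(layers):
    """Stack CI and CVAI from several height layers into one feature vector."""
    return np.array([value
                     for width, length, diag_a, diag_b in layers
                     for value in (cranial_index(width, length),
                                   cvai(diag_a, diag_b))])

# Hypothetical (width, length, diagonal A, diagonal B) per height layer, in mm.
layers = [(120.0, 150.0, 140.0, 133.0),
          (118.0, 147.0, 138.0, 132.0)]
print(multi_height_features(layers).round(2))
```

The resulting vector is what a standard classifier such as k-nearest neighbors would consume, one CI/CVAI pair per layer instead of a single pair.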
Abstract:
Optical Coherence Tomography (OCT) is a standard imaging procedure in ophthalmology. OCT Angiography is a promising extension, allowing for fast and non-invasive imaging of the retinal vasculature by analyzing multiple OCT scans acquired at the same location. Local variance is examined and highlighted. Despite its introduction in the clinic, unanswered questions remain when it comes to signal generation. Multi-phase fluids like intralipid, milk-water solutions and human blood cells have been applied in phantom studies, shedding light on some of the mechanisms. The use of hydrogel beads allows for the generation of alternative blood models for OCT and OCT Angiography. Beads were produced in Hannover, their size was measured and their long-term stability was assessed. Then, beads were shipped to Karlsruhe, where OCT imaging resulted in first insights. The hydrogel acts as a diffusion barrier, which enables a clear distinction of bead and fluid when scattering particles were added. Furthermore, the scattering medium below the bead showed increased signal intensity. We conclude that the inside of the bead structure shows enhanced transmission compared to the plasma substitute with dissolved TiO2 surrounding it. Beads were found clumped and deformed after shipping, an issue to be addressed in further investigations. Nevertheless, hydrogel beads are promising as a blood model for OCT Angiography investigations, offering tunable optical parameters within the blood substitute solution.
Abstract:
During cerebral revascularization surgeries, blood flow values help surgeons to monitor the quality of the procedure, e.g., to avoid cerebral hyperperfusion syndrome due to excessively enhanced perfusion. The state-of-the-art technique is the ultrasonic flow probe that has to be placed around the blood vessel. This causes contact between probe and vessel, which, in the worst case, leads to rupture. The recently developed intraoperative indocyanine green (ICG) Quantitative Fluorescence Angiography (QFA) is an alternative technique that overcomes this risk. However, it has been shown by the developer that the calculated flow has deviations. After determining the bolus transit time as the most sensitive parameter in the flow calculation, we propose a new two-step uncertainty reduction method for flow calculation. The first step is to generate more data in each measurement, which results in functions of the parameters. Noise can then be reduced in a second step. Two methods for this step are compared. The first method fits the model for each parameter function separately and calculates flow from the models, while the second one fits multiple parameter functions together. The latter method is proven to perform best by in silico tests. Moreover, this method reduces the deviation of the calculated flow compared to the original QFA, as expected. Our approach can be generally used in all QFA applications using two-point theory. Further development is possible by increasing the dimensionality of the acquired parameter data, which yields even more data for processing in the second step.
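The underlying idea, generating parameter values as functions of a measurement variable and then suppressing noise with a model fit, can be illustrated with a toy example: bolus transit times sampled at many positions along a vessel are fitted with one linear model, and the flow velocity is read off the fitted slope instead of a single noisy two-point difference. The numbers, the linear model, and the noise level are illustrative assumptions, not the QFA model itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy ground truth: bolus front moves at 200 mm/s along the vessel centerline.
velocity_true = 200.0                       # mm/s
positions = np.linspace(0.0, 20.0, 50)      # mm, many sample points per run
transit_true = positions / velocity_true    # s

# Measured transit times are noisy (transit time is the sensitive parameter).
transit_meas = transit_true + rng.normal(scale=0.005, size=positions.size)

# Classic two-point estimate: velocity from the first and last sample only.
v_two_point = (positions[-1] - positions[0]) / (transit_meas[-1] - transit_meas[0])

# Joint fit: one line through all samples; the slope is 1 / velocity.
slope, _ = np.polyfit(positions, transit_meas, 1)
v_joint = 1.0 / slope

print(round(v_two_point, 1), round(v_joint, 1))
```

Because the fit pools all samples, the slope-based estimate is far less sensitive to noise on any individual transit-time reading than the two-point difference.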
Abstract:
PURPOSE: Depth estimation is the basis of 3D reconstruction of the airway structure from 2D bronchoscopic scenes, which can be further used to develop a vision-based bronchoscopic navigation system. This work aims to improve the performance of depth estimation directly from bronchoscopic images by training a depth estimation network on both synthetic and real datasets. METHODS: We propose a cGAN-based network, Bronchoscopic-Depth-GAN (BronchoDep-GAN), to estimate depth from bronchoscopic images by translating bronchoscopic images into depth maps. The network is trained in a supervised way, learning from synthetic textured bronchoscopic image-depth pairs and virtual bronchoscopic image-depth pairs, and simultaneously in an unsupervised way, learning from unpaired real bronchoscopic images and depth maps to adapt the model to real bronchoscopic scenes. RESULTS: Our method is tested on both synthetic and real data. However, the tests on real data are only qualitative, as no ground truth is available. The results show that our network obtains better accuracy in all cases in estimating depth from bronchoscopic images compared to the well-known cGAN pix2pix. CONCLUSIONS: Including virtual and real bronchoscopic images in the training phase of depth estimation networks can improve depth estimation performance on both synthetic and real scenes. Further validation of this work is planned on 3D clinical phantoms. Based on the depth estimation results obtained in this work, the accuracy of locating bronchoscopes with corresponding preoperative CTs will also be evaluated in comparison with the current clinical status.
Abstract:
Purpose: To evaluate the impact of lens opacity on the reliability of optical coherence tomography angiography (OCTA) metrics and to find a vessel caliber threshold that is reproducible in cataract patients. Methods: A prospective cohort study of 31 patients, examining one eye per patient, by applying 3 × 3 mm macular optical coherence tomography angiography before (18.94 ± 12.22 days) and 3 months (111 ± 23.45 days) after uncomplicated cataract surgery. We extracted the superficial (SVC) and deep vascular plexuses (DVC) for further analysis and evaluated changes in image contrast, vessel metrics (perfusion density, flow deficit and vessel-diameter index) and the foveal avascular zone (FAZ). Results: After surgery, the blood flow signal in smaller capillaries was enhanced as image contrast improved. Signal strength correlated to average lens density defined by objective measurement in Scheimpflug images (Pearson's r = –.40, p = .027) and to flow deficit (r = –.70, p < .001). Perfusion density correlated to the signal strength index (r = .70, p < .001). Vessel metrics and FAZ area, except for FAZ area in the DVC, were significantly different after cataract surgery, but the mean change was approximately 3–6%. A stepwise approach in extracting vessels according to their pixel caliber showed that a threshold of > 6 pixels caliber (approximately 20–30 µm) was comparable before and after lens removal. Conclusion: In patients with cataract, OCTA vessel metrics should be interpreted with caution. In addition to signal strength, contrast and pixel properties can serve as supplementary quality metrics to improve the interpretation of OCTA metrics. Vessels of approximately 20–30 µm in caliber seem to be reproducible.
Abstract:
Imaging distributions of radioactive sources plays a substantial role in nuclear medicine as well as in monitoring nuclear waste and its deposit. Coded Aperture Imaging (CAI) has been proposed as an alternative to parallel-hole or pinhole collimators, but requires image reconstruction as an extra step. Multiple reconstruction methods with varying run time and computational complexity have been proposed, yet no quantitative comparison between the different reconstruction methods has been carried out so far. This paper focuses on a comparison based on three sets of hot-rod phantom images captured with an experimental γ-camera consisting of a Tungsten-based MURA mask and a 2 mm thick, 256 × 256 pixelated CdTe semiconductor detector coupled to a Timepix readout circuit. The analytical reconstruction methods MURA Decoding, Wiener Filter, and a convolutional Maximum Likelihood Expectation Maximization (MLEM) algorithm were compared to data-driven Convolutional Encoder-Decoder (CED) approaches. The comparison is based on the contrast-to-noise ratio, as it has previously been used to assess reconstruction quality. For the given set-up, MURA Decoding, the most commonly used CAI reconstruction method, provides robust reconstructions despite the assumption of a linear model. For single-image reconstruction, however, MLEM performed best of all analytical reconstruction methods, but took on average 45 times longer than MURA Decoding. The fastest reconstruction method is the Wiener Filter, with a run time 4.3 times shorter than MURA Decoding but mediocre quality. The CED with a specifically tailored training set was able to surpass the most commonly used MURA Decoding on average by a factor between 1.37 and 2.60 at comparable run time.
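MURA Decoding itself reduces to a periodic cross-correlation of the detector image with a decoding pattern derived from the mask. Below is a minimal sketch using the classic quadratic-residue construction for a prime mask order; the small mask order and the point-source detector image are illustrative, not the experimental set-up described above.

```python
import numpy as np

def mura_mask(p):
    """MURA aperture of prime order p (Gottesman-Fenimore construction)."""
    qr = {(i * i) % p for i in range(1, p)}        # quadratic residues mod p
    c = np.array([1 if i in qr else -1 for i in range(p)])
    A = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            if i == 0:
                A[i, j] = 0                        # first row closed
            elif j == 0:
                A[i, j] = 1                        # first column open
            elif c[i] * c[j] == 1:
                A[i, j] = 1
    return A

def decoding_pattern(A):
    """G = +1 where the mask is open, -1 where closed, with G[0, 0] = +1."""
    G = np.where(A == 1, 1.0, -1.0)
    G[0, 0] = 1.0
    return G

def mura_decode(detector_image, G):
    """Periodic cross-correlation via FFT: O = IFFT(FFT(D) * conj(FFT(G)))."""
    return np.fft.ifft2(np.fft.fft2(detector_image) * np.conj(np.fft.fft2(G))).real

p = 13
A = mura_mask(p)
G = decoding_pattern(A)

# A point source at the origin projects the mask pattern onto the detector,
# so decoding the mask itself should recover a sharp peak at the origin.
reconstruction = mura_decode(A, G)
peak = np.unravel_index(np.argmax(reconstruction), reconstruction.shape)
print(peak)
```

The correlation property of the MURA/decoding pair is what makes this method so fast: one FFT-based correlation per image, compared to the iterative updates of MLEM.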
Abstract:
Artificial intelligence technology is trending in nearly every medical area. It offers the possibility for improving analytics, therapy outcome, and user experience during therapy. In dialysis, the application of artificial intelligence as a therapy-individualization tool is led more by start-ups than consolidated players, and innovation in dialysis seems comparably stagnant. Factors such as technical requirements or regulatory processes are important and necessary but can slow down the implementation of artificial intelligence due to missing data infrastructure and undefined approval processes. Current research focuses mainly on analyzing health records or wearable technology to add to existing health data. It barely uses signal data from treatment devices to apply artificial intelligence models. This article, therefore, discusses requirements for signal processing through artificial intelligence in health care and compares these with the status quo in dialysis therapy. It offers solutions for given barriers to speed up innovation with sensor data, opening access to existing and untapped sources, and shows the unique advantage of signal processing in dialysis compared to other health care domains. This research shows that even though the combination of different data is vital for improving patients' therapy, adding signal-based treatment data from dialysis devices to the picture can benefit the understanding of treatment dynamics, improving and individualizing therapy.
Abstract:
Objective: Diagnosis of craniosynostosis using photogrammetric 3D surface scans is a promising radiation-free alternative to traditional computed tomography. We propose a 3D surface scan to 2D distance map conversion enabling the first convolutional neural network (CNN)-based classification of craniosynostosis. Benefits of using 2D images include preserving patient anonymity, enabling data augmentation during training, and a strong under-sampling of the 3D surface with good classification performance. Methods: The proposed distance maps sample 2D images from 3D surface scans using a coordinate transformation, ray casting, and distance extraction. We introduce a CNN-based classification pipeline and compare our classifier to alternative approaches on a dataset of 496 patients. We investigate low-resolution sampling, data augmentation, and attribution mapping. Results: ResNet18 outperformed alternative classifiers on our dataset with an F1-score of 0.964 and an accuracy of 98.4 %. Data augmentation on 2D distance maps increased performance for all classifiers. Under-sampling allowed a 256-fold computation reduction during ray casting while retaining an F1-score of 0.92. Attribution maps showed high amplitudes on the frontal head. Conclusion: We demonstrated a versatile mapping approach to extract a 2D distance map from the 3D head geometry, increasing classification performance, enabling data augmentation during training on 2D distance maps, and the usage of CNNs. We found that low-resolution images were sufficient for a good classification performance. Significance: Photogrammetric surface scans are a suitable craniosynostosis diagnosis tool for clinical practice. Domain transfer to computed tomography seems likely and can further contribute to reducing ionizing radiation exposure for infants.
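The scan-to-image conversion can be approximated with a short sketch: express each surface point in spherical coordinates around a head-centered origin, bin the angles onto a 2D grid, and store the radial distance per bin. This angular-binning shortcut is a simplified stand-in for the coordinate transformation, ray casting, and distance extraction described above, and the ellipsoidal point cloud is a synthetic placeholder for a real head scan.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "head": points on an ellipsoid (stand-in for a 3D surface scan).
n_points = 20000
u = rng.uniform(0, 2 * np.pi, n_points)          # azimuth
v = np.arccos(rng.uniform(-1, 1, n_points))      # polar angle
points = np.stack([90 * np.sin(v) * np.cos(u),   # semi-axes in mm
                   70 * np.sin(v) * np.sin(u),
                   80 * np.cos(v)], axis=1)

def distance_map(points, resolution=(64, 64)):
    """Bin surface points by (azimuth, polar angle); store radius per bin."""
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                   # [-pi, pi)
    polar = np.arccos(z / r)                     # [0, pi]
    ai = ((azimuth + np.pi) / (2 * np.pi) * (resolution[0] - 1)).astype(int)
    pj = (polar / np.pi * (resolution[1] - 1)).astype(int)
    dmap = np.zeros(resolution)
    # Keep the outermost hit per direction; bins without points stay 0.
    np.maximum.at(dmap, (ai, pj), r)
    return dmap

dmap = distance_map(points)
print(dmap.shape)  # (64, 64)
```

The resulting 2D array is exactly the kind of low-resolution, anonymized input a standard 2D CNN such as ResNet18 can consume directly.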