Abstract:
In recent years, synthetic Computed Tomography (CT) images generated from Magnetic Resonance (MR) or Cone Beam Computed Tomography (CBCT) acquisitions have been shown to be comparable to real CT images in terms of dose computation for radiotherapy simulation. However, until now, there has been no independent strategy to assess the quality of each synthetic image in the absence of ground truth. In this work, we propose a Deep Learning (DL)-based framework to predict the accuracy of synthetic CT in terms of Mean Absolute Error (MAE) without the need for a ground truth (GT). The proposed algorithm generates a volumetric map as an output, informing clinicians of the predicted MAE slice by slice. A cascading multi-model architecture was used to deal with the complexity of the MAE prediction task. The workflow was trained and tested on two cohorts of head and neck cancer patients with different imaging modalities: 27 MR and 33 CBCT scans. The algorithm evaluation revealed accurate HU prediction (a median absolute prediction deviation of 4 HU for CBCT-based synthetic CTs and 6 HU for MR-based synthetic CTs), with discrepancies that do not affect the clinical decisions made on the basis of the proposed estimation. The workflow exhibited no systematic error in MAE prediction. This work represents a proof of concept for the feasibility of synthetic CT evaluation in daily clinical practice, and it paves the way for future patient-specific quality assessment strategies.
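For reference, the quantity the framework learns to predict is the slice-wise mean absolute error between the synthetic and a reference CT. A minimal numpy sketch of that evaluation metric (the function name and body-mask convention are ours, not from the paper):

```python
import numpy as np

def slicewise_mae(sct: np.ndarray, ct: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Ground-truth MAE per axial slice in HU, evaluated inside a body mask.

    sct, ct : volumes of identical shape (slices, rows, cols), in HU.
    mask    : boolean body mask; MAE is computed only on masked voxels.
    """
    err = np.abs(sct.astype(np.float64) - ct.astype(np.float64))
    # One mean-absolute-error value per slice, restricted to the mask.
    return np.array([err[z][mask[z]].mean() for z in range(err.shape[0])])
```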
Abstract:
Limited medical image data hinders the training of deep learning (DL) models in the biomedical field. Image augmentation can reduce the data-scarcity problem by generating variations of existing images. However, currently implemented methods require coding, excluding non-programmer users from this opportunity. We therefore present ImageAugmenter, an easy-to-use and open-source module for the 3D Slicer image computing platform. It offers a simple and intuitive interface for applying over 20 simultaneous MONAI Transforms (spatial, intensity, etc.) to medical image datasets, all without programming. ImageAugmenter makes medical image augmentation accessible, enabling a wider range of users to improve the performance of DL models in medical image analysis by increasing the number of samples available for training.
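To illustrate the kind of pipeline the module exposes through its interface, here is a minimal sketch using MONAI dictionary transforms directly in Python; the specific transforms and parameter values are illustrative, not ImageAugmenter defaults:

```python
import numpy as np
from monai.transforms import Compose, RandFlipd, RandGaussianNoised, RandRotated

# Channel-first 3D volume as a stand-in for a medical image.
image_volume = np.random.rand(1, 64, 64, 64).astype(np.float32)

# Three simultaneous augmentations applied to the "image" key of a sample dict.
augment = Compose([
    RandFlipd(keys=["image"], prob=0.5, spatial_axis=0),
    RandRotated(keys=["image"], range_x=0.26, prob=0.5),              # up to ~15 degrees
    RandGaussianNoised(keys=["image"], prob=0.5, mean=0.0, std=0.05),
])

augmented = augment({"image": image_volume})["image"]
```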
Abstract:
Monocular depth estimation is an important topic in minimally invasive surgery, providing valuable information for downstream applications such as navigation systems. Deep learning for this task requires a large amount of training data to obtain an accurate and robust model. Especially in the medical field, acquiring ground-truth depth information is rarely possible due to patient safety and technical limitations. This problem is being tackled by many approaches, including the use of synthetic data, which raises the question of how well synthetic data allows the prediction of depth information on clinical data. To evaluate this, the synthetic data is used to train and optimize a U-Net, including hyperparameter tuning and augmentation. The trained model is then used to predict depth on clinical images, and the predictions are analyzed for quality and for consistency across the same scene, over time, and across color. The results demonstrate that synthetic data sets can be used for training, with an accuracy of over 77% and an RMSE below 10 mm on the synthetic data set, and that the resulting models perform well on clinical data resembling the synthetic scenes, but also have limitations due to the complexity of clinical environments. Synthetic data sets are thus a promising approach for enabling monocular depth estimation in fields with otherwise lacking data.
Abstract:
Efficient personalized ablation strategies for treating atrial arrhythmias remain challenging. Discrepancies in identifying arrhythmogenic areas using characterization methods, such as late gadolinium enhanced magnetic resonance imaging (LGE-MRI) and electroanatomical mapping, require a comparative analysis of local impedance (LI) and LGE-MRI data. This study aims to analyze correlations as a basis for the improvement of treatment strategies. 16 patients undergoing left atrium (LA) ablation with LGE-MRI acquisition and LI data recording were recruited. LGE-MRI data and LI measurements were normalized to patient- and modality-specific blood pool references. A global mean shape was generated based on all patient geometries, and normalized local impedance (LIN) and LGE-MRI image intensity ratio (IIR) data points were co-registered for comparison. Data analysis comprised intra-patient and inter-patient assessments, evaluating differences in LIN values among datasets categorized by their IIR. Due to substantial deviations in LIN values, even within the same patient and IIR category, discerning the presence or absence of a correlation was challenging, and no statistically significant correlation could be identified. Our findings underscore the necessity for standardized protocols in data acquisition, processing, and comparison to minimize unquantified confounding effects. While the immediate substitution of LI for LGE-MRI seems improbable given the significant LIN variations, this preliminary study lays the groundwork for systematic data acquisition. By ensuring data quality, a meaningful comparison between LI and LGE-MRI data can be facilitated, potentially shaping future strategies for atrial arrhythmia treatment.
Abstract:
In laparoscopic liver surgery, image-guided navigation systems provide crucial support to surgeons by supplying information about tumor and vessel positions. For this purpose, this information from a preoperative CT or MRI scan is overlaid onto the laparoscopic video. One option is to register the preoperative 3D data to 3D-reconstructed laparoscopic data. A robust registration is challenging due to factors like limited field of view, liver deformations, and 3D reconstruction errors. Since in reality various influencing factors always intertwine, it is crucial to analyze their combined effects. This paper assesses registration accuracy under various synthetically simulated influences: patch size, spatial displacement, Gaussian deformations, holes, and downsampling. The objective is to provide insights into the required quality of the intraoperative 3D surface patches. LiverMatch serves as the feature descriptor, and registration employs the RANSAC algorithm. The results of this paper show that ensuring a large field of view of at least 15-20% of the liver surface is necessary, allowing tolerance for less accurate depth estimation.
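The feature-plus-RANSAC machinery described here is standard; a sketch with Open3D, using FPFH features as a generic stand-in for the learned LiverMatch descriptor from the paper (parameter values are illustrative):

```python
import open3d as o3d

def ransac_register(source, target, voxel=4.0):
    """Coarse surface-patch registration: local features + RANSAC.

    FPFH stands in for the learned LiverMatch descriptor; the overall
    pipeline structure (describe, match, robustly estimate) is the same.
    """
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        feat = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, feat

    src, src_f = preprocess(source)
    tgt, tgt_f = preprocess(target)
    # RANSAC over feature correspondences; distance threshold in mm.
    return o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_f, tgt_f, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
```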
Abstract:
The lack of labeled, intraoperative patient data in medical scenarios poses a relevant challenge for machine learning applications. Given the apparent power of machine learning, this study examines how synthetically generated data can help to reduce the amount of clinical data needed for robust liver surface segmentation in laparoscopic images. Here, we report the results of three experiments, using 525 annotated clinical images from 5 patients alongside 20,000 synthetic photo-realistic images from 10 patient models. The effectiveness of using synthetic data is compared to the use of data augmentation, a traditional performance-enhancing technique. For training, a supervised approach employing the U-Net architecture was chosen. The results of these experiments show a progressive increase in accuracy. Our base experiment on clinical data yielded an F1 score of 0.72. Applying data augmentation to this model increased the F1 score to 0.76. Our model pre-trained on synthetic data and fine-tuned with augmented data achieved an F1 score of 0.80, a further increase of 4 percentage points. Additionally, a model evaluation involving k-fold cross-validation highlighted the dependency of the result on the test set. These results demonstrate that leveraging synthetic data can limit the need for additional patient data when increasing segmentation performance.
Abstract:
We investigate the force-frequency relationship (FFR) representation of human ventricular cardiomyocyte models and point out shortcomings, motivated by discrepancies in whole-heart simulations at increased pacing rates. Utilizing the openCARP simulator, simulations across frequencies ranging from 1 Hz to 3 Hz were conducted. Experimental data on healthy human ventricular cardiomyocytes were collected and compared against simulated results. Results show deviations for all models, with Tomek et al. modeling time-sensitive biomarkers best. For example, the ratio of time to peak tension at 2 Hz and 1 Hz is around 85% for experiments, 82% for hybrid data, 95% for Tomek et al., 98% for O'Hara et al., and 138% for ten Tusscher et al. These discrepancies highlight not only the need for careful selection of ionic models, but also the importance of refining ventricular cardiomyocyte models for advancing in-silico cardiac research.
Abstract:
AIMS: The effective refractory period (ERP) is one of the main electrophysiological properties governing arrhythmia, yet ERP personalization is rarely performed when creating patient-specific computer models of the atria to inform clinical decision-making. This study evaluates the impact of integrating clinical ERP measurements into personalized in silico models on arrhythmia vulnerability. METHODS AND RESULTS: Clinical ERP measurements were obtained in seven patients from multiple locations in the atria. Atrial geometries from the electroanatomical mapping system were used to generate personalized anatomical atrial models. The Courtemanche et al. cellular model was adjusted to reproduce patient-specific ERP. Four modeling approaches were compared: homogeneous (A), heterogeneous (B), regional (C), and continuous (D) ERP distributions. Non-personalized approaches (A and B) were based on literature data, while personalized approaches (C and D) were based on patient measurements. Modeling effects were assessed on arrhythmia vulnerability and tachycardia cycle length, with a sensitivity analysis on ERP measurement uncertainty. Mean vulnerability was 3.4 ± 4.0%, 7.7 ± 3.4%, 9.0 ± 5.1%, and 7.0 ± 3.6% for scenarios A-D, respectively. Mean tachycardia cycle length was 167.1 ± 12.6 ms, 158.4 ± 27.5 ms, 265.2 ± 39.9 ms, and 285.9 ± 77.3 ms for scenarios A-D, respectively. Incorporating perturbations to the measured ERP in the range of 2, 5, 10, 20, and 50 ms changed the vulnerability of the model to 5.8 ± 2.7%, 6.1 ± 3.5%, 6.9 ± 3.7%, 5.2 ± 3.5%, and 9.7 ± 10.0%, respectively. CONCLUSION: Increased ERP dispersion had a greater effect on re-entry dynamics than on vulnerability. Inducibility was higher in personalized scenarios compared with scenarios with uniformly reduced ERP; however, this effect was reversed when incorporating fibrosis informed by low-voltage areas. Effective refractory period measurement uncertainty of up to 20 ms only slightly influenced vulnerability. Electrophysiological personalization of atrial in silico models appears essential and requires confirmation in larger cohorts.
Abstract:
The human heart is subject to highly variable amounts of strain during day-to-day activities and needs to adapt to a wide range of physiological demands. This adaptation is driven by an autoregulatory loop that includes both electrical and mechanical components. In particular, mechanical forces are known to feed back into the cardiac electrophysiology system, which can result in pro- and anti-arrhythmic effects. Despite the widespread use of computational modelling and simulation for cardiac electrophysiology research, the majority of in silico experiments ignore this mechano-electric feedback entirely due to the high computational cost associated with solving cardiac mechanics. In this study, we therefore use an electromechanically coupled whole-heart model to investigate the differential and combined effects of electromechanical feedback mechanisms with a focus on their physiological relevance during sinus rhythm. In particular, we consider troponin-bound calcium, the effect of deformation on the tissue diffusion tensor, and stretch-activated channels. We found that activation of the myocardium was only significantly affected when including deformation in the diffusion term of the monodomain equation. Repolarization, on the other hand, was influenced by both troponin-bound calcium and stretch-activated channels, resulting in steeper repolarization gradients in the atria. The latter also caused afterdepolarizations in the atria. Due to its central role in tension development, calcium bound to troponin affected stroke volume and pressure. In conclusion, we found that mechano-electric feedback changes activation and repolarization patterns throughout the heart during sinus rhythm and leads to a markedly more heterogeneous electrophysiological substrate. KEY POINTS: The electrophysiological and mechanical function of the heart are tightly interrelated by excitation-contraction coupling (ECC) in the forward direction and mechano-electric feedback (MEF) in the reverse direction. While ECC is considered in many state-of-the-art computational models of cardiac electromechanics, less is known about the effect of different MEF mechanisms. Accounting for calcium bound to troponin increases stroke volume and delays repolarization. Geometry-mediated MEF leads to more heterogeneous activation and repolarization with steeper gradients. Both effects combine in an additive way. Non-selective stretch-activated channels as an additional MEF mechanism lead to heterogeneous diastolic transmembrane voltage, higher developed tension, and delayed repolarization or afterdepolarizations in highly stretched parts of the atria. The differential and combined effects of these three MEF mechanisms during sinus rhythm activation in a human four-chamber heart model may have implications for arrhythmogenesis, both in terms of substrate (repolarization gradients) and triggers (ectopy).
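Of the three mechanisms, the geometry-mediated one enters through the diffusion term of the monodomain equation. In a common formulation (our notation, consistent with the electromechanics literature rather than quoted from this study), the conductivity tensor is pulled back to the reference configuration using the deformation gradient:

```latex
\beta C_m \frac{\partial V_m}{\partial t}
  = \nabla \cdot \left( \boldsymbol{\sigma}\, \nabla V_m \right) - \beta I_{\mathrm{ion}},
\qquad
\boldsymbol{\sigma} \;\to\; J\, \mathbf{F}^{-1} \boldsymbol{\sigma}\, \mathbf{F}^{-T},
\quad J = \det \mathbf{F},
```

so that local stretch directly alters conduction, whereas stretch-activated channels instead contribute an additional current to the ionic term.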
Abstract:
Objective. 3D localization of gamma sources has the potential to improve the outcome of radio-guided surgery. The goal of this paper is to analyze the localization accuracy for point-like sources with a single coded aperture camera. Approach. We both simulated and measured a point-like 241Am source at 17 positions distributed within the field of view of an experimental gamma camera. The setup includes a 0.11 mm thick tungsten sheet with a MURA mask of rank 31 and pinholes of 0.08 mm in diameter, and a detector based on the photon-counting readout circuit Timepix3. Two methods were compared to estimate the 3D source position: an iterative search including either a symmetric Gaussian fitting or an exponentially modified Gaussian (EMG) fitting, and a center-of-mass method. Main results. Considering the decreasing axial resolution with source-to-mask distance, the EMG improved the results by a factor of 4 compared to the Gaussian fitting based on the simulated data. Overall, we obtained a mean localization error of 0.77 mm on the simulated and 2.64 mm on the experimental data in the imaging range of 20−100 mm. Significance. This paper shows that despite the low axial resolution, point-like sources in the near field can be localized as well as with more sophisticated imaging devices such as stereo cameras. The influence of the source size and the photon count on the imaging and localization accuracy remains an important issue for further research.
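For readers unfamiliar with the EMG model: it is a Gaussian convolved with an exponential decay, which captures the asymmetric axial blur. A sketch of fitting it to an axial contrast profile with SciPy (synthetic stand-in data; initial guesses are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(x, a, mu, sigma, lam):
    """Exponentially modified Gaussian: a Gaussian blurred by an exponential tail."""
    return a * (lam / 2) * np.exp((lam / 2) * (2 * mu + lam * sigma**2 - 2 * x)) \
             * erfc((mu + lam * sigma**2 - x) / (np.sqrt(2) * sigma))

# Fit the contrast-vs-depth profile and use the fitted location parameters as
# the axial source position estimate. The data here is a synthetic stand-in.
z = np.linspace(20, 100, 161)
cnr = emg(z, 50.0, 55.0, 4.0, 0.2) + np.random.normal(0, 0.2, z.size)
popt, _ = curve_fit(emg, z, cnr, p0=[cnr.max(), z[np.argmax(cnr)], 5.0, 0.1])
```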
Abstract:
Background: Computer models for simulating cardiac electrophysiology are valuable tools for research and clinical applications. Traditional reaction-diffusion (RD) models used for these purposes are computationally expensive. While eikonal models offer a faster alternative, they are not well-suited to study cardiac arrhythmias driven by reentrant activity. The present work extends the diffusion-reaction eikonal alternant model (DREAM), incorporating conduction velocity (CV) restitution for simulating complex cardiac arrhythmias. Methods: The DREAM modifies the fast iterative method to model cyclical behavior, dynamic boundary conditions, and frequency-dependent anisotropic CV. Additionally, the model alternates with an approximated RD model, using a detailed ionic model for the reaction term and a triple-Gaussian to approximate the diffusion term. The DREAM and monodomain models were compared, simulating reentries in 2D manifolds with different resolutions. Results: The DREAM produced similar results across all resolutions, while experiments with the monodomain model failed at lower resolutions. CV restitution curves obtained using the DREAM closely approximated those produced by the monodomain simulations. Reentry in 2D slabs yielded similar results in vulnerable window and mean reentry duration for low CV in both models. In the left atrium, most inducing points identified by the DREAM were also present in the high-resolution monodomain model. DREAM's reentry simulations on meshes with an average edge length of 1600 µm were 40x faster than monodomain simulations at 200 µm. Conclusion: This work establishes the mathematical foundation for using the accelerated DREAM simulation method for cardiac electrophysiology. Cardiac research applications are enabled by a publicly available implementation in the openCARP simulator.
Abstract:
Background and Aims: Patients with persistent atrial fibrillation (AF) experience 50% recurrence despite pulmonary vein isolation (PVI), and no consensus is established for second treatments. The aim of our i-STRATIFICATION study is to provide evidence for stratifying patients with AF recurrence after PVI to optimal pharmacological and ablation therapies, through in-silico trials. Methods: A cohort of 800 virtual patients, with variability in atrial anatomy, electrophysiology, and tissue structure (low voltage areas, LVA), was developed and validated against clinical data, from ionic currents to ECG. Virtual patients presenting AF post-PVI underwent 13 secondary treatments. Results: Sustained AF developed in 522 virtual patients after PVI. Second ablation procedures involving left atrial ablation alone showed 55% efficacy, only succeeding in small right atria (<60 mL). When additional cavo-tricuspid isthmus ablation was considered, the Marshall-Plan sufficed (66% efficacy) for small left atria (<90 mL). For larger left atria, a more aggressive ablation approach was required, such as an anterior mitral line (75% efficacy) or posterior wall isolation plus mitral isthmus ablation (77% efficacy). Virtual patients with LVA greatly benefited from LVA ablation in the left and right atria (100% efficacy). Conversely, in the absence of LVA, synergistic ablation and pharmacotherapy could terminate AF. In the absence of ablation, the patient's ionic current substrate modulated the response to antiarrhythmic drugs, with the inward currents being critical for optimal stratification to amiodarone or vernakalant. Conclusion: In-silico trials identify optimal strategies for AF treatment based on virtual patient characteristics, evidencing the power of human modelling and simulation as a clinical assisting tool.
Abstract:
Simulation models and artificial intelligence (AI) are widely used to address healthcare and biomedical engineering problems. Both approaches have shown promising results in the analysis and optimization of healthcare processes. Therefore, the combination of simulation models and AI could provide a strategy to further boost the quality of health services. In this work, a systematic review of studies applying hybrid simulation-AI approaches to address healthcare management challenges was carried out. Scopus, Web of Science, and PubMed databases were screened by independent reviewers. The main strategies to combine simulation and AI, as well as the major healthcare application scenarios, were identified and discussed. Moreover, tools and algorithms to implement the proposed approaches were described. Results showed that machine learning appears to be the most employed AI strategy in combination with simulation models, which mainly rely on agent-based and discrete-event systems. The scarcity and heterogeneity of the included studies suggested that a standardized framework to implement hybrid machine learning-simulation approaches in healthcare management is yet to be defined. Future efforts should aim to use these approaches to design novel intelligent in-silico models of healthcare processes and to provide effective translation to the clinics.
Abstract:
Purpose: Handheld gamma cameras with coded aperture collimators are under investigation for intraoperative imaging in nuclear medicine. Coded apertures are a promising collimation technique for applications such as lymph node localization due to their high sensitivity and the possibility of 3D imaging. We evaluated the axial resolution and computational performance of two reconstruction methods. Methods: An experimental gamma camera was set up consisting of the pixelated semiconductor detector Timepix3 and a MURA mask of rank 31 with round holes of 0.08 mm in diameter in a 0.11 mm thick tungsten sheet. A set of measurements was taken where a point-like gamma source was placed centrally at 21 different positions within the range of 12–100 mm. For each source position, the detector image was reconstructed in 0.5 mm steps around the true source position, resulting in an image stack. The axial resolution was assessed by the full width at half maximum (FWHM) of the contrast-to-noise ratio (CNR) profile along the z-axis of the stack. Two reconstruction methods were compared: MURA Decoding and a 3D maximum likelihood expectation maximization algorithm (3D-MLEM). Results: While taking 4400 times longer in computation, 3D-MLEM yielded a smaller axial FWHM and a higher CNR. The axial resolution degraded from 5.3 mm and 1.8 mm at 12 mm to 42.2 mm and 13.5 mm at 100 mm for MURA Decoding and 3D-MLEM, respectively. Conclusion: Our results show that the coded aperture enables the depth estimation of single point-like sources in the near field. Here, 3D-MLEM offered a better axial resolution but was computationally much slower than MURA Decoding, whose reconstruction time is compatible with real-time imaging.
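The axial-resolution metric used above is straightforward to compute; a numpy sketch of extracting the FWHM from a CNR-versus-depth profile (function name and interpolation details are ours):

```python
import numpy as np

def fwhm(z: np.ndarray, cnr: np.ndarray) -> float:
    """Full width at half maximum of a CNR-vs-depth profile.

    z in mm, cnr the contrast-to-noise ratio per reconstruction plane;
    linear interpolation locates the two half-maximum crossings.
    """
    half = cnr.max() / 2.0
    above = np.where(cnr >= half)[0]
    i, j = above[0], above[-1]
    # Interpolate the left (rising) and right (falling) crossings.
    zl = np.interp(half, [cnr[i - 1], cnr[i]], [z[i - 1], z[i]]) if i > 0 else z[i]
    zr = np.interp(half, [cnr[j + 1], cnr[j]], [z[j + 1], z[j]]) if j < len(z) - 1 else z[j]
    return zr - zl
```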
Abstract:
Background: The global coronavirus disease 2019 (COVID-19) pandemic has posed substantial challenges for healthcare systems, notably the increased demand for chest computed tomography (CT) scans, which lack automated analysis. Our study addresses this by utilizing artificial intelligence-supported automated computer analysis to investigate the distribution and extent of lung involvement in COVID-19 patients. Additionally, we explore the association between lung involvement and intensive care unit (ICU) admission, while also comparing computer analysis performance with expert radiologists' assessments. Methods: A total of 81 patients from an open-source COVID database with confirmed COVID-19 infection were included in the study. Three patients were excluded. Lung involvement was assessed in 78 patients using CT scans, and the extent of infiltration and collapse was quantified across various lung lobes and regions. The associations between lung involvement and ICU admission were analysed. Additionally, the computer analysis of COVID-19 involvement was compared against a human rating provided by radiological experts. Results: The results showed a higher degree of infiltration and collapse in the lower lobes compared to the upper lobes (P<0.05). No significant difference was detected in the COVID-19-related involvement of the left and right lower lobes. The right middle lobe demonstrated lower involvement compared to the right lower lobe (P<0.05). When examining the regions, significantly more COVID-19 involvement was found when comparing the posterior vs. the anterior halves and the lower vs. the upper half of the lungs. Patients who required ICU admission during their treatment exhibited significantly higher COVID-19 involvement in their lung parenchyma according to computer analysis, compared to patients who remained in general wards. Patients with more than 40% COVID-19 involvement were almost exclusively treated in intensive care. A high correlation was observed between computer detection of COVID-19 affections and the rating by radiological experts. Conclusions: The findings suggest that the extent of lung involvement, particularly in the lower lobes, dorsal lungs, and lower half of the lungs, may be associated with the need for ICU admission in patients with COVID-19. Computer analysis showed a high correlation with expert rating, highlighting its potential utility in clinical settings for assessing lung involvement. This information may help guide clinical decision-making and resource allocation during ongoing or future pandemics. Further studies with larger sample sizes are warranted to validate these findings.
Abstract:
Feature importance methods promise to provide a ranking of features according to their importance for a given classification task. A wide range of methods exist, but their rankings often disagree, and they are inherently difficult to evaluate due to a lack of ground truth beyond synthetic datasets. In this work, we put feature importance methods to the test on real-world data in the domain of cardiology, where we try to distinguish three specific pathologies from healthy subjects based on ECG features, comparing the resulting rankings to the features used in cardiologists' decision rules as ground truth. We found that the SHAP and LIME methods and the chi-squared test all worked well, together with the native random forest and logistic regression feature rankings. Some methods gave inconsistent results, including Maximum Relevance Minimum Redundancy and Neighbourhood Component Analysis. The permutation-based methods generally performed quite poorly. A surprising result was found in the case of left bundle branch block, where T-wave morphology features were consistently identified as being important for diagnosis, but are not used by clinicians.
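As an illustration of one of the well-behaved methods, a minimal SHAP-plus-random-forest sketch (synthetic stand-in data; with the study's data, X would hold the ECG features per subject):

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in classification problem in place of the ECG-feature datasets.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Depending on the SHAP version this is a list (one array per class) or a
# single 3-D array; either way, mean |SHAP| per feature gives the ranking.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
ranking = np.abs(vals).mean(axis=0)
```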
Abstract:
We investigate the properties of static mechanical and dynamic electro-mechanical models for the deformation of the human heart. Numerically, this is realized by a staggered scheme for the coupled partial/ordinary differential equation (PDE-ODE) system. First, we consider a static and purely mechanical benchmark configuration on a realistic geometry of the human ventricles. Using a penalty term for quasi-incompressibility, we test different parameters and mesh sizes and observe that this approach is not sufficient for lowest-order conforming finite elements. Then, we compare the active stress and active strain approaches to cardiac muscle contraction. Finally, in a coupled, anatomically realistic electro-mechanical model, we compare numerical Newmark damping with a visco-elastic model using Rayleigh damping. Nonphysiological oscillations can be better mitigated using viscosity.
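For context, Rayleigh damping refers to the standard ansatz of assembling the damping matrix from the mass and stiffness matrices (general textbook form, not specific to this paper):

```latex
\mathbf{C} = \alpha \mathbf{M} + \beta \mathbf{K}
```

The mass-proportional term damps low-frequency modes and the stiffness-proportional term high-frequency ones, whereas Newmark damping introduces dissipation purely numerically through the parameters of the time-integration scheme.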
Abstract:
Intercellular heterogeneity is fundamental to most biological tissues. For some cell types, heterogeneity is thought to be responsible for distinct cellular phenotypes and functional roles. In the pancreatic islet, subsets of phenotypically distinct beta cells (hub and leader cells) are thought to coordinate the electrical activity of the beta cell network. This hypothesis has been addressed by experimental and computational approaches, but none have attempted to reconstruct functional specialization directly from measured heterogeneity. To evaluate whether electrophysiologic heterogeneity alone can explain these specialized functional roles, we created a population of human beta cell models (via genetic algorithm optimization) recapitulating the heterogeneity in an extensive patch clamp dataset (1021 pancreatic cells). Then we applied the simplified Kirchhoff network model (SKNM) formalism to simulate the activity of that population in a connected beta cell network. We could not immediately observe cells with obvious hub or leader phenotypes within the heterogeneous network. However, with this study we built the basis for further ''ground-up'' investigation of the relationships between beta cell heterogeneity and human islet function. Moreover, our workflow may be translated to other tissues where large electrophysiologic data sets become available and heterogeneity is thought to influence tissue function, e.g. the human atria.
Abstract:
Introduction: The role of the right atrium (RA) in atrial fibrillation (AF) has long been overlooked. Computer models of the atria can aid in assessing how the RA influences arrhythmia vulnerability and in studying the role of RA drivers in the induction of AF, both aspects challenging to assess in living patients until now. It remains unclear whether the incorporation of the RA influences the propensity of the model to reentry induction. As personalized ablation strategies rely on non-inducibility criteria, the adequacy of left atrium (LA)-only models for developing such ablation tools is uncertain. Aim: To evaluate the effect of incorporating the RA in 3D patient-specific computer models on arrhythmia vulnerability. Methods: Imaging data from 8 subjects were obtained to generate patient-specific computer models. For each subject, we created 2 models: one monoatrial model consisting only of the LA, and one biatrial model consisting of both the RA and LA. We considered 3 different states of substrate remodeling: healthy (H), mild (M), and severe (S). The Courtemanche et al. cellular model was modified from control conditions to a setup representing AF-induced remodeling with 0%, 50%, and 100% changes for H, M, and S, respectively. Conduction velocity along the myocyte preferential direction was set to 1.2, 1.0, and 0.8 m/s for each remodeling level. To incorporate fibrotic substrate, we manually placed six seeds on each biatrial model, 3 in the LA and 3 in the RA, corresponding to the regions with the most frequent enhancement (IIR>1.2) in LGE-MRI. The extent of the fibrotic substrate corresponded to the Utah 2 (5-20%) and Utah 4 (>35%) stages for M and S, respectively, while the H state was modeled without fibrosis. Electrical propagation in the atria was modeled using the monodomain equation solved with openCARP. Arrhythmia vulnerability was assessed by virtual S1S2 pacing from different points separated by 2 cm. A point was classified as inducing arrhythmia if reentry was initiated and maintained for at least 1 s. The vulnerability ratio was defined as the number of inducing points divided by the number of stimulation points. The mean tachycardia cycle length (TCL) of the induced arrhythmia was assessed at the stimulation site. We compared the vulnerability ratio of the LA in monoatrial and biatrial configurations. Results: The incorporation of the RA increased the mean LA vulnerability ratio by 115.79% (0.19±0.13 to 0.41±0.22, p=0.033) in state M, and by 29.03% in state S (0.31±0.14 to 0.40±0.15, p=0.219), as illustrated in Figure 1. No arrhythmia was induced in the H models. RA inclusion increased the TCL of LA reentries by 5.51% (186.9±13.3 ms to 197.2±18.3 ms, p=0.006) in scenario M, and decreased it by 7.17% (224.3±27.6 ms to 208.2±34.8 ms, p=0.010) in scenario S. RA inclusion resulted in elevated LA inducibility, revealing 4.9±3.3 additional points per patient in the LA for the biatrial model that did not induce reentry in the monoatrial model. Conclusions: The LA vulnerability in a biatrial model differs from the LA vulnerability in a monoatrial model. Incorporating the RA in patient-specific computational models unmasked potential inducing points in the LA. The RA had a substrate-dependent effect on reentry dynamics, altering the TCL of LA-induced reentries. Our results provide evidence for an important role of the RA in the maintenance and induction of arrhythmia in patient-specific computational models, thus suggesting the use of biatrial models.
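The vulnerability metric defined in the Methods is simple to reproduce; a minimal sketch (names are ours):

```python
def vulnerability_ratio(point_induced: list) -> float:
    """Fraction of stimulation points from which sustained (>= 1 s) reentry
    was induced, following the definition in the Methods above."""
    return sum(point_induced) / len(point_induced)

# Worked example: 41 inducing points out of 100 S1S2 stimulation sites -> 0.41;
# for comparison, the biatrial mild-remodeling mean reported above is 0.41.
```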
Abstract:
Mitral regurgitation is a structural heart condition affecting the mitral valve that can be treated with the MitraClip™ system, a device that allows a percutaneous intervention for the deployment of a catheter-embedded clip on the valve leaflets to prevent blood backflow from the left ventricle to the left atrium. Despite its efficacy, the procedure presents technical challenges, relying on fluoroscopy guidance and surgeon expertise. In the context of human-machine interfaces for autonomous robotic catheter cardiac intervention, this study aims to evaluate the effectiveness of Augmented Reality (AR) for training surgeons in the MitraClip procedure. Users of an AR interface demonstrated better performance compared to those using an interface emulating traditional visualization methodologies (fluoroscopy and transesophageal echocardiography). AR-based training offers a more engaging and effective learning experience, leading to improved surgical dexterity and safety in the procedure.
Abstract:
Atrial fibrillation is the most prevalent cardiac arrhythmia in the adult population, associated with an elevated risk of cardiovascular events and sudden cardiac death. In 2020, more than 50 million people worldwide were estimated to have atrial fibrillation, and its prevalence is expected to double by 2060. Despite significant progress in diagnosing and treating atrial fibrillation, current therapies often fail to prevent adverse outcomes due to their one-size-fits-all approach, ignoring patient variability. Patient-specific atrial computer models, also known as atrial digital twins, have emerged to improve our understanding of the pathophysiology of atrial fibrillation and to address the growing public health burden posed by atrial fibrillation today. The vision of atrial digital twins is to serve as a tool supporting the evaluation of different treatment strategies and selecting the most appropriate one to address the specific needs of each patient. Personalization refers to the process of incorporating patient data, such as anatomical, functional, and substrate-related data, into model parameters that reflect specific physical properties of the cardiac cells, tissue, or heart of the individual. Currently, there is no consensus on the methodology for constructing a digital twin to inform atrial fibrillation treatment. Some studies have developed methodologies using only non-invasive pre-procedural data, while others have employed invasive procedural data or a combination of both. The overall effect of the selected input data on the behavior of the patient-specific model is currently unknown. In this thesis, arrhythmia vulnerability and tachycardia cycle length were quantified to assess the impact of different input data on the behavior of the patient-specific model. Arrhythmia vulnerability was defined as the number of inducing points divided by the number of stimulation points on the atrial surface. Tachycardia cycle length was measured at the stimulation location and defined as the average time between peaks of the dV/dt of induced reentries. In particular, the effect of three types of clinical data was evaluated: 1) anatomical personalization, by comparing monoatrial versus biatrial models; 2) functional personalization, by comparing models with personalized refractory period versus non-personalized models; and 3) functional and substrate personalization, by comparing pre-procedural versus procedural data. Finally, a larger cohort of 22 patient-specific biatrial computer models was developed to train a machine learning classifier for predicting arrhythmia vulnerability and evaluating the importance of personalized features for the prediction. The results of the first project showed that incorporating the right atrium increased the mean vulnerability of the left atrium and revealed new induction points per patient model that did not induce reentry in the monoatrial model. The right atrium had a substrate-state-dependent effect on arrhythmia dynamics. In the second project, the non-personalized scenario with a homogeneous effective refractory period distribution was the least vulnerable to arrhythmia, while the regionally personalized scenario was the most vulnerable. Heterogeneities in the form of regions promote unidirectional blocks, thereby increasing vulnerability, while the homogeneous scenario makes it less likely to induce reentry even with a shorter effective refractory period. Incorporating the effective refractory period as a continuous distribution slightly decreased vulnerability compared to the state-of-the-art heterogeneous non-personalized scenario. The increased dispersion of the effective refractory period in personalized scenarios had a greater effect on reentry dynamics than on the absolute value of vulnerability. The tachycardia cycle length of the personalized scenarios was significantly longer than that of the non-personalized ones. In the third project, total activation times and patterns were markedly different between invasive and non-invasive modalities. Arrhythmia vulnerability was more influenced by the extent of fibrosis than by the activation patterns. Finally, the machine learning classifier achieved moderate accuracy for the prediction of arrhythmia vulnerability. Fibrosis density measured at 10 mm from the stimulation points and global conduction velocity were the features with the highest impact on point inducibility prediction. The results presented in this thesis provide evidence that the selection of input data affects the behavior of the patient-specific computer model. The right atrium plays an important role in the maintenance and induction of arrhythmia; thus, the use of biatrial models seems advisable. Personalization of the effective refractory period has a greater effect on reentry dynamics than on the absolute value of vulnerability. Substrate-related personalization was the feature with the highest influence on vulnerability; therefore, further detection methods are needed to ensure its correct representation. The machine learning classifier may offer a fast alternative that reduces the need for expensive computation of virtual pacing protocols, thus aiding the transition to clinical applications. The use of patient-specific models with highly detailed anatomy, function, and substrate may improve the development of tools for therapy planning for atrial fibrillation.
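The tachycardia cycle length definition above (average time between dV/dt peaks of induced reentries) can be sketched as follows (SciPy-based; the peak-detection thresholds are illustrative, not from the thesis):

```python
import numpy as np
from scipy.signal import find_peaks

def tachycardia_cycle_length(vm: np.ndarray, dt_ms: float) -> float:
    """Mean time between dV/dt peaks of an induced reentry, in ms.

    vm is the transmembrane voltage trace at the stimulation site;
    prominence and distance thresholds need tuning to the signal.
    """
    dvdt = np.gradient(vm, dt_ms)
    peaks, _ = find_peaks(dvdt, prominence=0.5 * dvdt.max(),
                          distance=int(100 / dt_ms))  # refractory guard ~100 ms
    return float(np.mean(np.diff(peaks)) * dt_ms)
```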
Abstract:
Atrial fibrillation (AF) is a heart condition that causes an irregular and abnormally fast heart rate; it is a multifactorial and progressive cardiac disease with different manifestations in each patient. The treatment of this illness remains a challenge, especially in patients showing severe remodelling of the cardiac substrate. Regions of scar and fibrotic tissue have been identified as potential drivers of arrhythmic activity during AF, so the ablation of these areas represents a standard treatment. High-density mapping techniques can provide important information about low-voltage and slow-conduction zones, both characteristics of arrhythmogenic areas. However, these modalities still show discordances in the location and extent of arrhythmogenic areas. Recently, local impedance (LI) measurements have gained attention, as they are expected to distinguish between healthy and scar tissue independently of the atrial rhythm, which can improve the understanding of underlying substrate modifications. A new generation of ablation catheters incorporates the option of LI recordings as a novel technique to characterize the process of lesion formation. To extend this technology towards a mapping system implementation, a better understanding of how different factors influence LI measurements is needed. To this end, two approaches were followed during this work: in silico investigations with different catheters and tissue settings, and in vitro experiments to support the simulated studies. By performing in silico studies that relate to commonly seen clinical scenarios, we were able to predict and understand how different factors contribute to measured LI values. A 3D model of ablation catheters with the DirectSense™ technology was employed in different scenarios reported in the clinics, such as the introduction of the catheter into a steerable sheath, or the variation of the catheter-tissue distance, angle, and force. LI data from patients recruited at the Städtisches Klinikum Karlsruhe allowed the validation of the simulation setting. Later, this in silico setting was extended to multielectrode catheters. Simulating the impact of several design parameters on LI, such as stimulation and measurement bipolar pairs, inter-electrode distance, or electrode shape and size, tissue conductivities were reconstructed to account for scarred tissue patterns. Finally, in vitro experiments with a mapping catheter were performed, building on the previous simulation findings. Various contact impedance recordings in a tissue phantom demonstrated statistically significant differences between measurements from electrodes in direct catheter-tissue contact and those floating in saline. During this work, the potential capabilities of LI measurements were demonstrated, paving the way towards their use as a surrogate for the detection of fibrotic areas in cardiac mapping, complementing commonly used techniques based on electrogram (EGM) analysis.
Abstract:
Despite offering many advantages, magnetic resonance (MR)-only radiotherapy (RT) workflows are not widely used in the clinic. A major obstacle to the spread of MR-only radiotherapy treatment planning (RTP) has been the poor generalization capability of available computed tomography (CT) synthesis methods. Due to ethical concerns and regulations, centralizing medical data to train a model with high generalization capability is difficult. A plausible solution to this problem is the use of federated learning (FL) to train a neural network with data from multiple institutions without the need for centralization. This work assesses the feasibility of training complex neural networks for CT synthesis using FL. To achieve this, different state-of-the-art models were trained in a federated environment that simulates the cooperation of multiple healthcare institutions without sharing their data. The results indicate that the most common models in the state of the art, namely U-Net and generative adversarial networks (GANs), can be trained for CT synthesis using FL. The results of experiments with some novel approaches, such as 3D networks and models with transformers, suggest that these models cannot achieve the desired performance in a reasonable time when trained with FL.
Abstract:
This thesis investigates federated learning (FL) techniques for generating synthetic computed tomography (sCT) images from magnetic resonance imaging (MRI) data, aiming to support MRI-only workflows in radiotherapy planning. Traditionally, CT scans are necessary for calculating radiation doses, but these scans expose patients to additional ionizing radiation. An MRI-only approach offers superior soft tissue contrast, reduced radiation exposure, and streamlined clinical workflows. To achieve this, sCT images must be generated from MRI to provide CT-equivalent information safely and robustly. Centralized deep learning (DL) has shown promise for sCT generation, but clinical adoption is limited due to the scarcity of large, diverse datasets, as data sharing across institutions raises privacy concerns. FL mitigates this by enabling collaborative model training across institutions without centralizing data, thus utilizing diverse datasets while preserving patient privacy. The primary research question is: how can federated model aggregation be optimized for performance and computational efficiency in MRI-to-sCT translation? The first hypothesis suggests that, due to the complexity of medical image translation and inter-institutional data heterogeneity, more advanced aggregation strategies may be necessary for robust generalization. This study leverages MRI and CT datasets from multiple institutions with variations in imaging protocols, scanners, and patient demographics to simulate realistic clinical diversity. A robust pre-processing pipeline standardizes the data, aligning image dimensions, intensity ranges, and anatomical landmarks to reduce inter-institutional variability and support model convergence. Multiple FL aggregation strategies (FedAvg, FedMedian, FedTrimmedAvg, FedAvgM, optimization-based methods, FedBN, and FedProx) were benchmarked for their ability to manage non-uniform data distributions and improve model generalization. Importantly, data remains on each client, and the global model's generalization is tested on unseen data from an external institution. Model quality was evaluated using key performance metrics: masked mean absolute error, peak signal-to-noise ratio, and structural similarity index. The findings show that (i) FedAvg exceeded performance expectations by outperforming more complex strategies, (ii) FedMedian's simplistic outlier filtering led to information loss, (iii) FedTrimmedAvg ranked between FedAvg and FedMedian, (iv) FedAvgM provided enhanced stability but slower convergence, and (v) optimization-based strategies were unstable and outperformed by simpler methods. The combination of FedAvg with FedProx and FedBN produced the best results, achieving a median masked mean absolute error of 96 HU on 23 unseen test patients. Contrary to the initial hypothesis, simpler aggregation strategies outperformed more complex methods for MRI-to-sCT translation. This may be attributed to the extensive pre-processing pipeline, which effectively reduced data heterogeneity, allowing FedAvg to perform well. These findings underscore FL's potential for enabling MRI-only radiotherapy by facilitating sCT generation across decentralized datasets, preserving privacy while maintaining model performance. By demonstrating effective data harmonization and adaptable FL strategies in a multi-institutional setting, this work contributes to developing secure, generalizable DL applications in medical imaging, paving the way for broader clinical implementation. While this study demonstrates the feasibility of FL for privacy-preserving sCT generation, the main goal was to provide a first comprehensive benchmark analysis of various aggregation strategies for this task. Future work should explore additional pre-processing techniques and further refine FL approaches, such as combining FedAvg with emerging strategies like FedDG and FedCE. Moreover, practical challenges, including privacy-preserving aggregation, communication costs, and device variability in real-world federated learning settings, must be addressed to optimize federated learning's effectiveness and scalability in clinical applications.
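For orientation, FedAvg, the baseline that proved hardest to beat, simply averages client parameters weighted by local dataset size. A minimal sketch of one aggregation round (FedProx and FedBN modify the client objective and the handling of normalization layers, respectively, and are not shown):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg round: size-weighted average of client model parameters.

    client_weights : list of dicts mapping parameter name -> np.ndarray
    client_sizes   : number of local training samples per client
    """
    total = float(sum(client_sizes))
    return {
        name: sum(w[name] * (n / total) for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

# Example: two clients with 100 and 300 samples; the larger client
# contributes 3/4 of the averaged parameter value.
global_model = fedavg(
    [{"w": np.ones(2)}, {"w": np.zeros(2)}],
    [100, 300],
)
```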
Abstract:
Laparoscopic liver surgery holds many advantages for patients, among them less pain and a quicker recovery time. Since it is technically more challenging, an intraoperative navigation system should be implemented to support the surgeon. This requires a reliable segmentation of the liver in laparoscopic images. The segmentation of RGB images can be difficult when regions share similar color and texture. As this occurs frequently in laparoscopic images, we aim to include depth maps as additional data. Moreover, depth maps are unaffected by environmental disturbances like reflections, lighting, or pollution of the camera lens, which have been reported to be challenging for an accurate segmentation. In our work we show how depth maps can be used as additional input to an artificial neural network, supporting the segmentation of the liver in laparoscopic images. The data used in this work is divided into a synthetic and a clinical dataset. The synthetic dataset consists of 20,000 image-depth pairs from ten patients. The clinical dataset is smaller, containing 525 image-depth pairs from five patients. It is important to mention that the depth maps of the synthetic dataset are ground-truth depth maps, whereas the clinical depth maps are estimated by an artificial neural network. There is a visible gap between them, which we attempted to bridge with different pre-processing methods. Both datasets were utilized for the training and testing of an artificial neural network, more precisely the DFormer, an encoder-decoder network with a backbone pre-trained on RGB-D data and two attention fusion modules. In the first experiment, the network is trained on synthetic data and tested on clinical data, thereby exploring the transferability of the weights from synthetic to clinical data. The second experiment investigates the changes when fine-tuning the synthetic weights on clinical data and performing a 5-fold cross-validation. Lastly, experiments 3 and 4 analyse the impact of depth maps on the prediction, using ideal and non-informative clinical depth maps for experiment 3 and noisy synthetic depth maps for experiment 4. Through the four performed experiments we have found that depth maps can be helpful with imprecise RGB images, as they are able to compensate for the quality of the laparoscopic images. Moreover, we have shown that the quality of the prediction depends on the quality of the depth maps. Nonetheless, even though we achieved interesting and promising results as described beforehand, we found limitations due to the quality of the depth maps and the limited data available for particular laparoscopic scenarios, for example incisions of the liver. From these limitations, future opportunities and challenges arise, including further experiments, the optimization of depth maps, and the increase of data.
Abstract:
The success of bypass operations can be increased by intraoperative blood flow measurements. Quantitative fluorescence angiography offers a way to perform these measurements contactlessly in the future, avoiding contamination and injury. Besides determining the flow velocity, extracting the inner vessel diameter from a fluorescence angiography recording is central to correctly determining the volumetric flow. So far, there have been conflicting statements as to whether an exact determination of the inner diameter from these recordings is possible. This work therefore investigates to what extent the optical properties of the vessel wall influence the error that occurs in diameter determination. The focus is on the magnification caused by light refraction at the outer surface of the vessel wall. Two optical models are developed, based on the law of refraction and on the paraxial approximation of light refraction at a spherical surface. In addition, experiments are carried out on a flow phantom with which fluorescence angiography recordings of silicone and glass vessels of known diameters can be made. In the evaluation of the image data, two methods of digital diameter determination are compared. Both with the models and experimentally, the influence of the difference between the refractive indices of the vessel wall and the surrounding medium, as well as of the wall-to-lumen ratio of the vessels, on the determination of the inner diameter is investigated. The results qualitatively confirm the hypothesis that the inner diameter of a vessel whose wall has a higher refractive index than the surrounding medium is depicted magnified. This observation appears both in the calculations using the optical models and in the experimental determination of the diameters. The relative error in the inner diameter calculated with the models is larger for larger wall-to-lumen ratios and lies between 16% and 33% in the clinically relevant range. A validation of the developed optical models is not possible due to the quality of the measurement results obtained with the experiments performed in this work. In particular, the method for digital diameter determination based on a one-dimensional gradient filter shows a need for adaptation if it is to be applied in quantitative fluorescence angiography. Nevertheless, this work demonstrates the relevance of an error analysis of diameter determination in quantitative fluorescence angiography.
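The paraxial model referred to above builds on the standard relation for refraction at a spherical interface between media with refractive indices n1 and n2 (textbook form; sign conventions vary):

```latex
\frac{n_1}{s_o} + \frac{n_2}{s_i} = \frac{n_2 - n_1}{R}
```

Here s_o and s_i are the object and image distances and R is the surface radius; the index mismatch between the vessel wall and the surrounding medium is what produces the apparent magnification of the lumen.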
Abstract:
As an alternative to the use of ICG for perfusion monitoring in endoscopic operations, perfused tissue is to be detected directly. Perfused tissue is characterized by arterial blood, which differs in its optical properties from venous blood. In the absorption and reflection spectra, the wavelength ranges around 650 nm and 900 nm are of particular interest, since the respective spectra differ strongly there. This requires imaging on three image sensors. In development, an alternative to the rather cost-intensive and inflexible dichroic prisms can be the use of beam splitter cubes and dichroic mirrors. A major disadvantage of this approach is the high signal loss caused by the beam splitter cube and, with it, a loss in signal-to-noise ratio (SNR). Theoretically, the 50% signal loss caused by the beam splitter cube would be expected to reduce the SNR by a factor of 1/√2. The goal of this work is to measure the actual losses and to evaluate by how much the optical power of a light source would have to be increased to compensate for these SNR losses. To this end, the amount of light available in a typical laparoscopic application is first modeled. Based on this, the SNR of the different beam splitter setups is measured at different power levels, in the 650 nm range and in the 900 nm range each. Based on the results obtained, it is then evaluated whether the increase in light source power required to compensate for the SNR losses lies within a realistic, feasible range. The results show that the SNR losses are higher than predicted by theory. The losses in the spectrum around 650 nm were in a range that can be compensated well by using modern light sources. In the infrared range around 900 nm, an estimate is more difficult, since this spectral range has hardly been used in endoscopic imaging so far. An aggravating factor is that radiation in the infrared range is more strongly limited due to heat development; normative limits apply here in the medical field. This work shows that the beam splitting method investigated here can be used in a development context, but will probably not find application in an actual product, since many challenges remain, including heat development and optomechanical issues.
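The theoretical factor quoted above follows from shot-noise statistics: for N detected photons the noise scales with √N, so

```latex
\mathrm{SNR} \propto \frac{N}{\sqrt{N}} = \sqrt{N}
\quad\Longrightarrow\quad
\frac{\mathrm{SNR}(N/2)}{\mathrm{SNR}(N)} = \frac{1}{\sqrt{2}}.
```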
Abstract:
While atrial fibrillation (AF) is currently the most common arrhythmia worldwide, finding the optimal treatment for each patient remains a challenge. One option for treating AF patients is pulmonary vein isolation (PVI), an established rhythm control strategy. However, PVI often fails, leading to high recurrence rates, especially for patients with persistent AF. To treat patients in whom PVI failed, ablation outside the pulmonary veins can be performed. However, finding the right spots for ablation is demanding and time-consuming. To better identify suitable regions for ablation, electrophysiological simulations can be used to assess AF vulnerability before the ablation procedure. These simulations currently require substantial computational power, making them unfeasible for smaller institutions or hospitals. Therefore, this thesis uses machine learning (ML) to assess and accelerate the vulnerability assessment of AF patient-specific models. The ML pipeline follows a feature-based approach in which global clinical features of the atrium were calculated based on late gadolinium enhancement magnetic resonance imaging (LGE-MRI). In addition, local features around single stimulation points were calculated. Data from 22 patients with AF were used to create personalized patient models. The AF vulnerability of these 22 patients was then calculated using the PEERP pacing protocol with the electrophysiological simulator openCARP. A random forest (RF) classifier was used with the previously calculated global and local features to predict the results of the pacing protocol for inducibility. This algorithm was further improved using hyperparameter tuning and different optimization strategies. The tuned algorithm achieved a mean area under the receiver operating characteristic curve (AUC-ROC) of 0.75 ± 0.03 in predicting point inducibility with 10-fold cross-validation. Discarding 50% of the test samples based on an RF certainty analysis achieved a false negative rate of 5%. Using this classifier configuration, up to 10% of all stimulation points are excluded, resulting in a 10% reduction in simulation time. To better assess the influence of different features on the RF classifier, we used SHAP to gain insight into feature importance. The analysis showed that local fibrosis density around stimulation points in particular had the highest impact on the prediction (SHAP values from −0.15 to 0.1). Additionally, SHAP showed that the inducibility of a stimulation point decreases if it is moved to the atrial appendages. This work not only provides an approach to saving 10% of simulation time for assessing atrial fibrillation vulnerability but also uses SHAP to explain and assess the predictions of the underlying random forest classifier.
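The certainty-based triage described above can be sketched with scikit-learn (synthetic stand-in data; the probability threshold is illustrative, not the thesis value):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in features; in the thesis these are global and local LGE-MRI features.
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
proba = rf.predict_proba(X_test)[:, 1]       # P(stimulation point is inducible)

certain = np.abs(proba - 0.5) > 0.4          # trust only confident predictions
predicted = proba[certain] > 0.5             # ML verdict for the certain points
needs_simulation = ~certain                  # the rest still go through openCARP
```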
Abstract:
The current assessment and standardization of microsurgical skills are subjective, which presents a challenge in reliably evaluating the proficiency of practitioners due to the potential for bias and inconsistency in the evaluation process. This study aims to quantitatively evaluate the psychomotor skills of surgeons performing microsurgical anastomosis operations using electromyography (EMG) activity. Data were recorded under controlled conditions with participants of varying proficiency levels: students, residents, and fully-trained surgeons. Surgical skill was assessed through two primary approaches: proficiency level based on years of medical training, and the number of errors in the final suture determined by the Anastomosis Lapse Index (ALI) score. The surgery process was divided into five segments corresponding to key suturing steps, and relevant EMG features were extracted in both time and frequency domains, including Root-Mean-Square (RMS) amplitude, median frequency, and bandwidth.

Our analysis revealed distinct patterns correlating EMG activity with skill level. Notably, proficient surgeons exhibited lower EMG amplitude and higher median frequency in specific muscles, suggesting better motor control and efficiency. Conversely, during needle insertion, experts demonstrated higher amplitude and lower median frequency and bandwidth, reflecting the complex nature of the task. Weak cross-correlation coefficients indicated significant inter-subject variability, necessitating further refinement of the analysis by focusing on shorter movement segments.

The correlation between EMG activity and the number of errors was less clear, highlighting the limitations of using the ALI score alone due to its potential bias and the lack of differentiation in error severity. Future research will investigate specific error types and their relationship with particular movements to improve the robustness of skill assessment. While preliminary, these findings suggest that sEMG sensors can be instrumental in assessing surgical skills, particularly in wrist stabilization and precision tasks. The adoption of a multimodal sensor approach, including motion tracking and tactile force measurement, is recommended for enhancing data quality and interpretability. This study sets the groundwork for more detailed investigations into individual movements and the incorporation of advanced sensor technologies to advance the objective evaluation of surgical proficiency.
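As a rough illustration of the two feature families named above, a minimal sketch assuming NumPy/SciPy; the sampling rate and the signal are placeholders, not study data, and the bandwidth definition shown (spectral spread) is only one possible choice.

    import numpy as np
    from scipy.signal import welch

    fs = 2000.0                                 # sampling rate in Hz (illustrative)
    rng = np.random.default_rng(0)
    emg = rng.standard_normal(10 * int(fs))     # stand-in for one movement segment

    rms = np.sqrt(np.mean(emg ** 2))            # time-domain amplitude feature

    f, pxx = welch(emg, fs=fs, nperseg=1024)    # power spectral density
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2)]  # median frequency: halves the spectral power

    mean_f = np.sum(f * pxx) / np.sum(pxx)
    bw = 2 * np.sqrt(np.sum(pxx * (f - mean_f) ** 2) / np.sum(pxx))  # spectral spread as a bandwidth proxy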
Abstract:
In this work, we registered retinal diagnostic Optical Coherence Tomography (OCT) to instrument-integrated OCT (iiOCT). High registration accuracy and distortion correction of the OCT were demonstrated, which has the potential to support advanced tasks in robotic eye surgery.

Robotic eye surgery has been investigated in recent years to assist in challenging operations, but it requires comprehensive anatomical information. The robot-mounted distance sensor, namely the iiOCT, cannot provide this on its own, but diagnostic OCT can supply this essential data. However, the integration of diagnostic OCT with the robotic iiOCT had not been previously achieved, which is why this registration was investigated in this work.

We designed an accurate, fast, and automatable pipeline that registered real OCT to real iiOCT data. The pipeline was composed of three stages: feature extraction, coarse registration, and fine registration. During the feature extraction phase, both modalities were transformed into point clouds. In the coarse registration stage, point features including the locations of the fovea and Optic Nerve Head (ONH) were utilized to perform a coarse alignment. The final stage, fine registration, utilized a nonrigid transformation based on Coherent Point Drift (CPD) to refine the coarse alignment.

Extensive experiments evaluated different pipeline combinations with real data acquired from a realistic eye model. The influence of nonrigidity, the efficiency, and first experiments with porcine eyes were assessed in detail. Our results showed an accuracy of 131 μm in the macular region with short computation time. Furthermore, the nonrigid deformation significantly improved accuracy by 21% by correcting the curvature in the OCT data.

This research underscores that precise OCT-iiOCT registration is possible, which enables advanced tasks in robotic eye surgery. This could enhance efficiency and patient outcomes in eye care, marking a significant stride for both surgeons and patients.
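A minimal sketch of the nonrigid fine-registration stage, assuming the open-source pycpd package; the point clouds below are synthetic placeholders rather than the OCT/iiOCT data used in the thesis.

    import numpy as np
    from pycpd import DeformableRegistration

    rng = np.random.default_rng(0)
    target = rng.random((200, 3))                            # e.g. points from diagnostic OCT
    source = target + 0.02 * rng.standard_normal((200, 3))   # e.g. coarsely aligned iiOCT points

    reg = DeformableRegistration(X=target, Y=source)  # nonrigid Coherent Point Drift
    aligned, params = reg.register()                  # deformed source points + deformation parameters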
Abstract:
Physicians use information from medical images like Computed Tomography (CT) scans to decide about diagnosis and treatment. Image segmentation describes the division of an image into meaningful areas. It is a crucial step in the processing of medical images. However, it is time-consuming and labor-intensive. Therefore, interactive segmentation should simplify the procedure. It combines automatic segmentation with user interactions. As an initial segmentation mask is often not satisfactory, user interactions are used to refine the initial mask. Currently, this is only possible with physical interactions like clicks and scribbles but not with text. In situations where physicians need their hands, e.g. for surgery, they cannot create segmentation masks. In this work, we developed a segmentation pipeline that predicted segmentation masks from text prompts for CT scans. User interactions with text could be used to refine the masks. We set up the whole pipeline for written text, but it can be adapted to spoken language in the future and, therefore, to a system without physical interaction. The segmentation pipeline consisted of an object detector, Grounding DINO, and the newly published segmentation model Segment Anything Model (SAM). The combination of these two models is called Grounded SAM. The prediction of Grounding DINO, a bounding box, was used as input for SAM. SAM predicted a segmentation mask. We adapted Grounding DINO to the medical domain using Low-Rank Adaptation (LoRA). Instead of the original SAM, we used MedSAM and ScribblePrompt, two versions of SAM adapted to various medical imaging modalities. Grounded SAM’s predicted segmentation mask could be refined with user interactions that e.g. offered the possibility to adjust the predicted bounding box, change the image normalization, and add clicks, all through text commands. We performed a user study to evaluate how the pipeline worked and whether users could improve the initial segmentation masks. Results showed that the adapted segmentation pipeline could predict initial masks for all categories, though performance varied. For example, results were of high quality for the liver but less accurate for fragmented structures such as the colon. The user study confirmed that the developed user interactions enabled the users to improve upon these initial segmentation masks. The approach allowed the users to incorporate their knowledge and opinions into the segmentation masks. Fully automatic segmentation pipelines like nnU-Net do not offer them this option.
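A schematic, runnable sketch of the control flow described above; the two model calls are stubbed with placeholders, since the abstract does not specify the actual Grounding DINO or MedSAM/ScribblePrompt interfaces.

    import numpy as np

    def detect(image, prompt):             # stub for Grounding DINO (+ LoRA)
        return (10, 10, 60, 60)            # (x0, y0, x1, y1) bounding box

    def segment(image, box):               # stub for MedSAM / ScribblePrompt
        mask = np.zeros(image.shape, bool)
        x0, y0, x1, y1 = box
        mask[y0:y1, x0:x1] = True
        return mask

    image = np.zeros((128, 128), np.float32)   # stand-in CT slice
    box = detect(image, "liver")               # text prompt -> bounding box
    mask = segment(image, box)                 # bounding box -> initial mask

    # Text refinement: a command like "move box right by 5 px" adjusts the
    # box and re-runs the segmentation model on the shifted box.
    x0, y0, x1, y1 = box
    mask = segment(image, (x0 + 5, y0, x1 + 5, y1))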
Abstract:
Accurately predicting the difficulty of wisdom tooth extraction is paramount in oral surgery, guiding treatment planning and ensuring patient safety. This study focuses on developing a predictive model for classifying the difficulty of wisdom tooth extraction using advanced deep learning techniques. Leveraging the insights from previous research and clinical expertise, we integrate Cone Beam Computed Tomography (CBCT) imaging and deep learning methodologies to automate the assessment of extraction complexity. Through a series of experimental stages, we refine our model by evaluating different annotation techniques, incorporation of edge information, feature fusion methods, backbone architectures, and projection-based classification approaches. Our findings demonstrate the efficacy of incorporating segmented data from CBCT imaging, utilizing Principal Component Analysis (PCA) for feature extraction, and optimizing projection-based classification methods. By automating the identification of tooth positions and surrounding structures, our model provides valuable decision-making support to clinicians, enhancing surgical planning and risk assessment. This interdisciplinary approach to predicting wisdom tooth extraction difficulty represents a significant advancement in oral surgery, promising to improve surgical outcomes and patient care in clinical practice.
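As a small illustration of the PCA-based feature extraction mentioned above, a sketch assuming scikit-learn; the flattened projections are random placeholders, and 16 components is an arbitrary choice, not a value from the study.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    projections = rng.random((100, 64 * 64))    # 100 cases, flattened 2D projections
    pca = PCA(n_components=16)
    features = pca.fit_transform(projections)   # compact inputs for a downstream classifier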
Abstract:
This master thesis investigates the complexities of transit time measurement in quantitative fluorescence angiography (QFA), a critical factor affecting the precision of blood flow assessment during neurosurgical procedures. Motivated by the clinical need for accurate intraoperative blood flow measurement to improve surgical outcomes and patient safety, the study delves into the factors contributing to the variability of transit time measurements, essential for quantifying cerebral blood flow. Utilizing a laboratory setup that simulates cerebral blood flow dynamics, ICG fluorescence angiography was employed to explore the impact of different parameters, such as vessel diameter, flow pulsatility, and the selection of the region of interest (ROI), on transit time accuracy.

Through a series of carefully designed experiments using silicone tubes of varying diameters to model cerebrovascular structures, the relationship between transit time measurements and factors such as flow rate, pulsatility, and geometric characteristics of the blood vessels was examined. Advanced imaging techniques and software analysis tools were utilized to capture and quantify the fluorescent signal of ICG, allowing for precise measurement of transit times under different flow conditions.

The findings reveal significant insights into the factors influencing transit time measurements. Notably, the study demonstrates that vessel diameter and the choice of ROI substantially affect transit time accuracy, with smaller diameters and strategically selected ROIs yielding more reliable measurements. Furthermore, the research highlights the critical role of flow pulsatility, underscoring the importance of accounting for the dynamic nature of blood flow in the accuracy of transit time measurements.

This work contributes to the advancement of QFA by providing a deeper understanding of the variables that affect transit time measurement. The insights gained from this research are expected to contribute to more accurate and reliable QFA techniques, ultimately enhancing the efficacy of intraoperative blood flow assessment and supporting improved surgical decision-making and patient outcomes.
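A minimal sketch of one common transit-time definition, the peak-to-peak delay between the intensity curves of two ROIs; the curves are synthetic, and the thesis may use other markers such as half-maximum arrival times.

    import numpy as np

    fs = 30.0                                   # camera frame rate in Hz (illustrative)
    t = np.arange(0, 10, 1 / fs)
    roi_a = np.exp(-((t - 3.0) ** 2) / 0.5)     # ICG bolus at the proximal ROI
    roi_b = np.exp(-((t - 3.8) ** 2) / 0.5)     # same bolus arriving at the distal ROI

    transit_time = t[np.argmax(roi_b)] - t[np.argmax(roi_a)]   # about 0.8 s here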
Abstract:
In recent decades, surgical navigation systems have undergone significant change, evolving from two-dimensional (2D) imaging techniques to more complex three-dimensional (3D) imaging modalities. While these advances have improved the precision and effectiveness of surgical procedures, they have also brought with them a challenge: increased radiation exposure to patients. To address this issue, this research explores the potential of integrating 2D fan beam scout scans from the AIRO intraoperative CT scanner into surgical navigation systems. This approach aims to minimize radiation exposure while maintaining and potentially improving the functionality and versatility of the AIRO scanner through the addition of 2D imaging capabilities.

This work presents a novel approach to developing a navigation model specifically designed for scout scans. An in-depth analysis of the scout scan characteristics is performed, which forms the basis for a novel calibration method using the linear pushbroom camera model. This model is essential for accurately describing the scout scan acquisition process and is further refined to approximate a pinhole camera model. The adaptation aims to seamlessly integrate scout scans into the surgical navigation scene.

The results demonstrate that the linear pushbroom camera model provides a robust mathematical foundation for understanding and implementing scout scans in surgical navigation. The effectiveness of the proposed approximation approach was evaluated through its application in 2D navigation and 2D/3D registration tasks and revealed error margins beyond current requirements. Despite these initial findings, there is still considerable potential for improvement. By incorporating the linear pushbroom model more directly, it is expected that the deviations associated with approximating a pinhole camera model can be significantly reduced. This potential method offers a promising way to improve the accuracy of the system and suggests that, with further refinement, the approach could meet the precision requirements of surgical navigation.
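For context, the linear pushbroom model of Gupta and Hartley projects a homogeneous 3D point X as

\[ \begin{pmatrix} u \\ w v \\ w \end{pmatrix} = M X, \qquad M \in \mathbb{R}^{3 \times 4}, \]

so the along-scan coordinate u is an affine (orthographic-like) function of X, while v is perspective within each scan line; the exact parameterization used in the thesis may differ from this textbook form.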
Abstract:
This Bachelor’s thesis investigates the use of Convolutional Neural Networks (CNNs) for the segmentation of the Foveal Avascular Zone (FAZ) in Optical Coherence Tomography Angiography (OCTA) images. Given the challenge of directly comparing results across different research groups, this work focuses on analyzing three key factors that could influence comparability: dataset size, model complexity, and the evaluation metrics used. By generating four datasets of varying sizes through data augmentation and training state-of-the-art segmentation architectures with uniform hyperparameters, these factors were evaluated using a 6-fold cross-validation approach.

Initial findings highlighted performance disparities among networks across different datasets, suggesting that factors beyond dataset size and model complexity might play significant roles. Despite these disparities, it was observed that neither larger dataset sizes nor increased model complexity necessarily lead to better segmentation performance. Moreover, reliance solely on technical metrics like the Dice coefficient was found inadequate for a comprehensive assessment of segmentation outcomes. In response, a new benchmark design was proposed to provide a consistent and transparent basis for evaluating and comparing future research findings, aiming to account for the observed performance variations.

This work offers insights into the challenges of comparability in FAZ segmentation with CNNs and proposes ways to address these challenges. It aims to stimulate discussion on standardized evaluation methods in this field.
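For reference, the Dice coefficient discussed above is

\[ \mathrm{Dice}(A, B) = \frac{2\,\lvert A \cap B \rvert}{\lvert A \rvert + \lvert B \rvert}, \]

which ranges from 0 (no overlap) to 1 (perfect overlap) between a predicted mask A and a reference mask B; being a single overlap score, it says nothing about where or how a segmentation fails, which is one reason such metrics alone were found inadequate here.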
Abstract:
Atrial fibrillation (AF) is one of the most common arrhythmias, with a high mortality and morbidity rate associated with it. Due to its age-dependent incidence rate and the aging population, the incidence and thus the financial costs to the health system are likely to rise significantly in the future. Furthermore, the current treatment of AF patients is suboptimal, with success rates of only 20% to 60% for patients in advanced stages of AF.

In the last decades, various factors have been investigated to enhance the understanding of AF pathophysiology, including anatomical, electrophysiological, and tissue-related factors. Conducting in-vivo and clinical studies to investigate the complex interplay of these factors during AF is challenging due to patient variability, data collection complexity, and ethical constraints, among others. Therefore, studies comparing these factors against each other are still sparse.

Computer models of the atria serve as a valuable tool to address some of these challenges, offering a controlled and reproducible environment. This thesis aimed to give additional quantitative insights into the influence of biatrial anatomical factors, such as volume and sphericity, and tissue-related factors, including fibrosis extent and fibrosis density, on arrhythmia vulnerability. Through in-silico experiments, a vulnerability assessment to reentries was conducted by applying an S1S2 protocol to pacing points distributed around both atria. The vulnerability was defined as the number of pacing points inducing reentries divided by the total number of pacing points on both atria (see the formula following this abstract).

In total, sixteen biatrial bilayer models, derived from instances of a statistical shape model (SSM), were generated. To investigate the influence of fibrosis on arrhythmia vulnerability, a fibrosis atlas was generated from late gadolinium enhancement magnetic resonance imaging (LGE-MRI) data obtained from fifty-four patients from two independent institutions. Fourteen fibrotic distributions were generated and mapped to the biatrial models.

In addition, the mean fibrosis density in the boundary zone between fibrotic and non-fibrotic tissue was varied across five different values, ranging from 26% to 75%. Using the Utah classification score, the models were further categorized into one of four Utah stages.

To compare the influence of anatomical factors and fibrosis factors, the vulnerability assessment was run on the same biatrial models with and without fibrosis. This comparison allowed for the calculation of a vulnerability factor between both scenarios.

In the sixteen biatrial simulations without fibrosis and with varying sphericity and volume values, sphericity exhibited a higher correlation with the resulting arrhythmia vulnerability than volume, with 0.37 being the highest correlation coefficient, obtained for the combined sphericity of both atria. The RA volume obtained the highest correlation coefficient of 0.18 among all volume factors. With the addition of fibrosis, Utah stage 3 (20-30% fibrosis) models showed the highest vulnerability (0.76). Furthermore, the Utah stage 3 models showed the highest vulnerability factor between simulations with and without fibrosis, equal to 3.28. Apart from Utah stage 2 (10-20% fibrosis), these factors ranged from 2, or slightly below, up to this highest factor.
Moreover, using the same fibrosis maps of Utah 3 and Utah 4 on four different biatrial models revealed a variation of the factor from 1.73 to 3.28 (Utah 3) and from 2.0 to 2.62 (Utah 4). The change in fibrosis density led to a 20% vulnerability increase, exhibiting a steady increase in vulnerability from less dense to more dense fibrotic tissue in the boundary zone.

The results obtained in this thesis suggest a higher influence of sphericity compared to volume on arrhythmia vulnerability. After the addition of fibrosis, the vulnerability doubled or, in one case, even tripled (3.28). Thus, the influence of the fibrosis extent is at least double that of the anatomical factors. However, a high impact of the geometry on the influence of the fibrosis extent was observed.
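The vulnerability measure used in this abstract can be written compactly as

\[ V = \frac{N_{\text{inducing}}}{N_{\text{pacing}}}, \]

where N_inducing is the number of pacing points that induced a reentry and N_pacing the total number of pacing points on both atria; the vulnerability factor is then the ratio of V with fibrosis to V without.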
Abstract:
Cardiovascular diseases are responsible for an estimated 17.9 million deaths each year, according to the WHO. The electrocardiogram (ECG) stands out as the most widely employed method for assessing the heart’s condition, given its non-invasive nature and ease of recording. While millions of signals are recorded every day, ECG data available for the research and development of new analysis tools is rare due to patient protection and the limited time of healthcare professionals. Therefore, much of today’s software development relies on a few dozen publicly available data sets, offering only a limited representation of the ECG signals recorded daily in clinics.

In an effort to address the challenge of limited ECG data for research, this thesis explores a method for encoding and augmenting ECG data using machine learning. An autoencoder is trained on existing ECG data to acquire a compressed representation of ECG signals and extract essential components from them. These extracted elements serve as a condensed yet informative summary of the original signals. Subsequently, the aim is to adjust and augment given ECG signals by modifying the extracted features. To simplify the data used, the vectorcardiographic (VCG) representation of ECG signals is employed, reducing the required leads from twelve to three. The generated signals are heartbeats aligned at the R-peak with a fixed time window size. The intention is for these individual heartbeats to be stitched together to create a longer signal in a subsequent step.

The first aim of this project was to find a Variational Autoencoder (VAE) model of low complexity that is able to achieve an acceptable reconstruction loss. The VAE was then trained on a large data set to learn a compressed representation in the latent space, capable of reconstructing the signal to high accuracy. Afterwards, transfer learning was employed to retrain the VAE on signals from a single patient, to incorporate morphologies learned from other patients into this specific patient’s signal.

I achieved a latent space representation in which each variable represents specific morphologies of the VCG, with little correlation between variables. The research has shown that this machine learning approach has improved capabilities compared to principal component analysis (PCA) regarding the ability to construct and augment unseen VCG signals. Transfer learning proved to be helpful for achieving better generalization and faster convergence of the model, but the transfer of morphologies between patients did not meet the desired expectations. Lastly, potential improvements and further methods are suggested, aiming to improve various aspects of the latent space.
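For context, a VAE of this kind is typically trained by maximizing the evidence lower bound; the exact loss weighting used in the thesis is not stated in the abstract:

\[ \mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\Vert\,p(z)\right), \]

where q_φ is the encoder, p_θ the decoder, and the KL term pushes the latent code z toward the prior p(z), which is what makes the latent variables usable for controlled augmentation.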
Abstract:
The most common arrhythmia worldwide is atrial fibrillation (AF), recognized as a substantial public health burden due to its rising incidence. Patients affected by AF face elevated risks for stroke, myocardial infarction, and mortality. Moreover, current treatment approaches often prove ineffective, resulting in a high recurrence rate. Hence, there is an urgent need for further investigation into the mechanisms underlying AF to advance treatment strategies.

The objective of this study was to assess the impact of the morphology of the conduction velocity (CV) restitution curve on reentry events. We evaluated this influence using metrics such as the vulnerability window, the average reentry duration, and the dominant frequency. By conducting this vulnerability assessment, the aim was to establish correlations between the morphology of the CV restitution curve and these key features.

We investigated the impact of using the pacing cycle length (PCL) and the diastolic interval (DI) on the restitution curve through simulations in the monodomain model. Additionally, the influence of the maximum longitudinal CV on the CV restitution curve was analyzed. Clinical CV restitution curves of 13 patients with persistent AF, measured at various atrial locations, were employed in simulations on a 2D tissue slab utilizing the diffusion reaction eikonal alternant model (DREAM) to simulate electrical wave propagation with a personalized ionic model (Courtemanche) for the action potential (AP). The vulnerability assessment was done using an S1-S2 protocol. The experiments encompassed diverse morphologies of restitution curves and varying maximal longitudinal CV values. Moreover, experiments with heterogeneous meshes using two different restitution curve morphologies were conducted.

No notable influence of the maximal longitudinal CV on the morphology of the CV restitution curve was identified. Moreover, the ionic model was successfully personalized using a function that interpolated conductance values between healthy and AF tissue. Additionally, a correlation between the steepness of the CV restitution curve and the vulnerability window, average reentry duration, and dominant frequency was established.

Nevertheless, this work has limitations regarding the data acquisition and the model used for the electrophysiological simulations, although it was shown that tissue with a shallow CV restitution curve is more vulnerable to AF and maintains it longer. In summary, the CV restitution curve proved to be a crucial factor for reentry events, promising to improve vulnerability assessment and treatment outcomes of AF.
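As background, CV restitution curves are often parameterized with a saturating exponential of the diastolic interval; the abstract does not state the exact form fitted here, so this is only illustrative:

\[ \mathrm{CV}(\mathrm{DI}) = \mathrm{CV}_{\max}\left(1 - a\,e^{-\mathrm{DI}/\tau}\right), \]

where CV_max is the conduction velocity at full recovery and the steepness is governed by a and τ, the property correlated with the vulnerability metrics above.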
Abstract:
Premature ventricular contractions (PVC) are a common heart arrhythmia that can be observed in 40% to 75% of the general population under 24- or 48-hour Holter monitoring (Ahn, 2013). In severe cases, they can restrict the quality of life or even result in fatal complications. A possible therapy is ablation therapy, for which the exact site of origin of the extrasystoles needs to be known. In this thesis, it was investigated whether the class of conditional invertible neural networks (cINNs) can be used to predict the locations of the sites of origin (SOO) and whether the additional information that is provided by this special network architecture, through a predicted likelihood, is beneficial. Only body surface potential maps were used as input. This means that the process of obtaining all necessary data from a real patient is noninvasive and does not require imaging.

The network was trained on a simulated dataset consisting of 1.8 million different simulated extrasystoles. On the test set, a median geodesic error of 1.76 mm was measured. The network predicts the location as patient-agnostic Cobiveco coordinates, so no additional medical imaging data is necessary.

In addition, several additions to the network that enable stable and reliable training were documented. The effect that several tunable hyperparameters have on the prediction and the calibration is also shown. This is important because only a small amount of work using conditional invertible neural networks has been published so far.

Several methods to summarize and visualize the information gained by sampling-based inference are presented, one of which visualizes the predicted negative log-likelihood on the heart surface to enable an intuitive understanding of potentially affected areas of the heart.
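For context, the predicted likelihood mentioned above comes from the change-of-variables formula that invertible networks make tractable; assuming a standard normal latent prior, which is the usual choice:

\[ -\log p(x \mid c) = \tfrac{1}{2}\,\lVert f_\theta(x; c)\rVert_2^2 \;-\; \log\left|\det \frac{\partial f_\theta(x; c)}{\partial x}\right| \;+\; \frac{d}{2}\log 2\pi, \]

where f_θ is the invertible mapping conditioned on the body surface potential map c, and d is the latent dimension; evaluating this quantity across candidate locations on the heart surface yields the visualization described above.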