Abstract:
Simulation models and artificial intelligence (AI) are widely used to address healthcare and biomedical engineering problems. Both approaches have shown promising results in the analysis and optimization of healthcare processes. Therefore, combining simulation models and AI could provide a strategy to further boost the quality of health services. In this work, a systematic review of studies applying a hybrid simulation-AI approach to healthcare management challenges was carried out. The Scopus, Web of Science, and PubMed databases were screened by independent reviewers. The main strategies for combining simulation and AI, as well as the major healthcare application scenarios, were identified and discussed. Moreover, tools and algorithms to implement the proposed approaches were described. Results showed that machine learning is the most frequently employed AI strategy in combination with simulation models, which mainly rely on agent-based and discrete-event systems. The scarcity and heterogeneity of the included studies suggest that a standardized framework for implementing hybrid machine learning-simulation approaches in healthcare management has yet to be defined. Future efforts should aim to use these approaches to design novel intelligent in-silico models of healthcare processes and to provide effective translation to the clinic.
Abstract:
Purpose: Handheld gamma cameras with coded aperture collimators are under investigation for intraoperative imaging in nuclear medicine. Coded apertures are a promising collimation technique for applications such as lymph node localization due to their high sensitivity and the possibility of 3D imaging. We evaluated the axial resolution and computational performance of two reconstruction methods. Methods: An experimental gamma camera was set up consisting of the pixelated semiconductor detector Timepix3 and a MURA mask of rank 31 with round holes of 0.08 mm in diameter in a 0.11 mm thick tungsten sheet. A set of measurements was taken in which a point-like gamma source was placed centrally at 21 different positions within the range of 12-100 mm. For each source position, the detector image was reconstructed in 0.5 mm steps around the true source position, resulting in an image stack. The axial resolution was assessed by the full width at half maximum (FWHM) of the contrast-to-noise ratio (CNR) profile along the z-axis of the stack. Two reconstruction methods were compared: MURA Decoding and a 3D maximum likelihood expectation maximization algorithm (3D-MLEM). Results: While taking 4400 times longer in computation, 3D-MLEM yielded a smaller axial FWHM and a higher CNR. The axial resolution degraded from 5.3 mm and 1.8 mm at 12 mm to 42.2 mm and 13.5 mm at 100 mm for MURA Decoding and 3D-MLEM, respectively. Conclusion: Our results show that the coded aperture enables depth estimation of single point-like sources in the near field. Here, 3D-MLEM offered a better axial resolution but was computationally much slower than MURA Decoding, whose reconstruction time is compatible with real-time imaging.
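The FWHM-based axial-resolution measure described above can be sketched as follows. This is a minimal illustration, not the study's actual implementation; the function name and the Gaussian test profile are invented for the example:

```python
import numpy as np

def fwhm(z, profile):
    """Full width at half maximum of a 1D profile via linear interpolation.

    z: axial positions (mm); profile: e.g. CNR values along the z-axis
    of a reconstructed image stack.
    """
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    idx = np.where(profile >= half)[0]
    left, right = idx[0], idx[-1]

    def cross(i0, i1):
        # linear interpolation of the half-max crossing between two samples
        z0, z1 = z[i0], z[i1]
        p0, p1 = profile[i0], profile[i1]
        return z0 + (half - p0) * (z1 - z0) / (p1 - p0)

    z_left = cross(left - 1, left) if left > 0 else z[0]
    z_right = cross(right + 1, right) if right < len(z) - 1 else z[-1]
    return z_right - z_left

# sanity check with a Gaussian of sigma = 2 mm, sampled in 0.5 mm steps
# as in the study; its true FWHM is 2*sqrt(2 ln 2)*sigma ≈ 4.71 mm
z = np.arange(-10.0, 10.5, 0.5)
g = np.exp(-z**2 / (2 * 2.0**2))
```

On the 0.5 mm grid used above, the interpolated estimate lands within a few hundredths of a millimetre of the analytic value.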
Abstract:
Background: The global coronavirus disease 2019 (COVID-19) pandemic has posed substantial challenges for healthcare systems, notably the increased demand for chest computed tomography (CT) scans, which lack automated analysis. Our study addresses this by utilizing artificial intelligence-supported automated computer analysis to investigate the distribution and extent of lung involvement in COVID-19 patients. Additionally, we explore the association between lung involvement and intensive care unit (ICU) admission, while also comparing computer analysis performance with expert radiologists' assessments. Methods: A total of 81 patients from an open-source COVID database with confirmed COVID-19 infection were included in the study. Three patients were excluded. Lung involvement was assessed in 78 patients using CT scans, and the extent of infiltration and collapse was quantified across various lung lobes and regions. The associations between lung involvement and ICU admission were analysed. Additionally, the computer analysis of COVID-19 involvement was compared against a human rating provided by radiological experts. Results: The results showed a higher degree of infiltration and collapse in the lower lobes compared to the upper lobes (P<0.05). No significant difference was detected in the COVID-19-related involvement of the left and right lower lobes. The right middle lobe demonstrated lower involvement compared to the right lower lobe (P<0.05). When examining the regions, significantly more COVID-19 involvement was found when comparing the posterior vs. the anterior halves and the lower vs. the upper half of the lungs. Patients who required ICU admission during their treatment exhibited significantly higher COVID-19 involvement in their lung parenchyma according to computer analysis, compared to patients who remained in general wards. Patients with more than 40% COVID-19 involvement were almost exclusively treated in intensive care.
A high correlation was observed between computer detection of COVID-19 affections and the rating by radiological experts.Conclusions: The findings suggest that the extent of lung involvement, particularly in the lower lobes, dorsal lungs, and lower half of the lungs, may be associated with the need for ICU admission in patients with COVID-19. Computer analysis showed a high correlation with expert rating, highlighting its potential utility in clinical settings for assessing lung involvement. This information may help guide clinical decision-making and resource allocation during ongoing or future pandemics. Further studies with larger sample sizes are warranted to validate these findings.
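The per-lobe involvement quantification described above reduces to counting involved voxels inside a lobe mask. A minimal sketch with hypothetical masks (the study's segmentation pipeline is not shown here); the 40% threshold is the value reported in the abstract:

```python
import numpy as np

def involvement_percent(lobe_mask, involvement_mask):
    """Percent of a lobe's voxels marked as COVID-19 involvement.

    Both arguments are boolean arrays of identical shape, standing in
    for a lobe segmentation and an infiltration/collapse map.
    """
    lobe_voxels = lobe_mask.sum()
    if lobe_voxels == 0:
        return 0.0
    return 100.0 * np.logical_and(lobe_mask, involvement_mask).sum() / lobe_voxels

# toy example: a 1000-voxel "lobe" with 450 involved voxels → 45%,
# above the 40% level that in this cohort was almost exclusively
# associated with ICU treatment
lobe = np.zeros(2000, dtype=bool); lobe[:1000] = True
inv = np.zeros(2000, dtype=bool); inv[:450] = True
pct = involvement_percent(lobe, inv)
high_involvement = pct > 40.0
```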
Abstract:
Feature importance methods promise to provide a ranking of features according to their importance for a given classification task. A wide range of methods exists, but their rankings often disagree, and they are inherently difficult to evaluate due to a lack of ground truth beyond synthetic datasets. In this work, we put feature importance methods to the test on real-world data in the domain of cardiology, where we try to distinguish three specific pathologies from healthy subjects based on ECG features, using the features in cardiologists' decision rules as ground truth. We found that the SHAP and LIME methods and the Chi-squared test all agreed well with the native Random Forest and Logistic Regression feature rankings. Some methods gave inconsistent results, including Maximum Relevance Minimum Redundancy and Neighbourhood Component Analysis. The permutation-based methods generally performed poorly. A surprising result was found in the case of left bundle branch block, where T-wave morphology features were consistently identified as important for diagnosis but are not used by clinicians.
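Disagreement between importance rankings, as discussed above, can be quantified with a rank correlation. A minimal NumPy sketch, assuming tie-free scores; the two importance vectors are made up for illustration and do not come from the study:

```python
import numpy as np

def rank(values):
    """Ranks of the values (0 = smallest); assumes no ties for simplicity."""
    order = np.argsort(values)
    r = np.empty_like(order)
    r[order] = np.arange(len(values))
    return r

def spearman(a, b):
    """Spearman rank correlation between two feature-importance vectors."""
    ra, rb = rank(a).astype(float), rank(b).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))

# hypothetical importance scores from two methods over five ECG features
shap_like = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
perm_like = np.array([0.35, 0.30, 0.05, 0.20, 0.10])
rho = spearman(shap_like, perm_like)   # 1.0 would mean identical rankings
```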
Abstract:
The human heart is subject to highly variable amounts of strain during day-to-day activities and needs to adapt to a wide range of physiological demands. This adaptation is driven by an autoregulatory loop that includes both electrical and mechanical components. In particular, mechanical forces are known to feed back into the cardiac electrophysiology system, which can result in pro- and anti-arrhythmic effects. Despite the widespread use of computational modelling and simulation for cardiac electrophysiology research, the majority of in silico experiments ignore this mechano-electric feedback entirely due to the high computational cost associated with solving cardiac mechanics. In this study, we therefore use an electromechanically coupled whole-heart model to investigate the differential and combined effects of electromechanical feedback mechanisms, with a focus on their physiological relevance during sinus rhythm. In particular, we consider troponin-bound calcium, the effect of deformation on the tissue diffusion tensor, and stretch-activated channels. We found that activation of the myocardium was only significantly affected when including deformation in the diffusion term of the monodomain equation. Repolarization, on the other hand, was influenced by both troponin-bound calcium and stretch-activated channels, resulting in steeper repolarization gradients in the atria. The latter also caused afterdepolarizations in the atria. Due to its central role in tension development, calcium bound to troponin affected stroke volume and pressure. In conclusion, we found that mechano-electric feedback changes activation and repolarization patterns throughout the heart during sinus rhythm and leads to a markedly more heterogeneous electrophysiological substrate.
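Including deformation in the diffusion term, as described above, is commonly done by pulling the conductivity tensor back through the deformation gradient. A sketch of that standard formulation (the study's exact equations may differ):

```latex
% Monodomain equation on the reference configuration:
\chi \left( C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}} \right)
  = \nabla \cdot \left( \boldsymbol{\sigma}_{\mathrm{eff}} \, \nabla V_m \right),
% with the conductivity tensor pulled back through the deformation
% gradient \mathbf{F} (J = \det \mathbf{F}), so that stretch directly
% alters conduction:
\boldsymbol{\sigma}_{\mathrm{eff}} = J \, \mathbf{F}^{-1} \boldsymbol{\sigma} \, \mathbf{F}^{-T}.
```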
Abstract:
We investigate the properties of static mechanical and dynamic electro-mechanical models for the deformation of the human heart. Numerically, this is realized by a staggered scheme for the coupled partial/ordinary differential equation (PDE-ODE) system. First, we consider a static and purely mechanical benchmark configuration on a realistic geometry of the human ventricles. Using a penalty term for quasi-incompressibility, we test different parameters and mesh sizes and observe that this approach is not sufficient for lowest-order conforming finite elements. Then, we compare the active stress and active strain approaches for cardiac muscle contraction. Finally, in a coupled, anatomically realistic electro-mechanical model, we compare numerical Newmark damping with a visco-elastic model using Rayleigh damping. Nonphysiological oscillations can be better mitigated using viscosity.
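The Rayleigh damping compared above has, in its standard form, a damping matrix built from the mass and stiffness matrices. A sketch of that textbook formulation (the coefficients are model parameters, not values from the paper):

```latex
% Visco-elastic damping as a combination of mass and stiffness:
\mathbf{C} = \alpha \mathbf{M} + \beta \mathbf{K},
% entering the semi-discrete equation of motion
\mathbf{M}\ddot{\mathbf{u}} + \mathbf{C}\dot{\mathbf{u}}
  + \mathbf{f}_{\mathrm{int}}(\mathbf{u}) = \mathbf{f}_{\mathrm{ext}}.
% Newmark damping, by contrast, is purely numerical: the time-stepping
% scheme itself dissipates energy for Newmark parameter \gamma > 1/2.
```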
Abstract:
Introduction: The role of the right atrium (RA) in atrial fibrillation (AF) has long been overlooked. Computer models of the atria can aid in assessing how the RA influences arrhythmia vulnerability and in studying the role of RA drivers in the induction of AF, both aspects challenging to assess in living patients until now. It remains unclear whether the incorporation of the RA influences the propensity of the model to reentry induction. As personalized ablation strategies rely on non-inducibility criteria, the adequacy of left atrium (LA)-only models for developing such ablation tools is uncertain. Aim: To evaluate the effect of incorporating the RA in 3D patient-specific computer models on arrhythmia vulnerability. Methods: Imaging data from 8 subjects were obtained to generate patient-specific computer models. For each subject, we created 2 models: one monoatrial model consisting only of the LA, and one biatrial model consisting of both the RA and LA. We considered 3 different states of substrate remodeling: healthy (H), mild (M), and severe (S). The Courtemanche et al. cellular model was modified from control conditions to a setup representing AF-induced remodeling, with 0%, 50%, and 100% changes for H, M, and S, respectively. Conduction velocity along the myocyte preferential direction was set to 1.2, 1.0, and 0.8 m/s for each remodeling level. To incorporate fibrotic substrate, we manually placed six seeds on each biatrial model, 3 in the LA and 3 in the RA, corresponding to regions with the most frequent enhancement (IIR>1.2) in LGE-MRI. The extent of the fibrotic substrate corresponded to the Utah 2 (5-20%) and Utah 4 (>35%) stages for M and S, respectively, while the H state was modeled without fibrosis. Electrical propagation in the atria was modeled using the monodomain equation solved with openCARP. Arrhythmia vulnerability was assessed by virtual S1S2 pacing from different points separated by 2 cm.
A point was classified as inducing arrhythmia if reentry was initiated and maintained for at least 1 s. The vulnerability ratio was defined as the number of inducing points divided by the number of stimulation points. The mean tachycardia cycle length (TCL) of the induced arrhythmia was assessed at the stimulation site. We compared the vulnerability ratio of the LA in monoatrial and biatrial configurations. Results: The incorporation of the RA increased the mean LA vulnerability ratio by 115.79% (0.19±0.13 to 0.41±0.22, p=0.033) in state M, and by 29.03% in state S (0.31±0.14 to 0.40±0.15, p=0.219), as illustrated in Figure 1. No arrhythmia was induced in the H models. RA inclusion increased the TCL of LA reentries by 5.51% (186.9±13.3 ms to 197.2±18.3 ms, p=0.006) in scenario M, and decreased it by 7.17% (224.3±27.6 ms to 208.2±34.8 ms, p=0.010) in scenario S. RA inclusion resulted in an elevated LA inducibility, revealing 4.9±3.3 additional points per patient in the LA for the biatrial model that did not induce reentry in the monoatrial model. Conclusions: The LA vulnerability in a biatrial model differs from the LA vulnerability in a monoatrial model. Incorporating the RA in patient-specific computational models unmasked potential inducing points in the LA. The RA had a substrate-dependent effect on reentry dynamics, altering the TCL of LA-induced reentries. Our results provide evidence for an important role of the RA in the maintenance and induction of arrhythmia in patient-specific computational models, thus suggesting the use of biatrial models.
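The vulnerability ratio and the reported percentage changes follow directly from the definitions above. A minimal sketch (function names are illustrative); the example numbers are the state-M values reported in this abstract:

```python
def vulnerability_ratio(inducing_points, stimulation_points):
    """Fraction of pacing sites that initiated a reentry sustained >= 1 s."""
    return inducing_points / stimulation_points

def percent_change(before, after):
    """Relative change, in percent, from the monoatrial to the biatrial value."""
    return 100.0 * (after - before) / before

# state M, monoatrial vs. biatrial, as reported above
ratio_increase = percent_change(0.19, 0.41)    # ≈ 115.79 %
tcl_increase = percent_change(186.9, 197.2)    # ≈ 5.51 %
```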
Abstract:
This Bachelor’s thesis investigates the use of Convolutional Neural Networks (CNNs) for the segmentation of the Foveal Avascular Zone (FAZ) in Optical Coherence Tomography Angiography (OCTA) images. Given the challenge of directly comparing results across different research groups, this work focuses on analyzing three key factors that could influence comparability: dataset size, model complexity, and the evaluation metrics used. By generating four datasets of varying sizes through data augmentation and training state-of-the-art segmentation architectures with uniform hyperparameters, these factors were evaluated using a 6-fold cross-validation approach. Initial findings highlighted performance disparities among networks across different datasets, suggesting that factors beyond dataset size and model complexity might play significant roles. Despite these disparities, it was observed that neither larger dataset sizes nor increased model complexity necessarily leads to better segmentation performance. Moreover, reliance solely on technical metrics like the Dice coefficient was found inadequate for a comprehensive assessment of segmentation outcomes. In response, a new benchmark design was proposed to provide a consistent and transparent basis for evaluating and comparing future research findings, aiming to account for the observed performance variations. This work offers insights into the challenges of comparability in FAZ segmentation with CNNs and proposes ways to address these challenges. It aims to stimulate discussion on standardized evaluation methods in this field.
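The Dice coefficient mentioned above is the standard overlap metric for segmentation masks. A minimal sketch; the toy masks are invented for illustration and are not FAZ data:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two boolean segmentation masks:
    2*|A ∩ B| / (|A| + |B|), with the empty-vs-empty case defined as 1."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# toy 1D masks standing in for FAZ segmentations: 100 predicted pixels,
# 90 ground-truth pixels, 80 of them overlapping
truth = np.zeros(200, bool); truth[:90] = True
pred = np.zeros(200, bool); pred[10:110] = True
score = dice(pred, truth)   # 2*80 / (100+90) ≈ 0.842
```

As the thesis argues, a single number like this can hide clinically relevant differences in mask shape, which motivates reporting additional metrics.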
Abstract:
Atrial fibrillation (AF) is one of the most common arrhythmias and is associated with high mortality and morbidity rates. Due to its age-dependent incidence rate and the aging population, the incidence and thus the financial costs to the health system are likely to rise significantly in the future. Furthermore, the current treatment of AF patients is suboptimal, with success rates of only 20% to 60% for patients in advanced stages of AF. In the last decades, various factors have been investigated to enhance the understanding of AF pathophysiology, including anatomical, electrophysiological, and tissue-related factors. Conducting in-vivo and clinical studies to investigate the complex interplay of these factors during AF is challenging due to patient variability, data collection complexity, and ethical constraints, among others. Therefore, studies comparing these factors against each other are still sparse. Computer models of the atria serve as a valuable tool to address some of these challenges, offering a controlled and reproducible environment. This thesis aimed to provide additional quantitative insights into the influences of biatrial anatomical factors, such as volume and sphericity, and tissue-related factors, including fibrosis extent and fibrosis density, on arrhythmia vulnerability. Through in-silico experiments, a vulnerability assessment to reentries was conducted by applying an S1S2 protocol to pacing points distributed around both atria. The vulnerability was defined as the number of pacing points inducing reentries divided by the total number of pacing points on both atria. In total, sixteen biatrial bilayer models, derived from instances of a statistical shape model (SSM), were generated. To investigate the influence of fibrosis on arrhythmia vulnerability, a fibrosis atlas was generated from late gadolinium enhancement magnetic resonance imaging (LGE-MRI) data obtained from fifty-four patients at two independent institutions.
Fourteen fibrotic distributions were generated and mapped to the biatrial models. In addition, the mean fibrosis density in the boundary zone between fibrotic and non-fibrotic tissue was varied across five different values, ranging from 26% to 75%. Using the Utah classification score, the models were further categorized into one of four Utah stages. To compare the influence of anatomical factors and fibrosis factors, the vulnerability assessment was run with the same biatrial models with and without fibrosis. This comparison allowed for the calculation of a vulnerability factor between both scenarios. In the sixteen biatrial simulations without fibrosis and with varying sphericity and volume values, sphericity exhibited a higher correlation with the resulting arrhythmia vulnerability than volume, with 0.37 being the highest correlation coefficient for the combined sphericity of both atria. The RA volume obtained the highest correlation coefficient of 0.18 among all volume factors. With the addition of fibrosis, Utah stage 3 (20-30% fibrosis) models showed the highest vulnerability (0.76). Furthermore, the Utah stage 3 models obtained the highest vulnerability factor between simulations with and without fibrosis, equal to 3.28. Apart from Utah stage 2 (10-20% fibrosis), these factors ranged from around 2 up to this highest value. Moreover, using the same fibrosis map of Utah 3 and Utah 4 on four different biatrial models revealed a variation of the factor from 1.73 to 3.28 (Utah 3) and 2.0 to 2.62 (Utah 4). The change in fibrosis density led to a 20% vulnerability increase, exhibiting a steady rise in vulnerability from less dense to more dense fibrotic tissue in the boundary zone. The results obtained in this thesis suggest a higher influence of sphericity compared to volume on arrhythmia vulnerability. After the addition of fibrosis, the vulnerability doubled or, in one case, even tripled (3.28).
Thus, the influence of fibrosis extent is at least double that of the anatomical factors. However, a high impact of the geometry on the influence of the fibrosis extent was observed.
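The vulnerability factor used throughout this comparison is simply the ratio of the vulnerabilities with and without fibrosis. A minimal sketch with hypothetical pacing-point counts (the thesis' actual counts are not reproduced here):

```python
def vulnerability(inducing_points, total_points):
    """Share of pacing points that induced a reentry."""
    return inducing_points / total_points

def vulnerability_factor(v_with_fibrosis, v_without_fibrosis):
    """How many times more vulnerable the same geometry becomes with fibrosis."""
    return v_with_fibrosis / v_without_fibrosis

# hypothetical example: 19 of 25 points induce with fibrosis, 6 of 25 without
v_fib = vulnerability(19, 25)     # 0.76
v_none = vulnerability(6, 25)     # 0.24
factor = vulnerability_factor(v_fib, v_none)
```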
Abstract:
Cardiovascular diseases are responsible for an estimated 17.9 million deaths each year, according to the WHO. The electrocardiogram (ECG) stands out as the most widely employed method for assessing the heart’s condition, given its non-invasive nature and ease of recording. While millions of signals are recorded every day, ECG data available for research and the development of new analysis tools is scarce due to patient protection and the limited time of healthcare professionals. Therefore, much of today’s software development relies on a few dozen publicly available datasets, offering only a limited representation of the ECG signals recorded daily in clinics. In an effort to address the challenge of limited ECG data for research, this thesis explores a method for encoding and augmenting ECG data using machine learning. An autoencoder is trained on existing ECG data to acquire a compressed representation of ECG signals and extract essential components from them. These extracted elements serve as a condensed yet informative summary of the original signals. Subsequently, the aim is to adjust and augment given ECG signals by modifying the extracted features. To simplify the data used, the vectorcardiographic (VCG) representation of ECG signals is employed, reducing the required leads from twelve to three. The generated signals are heartbeats aligned at the R-peak with a fixed time window size. The intention is for these individual heartbeats to be stitched together to create a longer signal in a subsequent step. The first aim of this project was to find a Variational Autoencoder (VAE) model of low complexity that is able to achieve an acceptable reconstruction loss. Subsequently, the VAE was trained on a large dataset to learn a compressed representation in the latent space, capable of reconstructing the signal with high accuracy.
Afterwards, transfer learning was employed to retrain the VAE on signals from a single patient, in order to incorporate learned morphologies from other patients into this specific patient’s signal. I achieved a latent space representation in which each variable represents specific morphologies of the VCG, with little correlation between variables. The research has shown that this machine learning approach has improved capabilities compared to principal component analysis (PCA) regarding the ability to construct and augment unseen VCG signals. Transfer learning proved helpful for achieving better generalization and faster convergence of the model, but the transfer of morphologies between patients did not meet the desired expectations. Lastly, potential improvements and further methods are suggested, aiming to improve various aspects of the latent space.
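The VAE's latent space sampling rests on the reparameterization trick and a KL regularizer. A NumPy sketch of these two standard ingredients (the thesis uses a full trained model; the latent size of 8 is an arbitrary stand-in for the compressed VCG representation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Lets a VAE draw a latent sample while keeping the draw differentiable
    with respect to mu and log_var (sketched here without autograd).
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """Per-sample KL(q(z|x) || N(0, I)) term of the VAE loss."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

# one latent code of size 8; at mu = 0, log_var = 0 the KL term vanishes
mu = np.zeros((1, 8)); log_var = np.zeros((1, 8))
z = reparameterize(mu, log_var)
```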
Abstract:
The most common arrhythmia worldwide is atrial fibrillation (AF), recognized as a substantial public health burden due to its rising incidence. Patients affected by AF face elevated risks for stroke, myocardial infarction, and mortality. Moreover, current treatment approaches often prove ineffective, resulting in a high recurrence rate. Hence, there is an urgent need for further investigation into the mechanisms underlying AF to advance treatment strategies. The objective of this study was to assess the impact of the morphology of the conduction velocity (CV) restitution curve on reentry events. We evaluated this influence using metrics such as the vulnerability window, the average reentry duration, and the dominant frequency. By conducting this vulnerability assessment, the aim was to establish correlations between the morphology of the CV restitution curve and these key features. We investigated the impact of using the pacing cycle length (PCL) and the diastolic interval (DI) on the restitution curve through simulations in the monodomain model. Additionally, the influence of the maximum longitudinal CV on the CV restitution curve was analyzed. Clinical CV restitution curves of 13 patients with persistent AF, measured at various atrial locations, were employed in simulations on a 2D tissue slab utilizing the diffusion reaction eikonal alternant model (DREAM) to simulate electrical wave propagation with a personalized ionic model (Courtemanche) for the action potential (AP). The vulnerability assessment was done using an S1-S2 protocol. The experiments encompassed diverse morphologies of restitution curves and varying maximal longitudinal CV values. Moreover, experiments with heterogeneous meshes using two different restitution curve morphologies were conducted. No notable influence of the maximal longitudinal CV on the morphology of the CV restitution curve was identified.
Moreover, the ionic model was successfully personalized using a function that interpolated conductance values between healthy and AF tissue. Additionally, a correlation between the steepness of the CV restitution curve and the vulnerability window, average reentry duration, and dominant frequency was established. Nevertheless, this work has limitations regarding the data acquisition and the model used for the electrophysiological simulations, although it was shown that tissue with a shallow CV restitution curve is more vulnerable to AF and maintains it longer. In summary, the CV restitution curve proved to be a crucial factor for reentry events, promising to improve vulnerability assessment and treatment outcomes of AF.
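A CV restitution curve of the kind discussed above is often described by a mono-exponential form; a minimal sketch with hypothetical parameters (this is an illustrative functional form, not the fit used in the thesis):

```python
import numpy as np

def cv_restitution(di, cv_max, a, tau):
    """Mono-exponential CV restitution: CV(DI) = CV_max * (1 - a*exp(-DI/tau)).

    di: diastolic interval in ms; cv_max: plateau CV in m/s.
    A larger a or smaller tau makes the curve steeper, i.e. CV drops
    faster at short diastolic intervals.
    """
    return cv_max * (1.0 - a * np.exp(-np.asarray(di, float) / tau))

# hypothetical parameters: CV saturates near 0.8 m/s for long DIs
di = np.array([50.0, 100.0, 300.0, 1000.0])   # ms
cv = cv_restitution(di, cv_max=0.8, a=0.5, tau=80.0)
```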