Abstract:
Approximating the fast dynamics of depolarization waves in the human heart described by the monodomain model is numerically challenging. Splitting methods for the PDE-ODE coupling enable the computation with very fine space and time discretizations. Here, we compare different splitting approaches regarding convergence, accuracy, and efficiency. Simulations were performed for a benchmark problem with the Beeler-Reuter cell model on a truncated ellipsoid approximating the left ventricle including a localized stimulation. For this configuration, we provide a reference solution for the transmembrane potential. We found a semi-implicit approach with state variable interpolation to be the most efficient scheme. The results are transferred to a more physiological setup using a bi-ventricular domain with a complex external stimulation pattern to evaluate the accuracy of the activation time for different resolutions in space and time.
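As an illustration of the splitting idea only (not the benchmark or the solver used in the paper), the sketch below applies first-order reaction-diffusion splitting to a 1D monodomain cable, with dimensionless FitzHugh-Nagumo kinetics standing in for the Beeler-Reuter model; all parameters, step sizes and the stimulus are illustrative assumptions.

```python
# Minimal sketch of first-order (Godunov) reaction-diffusion splitting for a 1D
# monodomain cable, using dimensionless FitzHugh-Nagumo kinetics as a stand-in
# for the Beeler-Reuter model of the benchmark; all parameters are illustrative.
import numpy as np

nx, dx, dt, n_steps = 200, 0.5, 0.05, 4000
D = 0.1                              # dimensionless diffusion coefficient

v = np.zeros(nx)                     # normalized transmembrane variable
w = np.zeros(nx)                     # recovery variable
v[:10] = 1.0                         # localized stimulus at one end

# Semi-implicit diffusion sub-step: (I - dt*D/dx^2 * L) v_new = v_old, no-flux ends
L = (np.diag(-2.0 * np.ones(nx))
     + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1))
L[0, 1] = L[-1, -2] = 2.0
A_inv = np.linalg.inv(np.eye(nx) - dt * D / dx**2 * L)   # a real solver would factorize instead

def reaction_step(v, w, dt, a=0.13, eps=0.004, gamma=2.5):
    """Explicit Euler sub-step of the FitzHugh-Nagumo cell kinetics."""
    dv = v * (v - a) * (1.0 - v) - w
    dw = eps * (v - gamma * w)
    return v + dt * dv, w + dt * dw

for _ in range(n_steps):
    v, w = reaction_step(v, w, dt)   # 1) cell-model ODEs (reaction)
    v = A_inv @ v                    # 2) semi-implicit diffusion (PDE)

print("nodes above threshold:", int((v > 0.5).sum()))
```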
Abstract:
Sinus node (SN) pacemaking is based on a coupling between surface membrane ion-channels and intracellular Ca2+-handling. The fundamental role of the inward Na+/Ca2+ exchanger (NCX) is firmly established. However, little is known about the reverse mode exchange. A simulation study attributed an important role to reverse NCX activity, but experimental evidence is still missing. Whole-cell and perforated patch-clamp experiments were performed on rabbit SN cells supplemented with fluorescent Ca2+-tracking. We established 2 and 8 mM pipette NaCl groups to suppress and enable reverse NCX. NCX was assessed by specific block with 1 μM ORM-10962. Mechanistic simulations were performed with the Maltsev–Lakatta minimal computational SN model. Active reverse NCX resulted in larger Ca2+-transient amplitude with larger SR Ca2+-content. Spontaneous action potential (AP) frequency increased with 8 mM NaCl. When reverse NCX was facilitated by 1 μM strophantin, Ca2+i and the spontaneous rate increased. ORM-10962 applied prior to strophantin prevented Ca2+i and AP cycle change. Computational simulations indicated gradually increasing reverse NCX current, Ca2+i and heart rate with increasing Na+i. Our results provide further evidence for the role of reverse NCX in SN pacemaking. The reverse NCX activity may provide additional Ca2+-influx that could increase SR Ca2+-content, which consequently leads to enhanced pacemaking activity.
Abstract:
The sinoatrial node (SAN) is a complex structure that spontaneously depolarizes rhythmically ("pacing") and excites the surrounding non-automatic cardiac cells ("drive") to initiate each heart beat. However, the mechanisms by which the SAN cells can activate the large and hyperpolarized surrounding cardiac tissue are incompletely understood. Experimental studies demonstrated the presence of an insulating border that separates the SAN from the hyperpolarizing influence of the surrounding myocardium, except at a discrete number of sinoatrial exit pathways (SEPs). We propose a highly detailed 3D model of the human SAN, including 3D SEPs, to study the requirements for successful electrical activation of the primary pacemaking structure of the human heart. A total of 788 simulations investigate the ability of the SAN to pace and drive with different heterogeneous characteristics of the nodal tissue (gradient and mosaic models) and myocyte orientation. A sigmoidal distribution of the tissue conductivity combined with a mosaic model of SAN and atrial cells in the SEP was able to drive the right atrium (RA) at varying rates induced by gradual If block. Additionally, we investigated the influence of the SEPs by varying their number, length, and width. SEPs created a transition zone of transmembrane voltage and ionic currents to enable successful pace and drive. Unsuccessful simulations showed a hyperpolarized transmembrane voltage (-66 mV), which blocked the L-type channels and attenuated the sodium-calcium exchanger. The fiber direction influenced the SEPs that preferentially activated the crista terminalis (CT). The location of the leading pacemaker site (LPS) shifted toward the SEP-free areas. LPSs were located closer to the SEP-free areas (3.46 ± 1.42 mm), where the hyperpolarizing influence of the CT was reduced, compared with a larger distance from the LPS to the areas where SEPs were located (7.17 ± 0.98 mm). This study identified the geometrical and electrophysiological aspects of the 3D SAN-SEP-CT structure required for successful pace and drive in silico.
Abstract:
Interatrial conduction block refers to a disturbance in the propagation of electrical impulses in the conduction pathways between the right and the left atrium. It is a risk factor for atrial fibrillation, stroke, and premature death. Clinical diagnostic criteria comprise an increased P wave duration and biphasic P waves in leads II, III and aVF due to retrograde activation of the left atrium. Machine learning algorithms could improve the diagnosis but require a large-scale, well-controlled and balanced dataset. In silico electrocardiogram (ECG) signals, optimally obtained from a statistical shape model to cover anatomical variability, carry the potential to produce an extensive database meeting the requirements for successful machine learning application. We generated the first in silico dataset including interatrial conduction block of 9,800 simulated ECG signals based on a bi-atrial statistical shape model. Automated feature analysis was performed to evaluate P wave morphology, duration and P wave terminal force in lead V1. Increased P wave duration and P wave terminal force in lead V1 were found for models with interatrial conduction block compared to healthy models. A wide variability of P wave morphology was detected for models with interatrial conduction block. Contrary to previous assumptions, our results suggest that a biphasic P wave morphology seems to be neither necessary nor sufficient for the diagnosis of interatrial conduction block. The presented dataset is ready for a classification with machine learning algorithms and can be easily extended.
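The two P-wave features named above can be illustrated with a small sketch. The delineation, the synthetic V1 signal and the helper names (p_wave_duration_ms, p_terminal_force_v1) are assumptions made for illustration only, not the feature extraction used for the dataset.

```python
# Hedged sketch of two P-wave features: P-wave duration and the P-wave terminal
# force in lead V1 (depth x duration of the terminal negative deflection).
# Assumes the P wave has already been delineated; signal and indices are synthetic.
import numpy as np

fs = 1000                                          # sampling rate in Hz
t = np.arange(0, 0.14, 1 / fs)
p_v1 = 0.08 * np.sin(np.pi * t / 0.14)             # synthetic biphasic P wave in V1 (mV)
p_v1[t > 0.08] = -0.04 * np.sin(np.pi * (t[t > 0.08] - 0.08) / 0.06)

p_onset, p_offset = 0, len(p_v1) - 1               # assumed delineation result

def p_wave_duration_ms(onset, offset, fs):
    return 1000.0 * (offset - onset) / fs

def p_terminal_force_v1(p_v1, fs):
    """Amplitude (mV) x duration (ms) of the terminal negative phase in V1."""
    neg = np.where(p_v1 < 0)[0]
    if neg.size == 0:
        return 0.0
    terminal = p_v1[neg[0]:]                       # terminal negative deflection
    depth_mv = abs(terminal.min())
    duration_ms = 1000.0 * terminal.size / fs
    return depth_mv * duration_ms                  # mV*ms

print("P duration:", p_wave_duration_ms(p_onset, p_offset, fs), "ms")
print("PTF-V1:", round(p_terminal_force_v1(p_v1, fs), 2), "mV*ms")
```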
Abstract:
Atrial fibrillation is responsible for a significant and steadily rising disease burden. Simultaneously, the treatment options for atrial fibrillation are far from optimal. Personalized simulations of cardiac electrophysiology could assist clinicians in risk stratification and therapy planning for atrial fibrillation. However, the use of personalized simulations in clinics is currently not possible due to either too high computational costs or insufficient accuracy. Eikonal simulations come with low computational costs but cannot replicate the influence of cardiac tissue geometry on the conduction velocity of the wave propagation. Consequently, they currently lack the accuracy required for clinical application. Biophysically detailed simulations, on the other hand, are accurate but associated with too high computational costs. To tackle this issue, a regression model was created based on biophysically detailed bidomain simulation data. This regression formula calculates the conduction velocity depending on the thickness and curvature of the heart wall. Afterwards, the formula was implemented in the eikonal model with the goal of increasing the accuracy of the eikonal model without losing its advantage of computational efficiency. The results of the modified eikonal simulations demonstrate that (i) the local activation times become significantly closer to those of the biophysically detailed bidomain simulations, (ii) the already low sensitivity of the eikonal model to the mesh resolution was reduced further, and (iii) the unrealistic occurrence of endo-epicardial dissociation in simulations was remedied. The results suggest that the accuracy of the eikonal model was significantly increased. At the same time, the additional computational costs caused by the implementation of the regression formula are negligible. In conclusion, a successful step towards a more accurate and fast computational model of cardiac electrophysiology was achieved.
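To make the idea concrete, here is a hedged sketch: a placeholder regression with made-up coefficients maps wall thickness and curvature to a conduction velocity, which then scales the local speed of a simple Dijkstra-based eikonal approximation. It is not the paper's regression formula or solver.

```python
# Hedged sketch: a placeholder regression (illustrative coefficients) maps local
# wall thickness and curvature to a conduction velocity, which scales the local
# speed of a graph-based (Dijkstra) approximation of the eikonal activation times.
import heapq
import numpy as np

def cv_regression(thickness_mm, curvature_1_per_mm):
    """Placeholder for the fitted regression; coefficients are illustrative only."""
    return np.clip(0.6 + 0.05 * thickness_mm - 0.8 * curvature_1_per_mm, 0.2, 1.2)  # m/s

def activation_times(cv_map, dx_mm, source):
    """Dijkstra-based eikonal approximation on a 2D grid (4-neighborhood)."""
    ny, nx = cv_map.shape
    lat = np.full((ny, nx), np.inf)
    lat[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > lat[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # edge traversal time in ms: distance / harmonic-mean CV
                cv = 2.0 / (1.0 / cv_map[i, j] + 1.0 / cv_map[ni, nj])
                t_new = t + dx_mm / cv            # mm / (m/s) = ms
                if t_new < lat[ni, nj]:
                    lat[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, (ni, nj)))
    return lat

thickness = np.random.default_rng(0).uniform(1.0, 4.0, (40, 40))   # mm
curvature = np.random.default_rng(1).uniform(0.0, 0.3, (40, 40))   # 1/mm
cv_map = cv_regression(thickness, curvature)
lat = activation_times(cv_map, dx_mm=0.5, source=(0, 0))
print("latest activation: %.1f ms" % lat.max())
```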
Abstract:
Craniosynostosis is a congenital disease characterized by the premature closure of one or multiple sutures of the infant's skull. For diagnosis, 3D photogrammetric scans are a radiation-free alternative to computed tomography. However, data is only sparsely available and the role of data augmentation for the classification of craniosynostosis has not yet been analyzed. In this work, we use a 2D distance map representation of the infants' heads with a convolutional-neural-network-based classifier and employ a generative adversarial network (GAN) for data augmentation. We simulate two data scarcity scenarios with 15 % and 10 % training data and test the influence of different degrees of added synthetic data and of balancing under-represented classes. We used total accuracy and F1-score as metrics to evaluate the final classifiers. For 15 % training data, the GAN-augmented dataset showed an increase in F1-score of up to 0.1 and in classification accuracy of up to 3 %. For 10 % training data, both metrics decreased. We present a deep convolutional GAN capable of creating synthetic data for the classification of craniosynostosis. Using a moderate amount of synthetic data from the GAN showed slightly better performance, but had little effect overall. The simulated scarcity scenario of 10 % training data may have limited the model's ability to learn the underlying data distribution.
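A minimal sketch of the augmentation protocol follows; the arrays, class order and the stand-in for the trained generator are placeholders and do not reproduce the study's GAN, data or exact fractions.

```python
# Hedged sketch of the augmentation protocol: synthetic samples are added to a
# small real training split, optionally oversampling under-represented classes;
# arrays, class order and the "generator" are placeholders, not the study's GAN.
import numpy as np

rng = np.random.default_rng(42)
# assumed class order: control, sagittal, metopic, coronal (illustrative counts)
X_real = rng.normal(size=(145, 64, 64))                  # 2D distance maps (placeholder)
y_real = np.repeat(np.arange(4), [80, 40, 15, 10])       # imbalanced "real" split

def sample_generator(n):
    """Stand-in for drawing n images from a trained GAN generator."""
    return rng.normal(size=(n, 64, 64))

def augment(X, y, synth_fraction=0.5, balance=True):
    counts = np.bincount(y)
    X_aug, y_aug = [X], [y]
    for c, n_c in enumerate(counts):
        n_synth = int(synth_fraction * n_c)              # fixed share of synthetic data
        if balance:
            n_synth += int(counts.max() - n_c)           # top up minority classes
        X_aug.append(sample_generator(n_synth))
        y_aug.append(np.full(n_synth, c))
    return np.concatenate(X_aug), np.concatenate(y_aug)

X_train, y_train = augment(X_real, y_real, synth_fraction=0.5, balance=True)
print("class counts after augmentation:", np.bincount(y_train))
```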
Abstract:
Conduction velocity (CV) slowing is associated with atrial fibrillation (AF) and reentrant ventricular tachycardia (VT). Clinical electroanatomical mapping systems used to localize AF or VT sources as ablation targets remain limited by the number of measuring electrodes and by the signal processing methods used to generate high-density local activation time (LAT) and CV maps of heterogeneous atrial or trabeculated ventricular endocardium. The morphology and amplitude of bipolar electrograms depend on the direction of the propagating electrical wavefront, making the identification of low-amplitude signal sources, commonly associated with fibrotic areas, difficult. In comparison, unipolar electrograms are not sensitive to the wavefront direction, but the measurements are susceptible to distal activity. This study proposes a method for local CV calculation from optical mapping measurements, termed the circle method (CM). The local CV is obtained as a weighted sum of CV values calculated along different chords spanning a circle of predefined radius centered at a CV measurement location. As a distinct maximum in LAT differences occurs along the chord normal to the propagating wavefront, the method adapts to changes in the propagation direction and is thus suitable for characterizing the electrical conductivity of heterogeneous myocardium. In numerical simulations, CM was validated by characterizing modeled ablated areas as zones of distinct CV slowing. Experimentally, CM was used to characterize lesions created by radiofrequency ablation (RFA) on isolated rat and guinea pig hearts and on explanted human hearts. To infer the depth of RFA-created lesions, excitation light bands of different penetration depths were used, and a beat-to-beat CV difference analysis was performed to identify CV alternans. Despite being limited to laboratory research, studies based on CM with optical mapping may lead to new translational insights into better-guided ablation therapies.
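A simplified sketch of the circle-method idea as summarized above, evaluated on a synthetic anisotropic plane-wave LAT field; the radius, chord count and weighting are illustrative choices and not the paper's exact formulation.

```python
# Simplified circle-method sketch: local CV at a point is a weighted combination
# of chord-wise CV estimates on a circle around it, with chords showing the
# largest LAT difference (closest to the propagation direction) weighted most.
import numpy as np

def lat_field(x, y, cv_x=0.4, cv_y=0.8):
    """Synthetic local activation time map (ms) of an anisotropic plane wave."""
    return x / cv_x + y / cv_y                     # mm / (mm/ms) = ms

def circle_method_cv(x0, y0, radius_mm=2.0, n_chords=18):
    cvs, weights = [], []
    for k in range(n_chords):
        theta = np.pi * k / n_chords               # chord orientation
        dx, dy = radius_mm * np.cos(theta), radius_mm * np.sin(theta)
        dlat = lat_field(x0 + dx, y0 + dy) - lat_field(x0 - dx, y0 - dy)
        if abs(dlat) < 1e-9:
            continue                               # chord parallel to the wavefront
        cvs.append(2.0 * radius_mm / abs(dlat))    # mm/ms
        weights.append(abs(dlat))                  # emphasize chords along propagation
    return np.average(cvs, weights=weights)

print("local CV estimate: %.2f mm/ms" % circle_method_cv(5.0, 5.0))
```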
Abstract:
Aims Atrial flutter (AFlut) is a common re-entrant atrial tachycardia driven by self-sustainable mechanisms that cause excitations to propagate along pathways different from sinus rhythm. Intra-cardiac electrophysiological mapping and catheter ablation are often performed without detailed prior knowledge of the mechanism perpetuating AFlut, likely prolonging the procedure time of these invasive interventions. We sought to discriminate the AFlut location [cavotricuspid isthmus-dependent (CTI), peri-mitral, and other left atrium (LA) AFlut classes] with a machine learning-based algorithm using only the non-invasive signals from the 12-lead electrocardiogram (ECG). Methods and results A hybrid 12-lead ECG dataset of 1769 signals was used (1424 in silico ECGs, and 345 clinical ECGs from 115 patients—three different ECG segments over time were extracted from each patient corresponding to single AFlut cycles). Seventy-seven features were extracted. A decision tree classifier with a hold-out classification approach was trained, validated, and tested on the dataset randomly split after selecting the most informative features. The clinical test set comprised 38 patients (114 clinical ECGs). The classifier yielded 76.3% accuracy on the clinical test set with a sensitivity of 89.7%, 75.0%, and 64.1% and a positive predictive value of 71.4%, 75.0%, and 86.2% for the CTI, peri-mitral, and other LA classes, respectively. Considering the majority vote of the three segments taken from each patient, the CTI class was correctly classified in 92% of cases. Conclusion Our results show that a machine learning classifier relying only on non-invasive signals can potentially identify the location of AFlut mechanisms. This method could aid in planning and tailoring patient-specific AFlut treatments.
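A minimal sketch of such a feature-based pipeline (hold-out split, univariate feature selection, decision tree) on random placeholder features; it mirrors the setup described above but is not the study's implementation or data.

```python
# Hedged sketch of a hold-out decision-tree classification with simple feature
# selection; features and labels are random placeholders (not the hybrid dataset).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1769, 77))                 # 77 ECG features per signal (placeholder)
y = rng.integers(0, 3, size=1769)               # 0: CTI, 1: peri-mitral, 2: other LA

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(
    SelectKBest(score_func=f_classif, k=20),    # keep the most informative features
    DecisionTreeClassifier(max_depth=6, random_state=0))
clf.fit(X_train, y_train)

print(classification_report(
    y_test, clf.predict(X_test), target_names=["CTI", "peri-mitral", "other LA"]))
```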
Abstract:
In laparoscopic liver surgery (LLS), image-guided navigation systems could support the surgeon by providing subsurface information such as the positions of tumors and vessels. For this purpose, one option is to perform a registration of preoperative 3D data and 3D surface patches reconstructed from laparoscopic images. Part of an automatic 3D registration pipeline is the feature description, which takes into account various geometric and spatial information. Since there is no leading feature descriptor in the field of LLS, two feature descriptors are compared in this paper: the Fast Point Feature Histogram (FPFH) and Triple Orthogonal Local Depth Images (TOLDI). To evaluate their performance, three perturbations were induced: varying surface patch sizes, spatial displacement, and Gaussian deformation. Registration was performed using the RANSAC algorithm. FPFH outperformed TOLDI for small surface patches and in case of Gaussian deformations in terms of registration accuracy. In contrast, TOLDI showed lower registration errors for patches with spatial displacement. While developing a 3D-3D registration pipeline, the choice of the feature descriptor is of importance; consequently, a careful choice suitable for the application in LLS is necessary.
Abstract:
Background: Craniosynostosis is a condition caused by the premature fusion of skull sutures, leading to irregular growth patterns of the head. Three-dimensional photogrammetry is a radiation-free alternative to the diagnosis using computed tomography. While statistical shape models have been proposed to quantify head shape, no shape-model-based classification approach has been presented yet. Methods: We present a classification pipeline that enables an automated diagnosis of three types of craniosynostosis. The pipeline is based on a statistical shape model built from photogrammetric surface scans. We made the model and pathology-specific submodels publicly available, making it the first publicly available craniosynostosis-related head model, as well as the first focusing on infants younger than 1.5 years. To the best of our knowledge, we performed the largest classification study for craniosynostosis to date. Results: Our classification approach yields an accuracy of 97.8 %, comparable to other state-of-the-art methods using both computed tomography scans and stereophotogrammetry. Regarding the statistical shape model, we demonstrate that our model performs similarly to other statistical shape models of the human head. Conclusion: We present a state-of-the-art shape-model-based classification approach for a radiation-free diagnosis of craniosynostosis. Our publicly available shape model enables the assessment of craniosynostosis on realistic and synthetic data.
Abstract:
Objective: To investigate cardiac activation maps estimated using electrocardiographic imaging and to find methods reducing line-of-block (LoB) artifacts, while preserving real LoBs. Methods: Body surface potentials were computed for 137 simulated ventricular excitations. Subsequently, the inverse problem was solved to obtain extracellular potentials (EP) and transmembrane voltages (TMV). From these, activation times (AT) were estimated using four methods and compared to the ground truth. This process was evaluated with two cardiac mesh resolutions. Factors contributing to LoB artifacts were identified by analyzing the impact of spatial and temporal smoothing on the morphology of source signals. Results: AT estimation using a spatiotemporal derivative performed better than using a temporal derivative. Compared to deflection-based AT estimation, correlation-based methods were less prone to LoB artifacts but performed worse in identifying real LoBs. Temporal smoothing could eliminate artifacts for TMVs but not for EPs, which could be linked to their temporal morphology. TMVs led to more accurate ATs on the septum than EPs. Mesh resolution had a negligible effect on inverse reconstructions, but small distances were important for cross-correlation-based estimation of AT delays. Conclusion: LoB artifacts are mainly caused by the inherent spatial smoothing effect of the inverse reconstruction. Among the configurations evaluated, only deflection-based AT estimation in combination with TMVs and strong temporal smoothing can prevent LoB artifacts, while preserving real LoBs. Significance: Regions of slow conduction are of considerable clinical interest and LoB artifacts observed in non-invasive ATs can lead to misinterpretations. We addressed this problem by identifying factors causing such artifacts and methods to reduce them.
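Two of the compared estimators can be illustrated on synthetic upstrokes: a deflection-based activation time (time of the steepest temporal deflection) and a cross-correlation-based delay between neighboring nodes. The signals, sampling rate and parameters below are assumptions made for illustration.

```python
# Hedged sketch of two activation-time estimators on synthetic sigmoidal
# upstrokes: deflection-based AT and a cross-correlation-based delay.
import numpy as np

fs = 1000.0                                              # Hz
t = np.arange(0, 0.3, 1 / fs)

def tmv(t, at_s, tau=0.002):
    """Synthetic transmembrane voltage upstroke (mV) activating at at_s."""
    return -85.0 + 120.0 / (1.0 + np.exp(-(t - at_s) / tau))

v1, v2 = tmv(t, 0.10), tmv(t, 0.13)                      # two neighboring nodes

# Deflection-based AT: time of maximum temporal derivative
at1 = t[np.argmax(np.gradient(v1, t))]
at2 = t[np.argmax(np.gradient(v2, t))]

# Correlation-based AT delay between the two nodes
lags = np.arange(-len(t) + 1, len(t)) / fs
xc = np.correlate(v2 - v2.mean(), v1 - v1.mean(), mode="full")
delay = lags[np.argmax(xc)]

print("deflection-based ATs: %.3f s, %.3f s" % (at1, at2))
print("correlation-based delay: %.3f s (expected ~0.030 s)" % delay)
```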
Abstract:
Computational simulations of cardiac electrophysiology provide detailed information on the depolarization phenomena at different spatial and temporal scales. With the development of new hardware and software, in silico experiments have gained more importance in cardiac electrophysiology research. For plane waves in healthy tissue, in vivo and in silico electrograms at the surface of the tissue demonstrate symmetric morphology and high peak-to-peak amplitude. Simulations provided insight into the factors that alter the morphology and amplitude of the electrograms. The situation is more complex in remodeled tissue with fibrotic infiltrations. Clinically, different changes including fractionation of the signal, extended duration and reduced amplitude have been described. In silico, numerous approaches have been proposed to represent the pathological changes on different spatial and functional scales. Different modeling approaches can reproduce distinct subsets of the clinically observed electrogram phenomena. This review provides an overview of how different modeling approaches to incorporate fibrotic and structural remodeling affect the electrogram and highlights open challenges to be addressed in future research.
Abstract:
Cardiac resynchronization therapy is a valuable tool to restore left ventricular function in patients experiencing dyssynchronous ventricular activation. However, the non-responder rate is still as high as 40%. Recent studies suggest that left ventricular torsion or specifically the lack thereof might be a good predictor for the response of cardiac resynchronization therapy. Since left ventricular torsion is governed by the muscle fiber orientation and the heterogeneous electromechanical activation of the myocardium, understanding the relation between these components and the ability to measure them is vital. To analyze if locally altered electromechanical activation in heart failure patients affects left ventricular torsion, we conducted a simulation study on 27 personalized left ventricular models. Electroanatomical maps and late gadolinium enhanced magnetic resonance imaging data informed our in-silico model cohort. The angle of rotation was evaluated in every material point of the model and averaged values were used to classify the rotation as clockwise or counterclockwise in each segment and sector of the left ventricle. 88% of the patient models (n = 24) were classified as a wringing rotation and 12% (n = 3) as a rigid-body-type rotation. Comparison to classification based on in vivo rotational NOGA XP maps showed no correlation. Thus, isolated changes of the electromechanical activation sequence in the left ventricle are not sufficient to reproduce the rotation pattern changes observed in vivo and suggest that further patho-mechanisms are involved.
Abstract:
Left atrial enlargement (LAE) is one of the risk factors for atrial fibrillation (AF). A non-invasive and automated detection of LAE with the 12-lead electrocardiogram (ECG) could therefore contribute to an improved AF risk stratification and an early detection of new-onset AF incidents. However, one major challenge when applying machine learning techniques to identify and classify cardiac diseases usually lies in the lack of large, reliably labeled and balanced clinical datasets. We therefore examined if the extension of clinical training data by simulated ECGs derived from a novel bi-atrial shape model could improve the automated detection of LAE based on P waves of the 12-lead ECG. We derived 95 volumetric geometries from the bi-atrial statistical shape model with continuously increasing left atrial volumes in the range of 30 ml to 65 ml. Electrophysiological simulations with 10 different conduction velocity settings and 2 different torso models were conducted. Extracting the P waves of the 12-lead ECG thus yielded a synthetic dataset of 1,900 signals. Besides the simulated data, 7,168 healthy and 309 LAE ECGs from a public clinical ECG database were available for training and testing of an LSTM network to identify LAE. The class imbalance of the training data could be reduced from 1:23 to 1:6 when adding simulated data to the training set. The accuracy evaluated on the test dataset comprising a subset of the clinical ECG recordings improved from 0.91 to 0.95 if simulated ECGs were included as an additional input for the training of the classifier. Our results suggest that using a bi-atrial statistical shape model as a basis for ECG simulations can help to overcome the drawbacks of clinical ECG recordings and can thus lead to an improved performance of machine learning classifiers to detect LAE based on the 12-lead ECG.
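A hedged sketch of the training setup: simulated P waves are appended to an imbalanced clinical set before fitting a small LSTM. Data shapes, the network layout and all signals are placeholders, not the study's model or datasets.

```python
# Hedged sketch: LSTM binary classifier trained on clinical plus simulated
# P-wave time series; all data and the architecture are illustrative placeholders.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n_clinical, n_simulated, seq_len, n_leads = 1000, 400, 120, 12

def fake_pwaves(n, label_ratio):
    X = rng.normal(size=(n, seq_len, n_leads)).astype("float32")
    y = (rng.random(n) < label_ratio).astype("float32")      # 1 = LAE
    return X, y

X_clin, y_clin = fake_pwaves(n_clinical, 1 / 23)             # imbalanced clinical set
X_sim, y_sim = fake_pwaves(n_simulated, 1.0)                 # simulated LAE P waves

X_train = np.concatenate([X_clin, X_sim])
y_train = np.concatenate([y_clin, y_sim])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len, n_leads)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=2, batch_size=64, validation_split=0.2, verbose=0)
```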
Abstract:
In laparoscopic surgery, image-guided navigation systems could support the surgeon by providing subsurface information such as the positions of tumors and vessels. For this purpose, one option is to perform a reliable registration of preoperative 3D data and a surface patch from laparoscopic video data. A robust and automatic 3D-3D registration pipeline for the application during laparoscopic surgery has not yet been found due to application-specific challenges. To gain a better insight, we propose a framework enabling a qualitative and quantitative comparison of different registration approaches. The introduced framework is able to evaluate 3D feature descriptors and registration algorithms by generating and modifying synthetic data from clinical examples. Different confounding factors are considered, and thus reality can be reflected in a simplified or a more complex way. Two exemplary experiments with a liver model, using the RANSAC algorithm, showed an increasing registration error for a decreasing surface patch size and after introducing modifications. Moreover, the registration accuracy was dependent on the position and structure of the surface patch. The framework helps to quantitatively assess and optimize the registration pipeline, and thereby suggests future software improvements even with only few clinical examples. Clinical relevance - The introduced framework permits a quantitative and comprehensive comparison of different registration approaches, which forms the basis for a supportive navigation tool in laparoscopic surgery.
Abstract:
Computational models of the fluid dynamics in the human heart are a powerful tool to investigate disease mechanisms and their impact on blood flow patterns. These models can, for example, be used to assess alterations occurring in hypertrophic cardiomyopathy, a genetic disease that increases the risk of sudden cardiac death. To overcome the challenges of a moving mesh approach, we modeled the movement of the endocardial surface based on an immersed boundary method. The verification on a simple moving 2D geometry yielded plausible results. The application to the diseased, hypertrophic heart geometry confirmed that the computation of the mesh movement is made possible with this approach.
Abstract:
The treatment of atrial rhythm disorders such as atrial fibrillation has remained a major challenge, predominantly for patients with severely remodeled substrate. Individualized ablation strategies beyond pulmonary vein isolation in combination with real-time assessment of ablation lesion formation have been striven for insistently. Current approaches for identifying arrhythmogenic regions predominantly rely on electrogram-based features such as activation time and voltage or electrogram fractionation as a surrogate for tissue pathology. Despite all efforts, large-scale clinical trials have yielded ambiguous results on the efficacy of various substrate mapping approaches without significant improvement of patient outcomes. This work focuses on enhancing the understanding of electrogram features and local impedance measurements in the atria towards the extraction of clinically relevant and predictive substrate characteristics. Features were extracted from intra-atrial electrograms with particular reference to the underlying excitation patterns to address morphological alterations caused by structural and functional changes. ...
Abstract:
Mathematical models of the human heart are evolving to become a cornerstone of precision medicine and support clinical decision making by providing a powerful tool to understand the mechanisms underlying pathophysiological conditions. Due to the complexity of the heart, these models require a detailed description of physical processes that interact on different spatial and temporal scales ranging from nanometers to centimeters and from nanoseconds to seconds, respectively. From a mathematical perspective, this poses a variety of challenges such as developing robust numerical schemes for the solution of the model in space and time and parameter identification based on patient-specific measurements. In this work, a detailed mathematical description of the electromechanically coupled multi-scale model of the human heart is presented, including the propagation of electrical excitation, large scale deformations, and a model of the circulatory system. Starting from state-of-the-art models of membrane kinetics and active force generation based on human physiology, an atrial and ventricular model of cardiac excitation-contraction coupling is developed and parameterized to match observations from single cell experiments. Furthermore, a segregated and staggered numerical scheme to solve the electromechanically coupled model of the whole heart is established based on already existing software and used to investigate the effects of mechano-electric feedback during sinus rhythm. The numerical results showed that mechano-electric feedback on the cellular level has an impact on the mechanical behavior of the heart due to changes in the active force generation by modulating the interaction between calcium and the binding units of troponin C. Including the effect of deformation on the diffusion of the electrical signal had no significant effect. To verify the different components of the modeling framework, specific problems are designed to cover the most important aspects of electrophysiology and mechanics. Additionally, these problems are used to assess how spatial and temporal discretization affect the numerical solution. The results show that spatial and temporal discretization of the electrophysiology problem dictate the limitations of numerical accuracy, while the mechanics problem is more vulnerable to locking effects due to the choice of tetrahedral finite elements. The model is further used to investigate how a dispersion of fiber stress into the sheet and sheet-normal directions changes mechanical biomarkers of the left ventricle. In an idealized model of the left ventricle, additional stress in the sheet-normal direction promoted a more physiological contraction with respect to ejection fraction, longitudinal shortening, wall thickening, and rotation. However, numerical results using the whole heart model revealed contradicting results compared to the idealized left ventricle. In a second project, in vivo measurements of electromechanical parameters in 30 patients suffering from heart failure with reduced ejection fraction and left bundle branch block were integrated into the left ventricular model to shed light on the clinical hypothesis that local electromechanical alterations change the left ventricular rotation pattern. Simulation results could not verify this hypothesis and showed no correlation between the electromechanical parameters and rotation. Next, the impact of standard ablation strategies for the treatment of atrial fibrillation on cardiovascular performance is evaluated in a four-chamber heart model. Due to the scars in the left atrium, the electrical activation and stiffness of the myocardium were altered, resulting in a reduction of atrial stroke volume that depends linearly on the amount of inactivated tissue. Additionally, atrial pressure was increased depending on the stiffness of the scar tissue, and ventricular function was only affected slightly. Finally, pathological mechanisms related to heart failure in patients with dilated cardiomyopathy are introduced into the whole heart model one by one to differentiate their individual contributions. The numerical results showed that cellular remodeling, especially that affecting electrophysiology, is mainly responsible for the poor mechanical activity of the heart in patients with dilated cardiomyopathy. Furthermore, structural remodeling and an increased stiffness of the myocardium as well as adaptations of the circulatory system were necessary to replicate in vivo observations. In conclusion, this work presents a numerical framework for the approximation of electromechanical whole heart models including the circulatory system. The framework was verified with the use of simple problem definitions, validated using magnetic resonance imaging data, and used to answer clinical questions that would otherwise be impossible to address in real world scenarios.
Abstract:
In recent years, neural networks (NNs) have achieved remarkable results in event recognition in medical image and video analysis. One of the main limitations of machine learning approaches is the lack of available annotated training data. This lack refers to the number of available datasets and the number of image and video variations in existing datasets. Especially in the medical field, it is hard to extend the number of datasets. The reasons for this are various. For example, legal issues may prevent the publication of the data, or the occurrence of a disease is very rare, making it hard to record it. ...
Abstract:
Atrial fibrillation (AF) is one of the leading health challenges, posing a significant burden not only to patients but also to the health care systems. While pulmonary vein isolation (PVI) is an effective therapy for paroxysmal AF patients, the success rate drops for patients with persistent AF. This is thought to be due to patients exhibiting atrial cardiomyopathy (ACM), specifically structural remodelling in the atria occurring during the progression of AF. Therefore, persistent AF patients exhibit additional pathological substrate in the atria, which maintains the arrhythmia. Unfortunately, the current approaches performing PVI plus additionally targeting the pathological substrate are still sub-optimal, with only 50-70% of patients having long-term freedom from AF after catheter ablation. Hence, the optimal ablation strategy remains an open question demanding further research to identify promising ablation targets. Two approaches that have gained attention over the recent years are electro-anatomical mapping specifically targeting low voltage areas and areas showing contrast in late gadolinium-enhanced magnetic resonance imaging (LGE-MRI). However, both are hindered by the lack of consensus regarding a precise method to identify the pathological substrate. Identification via low voltage mapping is limited due to a lack of understanding of the impact of catheter characteristics that influence the voltage aside from the pathological substrate. Additionally, voltage mapping can be performed during sinus rhythm (SR) or AF. Mapping in the latter case is beneficial as it reduces the need for potentially multiple cardioversions. However, there is no precise statistical evaluation of the cut-off values applied to determine low voltage areas. The advantage of using LGE-MRI instead is that it is a less invasive diagnostic method. However, the spatial resolution of LGE-MRI is limited. Moreover, the degree of accordance between MRI and voltage mapping to detect fibrosis remains disputed. The overall goal of this thesis is to compare mapping modalities to address the aforementioned limitations, thereby providing more robust and accurate methods to identify pathological substrate areas known to maintain atrial fibrillation. In the first project, 28 persistent AF patients undergoing electro-anatomical mapping were studied. Statistical analysis was then applied, comparing each patient's bipolar and unipolar voltage maps. Specifically, the extent of agreement between the methods was identified, finding the optimal unipolar thresholds to locate pathological substrate as determined by the bipolar voltage map. Additionally, the impact of the inter-electrode distances and regional discrepancies on the comparability was explored. For the second part of the project, simulations modelling electrodes of different sizes on a 2D patch and a lasso catheter in a 3D left atrial geometry were performed. This work identified that while the catheter characteristics influence the bipolar voltage values, they do not play a significant role in altering the location of the low voltage areas. The identified unipolar thresholds, which relate the bipolar and unipolar map, can help determine the extent of pathological substrate in an area. Additionally, it was found that larger electrodes deliver smaller voltages, providing techniques to compare results across studies and centres. In the second project, a cohort of patients who underwent electro-anatomical mapping in both SR and AF was used.
The two rhythms could then be compared in each patient, and global and regional AF thresholds relating the rhythms could be identified. Additionally, the effects of inducing AF in patients could be explored and the benefits of different voltage calculation methods analysed. Low voltage thresholds that can better relate mapping in AF with SR were proposed. It was identified that using the regional thresholds proposed in this work could help prevent a false representation of the extent of pathological substrate within an area. Furthermore, using the maximum voltage value in a signal leads to higher concordance between methods, and using a variability measure (sample entropy) can help identify complex propagation patterns distorting the signals in AF. Finally, the last project studied 36 patients who underwent both LGE-MRI and electro-anatomical mapping. Using this cohort, the concordance between different LGE-MRI mapping modalities and voltage and conduction velocity mapping could be investigated. Additionally, a new LGE-MRI analysis method could be developed to improve the agreement between the modalities. Spatial histograms showing typical low voltage and slow conduction regions were created in this work to help clinicians identify important regions to map during a procedure. Moreover, important discrepancies were found between methods, specifically on the posterior wall, which need further investigation. Lastly, a new LGE-MRI thresholding method was developed, which could be used to identify patients with ACM. This provides a non-invasive approach that can help to determine whether additional mapping is needed in patients besides performing PVI. The work presented in this thesis provides the clinical community with a deeper understanding of how the different methods to identify pathological substrate compare. Additionally, it provides techniques to relate the methods, account for variability between centres and potentially reduce procedure times. Moreover, it was identified that one-size-fits-all ablation strategies may be of limited value. Thus, this thesis supports the implementation of more personalised ablation approaches.
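One of the measures mentioned above, sample entropy, can be sketched as follows; the embedding dimension, tolerance and the synthetic surrogate signals are illustrative choices, not the thesis' parameters or data.

```python
# Hedged, brute-force O(N^2) sketch of sample entropy for an electrogram-derived
# sequence; parameters and the surrogate signals are illustrative only.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) = -ln(A/B), with r = r_factor * std(x); self-matches excluded."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1                           # exclude self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))        # organized, SR-like surrogate
irregular = rng.standard_normal(500)                     # complex, AF-like surrogate
print("SampEn regular:   %.2f" % sample_entropy(regular))
print("SampEn irregular: %.2f" % sample_entropy(irregular))
```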
Abstract:
Cardiovascular diseases are the leading cause of death worldwide, and atrial fibrillation (AF) is the most prevalent cardiac arrhythmia, affecting more than 6 million individuals in Europe, with a cost exceeding 1% of the EU health care system budget (13.5 billion annually). New treatment strategies and the progress achieved in research on AF mechanisms and substrate evaluation methods to date have not been commensurate with an equivalent development of the knowledge and technologies required to individually characterize each patient in search of the most efficient therapy. Catheter ablation is the suggested treatment when anti-arrhythmic drugs are not effective. However, the success rates are still not satisfactory, and a large share of AF patients need to undergo multiple ablation procedures. Computational modelling of the atria bears the potential of better understanding AF patho-mechanisms and tailoring ablation therapy. The content of the thesis is split into two projects involving computer models of human atrial anatomy and electrophysiology to build a bridge between medicine and engineering. The first project leverages computer models to provide deeper insights into AF vulnerability and maintenance mechanisms. This work aimed first at evaluating the impact of the choice of the protocol on AF onset and perpetuation in in-silico experiments. An efficient and automated method to standardize the assessment of AF vulnerability in atrial models was proposed. Then, this thesis sought to quantify the influence of heterogeneous anatomical thickness on AF onset and continuation. AF episodes were induced and analyzed on highly detailed 3D atrial models including various regional myocardial thicknesses. The results of this work confirmed the findings of previous experimental studies proposing regions of highly heterogeneous anatomical thickness as areas susceptible to maintaining AF and probable regions to ablate. In the second project, I enter the realm of virtual replicas of patients' hearts to tailor arrhythmia characterization and treatment, the so-called cardiac digital twins. The aim was to develop algorithms delivering atrial models with personalized anatomy augmenting clinical information. This technology was incorporated in a unique pipeline to provide atrial digital twins with patient-specific electrophysiology integrating multiple clinical datasets. The advances described enable the generation of accurate personalized atrial models from medical images and electro-anatomical data. In parallel to advancing the models' personalization, major progress was achieved in translating them into clinical application to support ablation therapy. An automated platform to assess AF vulnerability and identify the optimal ablation targets on anatomical and functional digital twins was realized to provide clinicians with personalized ablation plans to stop AF onset and perpetuation. This work highlighted the importance of digital twins in optimizing existing and exploring new diagnostic and therapeutic approaches, complementing and enriching clinical information with simulated data. Computational modelling facilitated the multilevel integration of multiple datasets and offered new opportunities for mechanistic understanding, risk prediction and personalized therapy. This thesis changes the paradigm of classification and diagnosis of AF by delivering individualized digital twin models of patients to personalize medicine, unraveling fundamental physiological and pathological mechanisms with implications for clinical treatments.
Abstract:
Atrial fibrillation is the most common cardiac arrhythmia. During atrial fibrillation, the atrial substrate undergoes a series of electrical and structural remodeling processes. The electrical remodeling is characterized by the alteration of specific ionic channels, which changes the morphology of the transmembrane voltage known as the action potential. Structural remodeling is a complex process involving the interaction of several signalling pathways, cellular interactions, and changes in the extracellular matrix. During structural remodeling, fibroblasts, abundant in the cardiac tissue, start to differentiate into myofibroblasts, which are responsible for maintaining the extracellular matrix structure by depositing collagen. Additionally, paracrine signalling between myofibroblasts and surrounding myocytes also affects ionic channels. Highly detailed computational models at different scales were used to study the effect of structural remodeling induced at the cellular and tissue levels. At the cellular level, a human fibroblast model was adapted to reproduce the myofibroblast electrophysiology during atrial fibrillation. Additionally, the calcium handling in myofibroblast electrophysiology was assessed by fitting a calcium ion channel to experimental data. At the tissue level, myofibroblast infiltration was studied to quantify the increase of vulnerability to cardiac arrhythmia. Myofibroblasts alter the dynamics of reentry: a low density of myofibroblasts allows the propagation through the fibrotic area and creates focal activity exit points and wave breaks inside this area. Moreover, fibrosis composition plays a key role in the alteration of the propagation pattern. The alteration of the propagation pattern affects the electrograms computed at the surface of the tissue. Electrogram morphology was altered depending on the arrangement and composition of the fibrotic tissue. Detailed cardiac tissue models were combined with realistic models of commercially available mapping catheters to understand the clinically recorded signals. A noise model derived from clinical signals was generated to reproduce the signal artifacts in the model. Electrograms from highly detailed bidomain models were used to train a machine learning algorithm to characterize the atrial fibrotic substrate. Features that quantify the complexity of the signals were extracted to identify fibrotic density and fibrotic transmurality. Subsequently, fibrosis maps were generated using patient recordings as a proof of concept. A fibrosis map provides information about the fibrotic substrate without relying on a single cut-off voltage value of 0.5 mV. Furthermore, in this study, the wave propagation direction was tracked using information-theoretic measures such as transfer entropy combined with directed graphs. Transfer entropy with directed graphs provides crucial information during electrophysiological studies to understand wave propagation dynamics during atrial fibrillation. In conclusion, this thesis presents a multiscale in silico study of atrial fibrillation mechanisms, providing insight into the cellular mediators responsible for the extracellular matrix remodeling and its electrophysiology. Additionally, it provides a realistic setup to create in silico data that can be translated to clinical applications that could support ablation treatment.
Abstract:
17 million deaths a year worldwide are linked to cardiovascular diseases. Sudden cardiac death occurs in approximately 25% of all patients with cardiovascular diseases and may be connected to ventricular tachycardia. When treating ventricular tachycardia with a catheter intervention, the detection of the so-called exit points, i.e. the spatial origin of the excitation, is a crucial step. As this procedure is very time consuming and skilled cardiologists are required, there is a need for assisting localization procedures, preferably automatic and non-invasive ones. Electrocardiographic imaging tries to meet these needs by reconstructing the electrical activity of the heart from body surface potential measurements. The resulting information can be used to reconstruct the excitation origin. However, current methods for solving this inverse problem show either low precision or poor robustness, which limits their clinical utility. This work first analyzes the forward problem in combination with two source models: transmembrane voltages and extracellular potentials. The mathematical properties of the relation between the sources on the heart and the body surface potentials are analyzed systematically, and the impact on the inverse problem is explained and visualized. Subsequently, this knowledge is used to solve the inverse problem. Three novel methods are introduced: delay-based regularization, body surface potential regression, and deep-learning-based localization. These three methods are compared to four state-of-the-art methods using one simulated and two clinical datasets. On the simulated as well as one clinical dataset, one of the novel methods outperformed the existing approaches, whereas on the remaining clinical dataset, Tikhonov regularization performed best. Potential reasons for these results are discussed and related to properties of the forward problem.
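As a reference point for the comparison above, here is a minimal sketch of zeroth-order Tikhonov regularization on a synthetic forward matrix; the matrix, source pattern, noise level and regularization parameter are placeholders, not the forward models or data of this work.

```python
# Minimal Tikhonov sketch: given a forward matrix A mapping cardiac sources x to
# body surface potentials b, solve min ||A x - b||^2 + lambda^2 ||x||^2.
import numpy as np

rng = np.random.default_rng(0)
n_surface, n_sources = 120, 500
A = rng.normal(size=(n_surface, n_sources))        # forward/transfer matrix (placeholder)
x_true = np.zeros(n_sources)
x_true[200:220] = 1.0                              # localized "activation" pattern
b = A @ x_true + 0.05 * rng.normal(size=n_surface) # noisy body surface potentials

def tikhonov(A, b, lam):
    """x = (A^T A + lam^2 I)^-1 A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

x_hat = tikhonov(A, b, lam=1.0)
print("correlation with true source:", round(np.corrcoef(x_hat, x_true)[0, 1], 3))
```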
Abstract:
Cardiac diseases are the number one cause of death in the Western world. Computational simulations provide the opportunity to conduct experiments and predictions that are not possible in humans due to ethical and other reasons. High performance computing allows the use of demanding coupled computational models of high complexity and a high level of detail, complying with a wide range of experimental data from the human heart. In this thesis, different aspects of computational heart modeling are covered: models describing passive tissue behavior, active contractile behavior, circulatory system modeling, influences of the pericardium and surrounding tissue on the heart, as well as methods to obtain suitable parameters for these models. For each aspect, several modeling approaches are presented and compared. Finally, a scalability evaluation of the highly parallelized implementation and an evaluation of the proper choice of mesh resolution for credible numerical results are covered. In conclusion, this thesis allows the reader to gain insights into the complexity of computational heart modeling and to make an appropriate choice of models and parameters suitable for specific applications.
Abstract:
Atrial fibrillation (AF) is the most common supraventricular tachycardia. Despite not fully understanding all mechanisms leading to AF, atrial dilation and thus cellular stretch are identified as risk factors. It is suspected that the influence of stretch-activated ion channels (SACs) on electrophysiological cellular properties connects atrial stretch and the pathogenesis of AF, among other contributing factors. Therefore, this work investigates the possible relationship between SACs and AF in silico. In addition, this also provides a better understanding of SACs in general. For this purpose, a model was implemented that distinguishes between a K+-selective component and a non-selective component. Then, conditions were identified that triggered stretch-induced action potentials (APs) at the cellular level and in coupled whole heart simulations. In the specific case of a constant stretch application at the cellular level, a problematic time-dependent depolarization of the transmembrane voltage was observed. This issue is not yet critically discussed in the literature but was addressed to be able to perform whole heart simulations with a physiological atrial pre-stretch of λ ≈ 1.10 m/m. Adaptations of the channel conductances for the non-selective SAC current and the rectifying K+ current (IK1) were required to overcome the problem of excited states in continuously stretched cells. On the tissue level, a homogeneous distribution of SACs was considered in healthy as well as in pathological conditions on a whole heart geometry. In healthy tissue, the impact of SACs was varied to understand the effects of this channel type. This was done by scaling the sensitivity of the channel conductance towards stretch, which led to ectopic beats for GSAC,NS ≥ 0.39 nS/pF. Pathological tissue conditions were simulated with GSAC,NS = 0.33 nS/pF to prevent spontaneous activity. Additionally, different aspects of cardiac remodeling were considered to represent tissue adaptations in the presence of AF. No ectopic activity was triggered with ionic remodeling due to a faster repolarization, mainly attributed to the adapted channel conductance of IK1. Only a reduction of the tissue conductivity in the myocardium of the atria to 40 % of the healthy conductivity initiated extrasystoles. Simulations of reduced basic cycle lengths showed unphysiological behavior and therefore revealed that the current simulation setup is not yet suited for investigations at increased heart rates. In summary, the myocardium in cranial regions as well as close to the atrioventricular valves was identified as vulnerable to triggering stretch-induced APs and causing conduction blocks that could lead to chaotic ectopic excitation propagations. Therefore, this simulation setup might be able to initiate AF if reentries are additionally present. To conclude, the amount and timing of atrial stretch were identified as equally important contributors to ectopic activity. Additionally, the simulations provide evidence to support the presumed connection between SACs and AF.
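A generic sketch of a stretch-activated current with a K+-selective and a non-selective component, in the spirit of the model described above; conductances, reversal potentials and the stretch dependence are illustrative and do not reproduce the thesis' formulation or parameters.

```python
# Hedged sketch of a generic SAC current with K+-selective and non-selective
# components; all numbers are illustrative placeholders.
import numpy as np

def i_sac(v_m, lam, g_ns=0.33, g_k=0.35, e_ns=-10.0, e_k=-90.0, lam_half=1.10, k_s=0.02):
    """Total SAC current (pA/pF) at membrane voltage v_m (mV) and stretch lam (m/m)."""
    # sigmoidal stretch dependence shared by both components (assumed form)
    f_stretch = 1.0 / (1.0 + np.exp(-(lam - lam_half) / k_s))
    i_ns = g_ns * f_stretch * (v_m - e_ns)     # non-selective cation component
    i_k = g_k * f_stretch * (v_m - e_k)        # K+-selective component
    return i_ns + i_k

for lam in (1.00, 1.05, 1.10, 1.15):
    print("lambda = %.2f -> I_SAC at -80 mV: %+.2f pA/pF" % (lam, i_sac(-80.0, lam)))
```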
Abstract:
Atrial fibrillation (AF) is the most common arrhythmia posing a significant burden to patients and leading to an increased risk of stroke and heart failure. However, the underlying mechanisms of AF are still not entirely understood. While the general treatment of pulmonary vein isolation (PVI) is an effective therapy for patients with paroxysmal AF, the success rate for maintaining sinus rhythm significantly decreases for individuals with persistent AF. It has been reported that this may be due to non-PVI triggers in the atrial body of patients with persistent AF in combination with the presence of structural remodeling processes. Thus, individual treatment strategies need to be elaborated which require regional examinations of the atria. However, since there is no consensus regarding the atrial region division, intra- and inter-, as well as cross-modality quantitative comparisons cannot be performed. To address this challenge, a bi-atrial standard nomenclature with six regions for each atrium was established by considering multiple acquisition modalities and previous approaches proposed in the literature. Furthermore, a robust semi-automatic bi-atrial division pipeline with consistent and reproducible region assignment was developed. This algorithm was tested on 15 bi-atrial and 10 additional left atrial geometries from three independent data sets with multiple acquisition modalities. The results showed qualitative and quantitative accordance with the introduced standard nomenclature, while offering a high degree of autonomy and reproducibility. User intervention was required in 2.7 % of all region assignments. In addition, the atrial division pipeline was used to regionally analyze clinical data of electroanatomical mapping (EAM) and late gadolinium enhancement magnetic resonance imaging (LGE-MRI), which are commonly used modalities to detect arrhythmogenic substrate. However, no accordance was found between late gadolinium enhancement (LGE) and low voltage area (LVA). Instead, the regional extent and spatial distribution of identified pathological tissue varied significantly between both modalities. Moreover, an increased extent of LVA compared to LGE was observed in the septal wall of the left atrium (LA) in two independent data sets. In contrast, LGE was predominantly detected in the lateral wall of the LA. However, these findings need to be investigated further, since the analyzed data set of 12 patients in total was comparatively small. This work provides a unified framework to regionally compare and integrate data from different fields such as research, clinics and technology to further understand the underlying mechanisms of AF and to personalize the treatment strategies.
Abstract:
The aim of this master's thesis is the development of an automatic test system to reduce the time required for the functional testing of multi-articulated hand prostheses made by Vincent Systems GmbH. The tests check the correct movement of the fingers, the functionality of the software, vibration feedback, and more. For the automation, a controlled test environment is constructed in which sensors collect visual and auditory data. A neural network performs real-time pose estimation on the visual data. The architecture is based on MediaPipe Hands [1]. Due to the controlled environment and the small number of degrees of freedom of the prosthesis, the complexity of the model can be reduced considerably. This allows the direct determination of all joint angles instead of the otherwise common estimation of joint positions. A dataset of about 55,000 images is recorded automatically to adapt a pretrained ResNet50V2 to the regression problem via transfer learning. An angular accuracy of about 10° mean squared error can be achieved at the base joints. Depending on the context, different algorithms check the estimated poses for anomalies; here, the very small number of four faulty data points complicates the statistical evaluation. Vibration feedback and the adjustment of the grip force are verified by means of spectral analysis. The existing documentation process is integrated into the test program. To assess the performance of the developed system, several outgoing prostheses are tested both automatically and manually. Due to a safety precaution, the behavior of the prosthesis deviates slightly from the test model, resulting in a consistently detected false positive error. However, the tester can easily assess it in the test log and determine the test result. No false negative errors occurred. The effort required to adapt the system to a new prosthesis generation and to changes in the documentation process is estimated to be low to moderate.
Abstract:
Microscope-integrated optical coherence tomography (OCT) can provide valuable feedback about the surgical procedure during ophthalmic surgeries through the visualization of two-dimensional cross-sections. The emerging swept-source technology also enables the acquisition of real-time volume scans during surgery. However, these volumes are inherently distorted, which limits their applicability for assistance functions due to inaccurate distance measurements. The three main topics addressed in this master thesis are the distortion correction of OCT volume scans, the parametrization of the refractive surfaces, and the examination of the influence of the refractive indices on the reconstruction accuracy. In this work, the 3D distortion correction of volume scans is realized with stepwise ray tracing and surface fitting. This allows the correction of fan distortion resulting from the scan ray geometry, refraction at interfaces, and optical path length (OPL) compensation. It is shown that the implemented methods can be applied to segmented OCT volume scans as well as to simulated data. For the compensation of the fan distortion, a two-axis geometrical calibration method is proposed. With this calibration, the reconstruction error of a planar test target can be decreased by one order of magnitude. The surface fitting and reconstruction accuracy of different surface parametrization methods are evaluated, with a focus on the comparison of Zernike and B-spline surface parametrization. Two different cornea geometries are investigated in the simulations: a conical one and one deformed with regard to the intraoperative environment. In both cases, increasing the number of polynomial terms of the Zernike and B-spline fitting results in a more accurate reconstruction. For the simulations, where noise and other measurement errors are excluded, the RMS reconstruction error of the points of the pupil plane can be reduced to below 10 μm. The number of polynomials required for this accuracy is similar for the Zernike polynomials and the B-splines. Analysis of experiments with ex vivo porcine eyes shows the same relationship between the complexity of the parametrization and the reconstruction accuracy. No previous publications were found that compare Zernike and B-spline fitting of corneal surfaces. The connection between the estimation error of the refractive indices and the reconstruction error of the pupil plane is evaluated through simulation. In these simulations, the refractive index of the aqueous humour has a four times higher impact on the reconstruction accuracy of the pupil plane than the refractive index of the cornea.
Abstract:
Multiple biosignals obtained with a wearable device can be used to assess anxiety disorders. Electrocardiography (ECG) is an approach widely used to detect anxiety because it effectively monitors the activity of the cardiovascular system. Anxiety disorders frequently occur with respiratory changes, such as an increased breath rate. Therefore, it is possible to improve the performance of anxiety detection by analyzing these two types of biosignals (ECG and Respiration Signal (RSP)). Little is known about the effectiveness of combining these two signals for assessing anxiety. This study proposed a method to detect anxiety from ECG and RSP to address this problem. The biosignals were first segmented and labeled based on the dataset, which contained 20 individuals aged between 18 and 56 years (14 male, six female). The dataset provided the recorded ECG and RSP that were used for anxiety detection. A sliding window was applied to generate time windows from the segmented signals. A fast QRS detection algorithm detected all R peaks in each time window, and a semi-automatic peak detection algorithm extracted all peaks of the RSP. All R peaks and breath peaks were employed to calculate Heart Rate Variability (HRV) and Respiration Rate Variability (RRV). This study obtained 24 HRV features and 18 RRV features. After feature selection, 19 HRV features and 13 RRV features were left for classification. The classification used 90% of the data as training data and 10% as test data. The Multi-Layer Perceptron (MLP) was identified as the best classifier using only RRV features, with 84.6% accuracy. The best classifier using only HRV features as input was also the MLP, which achieved 93.3% accuracy. When using the HRV and RRV features together, the Support Vector Machine (SVM) performed best with 99.6% accuracy. The results showed that anxiety patterns were associated with the features extracted from either the ECG or the RSP. The multimodal approach performed considerably better than either single signal.
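A minimal Python sketch of the described pipeline is given below: R-peak detection within a time window, a few HRV features, and a classifier on the resulting feature matrix. The window handling, the chosen features and the classifier settings are illustrative assumptions rather than the thesis configuration.

```python
# Sketch: R-peak detection per time window, simple HRV features, SVM classifier.
import numpy as np
from scipy.signal import find_peaks
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def hrv_features(ecg: np.ndarray, fs: float) -> np.ndarray:
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    rr = np.diff(peaks) / fs * 1000.0           # RR intervals in ms
    sdnn = np.std(rr)                            # standard deviation of RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # root mean square of successive differences
    mean_hr = 60000.0 / np.mean(rr)              # mean heart rate in bpm
    return np.array([sdnn, rmssd, mean_hr])

# X: feature matrix built from sliding windows, y: anxiety labels (0/1)
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```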
Abstract:
Optical coherence tomography (OCT) has revolutionized ophthalmology over the past two decades. Comparable success in other medical and non-medical fields is still prevented by the high cost of OCT. The main cost driver of swept-source OCT (SS-OCT) devices is the laser source itself. We have identified a light source commonly used in telecommunications and consumer applications, which costs several orders of magnitude less than conventional swept sources. Thermal tuning of this source enables the wavelength shift of the output beam, making it a suitable candidate for low-cost SS-OCT devices. Based on this concept, an OCT system was designed to serve as a proof of concept for obtaining B-scan images. In the scope of this thesis, two types of low-cost scanners (MEMS mirrors and a mechanically controlled lens) were implemented and compared in terms of speed, scanning range, cost and robustness. In both cases, the system proved adequate for performing full-eye OCT in vitro in terms of sensitivity and imaging depth. In addition, means of enhancing the axial resolution of the low-cost source were explored, since the resulting axial resolution was approximately 10 times lower than that of state-of-the-art swept-source lasers. Based on our results, we believe that the low-cost OCT system described in this work has the potential to become a point-of-care medical device.
Abstract:
Atrial fibrillation (AF) is the most common arrhythmia in the world and a leading cause of hospitalization and death, but its therapy remains suboptimal. Termination of AF during catheter ablation is an attractive procedural endpoint as it has been associated with improved long-term outcomes. Yet, there exists no reliable metric capable of predicting the likelihood of termination using clinical data. Therefore, we developed and applied four quantitative indices in prolonged global multi-electrode recordings of AF in the left atrium (LA) prior to ablation: average dominant frequency (DF), spectral power index (SPI), electrogram quality index (EQI), and peak width index (PWI). Subsequently, we combined three indices with high predictive capabilities (SPI, EQI, and PWI) via the index INaive and with machine learning (ML) using a gradient boosted decision tree (GBDT) and a neural network approach. Lastly, we trained a deep learning (DL) architecture to predict AF termination. We recorded unipolar electrograms (EGMs) from 64-pole basket catheters (Abbott, CA) from N=42 persistent AF patients (65±9 years, 14 % female), in N=17 of whom AF terminated during ablation. For each metric, we determined its ability to predict termination by computing a receiver operating characteristic (ROC) and calculating the respective area under the curve (AUC) with a 95 % confidence interval (CI). The DF did not differ significantly between termination and non-termination patients (p=0.34), with an AUC of 0.57 ([0.38, 0.75] 95% CI). The AUCs for predicting AF termination were 0.85 ([0.68, 0.95] 95% CI) (p<0.001) for the SPI, 0.86 ([0.72, 0.95] 95% CI) (p<0.0001) for the EQI, and 0.97 ([0.87, 1.00] 95% CI) (p<0.000001) for the PWI. Combining the SPI, EQI, and PWI via the index INaive achieved an AUC of 0.97 ([0.89, 1.00] 95% CI) (p<0.000001), while the GBDT and the neural network showed mean AUCs of 0.98 (0.96, 0.99, 1.00) and 0.94 (0.96, 0.96, 0.91) over three stratified cross-validation folds, respectively. The DL model was trained on segments of the recorded EGMs and achieved a mean accuracy of 69.17 % (55.6 %, 77.8 %, 37.5 %, 75.0 %, 100.0 %) over five stratified cross-validation folds. Three quantitative indices (SPI, EQI, and PWI) and metrics that combine these three indices with and without ML may provide useful clinical tools to intraoperatively predict procedural ablation outcomes in persistent AF patients. Further studies with larger cohort sizes are required to validate the results and to identify which physiological features of AF are revealed by these metrics and hence linked to patients showing a favorable (termination) or non-favorable (non-termination) endpoint of catheter ablation.
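The per-index evaluation described above (ROC, AUC and a 95 % confidence interval) can be sketched in Python as follows; the bootstrap procedure and the synthetic index values are illustrative assumptions, not the statistics actually used for the reported CIs.

```python
# Sketch: ROC AUC with a bootstrapped 95% confidence interval for one index.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auc_with_ci(index_values, labels, n_boot=2000, alpha=0.05):
    index_values, labels = np.asarray(index_values), np.asarray(labels)
    auc = roc_auc_score(labels, index_values)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(labels), len(labels))
        if len(np.unique(labels[idx])) < 2:
            continue  # a resample must contain both classes
        boot.append(roc_auc_score(labels[idx], index_values[idx]))
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc, (lo, hi)

# Placeholder example: 42 patients, 17 terminations, synthetic index values.
labels = np.array([1] * 17 + [0] * 25)
index = rng.normal(loc=labels, scale=0.8)
print(auc_with_ci(index, labels))
```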
Abstract:
Interatrial conduction block (IAB) refers to a disturbance in the propagation of electrical impulses from the right to the left atrium via the interatrial conduction pathways. It is a risk factor for atrial fibrillation, stroke and premature death. The clinical diagnosis is based on an increased P wave duration and biphasic P waves in leads II, III and aVF due to the retrograde activation of the left atrium. Recently, Bayés de Luna et al. presented patients with atypical IAB in terms of duration or morphology who did not fulfill the clinical diagnosis criteria. Instead, typical morphology patterns of IAB without a prolonged P wave duration or different morphology patterns in some of the leads were found. Machine learning (ML) algorithms such as feedforward neural networks (FFNNs) have already been successfully applied for the detection and classification of cardiac diseases based on electrocardiogram (ECG) signals. They could therefore improve the diagnosis of IAB but should be based on a large-scale, well-controlled and balanced dataset. In silico ECG signals carry the potential to produce an extensive database meeting the requirements for a successful machine learning application. In this thesis, an in silico ECG dataset for IAB was generated based on 98 atrial geometries derived from a bi-atrial statistical shape model (SSM) to cover the anatomical variability of the atria. An automated feature analysis was performed to evaluate 100 different P wave features including P wave morphology, P wave duration, P wave dispersion, P wave terminal force in lead V1 (PTF-V1), P wave area, root mean square (RMS) voltages and P wave amplitude. Based thereon, the general feasibility of a classification of IAB with FFNNs was investigated for various combinations of input features. None of the extracted features showed distinct ranges for healthy models and models with IAB. P wave duration and PTF-V1 were increased for signals with IAB compared to healthy signals. A wide variability of P wave morphology was detected for models with IAB. Some of the models with IAB did not fulfill any of the clinical diagnosis criteria but had an atypical P wave morphology. Using the same features as in the clinical setting, a FFNN was able to distinguish between signals from healthy geometries and geometries with IAB with an accuracy of 88.38% on average. The performance could even be improved to an average accuracy of 97.44% by using more input features. Using only a single lead as input data, the classification of IAB could be performed with an average accuracy of 95.00%. Contrary to previous assumptions in the literature, the results suggest that a biphasic morphology in lead III is neither necessary nor sufficient for the diagnosis of IAB. Moreover, no single and decisive feature could be identified for models with IAB. However, the results of this work prove that a classification of IAB with FFNNs is generally feasible. Furthermore, this work suggests that the diagnosis of IAB can even be improved with a neural network. The results further indicate that IAB could be classified using only a single lead recording. However, only a binary classification of healthy signals and signals with IAB was studied and further research is needed for the differential diagnosis of IAB. The generated dataset is ready for further studies on classification with ML algorithms, either by using a different algorithm or in a multiclass classification setup.
Abstract:
3D images of the patient's skull are acquired with expensive and time-consuming methods such as Magnetic Resonance Imaging (MRI) and harmful Computed Tomography (CT). With a combined head-skull Statistical Shape Model (SSM), it is possible to estimate the skull from the patient's head surface and therefore reduce ionizing radiation. In this thesis, an automatic and modular pipeline to create such a combined SSM was developed and evaluated. The pipeline consists of four steps. In the first step, head and skull meshes were extracted from 195 CT-DICOM series using the marching cubes algorithm. In the second step, artifacts of the extracted meshes were post-processed and unwanted objects such as the patient table and inner structures of the head were removed. Additionally, the meshes were smoothed with HC Laplacian Smoothing to reduce noise artifacts. Third, the head and skull meshes were registered by bringing them into dense correspondence with a head template and a skull template. This alignment step was implemented in two ways: alignment with a manually set fixed transformation, and global alignment using RANSAC followed by a local refinement with ICP. Afterwards, the LYHM, which was used as head template, was morphed to the head meshes with NICPA to bring them into point-to-point correspondence. Because the skulls had many inner structures, the alpha shape of each skull was computed and used as target for the morphing step. The alpha shape of a general skull was used as skull template. After the skull morphing with NICPA, the general skull was deformed to the morphed skull template with an As-Rigid-As-Possible deformation method. In the last step, the combined head-skull SSM was trained with the registered heads and skulls. First, the mean shape of the combined head-skull meshes was computed with Generalized Procrustes Analysis (GPA). This mean shape was then used during the Principal Component Analysis (PCA) to reduce the dimensionality of the data and calculate the principal components to analyze the variance in the meshes. The skull was estimated by calculating the most probable total shape, given a new head surface. To evaluate the pipeline, the alignment step, the skull and head registration steps, and the trained SSMs were analyzed. The skull estimation was evaluated with ten-fold cross-validation and the comparison of an estimated skull to the CT-based skull. It was shown that the skulls could be estimated with an average distance error of 5.9 mm. This value is too large to use the created SSMs for surgery planning. However, the estimated skulls can be useful in other medical applications, e.g., patient education with a patient-specific skull.
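The final modelling step, building the combined SSM by PCA on the registered and aligned meshes, can be illustrated with the following Python sketch. Array shapes, the number of retained components and the sampling function are placeholders for illustration, not the thesis implementation.

```python
# Sketch: building a shape model by PCA on stacked vertex coordinates of
# aligned, registered meshes, then sampling new shapes from the modes.
import numpy as np

def build_ssm(shapes: np.ndarray, n_components: int = 10):
    """shapes: (n_samples, 3 * n_vertices) matrix of aligned combined meshes."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    components = vt[:n_components]                            # principal modes
    variances = (s[:n_components] ** 2) / (len(shapes) - 1)   # mode variances
    return mean, components, variances

def sample_shape(mean, components, variances, weights):
    """Generate a new shape from mode weights given in standard deviations."""
    return mean + (weights * np.sqrt(variances)) @ components

# Placeholder data: 195 training shapes with 1000 vertices each.
shapes = np.random.rand(195, 3 * 1000)
mean, comps, var = build_ssm(shapes)
new_shape = sample_shape(mean, comps, var, np.zeros(10))
```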
Abstract:
Optical coherence tomography (OCT) is a non-invasive imaging technique for visualizing tissue structures and is used in particular in ophthalmology. Due to the operating principle of OCT and the properties of light as an information carrier, the output image data are subject to various optical distortion effects. These phenomena become visible at the cornea, among other locations. This complicates the metrological use of the information, which is important for the diagnosis of diseases and for the use of assistance systems. The goal of this work is the analysis and correction of these distortion effects. The effects in question are the representation in optical path lengths, the refraction at interfaces between media with different optical densities, and the distortion due to the scan geometry of the surgical microscope used. In the correction algorithm, a segmentation is performed first to identify the interfaces of the different optical media in the image. To further prepare the data, a pre-scaling is applied. A correction method is developed for each of the identified effects. The correction steps build on one another and are integrated into the overall algorithm. Subsequently, possible error sources are identified and their influence on the correction algorithm is assessed. It was found that a positioning of the examined object deviating from a central alignment can be compensated by shifting the image data. This compensation reduces the influence of the translation on the distortion caused by the scan geometry. Furthermore, the determination of geometric measures from the corrected images of plano-convex model lenses shows that the accuracy of the results depends on the measure considered. In addition, the comparison with results from the literature indicates that the correction of the scan geometry requires optimization due to some missing parameters of the beam path. Finally, the influence of a plano-convex lens on the displayed position of an instrument is examined. A comparison of the observation of a cannula tip with and without the lens shows a deviation of less than 100 μm. In summary, it was shown that, based on an analysis of distortion effects in OCT images, an algorithm can be developed using image processing methods. This enables the correction of these effects, so that geometric information can be determined considerably more accurately through image post-processing, promoting the metrological use of OCT. The achieved accuracy depends on the application.
Abstract:
In silico studies of ventricular electrophysiology bear the potential to produce a vast amount of synthetic electrocardiograms (ECGs) required for big data and machine learning approaches to identify cardiovascular pathologies. For large-scale simulations of ventricular ECGs, simulation parameters applicable to a variety of anatomical models need to be found such that the synthetic data resemble clinically observed recordings with high fidelity. In the thesis at hand, parameters for simulating ventricular activation by solving the Eikonal equation were optimised. For this purpose, 11,705 clinical 12-lead ECGs were filtered, annotated and cut to generate representative QRS complex templates for each lead and observation. Those were then aligned, keeping inter-lead signal shifts, before reducing the dataset to its principal components (PCs). The suitability of the PCs to represent signal characteristics was validated through a classification distinguishing the healthy and pathological subjects of the dataset. Subsequently, ventricular activation parameters for a simplified representation of the His-Purkinje network were optimised. Those parameters include conduction velocities of the subendocardium and the myocardium, the anisotropy ratio of the myocardial fibres as well as number, location, size and activation delay of initially activated regions (IARs) on the subendocardium. The optimisation was conducted on the mean shape geometry of a ventricular shape model, employing ventricular coordinates to define the IARs independently of the geometry. ECGs were derived from the body surface potentials using the boundary element method (BEM). The optimised objective function was based on the L2 error norm between the original signal and the reconstructed signal after projection onto 20 PCs of the healthy clinical cohort. An evolutionary algorithm (EA) and a Bayesian optimisation algorithm were developed to optimise the parameters. Robustness was assessed by investigating the influence of ventricular geometry variations....
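The objective function described above can be illustrated with a short Python sketch: the simulated QRS template is projected onto the first 20 principal components of the clinical cohort and the L2 norm of the reconstruction error is returned. Variable names and shapes are assumptions for illustration.

```python
# Sketch of the objective: L2 error between a signal and its reconstruction
# after projection onto the first 20 principal components of the cohort.
import numpy as np

def pc_projection_error(signal: np.ndarray, mean: np.ndarray, pcs: np.ndarray) -> float:
    """signal: concatenated 12-lead QRS template, pcs: (20, len(signal)) basis
    with orthonormal rows, mean: cohort mean signal."""
    coeffs = pcs @ (signal - mean)          # project onto the PC basis
    reconstruction = mean + pcs.T @ coeffs  # back-project to signal space
    return float(np.linalg.norm(signal - reconstruction))  # L2 error norm

# An evolutionary or Bayesian optimiser would minimise this error over the
# activation parameters (conduction velocities, IAR locations, delays, ...).
```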
Abstract:
The intrinsic pacemaker of the heart, the sinus node (SN), beats 2 to 3 billion times in the average lifetime of a human being. However, the sinoatrial node (SAN) can fail to activate the surrounding hyperpolarized cardiac tissue. The dynamics and causes of such behavior, including sinus node dysfunctions (SNDs), are still under study. Computational models can help to study and better characterize the pace-and-drive capacity of the SN. Moreover, it was shown that the SN is isolated from the cardiac tissue except at a discrete number of sinoatrial exit pathways (SEPs), but no 3D model of this SAN-SEP structure existed. This work produced a first working model of the human SN, including a 3D modeling of the SEPs. A total of 1266 simulations were performed on a reduced model that describes the dynamics of a single SEP linking the SN to the cardiac tissue. These in silico investigations allowed the characterization of the cellular variability and tissue structure of the heterogeneous nodal tissue. This made it possible to propose a first model of the highly detailed human pacemaker, able to reproduce a realistic physiological behavior. Among the numerous possibilities for investigation offered by the complete model of the SN, 23 simulations were set up to explore different aspects of the SAN-SEP structure, thanks to the flexibility of the geometrical and electrophysiological model. The shift of the leading pacemaker site (LPS), which initiates the depolarization wave and is known as the wandering pacemaker phenomenon, was observed and explained for the first time with a 3D in silico characterization of the SN. Fiber orientation was found to be of major importance in the preferential activation of certain SEPs. It was also proposed that a minimum of 4 SEPs, representing a certain amount of transitional cells, is needed. Additionally, among the SEPs, two preferential exit pathways were essential to depolarize the cardiac tissue. The complexity of the SN remains challenging but is better described thanks to this work. However, some aspects should be studied further and were proposed for more advanced research.
Abstract:
Atrial fibrillation (AFib) is the most prevalent cardiac arrhythmia, affects between 1% and 2% of the population and may be facilitated by regions with pathologically altered substrate. Catheter ablation is a minimally invasive procedure to treat AFib by electrical insulation of regions that have been identified as potentially responsible. Therefore, electroanatomical mapping is used to provide information about the underlying substrate before the ablation procedure. Local impedance (LI) has recently gained attention for the assessment of lesion formation, as an impedance drop is observed during ablation. Moreover, the use of LI for mapping purposes has been investigated recently and it was concluded that it shows great potential as a complement to electrogram-based substrate mapping. However, LI measurements are influenced by various factors, one of which is the contact force (CF) between catheter and tissue. To identify the impact of CF on LI, the deformation of atrial tissue was first simulated by finite element analysis (FEA) with a linear elastic material model in ANSYS. Simulations were conducted for healthy and scar tissue properties. The applied CF ranged from 1 g to 6 g and additionally from 10 g to 25 g for the scar simulations. The thickness of the tissue patch was varied between 2.5 mm and 7.5 mm. Mechanical simulation results were included in a setup consisting of the atrial tissue patch, the catheter, and a surrounding blood box. This setup was used for forward electrical simulations that calculated the LI. An in-silico model of the IntellaNav StablePointTM ablation catheter was used for both mechanical and electrical simulations. Results showed that the dependence of LI on CF could be approximated by a linear function. The gradient for the healthy simulations was approximately 10 times larger than for the scar simulations. Furthermore, a dependence of LI on tissue thickness (TT) was identified and a surface was fitted that allows TT to be calculated from known CF and LI. Healthy and scar simulations could be separated by a threshold for all CF and TT. Clinical data of ten patients who underwent catheter ablation were used to validate the conducted simulations.
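The reported linear dependence of LI on CF can be illustrated by fitting a line to simulated (CF, LI) pairs per tissue type, as in the Python sketch below; the numerical values are placeholders, not the simulation results.

```python
# Sketch: linear fit of local impedance (LI) versus contact force (CF) per
# tissue type; the LI values are placeholders, not simulation output.
import numpy as np

cf = np.array([1, 2, 3, 4, 5, 6], dtype=float)             # contact force in g
li_healthy = np.array([160, 164, 168, 171, 175, 179])       # placeholder LI in Ohm
li_scar = np.array([130.0, 130.4, 130.8, 131.2, 131.6, 132.0])

slope_h, intercept_h = np.polyfit(cf, li_healthy, 1)
slope_s, intercept_s = np.polyfit(cf, li_scar, 1)
print(f"healthy: {slope_h:.2f} Ohm/g, scar: {slope_s:.2f} Ohm/g")
# A slope ratio of roughly 10 between the two tissue types would reproduce
# the reported trend.
```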
Abstract:
Craniosynostosis is a congenital disease characterised by the premature closure of one or multiple sutures of the infant's skull. Early diagnosis is crucial as it reduces possible damage to the brain and allows the usage of less invasive surgery. For the assessment, computed tomography scans are the gold standard. A promising, radiation-free alternative is the usage of 3D photogrammetric scans, which provide a fast way to capture the shape of the head. In this work, an existing CNN-based classifier was improved substantially and compared to competing classical machine-learning-based classification approaches, two data augmentation methods were evaluated in an environment of data scarcity, and a feature analysis was performed to analyse the classification decisions in the CNN. Using a ray-based approach, distances between a central point and the surface of a triangular mesh of the photogrammetric scan were extracted and used as input features for classification models. Hyperparameters for training were optimised, resulting in a classification improvement from 87% to 95%. Using a fine-tuning paradigm, in which all the weights can be adjusted, was identified as the main contributor to the improved accuracy. Comparison of different network architectures and classical machine learning approaches identified the Resnet18 as one of the optimal classifiers. To test data augmentation methods, a simulated case of data scarcity was created by decreasing the amount of training data. Two generative models (Statistical Shape Models and a conditional deep convolutional generative adversarial network) and one traditional data augmentation method (horizontal flipping) were incorporated into the test scenario. Analyses with two metrics (accuracy and F1-score) showed that no data augmentation method led to a consistent improvement of the model and that their influence on the classifiers was generally small. On purely synthetic training data, both generative methods failed to reproduce the original scores. The generative adversarial network in particular failed to capture the features of the training data. To further optimise the classifier, the number of rays could be reduced from 50176 to 784 while still achieving an accuracy of over 95%, leading to a substantial speed-up in image generation. Using integrated gradients to see which rays contributed the most to the classification models' decision did not show a direct and clear correlation to each of the four classes. However, it revealed that areas prone to possible overfitting, like the ears, did not influence the final decision.
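The integrated-gradients analysis mentioned above can be sketched in Python as follows; the placeholder model, the 784-ray input vector and the zero baseline are illustrative assumptions, not the trained Resnet18 from this work.

```python
# Sketch of integrated gradients for a ray-distance classifier.
import torch

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    """Approximate integrated gradients of the target class score w.r.t. x."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # assumed baseline: all-zero ray distances
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(interp.unsqueeze(0))[0, target_class]
        grad, = torch.autograd.grad(score, interp)
        total_grad += grad
    return (x - baseline) * total_grad / steps  # attribution per input ray

# Placeholder model: a linear classifier over 784 ray distances and 4 classes.
model = torch.nn.Linear(784, 4)
x = torch.rand(784)
attributions = integrated_gradients(model, x, target_class=2)
```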
Abstract:
Magnetic Resonance Fingerprinting (MRF) is a versatile approach for multiparametric quantitative medical imaging. It can provide a wide range of different biophysical parameters such as T1 and T2, but also blood flow velocities. The quality of the measured parametric maps depends on the data acquisition in the raw data space, the so-called k-space sampling scheme. In this thesis, MRF sequences were designed in combination with varied sampling schemes: Cartesian, radial and spiral. To obtain comparable MRF sequences, the open-source and vendor-independent Pulseq framework was utilized. The accuracy of the T1 and T2 estimations was evaluated in phantom experiments and was higher for the radial and spiral sequences than for the Cartesian sequences. The average absolute error with respect to a spin echo reference scan is reduced by nearly 50% compared to the implemented Cartesian MRF sequence. Reproducibility experiments with the same sequence implementation on two different 3T MRI scanners detected no statistically significant hardware-related difference for the obtained T1 maps. However, a statistically significant deviation due to the scanner was observed for the T2 maps.
Abstract:
Minimally invasive surgery brings great benefits in terms of patient outcome, but monitoring the instrument's exact position and orientation within the human body remains challenging, especially because of the high radiation exposure during intraoperative CT visualization. Magnetic tracking has been developed as an alternative without radiation risks. So far, mainly rigid Hall sensor arrays or compliant systems with additional sensor modalities for self-sensing have been investigated. A compliant system can be placed directly on the patient, opening up new possibilities in the operating room due to different space requirements. In this work, a compliant Hall sensor array approach with variable bending radius is investigated, relying only on magnetic sensing for self-sensing (Form Detection) and for the actual instrument tracking. The hypothesis of this work is that tracking measurement accuracy improves as the bend becomes tighter. This is supported by the conjecture that the smaller distance between the tracking target magnet and the sensors improves the signal-to-noise ratio (SNR). But when the bend becomes tighter, the distance to the Form Detection magnet increases, potentially worsening system performance. A non-real-time experimental system is developed, which tracks a moving cylindrical permanent magnet using a compliant array of 3D Hall sensors. The Hall sensors are arranged in a rectangular 4 × 4 sensor grid with an edge length of 100 mm. The array is constructed using a flexible printed circuit board, intended to be bent only along the y-axis. When calculating the sensor locations, this allows the simplifying assumption of a section of a cylinder surface. The compliant setup creates the problem of unknown sensor positions and orientations. To solve this, a self-sensing approach called Form Detection was implemented, where a static magnet is placed in a specific known position relative to the array. By comparing the physical setup's sensor data to the model of the magnetic field, the radius of the array can be determined. The hypothesis was confirmed by testing Form Detection performance and tracking performance individually and combined. It has been shown experimentally that, despite a deterioration of the Form Detection results, the overall system tracking performance improves with tighter bends. The array in flat shape performs worst, with a mean position error of 4.27 mm, while an array curved by λ=225° performs best at 3.16 mm.
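The tracking principle, estimating the magnet position from the 3D Hall sensor readings, can be illustrated with a point-dipole model and a least-squares fit, as in the Python sketch below. The sensor layout, the dipole moment and the noise-free synthetic data are illustrative assumptions, not the experimental setup.

```python
# Sketch: estimate a permanent magnet's position by least-squares fitting a
# point-dipole field model to readings of a 4x4 grid of 3D Hall sensors.
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # vacuum permeability in T*m/A

def dipole_field(r_sensor, r_magnet, m):
    """Flux density of a point dipole with moment m at r_magnet, seen at r_sensor."""
    r = r_sensor - r_magnet
    d = np.linalg.norm(r, axis=-1, keepdims=True)
    return MU0 / (4 * np.pi) * (3 * r * np.sum(m * r, axis=-1, keepdims=True) / d**5 - m / d**3)

def residuals(p, sensors, measured, m):
    return (dipole_field(sensors, p, m) - measured).ravel()

# Flat 4x4 grid with 100 mm edge length (positions in metres).
xs = np.linspace(0.0, 0.1, 4)
sensors = np.array([[x, y, 0.0] for x in xs for y in xs])
m = np.array([0.0, 0.0, 0.1])                  # assumed dipole moment in A*m^2
true_pos = np.array([0.05, 0.04, 0.06])
measured = dipole_field(sensors, true_pos, m)  # noise-free synthetic data

fit = least_squares(residuals, x0=[0.05, 0.05, 0.1], args=(sensors, measured, m))
print(fit.x)  # estimated magnet position
```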
Abstract:
Deep learning based approaches are widely adopted in medical image processing. Current deep neural networks depend on a large amount of training data. However, the implementation of medical algorithms often suffers from sparsely available data. In order to overcome this difficulty, the statistical information of the data can be used to synthesize artificial data. Several authors have demonstrated the successful generation of a Statistical Shape Model (SSM) for medical shapes. In this work, a pipeline was built to generate an SSM of the liver from a clinical data set of 28 3D Computed Tomography (CT) models. The created pipeline allows the synthesis of any amount of artificial but realistic data. The pipeline consists of four steps. First, a preprocessing step removed noise and mesh irregularities from the data. Second, the template and the targets were rigidly aligned using Orthogonal Procrustes Analysis (OPA) with the help of corresponding landmarks among the training images. A method to create a suitable template was implemented. In the next step, the Two-stage Laplace-Beltrami Regularized Projection (LBRP) morphing algorithm and the Nonrigid Iterative Closest Point Translational (N-ICP-T) algorithm, each with different parameterizations, were compared. Finally, an SSM was generated for both morphing approaches. First, the mean shape of the deformed templates was computed with the Generalized Procrustes Analysis (GPA). This mean shape was then employed during the Principal Component Analysis (PCA) to reduce the dimensionality of the data and to calculate the principal components. To evaluate the pipeline, the template morphing and the shape model performance were analyzed. It was shown that the Two-stage LBRP approach was able to generate a model that allows the synthesis of any amount of artificial but realistic data. The performance of the generated SSM is comparable with a state-of-the-art model. The synthesized data can be used to train neural networks, which might lead to enhanced accuracy for deep learning approaches.
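The rigid alignment step with Orthogonal Procrustes Analysis on corresponding landmarks can be illustrated with the following Python sketch; the landmark arrays are placeholders, not the liver training data.

```python
# Sketch: rigid alignment (rotation + translation) of a target landmark set
# to a template via orthogonal Procrustes analysis on corresponding landmarks.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def align_to_template(target_landmarks: np.ndarray, template_landmarks: np.ndarray):
    """Return the target landmarks rigidly aligned to the template."""
    mu_t = target_landmarks.mean(axis=0)
    mu_r = template_landmarks.mean(axis=0)
    R, _ = orthogonal_procrustes(target_landmarks - mu_t, template_landmarks - mu_r)
    return (target_landmarks - mu_t) @ R + mu_r

# Placeholder landmark sets (n_landmarks x 3): a rotated and shifted copy.
template = np.random.rand(12, 3)
target = template @ np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]) + 5.0
aligned = align_to_template(target, template)
print(np.allclose(aligned, template, atol=1e-8))
```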
Abstract:
Atrial fibrillation (AF) is the most common cardiac arrhythmia among cardiovascular diseases. A significant number of cases lead to long-term damage or death. Due to its complexity in comparison to other cardiovascular diseases, it is poorly understood and there is no personalized therapy that successfully terminates AF in every case. Current AF treatments are either painful (cardioversion), have major side effects (drugs) or are invasive methods requiring an operation (ablation). Low-energy pacing is an approach to terminate AF using a septal pacing electrode which operates with stimulus pulses below the pain threshold. This project proposes a new pacing protocol to terminate AF. In contrast to other pacing protocols, it does not use atrial fibrillation cycle length (AFCL) statistics, which are currently only available in simulations and not in a clinical environment. The proposed LAT informed pacing uses the Last Activation Time (LAT) at the locations where the pacing electrodes are placed. The comparison protocol (mAFCL pacing) used the AFCL statistics of the previously induced AF and did not use the LAT of the stimulus location. The two pacing protocols were compared regarding their success rate in continuously exciting the atrial tissue around the pacing location without an interfering AF wave (local capture). The pacing duration for each simulation was 10 s. LAT informed pacing had a local capture success rate of 70.3 % (n = 384) and mAFCL pacing a success rate of 13.3 % (n = 30). This work demonstrated that the success of a low-energy pacing protocol might not need to rely on AFCL information and could be improved by using the LAT at the pacing location, which can already be measured by current pacemakers.
Abstract:
Atrial fibrillation is a common disease among the elderly and manifests as an irregular and often rapid heart rate. Ectopic activity can trigger atrial fibrillation, which is induced in simulations by fast pacing in fibrotic regions. A treatment of AF is catheter ablation guided by electro-anatomical mapping. Computational modeling and in-silico experiments are an area that continues to grow, aiding in the understanding of electrophysiology and the relation between electrical propagation and intracardiac signals. Simulations of 2D tissue patches have been useful to understand the link between the depolarization propagation in the cardiac tissue and its corresponding electrograms. In previous studies with electrode spacings of 2 mm and 6 mm, in both simulated and clinical data, the bipolar electrogram amplitude changed depending on the orientation of the catheter. The bipolar amplitudes are minimized when the wavefront propagation is perpendicular to the electrode pair. However, the influence that a realistic geometry and a deformed catheter (reproducing a clinical procedure) have on the intracardiac signal has not been studied in detail. To explain why the directional dependence of bipolar signals does not carry over to bipolar voltage mapping, the following hypothesis is tested: in the clinical setting of a 1–5 mm thick atrial wall [1] with multiple layers of myocardial fibers, the waves always contain some degree of curvature in three-dimensional space, so that the electrodes do not receive the signal at the same time point. In this work, to test this conjecture and to observe the effect of the endocardial atrial geometry on the EGM signal, a new set of algorithms for extracting EGM signals and calculating wavefront angles in the human atria was developed and implemented. For the 3D case, two different sets of simulations were created, with three different atrium-catheter models in each set. The ground potential was selected in the blood mesh near the left atrial appendage. The stimulus point of the first simulation was chosen near Bachmann's bundle, and the stimulus point of the second simulation near the coronary sinus, resulting in six groups of simulations in the 3D geometry. For the 2D simulation, the stimulus point was selected in one corner of the patch, and the ground potential in the blood mesh at the opposite corner. Comparing the unipolar amplitudes across the seven sets of data, the interquartile range of the 3D unipolar amplitudes is 1.11 mV larger than that of the 2D case, indicating that the 3D unipolar signal does change with the atrial geometry. In 11 cases (18.3 %) of a total of 60 three-dimensional simulations, bipolar amplitudes of 0 also appeared; in seven of these (63.6 %), neither of the two electrodes of the pair was in contact with the atrium. In other words, in the three-dimensional simulations the wavefront can also arrive at both electrodes of a pair at the same time. The wave is always curved when it propagates on the 3D atrium, and there is no approximately planar wave. The curvature of the wavefront does not affect the bipolar amplitudes or the difference of activation times. However, a variation in curvature may indicate that several waves reach the electrode pair from different directions. It can be concluded from this work that simulating a more realistic geometry helps to understand the true pattern of propagation within the atrium.
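The core observation about bipolar amplitudes can be illustrated with a minimal Python sketch: the bipolar electrogram is the difference of two unipolar signals, so simultaneous arrival of the wavefront at both electrodes yields a near-zero peak-to-peak amplitude. The synthetic deflection used here is an illustrative assumption, not a simulated EGM.

```python
# Sketch: bipolar EGM as the difference of two unipolar signals; its
# peak-to-peak value vanishes when the wavefront arrives at both electrodes
# simultaneously.
import numpy as np

def bipolar_amplitude(u1: np.ndarray, u2: np.ndarray) -> float:
    bipolar = u1 - u2
    return float(np.max(bipolar) - np.min(bipolar))  # peak-to-peak amplitude

t = np.linspace(0, 50, 1000)  # time in ms

def deflection(delay_ms: float) -> np.ndarray:
    """Very simplified unipolar deflection arriving delay_ms later."""
    return -np.exp(-((t - 25 - delay_ms) ** 2) / 2)

print(bipolar_amplitude(deflection(0.0), deflection(0.0)))  # simultaneous arrival -> ~0
print(bipolar_amplitude(deflection(0.0), deflection(2.0)))  # delayed arrival -> non-zero
```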
Abstract:
The coronary arteries supply the heart muscle with oxygenated blood to ensure proper heart function. When this process is disturbed, it can have detrimental impacts on the physical condition of the human body and can be lethal in many cases. One main reason for the malfunction of the blood supply of the heart can be a myocardial infarction, which typically occurs in connection with plaque depositions within the coronary arteries. The term plaque is a hypernym for material depositions, e.g. calcium and fat, in the arteries of patients who suffer from coronary artery disease and can be divided into two categories: critically stenotic plaque and vulnerable or high-risk plaque. For that reason, it is of high clinical relevance to further examine the phenomena related to heart attacks, which can be done from a fluid mechanical standpoint through fluid flow simulations. In this thesis, the simulation software COMSOL Multiphysics, which utilizes the finite element method (FEM) to obtain numerical solutions, was used to create models representing the artery geometry and the blood flow inside the lumen area. First, a model with an idealized, rigid, cylindrical geometry was built and the solutions were compared to the analytically calculated quantities for the occurring axial flow velocity and wall shear stress assuming laminar flow. With this, it could be determined that the FEM delivers acceptably accurate results, and therefore the basic validation of the model was ensured. Second, an idealized rotationally symmetric, cylindrical model for the plaque and artery geometries was set up which allows switching parametrically between different levels of stenosis and other geometric entities. Material properties were assigned to assess the fluid-structure interaction between the laminar flow and the solid mechanical components with the multiphysics package in the software. To evaluate patient-specific clinical data, a general imaging pipeline was formulated that describes the different steps necessary in the workflow of building a model to analyze fluid mechanical effects on the basis of computed tomography images. Finally, a separate model based on computed tomography images provided by the cardiology clinic Theresienkrankenhaus Mannheim was implemented. For all simulations, a pulsatile pressure condition was used as a boundary condition to represent the pumping mechanism of the myocardium. In the next step, the velocity profiles, the pressure distributions and their development over the artery length, and the occurring shear stresses were calculated and visualized for the respective models. The results of the simulations indicate that the localization of critical stenoses can be assisted based on the quantitative data for the shear rate, the pressure change and the blood velocity. With the color-coded visualization of these quantities and the generation of relevant two-dimensional plots, the mechanical phenomena can be illustrated. Lastly, the three-dimensional wall shear stress animations over several cardiac cycles can support the identification of especially critical regions of the arterial wall.
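The analytical laminar reference used for the basic validation can be sketched in Python with the Poiseuille velocity profile and the corresponding wall shear stress; the vessel dimensions, viscosity and pressure drop below are illustrative assumptions, not values from the thesis.

```python
# Sketch of the analytical laminar (Poiseuille) reference: axial velocity
# profile and wall shear stress in a rigid cylindrical vessel.
import numpy as np

def poiseuille_velocity(r, R, dp, L, mu):
    """u(r) = dp / (4 mu L) * (R^2 - r^2) for fully developed laminar flow."""
    return dp / (4.0 * mu * L) * (R**2 - r**2)

def wall_shear_stress(R, dp, L):
    """tau_w = dp * R / (2 L) at the vessel wall."""
    return dp * R / (2.0 * L)

R = 1.5e-3   # radius in m (idealised coronary artery, assumed)
L = 0.02     # segment length in m (assumed)
mu = 3.5e-3  # dynamic viscosity of blood in Pa*s (assumed)
dp = 100.0   # pressure drop over the segment in Pa (assumed)

r = np.linspace(0.0, R, 50)
u = poiseuille_velocity(r, R, dp, L, mu)
print(f"centerline velocity: {u[0]:.3f} m/s, "
      f"wall shear stress: {wall_shear_stress(R, dp, L):.2f} Pa")
```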