Abstract:
Diseases caused by alterations of ionic concentrations are frequently observed in clinical practice and play an important role there. The clinically established method for diagnosing electrolyte concentration imbalances is the blood test, but a rapid and non-invasive point-of-care method is still needed. The electrocardiogram (ECG) could meet this need and become an established diagnostic tool allowing home monitoring of electrolyte concentrations, also by wearable devices. In this review, we present the current state of potassium and calcium concentration monitoring using the ECG and summarize results from previous work. Selected clinical studies are presented that support or question the use of the ECG for monitoring electrolyte concentration imbalances. Differences in the findings from automatic monitoring studies are discussed, and current studies utilizing machine learning are presented, demonstrating the potential of deep learning approaches. Furthermore, we demonstrate the potential of computational modeling approaches to gain insight into the mechanisms of relevant clinical findings and to obtain synthetic data for methodical improvements of monitoring approaches.
Abstract:
The solution of the inverse problem of electrocardiology allows the reconstruction of the spatial distribution of the electrical activity of the heart from the body surface electrocardiogram (electrocardiographic imaging, ECGI). ECGI using the equivalent dipole layer (EDL) model has been shown to be accurate for cardiac activation times. However, validation of this method for determining repolarization times is lacking. In the present study, we determined the accuracy of the EDL model in reconstructing cardiac repolarization times and assessed the robustness of the method under less ideal conditions (addition of noise and errors in tissue conductivity). A monodomain model was used to determine the transmembrane potentials of three different excitation-repolarization patterns (sinus beat and ventricular ectopic beats) as the gold standard. These were used to calculate the body surface ECGs with a finite element model. The resulting body surface ECGs served as input for the EDL-based inverse reconstruction of repolarization times. The reconstructed repolarization times correlated well (COR > 0.85) with the gold standard, with almost no decrease in correlation after adding errors in the tissue conductivity of the model or noise to the body surface ECG. Therefore, ECGI using the EDL model allows adequate reconstruction of cardiac repolarization times. Graphical abstract: Validation of electrocardiographic imaging for repolarization using forward-calculated body surface ECGs from simulated activation-repolarization sequences.
Abstract:
BACKGROUND: Using cardiovascular magnetic resonance imaging (CMR), it is possible to detect diffuse fibrosis of the left ventricle (LV) in patients with atrial fibrillation (AF), which may be independently associated with recurrence of AF after ablation. By conducting CMR, clinical, electrophysiological and biomarker assessments, we planned to investigate LV myocardial fibrosis in patients undergoing AF ablation. METHODS: LV fibrosis was assessed by T1 mapping in 31 patients undergoing percutaneous ablation for AF. Galectin-3, coronary sinus type I collagen C-terminal telopeptide (ICTP), and type III procollagen N-terminal peptide were measured with ELISA. Comparison was made between groups above and below the median LV extracellular volume fraction (ECV), followed by regression analysis. RESULTS: On linear regression analysis, LV ECV had significant associations with invasive left atrial pressure (beta = 0.49, P = 0.008) and coronary sinus ICTP (beta = 0.75, P < 0.001), which remained significant on multivariable regression. CONCLUSION: LV fibrosis in patients with AF is associated with left atrial pressure and invasively measured levels of the collagen turnover biomarker ICTP.
Abstract:
Background Radiofrequency ablation (RFA) is a common approach to treat cardiac arrhythmias. During this intervention, numerous strategies are applied to indirectly estimate lesion formation. However, the assessment of the spatial extent of these acute injuries needs to be improved in order to create well-defined and durable ablation lesions. Methods We investigated the electrophysiological characteristics of rat atrial myocardium during an ex vivo RFA procedure with fluorescence-optical and electrical mapping. By analyzing optical data, the temporal growth of punctiform ablation lesions was reconstructed after stepwise RFA sequences. Unipolar electrograms (EGMs) were simultaneously recorded by a multielectrode array (MEA) before and after each RFA sequence. Based on the optical results, we searched for electrical features to delineate these lesions from healthy myocardium. Results Several unipolar EGM parameters decreased monotonically when the distance between the electrode and the lesion boundary was smaller than 2 mm. The negative component of the unipolar EGM [negative peak amplitude (Aneg)] vanished for distances of less than 0.4 mm to the lesion boundary. The median peak-to-peak amplitude (Vpp) was decreased by 75% compared to baseline. Conclusion Aneg and Vpp are excellent parameters to discriminate the growing lesion area from healthy myocardium. The experimental setup opens new opportunities to investigate EGM characteristics of more complex ablation lesions.
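The two discriminating parameters can be computed directly from a sampled unipolar EGM trace. A minimal sketch in Python/NumPy on a synthetic biphasic EGM; the waveform and its values are illustrative assumptions, not data from the described experiments:

```python
import numpy as np

def egm_amplitude_features(egm):
    """Negative peak amplitude (Aneg) and peak-to-peak amplitude (Vpp)
    of a unipolar electrogram trace (in the trace's voltage units)."""
    egm = np.asarray(egm, dtype=float)
    aneg = max(0.0, -egm.min())   # magnitude of the negative deflection
    vpp = egm.max() - egm.min()   # span between extreme deflections
    return aneg, vpp

# Synthetic biphasic EGM: positive deflection followed by a negative one
t = np.linspace(0.0, 1.0, 201)
egm = 0.8 * np.exp(-((t - 0.4) / 0.03) ** 2) - 0.6 * np.exp(-((t - 0.5) / 0.03) ** 2)

aneg, vpp = egm_amplitude_features(egm)  # aneg ~ 0.6, vpp ~ 1.4
```

A vanishing Aneg (the criterion reported for sites closer than 0.4 mm to the lesion boundary) then simply corresponds to `aneg == 0` for a purely positive trace.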
Abstract:
The electrophysiological mechanism of sinus node automaticity was previously considered to be exclusively regulated by the so-called "funny current". However, parallel investigations increasingly emphasized the importance of Ca homeostasis and the Na/Ca exchanger (NCX). Recently, increasing experimental evidence, as well as insight through mechanistic modeling, has demonstrated the crucial role of the exchanger in sinus node pacemaking. NCX had a key role in the exciting story of the discovery of sinus node pacemaking mechanisms, which recently settled with a consensus on the coupled-clock mechanism after decades of debate. This review focuses on the role of the Na/Ca exchanger from the early results and concepts to recent advances, and attempts to give a balanced summary of the characteristics of the local, spontaneous, and rhythmic Ca releases, the molecular control of the NCX and its role in the fight-or-flight response. Transgenic animal models and pharmacological manipulation of the intracellular Ca concentration and/or NCX demonstrate the pivotal function of the exchanger in sinus node automaticity. We also highlight where specific hypotheses regarding NCX function have been derived from computational modeling and require experimental validation. The nonselectivity of NCX inhibitors and the complex interplay of processes involved in Ca handling render the design and interpretation of these experiments challenging.
Abstract:
This work concerns the mathematical and numerical modeling of the heart. The aim is to enhance the understanding of the cardiac function in both physiological and pathological conditions. Along this road, a challenge arises from the multi-scale and multi-physics nature of the mathematical problem at hand. In this paper, we propose an electromechanical model that, in bi-ventricular geometries, combines the monodomain equation, the Bueno-Orovio minimal ionic model, and the Holzapfel-Ogden strain energy function for the passive myocardial tissue modelling, together with the active strain approach combined with a model for the transmurally heterogeneous thickening of the myocardium. Since the distribution of the electric signal depends on the fibre orientation of the ventricles, we use a Laplace-Dirichlet Rule-Based algorithm to determine the myocardial fibre and sheet configuration in the whole bi-ventricle. In this paper, we study the influence of different fibre directions, of the incompressibility constraint, and of the penalization of the compressibility of the material (bulk modulus) on the pressure-volume relation while simulating a full heart beat. The coupled electromechanical problem is addressed by means of a fully segregated scheme. The numerical discretization is based on the Finite Element Method for the spatial discretization and on Backward Differentiation Formulas for the time discretization. The arising non-linear algebraic system coming from the application of the implicit scheme is solved by the Newton method. Numerical simulations are carried out in a patient-specific bi-ventricular geometry to highlight the most relevant results of both electrophysiology and mechanics and to compare them with physiological data and measurements. We show how various fibre configurations and bulk moduli modify relevant clinical quantities such as stroke volume, ejection fraction and ventricular contractility.
Abstract:
Identification of atrial sites that perpetuate atrial fibrillation (AF), and whose ablation terminates AF, is challenging. We hypothesized that specific electrogram (EGM) characteristics identify AF-termination sites (AFTS). Twenty-one patients in whom low-voltage-guided ablation after pulmonary vein isolation terminated clinical persistent AF were included. Patients were included if short RF-delivery for <8 s at a given atrial site was associated with acute termination of clinical persistent AF. EGM characteristics at 21 AFTS, 105 targeted sites without termination and 105 non-targeted control sites were analyzed. Alteration of EGM characteristics by local fibrosis was evaluated in a three-dimensional high-resolution (100 µm) computational AF model. AFTS demonstrated lower EGM voltage, higher EGM cycle-length coverage, shorter AF cycle length and higher pattern consistency than control sites (0.49 ± 0.39 mV vs. 0.83 ± 0.76 mV, p < 0.0001; 79 ± 16% vs. 59 ± 22%, p = 0.0022; 173 ± 49 ms vs. 198 ± 34 ms, p = 0.047; 80% vs. 30%, p < 0.01). Among targeted sites, AFTS had higher EGM cycle-length coverage, shorter local AF cycle length and higher pattern consistency than targeted sites without AF termination (79 ± 16% vs. 63 ± 23%, p = 0.02; 173 ± 49 ms vs. 210 ± 44 ms, p = 0.002; 80% vs. 40%, p = 0.01). Low-voltage (0.52 ± 0.3 mV) fractionated EGMs (79 ± 24 ms) with delayed components in sinus rhythm ('atrial late potentials', ALP) were observed at 71% of AFTS. EGMs recorded from fibrotic areas in computational models demonstrated comparable EGM characteristics both in simulated AF and sinus rhythm. AFTS may therefore be identified by locally consistent, fractionated low-voltage EGMs with high cycle-length coverage and rapid activity in AF, with low-voltage fractionated EGMs with delayed components ('atrial late potentials', ALP) persisting in sinus rhythm.
Abstract:
After interventions such as bypass surgeries, the vascular function is checked qualitatively and remotely by observing the blood dynamics inside the vessel via Fluorescence Angiography. This state-of-the-art method has to be improved by introducing a quantitative blood flow measurement. Previous approaches show that the measured blood flow cannot be easily calibrated against a gold standard reference. In order to systematically address the possible sources of error, we investigated the error in geodesic length measurement caused by spatial discretization on the camera chip. We used an in-silico vessel segmentation model based on mathematical functions as a ground truth for the length of vessel-like anatomical structures in continuous space. Discretization errors for the chosen models were determined to be of a typical magnitude of 6%. Since this length error would propagate to an unacceptable error in the blood flow measurement, countermeasures need to be developed. Therefore, different methods for centerline extraction and spatial interpolation have been tested and compared regarding their performance in reducing the discretization error in length measurement by re-continualization. In conclusion, the re-continualization of the centerline reduces the discretization error to an acceptable range. The discretization error depends on the complexity of the centerline, and this dependency is also reduced. Centerline extraction by erosion in combination with piecewise Bézier curve fitting performs best, reducing the error to 2.7% with an acceptable computational time.
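The core effect, pixel discretization inflating the measured centerline length and smoothing ("re-continualization") recovering it, can be reproduced in a few lines. A sketch assuming a sine-shaped vessel centerline and a simple box-filter smoothing; the actual pipeline described above (centerline extraction by erosion, piecewise Bézier fitting) is more elaborate:

```python
import numpy as np

# Ground-truth continuous centerline: sine-shaped vessel, units of pixels
x_true = np.linspace(0.0, 62.0, 4000)
y_true = 5.0 * np.sin(x_true / 10.0)
len_true = np.hypot(np.diff(x_true), np.diff(y_true)).sum()

# Discretization on the camera chip: 1 px sampling, y rounded to pixels
xs = np.arange(0.0, 63.0, 1.0)
ys = np.round(5.0 * np.sin(xs / 10.0))
len_disc = np.hypot(np.diff(xs), np.diff(ys)).sum()  # jagged -> too long

# Re-continualization: box-filter the quantized centerline
def smooth(v, w=7):
    vp = np.pad(v, w // 2, mode="edge")
    return np.convolve(vp, np.ones(w) / w, mode="valid")

len_smooth = np.hypot(np.diff(xs), np.diff(smooth(ys))).sum()

err_disc = abs(len_disc - len_true) / len_true      # several percent
err_smooth = abs(len_smooth - len_true) / len_true  # clearly smaller
```

With these (arbitrary) geometry parameters, the staircase error lands in the same few-percent range as the 6% reported above, and smoothing removes most of it.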
Abstract:
From migrating ions across the cell tissue all the way to the recording of an ECG: software simulation of the human heart enables tailored therapies. At the Karlsruhe Institute of Technology (KIT), researchers have developed a computer model of the human heart. The scientists have by now even reached the point where they can tailor the model to the individual characteristics of a single patient. This simulation is of considerable benefit to medical practice.
Abstract:
Therapeutic hypothermia (TH) is an approved neuroprotective treatment to reduce neurological morbidity and mortality after hypoxic-ischemic damage related to cardiac arrest and neonatal asphyxia. Also in the treatment of acute ischemic stroke (AIS), which in Western countries still shows a very high mortality rate of about 25%, selective mild TH by means of Targeted Temperature Management (TTM) could potentially decrease the final infarct volume. In this respect, a novel intracarotid blood cooling catheter system has recently been developed, which allows for combined carotid blood cooling and mechanical thrombectomy (MT) and aims at selective mild TH in the affected ischemic brain (core and penumbra). Unfortunately, direct measurement and control of the cooled cerebral temperature so far requires invasive or elaborate MRI-assisted measurements. Computational modeling, on the other hand, provides unique opportunities to predict the resulting cerebral temperatures. In this work, a simplified 3D brain model was generated and coupled with a 1D hemodynamics model to predict spatio-temporal cerebral temperature profiles using finite element modeling. Cerebral blood and tissue temperatures as well as the systemic temperature were analyzed for physiological conditions as well as for a middle cerebral artery (MCA) M1 occlusion. Furthermore, vessel recanalization and its effect on cerebral temperature were analyzed. The results show a significant influence of collateral flow on the cooling effect and are in accordance with experimental data in animals. Our model predicted a possible neuroprotective temperature decrease of 2.5 °C in the territory of MCA perfusion after 60 min of blood cooling, which underlines the potential of the new device and the use of TTM in case of AIS.
Abstract:
Background and Purpose: The exact mechanism of spontaneous pacemaking is not fully understood. Recent results suggest tight cooperation between intracellular Ca handling and sarcolemmal ion channels. An important player in this crosstalk is the Na/Ca exchanger (NCX); however, direct pharmacological evidence was unavailable so far because of the lack of a selective inhibitor. We investigated the role of the NCX current in pacemaking and analyzed the functional consequences of the I_f-NCX coupling by applying the novel selective NCX inhibitor ORM-10962 to the sinus node (SAN). Experimental Approach: Currents were measured by patch-clamp, and Ca transients were monitored by a fluorescent optical method in rabbit SAN cells. Action potentials (APs) were recorded from rabbit SAN tissue preparations. Mechanistic computational data were obtained using the Yaniv et al. SAN model. Key Results: ORM-10962 (ORM) marginally reduced the SAN pacemaking cycle length with a marked increase in the diastolic Ca level as well as the transient amplitude. The bradycardic effect of NCX inhibition was augmented when the funny current (I_f) was previously inhibited and, vice versa, the effect of I_f inhibition was augmented when Ca handling was suppressed. Conclusion and Implications: We confirmed the contribution of the NCX current to cardiac pacemaking using a novel NCX inhibitor. Our experimental and modeling data support a close cooperation between I_f and NCX with an important functional consequence: together, these currents establish a strong depolarization capacity that provides an important safety factor for stable pacemaking. Thus, after individual inhibition of I_f or NCX, excessive bradycardia or instability cannot be expected, because each of these currents may compensate for the reduction of the other, providing safe and rhythmic SAN pacemaking.
Abstract:
OBJECTIVE: Unipolar intracardiac electrograms (uEGMs) measured inside the atria during electro-anatomic mapping contain diagnostic information about cardiac excitation and tissue properties. The ventricular far field (VFF) caused by ventricular depolarization compromises these signals. Current signal processing techniques require several seconds of local uEGMs to remove the VFF component and thus prolong the clinical mapping procedure. We developed an approach to remove the VFF component using data obtained during initial anatomy acquisition. METHODS: We developed two models which can approximate the spatio-temporal distribution of the VFF component based on acquired EGM data: a polynomial fit and a dipole fit. Both were benchmarked based on simulated cardiac excitation in two models of the human heart and applied to clinical data. RESULTS: VFF data acquired in one atrium were used to estimate the model parameters. Under realistic noise conditions, a dipole model approximated the VFF with a median deviation of 0.029 mV, yielding a median VFF attenuation by a factor of 142. In a different setup, only VFF data acquired at distances of more than 5 mm to the atrial endocardium were used to estimate the model parameters. The VFF component was then extrapolated for a layer of 5 mm thickness lining the endocardial tissue. A median deviation of 0.082 mV (median VFF attenuation by a factor of 49) was achieved under realistic noise conditions. CONCLUSION: It is feasible to model the VFF component in a personalized way and effectively remove it from uEGMs. SIGNIFICANCE: Application of our novel, simple and computationally inexpensive methods allows immediate diagnostic assessment of uEGM data without prolonging data acquisition.
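The dipole fit can be illustrated as a linear least-squares problem: once a dipole position is fixed, the far-field potential is linear in the dipole moment. A self-contained sketch on synthetic data; the positions, units, and noise level are assumptions, and the clinical method additionally has to handle the temporal dimension and an unknown source position:

```python
import numpy as np

rng = np.random.default_rng(0)

def dipole_potential(points, r0, p):
    """Potential of a current dipole p at r0 in an infinite homogeneous
    medium (constant conductivity factors absorbed into p)."""
    d = points - r0
    r = np.linalg.norm(d, axis=1)
    return (d @ p) / r**3

# "Ground truth" ventricular far field: a dipole located below the atrium
r0 = np.array([0.0, 0.0, -8.0])          # hypothetical position, cm
p_true = np.array([0.3, -0.1, 1.0])      # hypothetical dipole moment

# Catheter sampling positions inside the atrial volume, plus noise
pts = rng.uniform(-3.0, 3.0, size=(200, 3))
v_clean = dipole_potential(pts, r0, p_true)
v_noisy = v_clean + rng.normal(0.0, 1e-4, size=v_clean.shape)

# Linear least squares for the dipole moment (r0 assumed known here)
d = pts - r0
A = d / np.linalg.norm(d, axis=1, keepdims=True) ** 3
p_fit, *_ = np.linalg.lstsq(A, v_noisy, rcond=None)

residual = np.abs(dipole_potential(pts, r0, p_fit) - v_clean)
```

Subtracting the fitted dipole field from the measured uEGMs then removes the modeled VFF component.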
Abstract:
End-stage chronic kidney disease (CKD) patients face a 30% higher risk of lethal cardiac events (LCE) compared to non-CKD patients. At the same time, these patients experience shifts in their potassium concentrations during dialysis. The increased risk of LCE paired with these concentration changes suggests a connection between LCE and concentration imbalances. To prove this link, a method for continuous monitoring of the ionic concentrations, e.g. based on the ECG, is needed. In this work, we want to answer whether an optimised signal processing chain can improve the result and quantify the influence of an imbalanced training dataset on the final estimation result. The study was performed on a dataset consisting of 12-lead ECGs recorded during dialysis sessions of 32 patients. We selected three features to find a mapping from ECG features to [K+]o: T-wave ascending slope, T-wave descending slope and T-wave amplitude. A polynomial model of 3rd order was used to reconstruct the concentrations from these features. We solved a regularised weighted least squares problem with a weighting matrix dependent on the frequency of each concentration in the dataset (frequent concentrations weighted less). By doing so, we tried to generate a model suitable for the whole range of concentrations. With weighting, errors increase for the whole dataset. For the data partition with [K+]o < 5 mmol/l, errors increase; for [K+]o ≥ 5 mmol/l, errors decrease. Apart from the exact reconstruction results, we can conclude that a model valid for all patients, and not only the majority, needs to be learned from a more homogeneous dataset. This can be achieved by leaving out data points or by weighting the errors during the model fitting. With increasing weighting, we increase the performance on the part of the [K+]o range that is less frequent, which was desired in our case.
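The weighted least squares step can be sketched as follows. The data are synthetic stand-ins (an assumed feature-to-[K+]o relation and an artificially imbalanced concentration distribution), and the regularisation term is omitted for brevity; only the inverse-frequency weighting mechanics mirror the described approach:

```python
import numpy as np

rng = np.random.default_rng(1)

# Imbalanced synthetic dataset: [K+]o >= 5 mmol/l is rare
k = np.concatenate([rng.uniform(3.5, 5.0, 300), rng.uniform(5.0, 7.0, 30)])
feat = 0.1 * k**2 + 0.05 * k + rng.normal(0.0, 0.02, k.size)  # assumed mapping

# 3rd-order polynomial model from the feature to the concentration
A = np.vander(feat, 4)

# Inverse-frequency weights: frequent concentrations are weighted less
hist, edges = np.histogram(k, bins=10)
hist = np.maximum(hist, 1)                 # guard against empty bins
w = 1.0 / hist[np.digitize(k, edges[1:-1])]
sw = np.sqrt(w)

coef_ols, *_ = np.linalg.lstsq(A, k, rcond=None)
coef_wls, *_ = np.linalg.lstsq(sw[:, None] * A, sw * k, rcond=None)

res_ols = k - A @ coef_ols
res_wls = k - A @ coef_wls

# OLS minimizes the plain SSE, WLS the weighted SSE: weighting trades
# overall accuracy for accuracy on the rare concentration range.
sse_ols, sse_wls = (res_ols**2).sum(), (res_wls**2).sum()
wsse_ols, wsse_wls = (w * res_ols**2).sum(), (w * res_wls**2).sum()
```

By construction, the unweighted error can only grow under weighting while the weighted error can only shrink, which is exactly the trade-off reported above.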
Abstract:
The morphology of the electrocardiogram (ECG) varies among different healthy subjects due to anatomical and structural reasons, such as the shape of the heart geometry or the position and size of surrounding organs in the torso. Knowledge about these ECG morphology changes could be used to parameterize electrophysiological simulations of the human heart. In this work, we detected the boundaries of ECG waveforms, i.e. the P-wave, the QRS-complex and the T-wave, in 12-lead ECGs from 918 healthy subjects in the Physionet Computing in Cardiology Challenge 2020 Database with the IBT openECG toolbox. Subsequently, we obtained the onset, the peak and the offset of each P-wave, QRS-complex and T-wave in the signal. In this way, the duration of the P-wave, the QRS-complex and the T-wave, the PQ-, RR- and QT-intervals as well as the amplitudes of the P-wave, the Q-, R- and S-peak and the T-wave in each lead were extracted from the 918 healthy ECGs. Their statistical distributions and correlations between each other were assessed. The highest variabilities among the 918 healthy subjects were found for the RR interval and the amplitudes of the QRS-complex. The highest correlation was observed for feature pairs that represent the same feature in different leads. Especially the R-peak amplitudes showed a strong correlation across different leads. The calculated feature distributions can be used to optimize the parameters of populations of cardiac electrophysiological models. In this way, realistic in-silico generated surface ECGs can be simulated at large scale and could be used as input data for machine learning algorithms for the classification of cardiovascular diseases.
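Assessing feature distributions and their cross-lead correlations reduces to a correlation matrix over per-subject feature vectors. A sketch with simulated stand-in features; the distributions, the shared amplitude factor, and the Bazett-type RR-QT coupling are assumptions for illustration, not values from the 918-subject dataset:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 918  # number of subjects

rr = np.clip(rng.normal(900.0, 120.0, n), 400.0, None)        # RR interval, ms
qt = 350.0 * np.sqrt(rr / 1000.0) + rng.normal(0.0, 10.0, n)  # QT interval, ms

# R-peak amplitudes in two leads sharing a common anatomical factor
common = rng.normal(1.0, 0.25, n)
r_amp_lead1 = 0.8 * common + rng.normal(0.0, 0.05, n)  # mV
r_amp_lead2 = 1.1 * common + rng.normal(0.0, 0.05, n)  # mV

features = np.vstack([rr, qt, r_amp_lead1, r_amp_lead2])
corr = np.corrcoef(features)  # 4x4 feature correlation matrix
```

In this toy setup, the same-feature/different-lead pair (rows 2 and 3) shows the strongest correlation, mirroring the finding reported for the R-peak amplitudes.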
Abstract:
Experts agree that learning systems, or machine learning, will also gain great importance in medicine and medical technology in the future, with advantages but also risks for patients, companies, and professionals. This raises a wide range of challenges in dealing with machine-learning systems, among others for practical treatment situations, for quality control, for safety in emergency situations, and for the evaluation of computer-suggested diagnoses and therapy paths. The present acatech POSITION is the result of a working group of scientists from medicine and engineering. The project group provides an overview of today's applications of machine learning in medical technology and highlights important future fields of application. It also focuses on ethical, legal, and regulatory aspects as well as critical questions of data protection and possible changes in the physician-patient relationship. In addition to proposals for building large medical databases, this position paper also gives recommendations for action to physicians, research funding institutions, and policymakers.
Abstract:
The fast conduction system, in particular the His-Purkinje system (HPS), is a key element for coordinated electrical activation of the heart. However, it is often omitted in computational studies. We hypothesized that the inclusion of the HPS is necessary when investigating arrhythmia maintenance and termination in an ischemic heart. We used a computational model of regionally-ischemic human ventricles reconstructed from magnetic resonance imaging data, and combined this with a rule-based HPS that produced a realistic activation pattern. Simulations using a high-frequency pacing protocol showed that re-entrant waves through the ischemic region may retrogradely activate the HPS, leading to self-terminating ventricular tachycardia (VT). Simulations without the HPS maintained the ischemia-induced VT, highlighting the role of the HPS in arrhythmia termination. Optical mapping recordings from isolated Langendorff-perfused rabbit hearts during regional ischemia and ischemia-reperfusion are compatible with the conclusions from the in-silico model, showing patterns of re-entry and termination that may be generated by retrograde HPS conduction.
Abstract:
The vascular function after interventions such as revascularization surgeries is checked intraoperatively and qualitatively by observing the blood flow dynamics in the vessel via Indocyanine Green (ICG) Fluorescence Angiography. This state-of-the-art technique does not provide the surgeon with objective information on whether the revascularization is sufficient and should be improved by obtaining a quantitative intraoperative optical blood flow measurement. Previous approaches using ICG Fluorescence Angiography show that the blood flow measurement does not match the reference and overestimates the flow. The experiments indicate that the amount of overestimation is linked to the vessel's diameter. In previous work, we quantified the propagated error in the flow calculation resulting from the errors in the measurement of the vessel's diameter and length, and realized that these errors alone cannot account for this deviation. The influence of the transit time error has not been revealed yet. We propose a model combining the penetration depth of diffusely reflected photons and the flow velocity profile to estimate the error in transit time measurement. The flow is assumed to be laminar. The photons' paths are obtained from a Monte Carlo simulation. This is used to determine the maximum penetration depth of each diffusely reflected photon and therefore state how the recorded signal is composed of the signals originating from different depths, in order to check the hypothesis that the error is systematically linked to the vessel's diameter. A simplified geometry is set as a homogeneous layer structure of vessel wall, blood and vessel wall. The total thickness ranges from 1 mm to 5 mm. The probability density of the depth distribution of the diffusely reflected photons and the parabolic flow profile are convolved to obtain a weighted average of the flow velocity, which is set in relation to the mean flow velocity.
The results show a clear dependency of the error in transit time measurement on the vessel's diameter, which complies qualitatively with the literature and confirms the hypothesis.
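The final convolution step can be sketched as a depth-weighted average of the parabolic profile. The exponential depth weighting below is a placeholder assumption; in the described model, this distribution comes from the Monte Carlo simulation:

```python
import numpy as np

def measured_velocity(mu, R=1.0, v_max=1.0, n=4000):
    """Photon-weighted mean of a laminar (parabolic) velocity profile.

    d is the penetration depth across the lumen measured from the near
    wall; exp(-mu*d) is an assumed stand-in for the Monte-Carlo-derived
    depth distribution of diffusely reflected photons."""
    d = np.linspace(0.0, 2.0 * R, n)
    v = v_max * (1.0 - ((R - d) / R) ** 2)  # parabolic flow profile
    w = np.exp(-mu * d)                     # photon depth weighting
    return (w * v).sum() / w.sum()

v_line_mean = 2.0 / 3.0            # unweighted mean along the diameter
v_small = measured_velocity(0.5)   # optically thin (small) vessel
v_large = measured_velocity(4.0)   # optically thick (large) vessel
```

Because shallow, slow near-wall layers dominate the reflected signal, the weighted velocity falls below the true line mean, and the bias grows with the vessel diameter expressed in units of the optical penetration depth, reproducing the diameter dependence of the transit time error.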
Abstract:
Atrial flutter (AFl) is a common heart rhythm disorder driven by different self-sustaining electrophysiological atrial mechanisms. In the present work, we sought to discriminate which mechanism is sustaining the arrhythmia in an individual patient using non-invasive 12-lead electrocardiogram (ECG) signals. Specifically, we analyse the influence of atrial and torso geometries on the success of such discrimination. 2,512 ECGs were simulated and 151 features were extracted from the signals. Three classification scenarios were investigated: random set classification; leave-one-atrium-out (LOAO); and leave-one-torso-out (LOTO). A radial basis neural network classifier achieved test accuracies of 89.84%, 88.98%, and 59.82% for the random set classification, LOTO, and LOAO, respectively. The most discriminative single feature was the F-wave duration (74% test accuracy). Our results show that a machine learning approach can potentially identify a high number of different AFl mechanisms using the 12-lead ECG. More than the 8 atrial models used in this work should be included during training due to the significant influence that the atrial geometry has on the ECG signals and thus on the resulting classification. This non-invasive classification can help to identify the optimal ablation strategy, reducing the time and resources required to conduct invasive cardiac mapping and ablation procedures.
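A radial basis network of the kind used here can be written compactly as a Gaussian hidden layer with a linear readout. A minimal sketch on two synthetic, well-separated feature clusters standing in for two AFl mechanisms; the cluster positions, widths, number of centers, and the ridge parameter are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for ECG feature vectors of two AFl mechanisms
X = np.vstack([rng.normal([0.0, 0.0], 0.7, (100, 2)),
               rng.normal([2.0, 1.5], 0.7, (100, 2))])
y = np.r_[np.zeros(100), np.ones(100)]

# RBF hidden layer: Gaussian units centered on random training samples
centers = X[rng.choice(len(X), 20, replace=False)]
sigma = 1.0

def rbf_features(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

# Linear readout trained by ridge regression
H = rbf_features(X)
w = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y)

pred = rbf_features(X) @ w > 0.5
train_acc = (pred == y).mean()
```

The real task is multi-class and evaluated with held-out atria/torsos, but the building blocks (Gaussian kernel expansion plus linear readout) are the same.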
Abstract:
A variety of biophysical and phenomenological active tension models have been proposed during the last decade that show physiological behaviour on the cellular level. However, applying these models in a whole-heart finite element simulation framework yields either unphysiological values of stress and strain or an insufficient deformation pattern compared to magnetic resonance imaging data. In this study, we evaluate how introducing an orthotropic active stress tensor affects the deformation pattern by conducting a sensitivity analysis regarding the active tension at resting length Tref and three orthotropic activation parameters (Kss, Ksn and Knn). Deformation during left ventricular contraction is evaluated on a truncated ellipsoid using four features: wall thickening (WT), longitudinal shortening (LS), torsion (Θ) and ejection fraction (EF). We show that EF, WT and LS are positively correlated with the parameters Tref and Knn, while Kss reduces all four observed features. Introducing shear stress to the model has little to no effect on EF, WT and LS, although it reduces torsion by up to 3°. We find that added stress in the normal direction can support healthy deformation patterns. However, the twisting motion, which has been shown to be important for cardiac function, is reduced by up to 20°.
Abstract:
Introduction: Multi-scale computational models of cardiac electrophysiology are used to investigate complex phenomena such as cardiac arrhythmias, their therapies, and the testing of drugs or medical devices. While a couple of software solutions exist, none fully meets the needs of the community. In particular, newcomers to the field often have to go through a very steep learning curve, which could be eased by dedicated user interfaces, documentation, and training material. Outcome: openCARP is an open cardiac electrophysiology simulator, released to the community to advance the computational cardiology field by making state-of-the-art simulations accessible. It aims to achieve this by supporting self-driven learning. To this end, an online platform is available containing educational video tutorials, user- and developer-oriented documentation, detailed examples, and a question-and-answer system. The software is written in C++. We provide binary packages, a Docker container, and a CMake-based compilation workflow, making the installation process simple. The software can fully scale from desktops to high-performance computers. Nightly tests are run to ensure the consistency of the simulator based on predefined reference solutions, keeping a high standard of quality for all its components. openCARP interoperates with different standard input/output data formats. Additionally, sustainability is achieved through automated continuous integration, which generates not only software packages but also documentation and content for the community platform. Furthermore, carputils provides a user-friendly environment to create complex, multi-scale simulations that are shareable and reproducible. Conclusion: openCARP is a tailored software solution for the scientific community in the cardiac electrophysiology field and contributes to increasing the use and reproducibility of in-silico experiments.
Abstract:
During neurovascular surgery, the vascular function can be checked intraoperatively and qualitatively by observing the blood dynamics inside the vessel via Indocyanine Green (ICG) Fluorescence Angiography. This state-of-the-art method provides the surgeon with valuable semi-quantitative information but needs to be improved towards a quantitative assessment of vascular volume flow. The precise measurement of volume flow relies on the assumption that both the inner geometry of the blood vessel and the blood flow velocity can be precisely obtained from Fluorescence Angiography. The correct reconstruction of the inner diameter of the vessel is essential in order to minimize the propagated error in the flow calculation. Although ICG binds specifically to blood plasma proteins, the fluorescent light also radiates from outside the inner vessel volume due to multiple scattering in the vessel wall, causing a fading edge intensity contrast. A spatial-gradient-based segmentation method is proposed to reliably estimate the inner diameter of cerebral vessels from intraoperative Fluorescence Angiography images. As a result, the minimum of the second derivative of the intensity values perpendicular to the vessel's edge was identified as the best feature to assess the inner diameter of artificial vessel phantoms. This method has been applied to cerebrovascular vessel images and the results, since no ground truth is available, comply with literature values.
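The edge feature can be demonstrated on a one-dimensional intensity profile: blur a top-hat lumen profile to mimic the fading edge contrast and locate the minima of the second derivative on either side. Note that for the purely symmetric Gaussian blur assumed here, those minima sit roughly one blur width inside the true edges, so this toy example underestimates the diameter; the described method establishes the feature on phantoms with the actual scattering-induced edge fading:

```python
import numpy as np

# Synthetic intensity profile perpendicular to the vessel:
# top-hat lumen (inner diameter 20 px) blurred by a Gaussian (sigma 4 px)
x = np.arange(200.0)
profile = ((x >= 90.0) & (x < 110.0)).astype(float)
kernel = np.exp(-0.5 * (np.arange(-15.0, 16.0) / 4.0) ** 2)
profile = np.convolve(profile, kernel / kernel.sum(), mode="same")

# Second derivative along the profile; one minimum flanks each edge
d2 = np.gradient(np.gradient(profile, x), x)

center = 100
left = np.argmin(d2[:center])
right = center + np.argmin(d2[center:])
diameter = right - left  # underestimates 20 px by ~2*sigma here
```

The same per-profile search, applied along lines perpendicular to the vessel centerline, yields a diameter estimate for each centerline point.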
Abstract:
Confocal laser endomicroscopy (CLE) has found an increasing number of applications in clinical and pre-clinical studies, as it allows intraoperative in-situ assessment of tissue morphology at cellular resolution. CLE is considered one of the most promising systems for in-vivo pathological diagnostics. Miniaturized imaging probes are designed for intraoperative applications. Due to their less sophisticated optical design, CLE systems are more prone to image aberrations and distortions. Since diagnostics with CLE takes reference from the corresponding histological images, the determination of the resolution and aberrations of CLE systems becomes essential. Therefore, an on-site quality check of system performance is required. Additionally, these compact systems offer a field of view of less than half a square millimeter without a zooming function, which makes it difficult to relate human vision to the microscopic scenes. Therefore, it is necessary to have defined microstructures serving as a test target for CLE systems. We have extended the 2D bar pattern of the 1951 USAF test chart to 3D structures for both lateral and axial resolution assessment, since the axial resolution represents the optical sectioning ability of CLE systems and is one of the key parameters to be assessed. The test target was produced by direct laser writing. Yellow-green fluorescence emission can be excited at 488 nm. It can also be used for other fluorescence microscopic imaging modalities in the corresponding wavelength range.
Abstract:
To measure blood flow distributions within the lungs at the bedside, Electrical Impedance Tomography measurements based on conductive indicator signals have recently been proposed. The first passage of the indicator signal through the lungs is exploited, but it needs to be separated from a superimposed slow drift signal. Two fitting approaches are presented in this paper to accomplish this separation task. The accuracy of estimated first-pass signal features is investigated on a synthetic database. Both algorithms alter the shape of the indicator signal similarly. The algorithms are finally tested on real data from a preclinical porcine study.
Abstract:
Atrial fibrillation (AF) is an irregular heart rhythm due to disorganized atrial electrical activity, often sustained by rotational drivers called rotors. In the present work, we sought to characterize and discriminate whether simulated single stable rotors are located in the pulmonary veins (PVs) or not, using only non-invasive signals (i.e., the 12-lead ECG). Several features were extracted from the signals, such as Hjorth descriptors, recurrence quantification analysis (RQA), and principal component analysis. All the extracted features showed significant discriminatory power, with particular emphasis on the RQA parameters. A decision tree classifier achieved 98.48% accuracy, 83.33% sensitivity, and 100% specificity on simulated data. Clinical relevance: This study might guide ablation procedures, suggesting that physicians proceed directly to pulmonary vein isolation in some patients and avoid the prior use of an invasive atrial mapping system.
Abstract:
Over the last decades, computational models have been applied in in-silico simulations of heart biomechanics. These models depend on input parameters. In particular, four parameters are needed for the constitutive law of Guccione et al., a model describing the stress-strain relation of heart tissue. In the literature, we could find a wide range of values for these parameters. In this work, we propose an optimization framework which identifies the parameters of a constitutive law. This framework is based on experimental measurements conducted by Klotz et al., who provide an end-diastolic pressure-volume relationship. We applied the proposed framework to one heart model and identified the following elastic parameters to optimally match the Klotz curve: 𝐶 = 313 Pa, 𝑏𝑓 = 17.8, 𝑏𝑡 = 7.1 and 𝑏𝑓𝑡 = 12.4. In general, this approach allows the identification of optimized parameters for a constitutive law for a patient-specific heart geometry. The use of optimized parameters will lead to physiological simulation results of heart biomechanics and is therefore an important step towards applying computational models in clinical practice.
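The optimization target in the work above is the empirical Klotz end-diastolic pressure-volume relationship, which in its normalized form is a power law P = α·Vnᵝ. As an illustration of fitting that target (a toy stand-in for the finite-element optimization, with α and β chosen for demonstration rather than taken from Klotz et al.), the power law can be fitted by linear least squares in log-log space:

```python
import math

def fit_klotz(vn, p):
    """Fit P = alpha * Vn^beta by linear least squares on
    ln P = ln alpha + beta * ln Vn."""
    xs = [math.log(v) for v in vn]
    ys = [math.log(q) for q in p]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = math.exp(my - beta * mx)
    return alpha, beta

# Synthetic end-diastolic pressure-volume points (illustrative values)
alpha_true, beta_true = 30.0, 2.8
vn = [0.1 * i for i in range(2, 11)]           # normalized volumes 0.2 .. 1.0
p = [alpha_true * v ** beta_true for v in vn]  # pressures
alpha, beta = fit_klotz(vn, p)
```

In the actual framework, the simulated pressure-volume pairs from the heart model would take the place of the synthetic points, and the constitutive parameters would be adjusted until the simulated curve matches this target.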
Abstract:
Transit times of a bolus through an organ can provide valuable information for researchers, technicians and clinicians. To measure them, an indicator is injected and its temporal propagation is monitored at two distinct locations. The transit time extracted from two indicator dilution curves can be used to calculate, for example, blood flow and thus provide the surgeon with important diagnostic information. However, the performance of methods to determine the transit time Δt cannot be assessed quantitatively due to the lack of a sufficient and trustworthy ground truth derived from in vivo measurements. Therefore, we propose a method to obtain an in silico generated dataset of differently subsampled indicator dilution curves with a ground truth of the transit time. This method allows variations in shape, sampling rate and noise while being accurate and easily configurable. COMSOL Multiphysics is used to simulate a laminar flow through a pipe containing a blood analogue. The indicator is modelled as a rectangular function of concentration in a segment of the pipe. Afterwards, a flow is applied and the rectangular function is diluted. Shape-varying dilution curves are obtained by discrete-time measurement of the average dye concentration over different cross-sectional areas of the pipe. One dataset is obtained by duplicating one curve followed by subsampling, delaying and applying noise. Multiple indicator dilution curves were simulated, which qualitatively match in vivo measurements. The curves' temporal resolution, delay and noise level can be chosen according to the requirements of the field of research. Various datasets, each containing two corresponding dilution curves with a known ground truth transit time, are now available. With additional knowledge or assumptions regarding the detection-specific transfer function, realistic signal characteristics can be simulated.
The accuracy of methods for the assessment of Δt can now be quantitatively compared and their sensitivity to noise evaluated.
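One of the Δt estimators that such a ground-truth dataset can benchmark is the cross-correlation lag between the two dilution curves. A minimal sketch (the gamma-variate-like curve shape and the 0.1 s sampling are illustrative assumptions, not the dataset's settings):

```python
import math

def transit_time(c1, c2, dt):
    """Estimate the transit time between two indicator dilution curves as the
    non-negative lag that maximizes their cross-correlation."""
    n = len(c1)
    best_lag, best_score = 0, float("-inf")
    for lag in range(n):
        score = sum(c1[i] * c2[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag * dt

# Two gamma-variate-like curves, the second delayed by 1.2 s (0.1 s sampling)
gamma = lambda t: (t ** 3) * math.exp(-t / 0.5) if t > 0 else 0.0
ts = [0.1 * i for i in range(200)]
c1 = [gamma(t) for t in ts]
c2 = [gamma(t - 1.2) for t in ts]
tt = transit_time(c1, c2, 0.1)
```

Against the in silico dataset, the estimate `tt` could be compared directly with the known ground-truth Δt under varying noise and subsampling.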
Abstract:
Numerical simulations are increasingly involved in developing new and improving existing medical therapies. While the models involved in those simulations are designed to resemble a specific phenomenon realistically, the results of the interplay of those models are often not sufficiently validated. We created a plugin for a cardiac simulation framework to validate the simulation results using clinical MRI data. The MRI data were used to create a static whole-heart mesh as well as slices from the left ventricular short axis, providing the motion over time. The static heart was a starting point for a simulation of the heart's motion. From the simulation result, we created slices and compared them to the clinical MRI slices using two different metrics: the area of the slices and the point distances. The comparison showed global similarities in the deformation of simulated and clinical data, but also indicated points for potential improvements. Performing this comparison with more clinical data could lead to personalized modeling of the elastomechanics of the heart.
Abstract:
The indicator dilution method (IDM) is one approach to measure pulmonary perfusion using Electrical Impedance Tomography (EIT). To be able to calculate perfusion parameters and to increase robustness, it is necessary to approximate and then separate the components of the measured signals. The component referring to the passage of the injected bolus through the pixels can be modeled as a gamma variate function, whose parameters are often determined using nonlinear optimization algorithms. In this paper, we introduce a linear approach that enables higher robustness and faster computation, and compare the linear and nonlinear fitting approaches on data from an animal study.
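One common way to make the gamma variate fit linear (shown here as an illustration; it is not necessarily the exact formulation used in the paper) is to take the logarithm of y(t) = A·(t−t0)^α·exp(−(t−t0)/β), which is linear in the coefficients [ln A, α, 1/β] and can be solved by ordinary least squares:

```python
import math

def fit_gamma_variate(ts, ys, t0=0.0):
    """Fit y = A * (t-t0)^alpha * exp(-(t-t0)/beta) by linear least squares
    on ln y = ln A + alpha*ln(tau) - tau/beta, with tau = t - t0."""
    rows = [(1.0, math.log(t - t0), -(t - t0), math.log(y))
            for t, y in zip(ts, ys) if t > t0 and y > 0]
    # Normal equations M a = v for the coefficients a = [ln A, alpha, 1/beta]
    M = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * r[3] for r in rows) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p], v[c], v[p] = M[p], M[c], v[p], v[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 3):
                M[r][k] -= f * M[c][k]
            v[r] -= f * v[c]
    a = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        a[r] = (v[r] - sum(M[r][k] * a[k] for k in range(r + 1, 3))) / M[r][r]
    return math.exp(a[0]), a[1], 1.0 / a[2]

# Recover known parameters from a noise-free synthetic bolus curve
A_t, alpha_t, beta_t = 2.0, 3.0, 0.4
ts = [0.05 * i for i in range(1, 120)]
ys = [A_t * t ** alpha_t * math.exp(-t / beta_t) for t in ts]
A, alpha, beta = fit_gamma_variate(ts, ys)
```

The log transform changes the error weighting compared to a nonlinear fit on the raw signal, which is one reason the two approaches can behave differently on noisy EIT data.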
Abstract:
This work uses a highly detailed computational model of the human atria to investigate the effect of spatial gradients and smoothing of atrial wall thickness on the inducibility and maintenance of atrial fibrillation (AF) episodes. An atrial model with homogeneous thickness (HO) was used as the baseline for the generation of different atrial models including either a low (LG) or a high thickness gradient between the left/right atrial free wall and the other regions. Since the model with the high spatial gradient presented unnatural sharp edges between regions, either 1 (HG1) or 2 (HG2) Laplacian smoothing iterations were applied. Arrhythmic episodes were initiated using a rapid pacing protocol, and long-living rotors were detected and tracked over time. Thresholds optimised with receiver operating characteristic analysis were used to define high gradient/curvature regions. Greater spatial gradients increased the inducibility of the atrial model and unveiled additional regions vulnerable to maintaining AF drivers. In the models with heterogeneous wall thickness (LG, HG2 and HG1), 73.5 ± 8.7% of the long-living rotors were found in areas within 1.5 mm of nodes with a high thickness gradient, and 85.0 ± 3.4% in areas around high endocardial curvature. These findings promote wall thickness gradient and endocardial curvature as measures of AF vulnerability.
Abstract:
Atrial fibrillation (AF) is the most frequent irregular heart rhythm, caused by disorganized atrial electrical activity and often sustained by rotational drivers called rotors. Non-invasive localization of AF drivers can lead to improved personalized ablation strategies, suggesting pulmonary vein (PV) isolation or more complex extra-PV ablation procedures in case the driver is in another atrial region. We used a machine learning approach to characterize and discriminate the location of simulated single stable rotors (1R): the PVs, the left atrium (LA) excluding the PVs, and the right atrium (RA), using solely non-invasive signals (i.e., the 12-lead ECG). 1R episodes sustaining AF were simulated, and 128 features were extracted from the signals. A greedy forward algorithm was implemented to select the best feature set, which was fed to a decision tree classifier with a hold-out cross-validation technique. All tested features showed significant discriminatory power, especially those based on recurrence quantification analysis (up to 80.9% accuracy with single-feature classification). The decision tree classifier achieved 89.4% test accuracy with 18 features on simulated data, with sensitivities of 93.0%, 82.4%, and 83.3% for the RA, LA, and PV classes, respectively. Our results show that a machine learning approach can potentially identify the location of 1R sustaining AF using the 12-lead ECG.
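The greedy forward selection used above can be sketched in a few lines. This is an illustration only: the toy data and the nearest-centroid classifier stand in for the study's 128 ECG features and decision tree, and the wrapper simply adds, at each step, the feature that most improves training accuracy:

```python
def nearest_centroid_predict(train_X, train_y, x, feats):
    """Classify x by the nearest class centroid, using only `feats`."""
    best, best_d = None, float("inf")
    for c in sorted(set(train_y)):
        pts = [p for p, y in zip(train_X, train_y) if y == c]
        cent = [sum(p[f] for p in pts) / len(pts) for f in feats]
        d = sum((x[f] - cent[i]) ** 2 for i, f in enumerate(feats))
        if d < best_d:
            best, best_d = c, d
    return best

def accuracy(X, y, feats):
    return sum(nearest_centroid_predict(X, y, x, feats) == t
               for x, t in zip(X, y)) / len(y)

def greedy_forward(X, y, n_feats):
    """Greedy forward selection: repeatedly add the feature whose inclusion
    yields the highest accuracy."""
    selected, remaining = [], list(range(len(X[0])))
    while remaining and len(selected) < n_feats:
        f = max(remaining, key=lambda f: accuracy(X, y, selected + [f]))
        selected.append(f)
        remaining.remove(f)
    return selected

# Toy data: feature 0 separates the classes, feature 1 is noise
X = [[0.1, 5.0], [0.2, 1.0], [0.3, 4.0], [2.1, 2.0], [2.2, 5.0], [2.3, 0.0]]
y = [0, 0, 0, 1, 1, 1]
order = greedy_forward(X, y, 2)
```

With hold-out validation as in the study, the accuracy used for selection would be computed on data not seen during training to avoid overfitting the feature set.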
Abstract:
Changes in atrial fibrillation cycle length (AF-CL) are broadly used as a 'ground truth' to assess the effect of substrate modification during AF ablation. This work sought to optimize thresholds for changes in coronary sinus cycle length (CS-CL) after local ablation using different atrial electrogram (AEG)-derived markers. 834 AEGs were collected from 11 patients undergoing persistent AF ablation. CS-CL was measured before and after each ablation point. Five AEG-derived markers were tested as classifiers for CS-CL changes: ICL (Biosense Webster), CFE-Mean (St. Jude Medical), Wave Similarity, Shannon Entropy and AEG-CL. The area under the receiver operating characteristic curve (AUROC) was used to assess the quality of classification for each marker. Maximum AUROC was found at threshold values between 9 and 14 ms for all markers except Shannon Entropy. The average AUROC of the five markers reached a maximum of 0.60 at a threshold value of 10 ms. The 10 ms threshold is suggested as a starting setpoint for future studies seeking to identify AF ablation targets based on an objective 'ground truth'.
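The threshold sweep above can be sketched as follows: each candidate CS-CL threshold defines its own binary labels, and the AUROC of a marker is computed per threshold (here via the Mann-Whitney statistic). The marker and CS-CL values are synthetic illustrations, not data from the study:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive scores higher than a random
    negative (ties count 1/2)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic marker values and CS-CL changes (ms) per ablation point
marker = [0.2, 0.1, 0.3, 0.4, 0.9, 0.7, 0.8, 0.5]
cscl_change_ms = [1, 2, 4, 6, 8, 11, 12, 15]

# Sweep candidate thresholds: labels flip with the threshold, so each
# threshold yields its own AUROC for the same marker values.
best = max(range(2, 16),
           key=lambda thr: auroc(marker, [int(c >= thr) for c in cscl_change_ms]))
```

In the study this sweep is performed per marker and the per-threshold AUROCs are averaged across the five markers before picking the optimum.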
Abstract:
Beat acceptance and rejection during atrial tachycardia are crucial for the compilation of meaningful electroanatomical maps during an electrophysiological study. State-of-the-art methods compare the delays in activation time between two or more electrograms recorded with electrodes of a spatially stable reference catheter. This work introduces morphology-based measures for beat selection in the context of mapping atrial tachycardia. Active segments were extracted from bipolar reference electrograms with the help of the non-linear energy operator. After prealignment by means of maximum cross-correlation, the correlation coefficient as well as the normalized 1-norm distance yielded a similarity measure for each pair of prealigned active segments. The morphology-based measures were then compared to the delay-based measure. In an exemplary patient with 5163 recorded beats, the delay-based measures were strongly dependent on the accuracy of the local activation times as well as on the selection of reference leads. The morphology-based measures emphasized changes in the target tachycardia which were not detectable by the delay-based method. The correlation and the distance measure showed similar behavior but stressed different aspects of morphological changes. Ventricular components in active segments caused minor changes in morphology which were also reflected in the morphology-based measures. The morphology-based measures introduced in this work enhanced beat selection in the exemplary patient. A follow-up study with a representative patient cohort needs to quantify the improvement across patients and translate the measures to clinical practice. A combination of activation delays and morphological similarity is expected to exploit the advantages of both methods for beat selection.
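The pipeline above (non-linear energy operator for active-segment extraction, cross-correlation prealignment, correlation coefficient as similarity) can be sketched on synthetic deflections. The pulse shape, threshold fraction, and lag range are illustrative assumptions, not the study's settings:

```python
import math

def nleo(x):
    """Non-linear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def active_segment(x, frac=0.1):
    """First/last index where the NLEO exceeds a fraction of its maximum."""
    e = nleo(x)
    thr = frac * max(e)
    idx = [n + 1 for n, v in enumerate(e) if v > thr]
    return min(idx), max(idx)

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Two identical atrial deflections, the second shifted by 5 samples
pulse = lambda n, c: math.exp(-((n - c) / 3.0) ** 2) * math.sin(0.9 * (n - c))
a = [pulse(n, 40) for n in range(100)]
b = [pulse(n, 45) for n in range(100)]

# Prealign by the lag maximizing the cross-correlation, then correlate
best = max(range(-10, 11),
           key=lambda L: sum(a[n] * b[n + L] for n in range(100) if 0 <= n + L < 100))
aligned = [b[(n + best) % 100] for n in range(100)]
r = pearson(a, aligned)
```

A normalized 1-norm distance between the aligned segments would complement `r`, as the two measures stress different aspects of morphological change.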
Abstract:
Atrial fibrillation (AF) is the most common cardiac arrhythmia seen in clinical practice, and its treatment by antiarrhythmic drugs is still often ineffective. Radiofrequency catheter ablation (RFA) has been widely accepted as a strategy to prevent AF by creating myocardial lesions that block the propagation of AF electrical wavefronts and eliminate arrhythmogenic tissue. In this study, we analyzed the electrophysiological impact of different RFA time-duration strategies through a controlled animal protocol. Electrical activity of the isolated right atrium of rats, under different RFA time strategies applied on the epicardium, was acquired for 4 s on the endocardium by electrical mapping (EM) and simultaneously by optical mapping (OM). Analyses concentrated on both the time and frequency domains, through analysis of signal morphology, local activation time, conduction velocity, dominant frequency (DF), and organization index (OI). The morphology of the optical and electrical signals was altered as the ablation time increased, making it difficult to identify activation times. Moreover, DF and OI decreased with increasing ablation time, reflected in fragmented electrograms. Through the characterization of traditional metrics applied to the electrical and optical data, it was possible to identify important changes, in time and frequency, inside the ablated regions.
Abstract:
Electrical Impedance Tomography (EIT) is a clinically used tool for bedside monitoring of ventilation. Previous work also showed a high potential for lung perfusion monitoring with indicator-enhanced EIT. However, many research questions have yet to be answered before it can be broadly applied in everyday clinical practice. The goal of this work is to evaluate a new method to improve EIT perfusion measurements. Pulmonary hemodynamic transfer functions were estimated using regularized deconvolution with Tikhonov regularization to estimate spatial perfusion parameters. The final comparison between EIT images and PET scans showed a median correlation of 0.897 for the images reconstructed using the regularized deconvolution. In comparison, the previously used maximum slope method led to a median correlation of 0.868.
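Tikhonov-regularized deconvolution as used above can be sketched in its standard discrete form: with the input bolus written as a lower-triangular Toeplitz convolution matrix A, the transfer function h solves (AᵀA + λI)h = Aᵀy. The impulse response, bolus, and λ below are illustrative assumptions, not values from the work:

```python
def solve(M, v):
    """Gaussian elimination with partial pivoting."""
    n = len(v)
    M = [row[:] for row in M]
    v = v[:]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p], v[c], v[p] = M[p], M[c], v[p], v[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
            v[r] -= f * v[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (v[r] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def tikhonov_deconvolve(u, y, lam):
    """Estimate h from y = conv(u, h) by solving (A^T A + lam*I) h = A^T y,
    where A is the lower-triangular Toeplitz matrix built from u."""
    n = len(y)
    A = [[u[i - j] if 0 <= i - j < len(u) else 0.0 for j in range(n)]
         for i in range(n)]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(n)]
    return solve(AtA, Aty)

# Known impulse response, indicator bolus, and resulting dilution signal
h_true = [0.0, 0.3, 0.6, 0.4, 0.2, 0.1] + [0.0] * 9
u = [1.0, 0.8, 0.5, 0.2]
y = [sum(u[j] * h_true[i - j] for j in range(len(u)) if 0 <= i - j < 15)
     for i in range(15)]
h = tikhonov_deconvolve(u, y, 1e-6)
```

On noisy pixel signals λ would be chosen much larger, trading fidelity for stability; perfusion parameters are then read off the recovered transfer function.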
Abstract:
Acute ischemic stroke is a major health problem due to its high mortality rate and high residual risk for permanent disabilities. Targeted temperature management in terms of hypothermia is known to have neuroprotective effects and can potentially reduce the cerebral harm caused by an acute ischemic stroke. Nevertheless, available clinical studies show that the efficacy depends on various factors such as the timing, duration, and depth of hypothermia. In this context, selective brain hypothermia by means of endovascular blood cooling and its combination with mechanical thrombectomy appears especially promising. A novel catheter system enables the direct combination of endovascular blood cooling and thrombectomy using the same endovascular access. In this context, pre-reperfusion cooling of penumbral tissue by cold leptomeningeal collateral blood flow might mitigate the risk of a reperfusion injury. However, direct measurements of the blood temperature in the penumbra and of the temperature decrease induced by the novel catheter are not possible without additional harm to the patient. Additionally, cerebral circulation varies distinctly between patients and can influence the cooling conditions. A computational model can provide an alternative to temperature measurements and can help to gain knowledge about influences on the catheter's cooling performance. This work presents the development of a brain temperature model that is based on a realistic 3D brain geometry. Divided into gray and white matter, the geometry considers spatially resolved blood perfusion rates. To account for realistic spatial blood perfusion, a detailed hemodynamics model of the cerebral arterial anatomy was coupled. The hemodynamics model includes the possibility of personalization based on real patient anatomy. For the evaluation of the catheter's performance, a complete right middle cerebral artery occlusion was simulated for different scenarios of cerebral arterial anatomy.
The model predicted a distinct influence of congenital arterial variations in the circle of Willis on the cooling capacity and on the resulting spatial temperature distribution before vessel recanalization. Nevertheless, the model showed a possible cold reperfusion in the penumbra due to a strong increase in cooling performance after the recanalization (a temperature decrease of 1.4 to 2.2 °C 25 min after the start of cooling, with recanalization 20 min after the start of cooling). This steep decrease in temperature was independent of the cerebral arterial anatomy. The developed model demonstrates the effectiveness of endovascular blood cooling in combination with mechanical thrombectomy. Moreover, the model can contribute to the identification of possible influencing factors in the therapy of acute ischemic stroke with targeted temperature management, which is a timely and highly relevant topic, as similar data can hardly be obtained by studying stroke cases in clinics.
Abstract:
As digitalization moves forward in the health sector, new ways of measuring patients' outcomes and current status are evolving. Alongside user-centered design, patient-centered therapy is also becoming more important: instead of measuring only the pure medical outcome, the circumstances and journey of the patient play a bigger role. The quality of life of chronically ill patients is measured to offer the best support for individuals, nudging them to adhere to their therapy and to enjoy their lives as much as possible. Patients who suffer from end-stage renal disease (ESRD) have to wait on average four years for their transplantation [1]. During this waiting time, dialysis therapy occupies large parts of their life and freedom. The use of computer-aided technologies to measure patients' quality of life and well-being can make the process easier to handle for patients: complications can be detected earlier, psychological illnesses can be tracked, and treatment can be simplified. Following this idea, this thesis researches possible levers for quality of life, technologies for measurement, and influencing factors. Further, fluid measurement in dialysis patients through detecting shifts in hand swelling is proposed as an option for improving patient monitoring and maintaining the quality of life level. When monitoring the patient, the measurement burden also needs to be considered, as it reduces the perceived quality of life. Therefore, the goal is to measure the patient seamlessly and continuously. Image analysis and pervasive sensing are promising; especially video analysis offers many opportunities to measure psychological as well as physiological status. Many symptoms that threaten quality of life (QoL) can be measured and diagnosed via optical measurements. The proof of concept (PoC) of measuring the fluid status of dialysis patients by taking pictures of their hands showed the technical feasibility of the concept.
It could be shown that the hand would swell approximately 1.5 mm if the dialysis patient suffers from 3 litres of fluid overload, which is common. This change should be detectable by an average smartphone camera. Pictures of different-sized hands were taken and analysed regarding the area change of the hand contour. The same procedure was repeated for pictures of water-filled gloves and of hands in water-filled gloves. The area size in pixels was then converted into cm using the known size and pixel area of a 5-cent coin. Even though only a few experiments were made, the results showed a significant correlation, with a p-value of 5.7e-06, between the measured pixel area and the real volume change of the hand or glove. Next steps would be to apply this process to real swollen hands to test whether the gradient is large enough to measure the fluid changes, first in a binary and later in a quantitative manner, in dialysis patients. Applying deep learning algorithms to the pictures could be added in a later step; wrinkles, skin colour, and reflection, as well as other currently unknown factors, could thereby be considered in the fluid status determination and may lead to more precise results. As the PoC was focussed on tightly controlled circumstances and surroundings, more experiments are necessary to determine whether the same results can be found in a less stable environment with different backgrounds, skin colours, lighting and other factors.
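The coin-based scale conversion described above can be sketched as follows. The coin diameter is the nominal value of a 5 euro cent coin, and the pixel counts are made-up examples, not measurements from the thesis:

```python
import math

COIN_DIAMETER_MM = 21.25  # nominal 5 euro cent coin diameter (assumption)

def mm2_per_pixel(coin_pixel_area):
    """Scale factor from the known physical area of the reference coin."""
    coin_area_mm2 = math.pi * (COIN_DIAMETER_MM / 2.0) ** 2
    return coin_area_mm2 / coin_pixel_area

def hand_area_cm2(hand_pixel_area, coin_pixel_area):
    """Convert a hand-contour pixel area to cm^2 via the coin scale."""
    return hand_pixel_area * mm2_per_pixel(coin_pixel_area) / 100.0

# Example: the coin covers 3546 px, the hand contour 120000 px
scale = mm2_per_pixel(3546.0)
area = hand_area_cm2(120000.0, 3546.0)
```

Tracking `area` across dialysis sessions would then expose the contour change expected from fluid-overload-related swelling, provided the coin is imaged in the same plane as the hand.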
Abstract:
More than 30 million adults in the United States suffer from chronic kidney disease (CKD). In the end stages of CKD, patients are often treated by haemodialysis to regulate, for example, their extracellular concentrations of potassium. Deaths caused by cardiovascular events are 10% to 30% more frequent in CKD patients than in patients with normal kidney function. Rapid changes in ionic concentrations are one possible explanation, which is why a continuous, non-invasive monitoring tool could provide new insights and ultimately save lives. It has been shown that the electrocardiogram (ECG) is suitable for determining ionic concentration changes. In this work, methods for processing data of haemodialysis patients are evaluated to improve estimation results. During a session, patients underwent a twelve-lead ECG recording and had five to eight extracorporeal blood tests taken. In total, 84 sessions of 36 patients were available. One half of the data set is used to find the best processing variant for six chosen reconstruction features. Using a patient-specific approach, the best estimation of potassium concentrations was achieved with the TS/SQA feature, leading to an absolute error and standard deviation of 0.34 ± 0.27 mmol/l. Applying the same data processing options to the second half of the data set, an absolute error and standard deviation of 0.59 ± 0.54 mmol/l was determined. A possible cause for these varying results might be the different quality of the data. Nevertheless, it is concluded that a personalized approach for ionic concentration reconstruction is advantageous, because patient-specific characteristics such as the orientation of the heart can be compensated. The impact of the heart orientation in the torso was investigated in the second part of this thesis. A model of human ventricles was rotated inside the torso around three axes. After changing the orientation, the local activation times were calculated with a fast marching method.
Using single-cell simulations with the Himeno et al. cell model and the boundary element method, the ECG was computed and the previously used features were determined. The feature values deviated from each other; for example, the amplitude of the T wave varied by 0.9 mV in the worst case for the same potassium concentration but different orientations of the heart. The conclusion is that a definite reconstruction is not possible without knowing the exact orientation. This also supports the findings from the first part of this work: the use of a patient-specific approach delivers better results because anatomical attributes can be compensated. With regard to a global approach, methods for the compensation of patient-specific characteristics need to be found.
Abstract:
In neurovascular surgery, the surgeon tries to restore the vascular function, in particular the blood flow (ml/s). State-of-the-art methods rely on tissue contact or even wrap around the vessel, which increases the risk of complications during the intervention. Optical methods could avoid these risks and provide the surgeon with the needed data. Quantitative fluorescence angiography (QFA) has become an important research focus in recent years. Hereby, a camera is typically integrated into the microscope, which records the fluorescence dynamics of indocyanine green (ICG) in the near-infrared (NIR). Experimental pre-studies have shown a mismatch between the flow value obtained by QFA and the reference. Therefore, a Monte Carlo (MC) simulation of the propagation of photons in a vessel was developed to predict this mismatch quantitatively. So far, this simulation assumes a homogeneous distribution of ICG. This thesis aims to check the validity of this assumption by computing the spatial and temporal ICG distribution via COMSOL Multiphysics. Further, the distribution can be used as an input for the MC model to eliminate the assumption of a homogeneous ICG distribution. For this task, 20 different simulations were conducted by varying the radius of the vessel model (VM) and the blood flow rate. The investigation of the homogeneous phase of the ICG distribution revealed that this phase is very short, ranging from 0.37 to 0.05 seconds. These periods were so small that the flow of the ICG molecules through the VM can be modeled as heterogeneous.
Abstract:
The incidence of atrial flutter and atrial fibrillation is steadily increasing in industrialized societies. Atrial electrograms help to diagnose these diseases and assist their treatment. Unipolar electrograms have several advantages over bipolar electrograms, but they are more affected by noise and the ventricular far field. Therefore, commonly only bipolar electrograms are used in clinical settings. This thesis evaluates different approaches for removing the ventricular far field as a major confounding effect from clinical unipolar electrograms. The work focuses on approaches based on a spatio-temporal model of the ventricular far field. The models were trained using a dipole and a polynomial method. Several criteria for the quality of the removal of the ventricular far field from unipolar electrograms in atrial flutter and sinus rhythm / paced rhythm were introduced. The influence of parameters such as the number of dipoles, the polynomial degree and the rhythm of the data used for the ventricular far field model on the quality of the removal was investigated quantitatively. Among the evaluated methods, the dipole model performed best and is capable of removing a major amount of the ventricular far field, by up to 82.76% for the median patient. Training the ventricular far field model with atrial flutter or atrial fibrillation sequences led to a less complete removal, but the approaches were still able to remove the impact of the ventricular far field significantly, by up to 75.93% for the median patient.
Abstract:
Ischemic stroke is a major health problem because of its high probability of causing permanent loss of neurological function or even life-threatening situations for the patient. Today, it is one of the leading causes of death in Western countries. Therapeutic hypothermia (TH) is a promising therapy to reduce the damage to brain tissue resulting from cerebral hypoperfusion. A new catheter system aims at selective brain hypothermia through intracarotid blood cooling and simultaneously allows for a mechanical thrombectomy of the vessel occlusion. To evaluate the effect of blood cooling on the spatio-temporal temperature distribution in the brain tissue, a spatial cerebral temperature model based on Pennes' bioheat equation was developed, since temperature measurements inside the patient's brain would increase the risk of injuries. In this work, a parameterization of the spatial cerebral temperature model was performed. To this end, simulation results of the model were compared to real experimental data, consisting of thermal videos tracing the brain surface temperature of a patient suffering from acute ischemic stroke (AIS) who underwent a decompressive craniectomy and received a cold saline bolus injection. The model was adapted to the situation of the head in the thermal video to enable the comparison. A 3D head geometry was modified to represent the patient's head after a decompressive craniectomy. Further, a vessel occlusion in the left middle cerebral artery (MCA) was included in a pre-existing hemodynamics model, which is based on a 1D transmission line approach. The thermal boundary conditions of the spatial cerebral temperature model were adjusted to represent the modified head geometry. A more realistic blood temperature model was developed to simulate the cold saline bolus mixing with the venous blood and to consider a more realistic arterial blood temperature in the bioheat equation.
Simulations showed an impact of the cold saline bolus on the blood and ischemic brain tissue temperature, with a temperature drop of about 0.3 K after 11 seconds, followed by a longer rewarming phase of over 70 seconds. In the thermal video, this behavior was only visible for a selected artery on the brain surface, while the surface temperature of the remaining brain tissue showed no distinct change after the bolus injection. The parameterization of the blood temperature model enabled a realistic simulation of the impact of the cold saline bolus on the blood temperature, and the simulation results coincided with the observed cerebral artery temperature course in the thermal video. However, since the temperature drop induced by the cold saline bolus is small in the ischemic brain tissue, it might have been masked by signal fluctuations.
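The core of such a temperature model is Pennes' bioheat equation, ρc ∂T/∂t = k∇²T + ρ_b c_b w (T_a − T) + q_m, whose perfusion term pulls tissue toward the arterial blood temperature. A minimal 1-D explicit sketch (parameter magnitudes are typical literature-order assumptions, not values from the thesis):

```python
# Tissue and blood parameters (typical magnitudes, assumed for illustration)
k, rho, c = 0.5, 1050.0, 3600.0          # W/(m K), kg/m^3, J/(kg K)
rho_b, c_b, w = 1060.0, 3850.0, 0.009    # blood properties; w = perfusion, 1/s
q_m = 10000.0                            # metabolic heat, W/m^3
dx, dt = 1e-3, 0.25                      # grid spacing (m), time step (s)

def step(T, T_art):
    """One explicit Euler step of the 1-D Pennes bioheat equation with
    zero-flux (insulated) boundaries."""
    Tn = T[:]
    for i in range(len(T)):
        left = T[i - 1] if i > 0 else T[i + 1]
        right = T[i + 1] if i < len(T) - 1 else T[i - 1]
        diff = k * (left - 2.0 * T[i] + right) / dx ** 2
        perf = rho_b * c_b * w * (T_art - T[i])
        Tn[i] = T[i] + dt * (diff + perf + q_m) / (rho * c)
    return Tn

# Cold arterial blood (36 degC) perfusing tissue initially at 37 degC
T = [37.0] * 50
for _ in range(240):          # 60 s of cooling
    T = step(T, 36.0)
mid_temp = T[25]
```

With these magnitudes, the perfusion time constant ρc/(ρ_b c_b w) is on the order of 100 s, so a minute of cold arterial inflow cools the tissue only part of the way toward the arterial temperature, illustrating why bolus-induced tissue changes can be small relative to surface signal fluctuations.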
Abstract:
In this thesis, left-sided valvular heart diseases were analysed based on numerical hemodynamic simulations. Three different severity grades of mitral regurgitation were implemented in the same physiological left human heart, as well as aortic and mitral stenosis. The heart valves were modeled as porous zones with dynamic permeability. Our approach was to represent mitral regurgitation with an area of high permeability within the porous zone: during ventricular systole, the permeability was set high so that retrograde flow was made possible. In the case of aortic stenosis, proper ventricular systolic ejection was impaired by an impermeable porous ring. In the case of mitral stenosis, the atrioventricular orifice was guarded by an impermeable porous ring during diastole. The simulated regurgitant volumes agree with clinical measurements to a reasonable extent. Eccentric retrograde jets caused a slightly lower regurgitant volume. Further, the results show that a stenotic valve by itself, within an otherwise healthy left heart environment, is not sufficient to generate plausible hemodynamic properties. Modified boundary conditions did not rule out this pitfall. Thus the approach of modeling heart valves as porous zones is questionable. The conception of a high-fidelity human mitral valve model, with which a wide morphometrical range can be addressed, inspired the modelling of a parametric mitral valve. An adaptable mitral valve model was developed using hyperbolic paraboloid functions. The aforementioned model prepares subsequent numerical hemodynamic simulations with prescribed mitral valve leaflet motion and can supersede the porous zone approach.
Abstract:
The digitization of the surgical microscope is an important part of the digitization of the operating room. Digitizing the operating microscope offers many advantages, such as the amplification of individual color channels, the addition of augmented reality, and the documentation of the surgical process. The digitization is realized by two or more cameras that record the operating scenario. After recording, a 3-D image is reconstructed from both images. To ensure a proper reconstruction, the positions of the cameras relative to each other and the extrinsic camera parameters of both cameras must be known. From the extrinsic camera parameters and the positions, the depth information of the image can be calculated. As the measurement of the positions is too inaccurate, the cameras have to be calibrated. In my work, a camera setup with 4 single cameras behind a main lens is used. Because the common methods of camera calibration do not work with a main lens, the following hypotheses were made in my work: it is possible to calibrate 4 single cameras behind a main lens by calibrating the single color channels; furthermore, it is possible to correct the distortion errors in an image by correcting the individual color channels and reassembling the picture afterwards. To test the hypotheses, two algorithms were reimplemented: the affine and the adaptive algorithm, both based on the Zhang algorithm. Both algorithms were compared with regard to the accuracy of the minimum deviation of camera positions in the z-direction. Furthermore, the distortion correction was implemented. The implementation was done in a C++ program. The work has shown that it is possible to calibrate 4 cameras behind a main lens by calibrating the individual color channels. The adaptive algorithm is the more accurate one. The functioning of the correction of the individual color channels could not be confirmed.
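The per-channel distortion handling described above can be illustrated with a first-order radial model: each color channel gets its own distortion coefficient, is corrected independently, and the channels are then reassembled. The coefficients and the simple division-based correction below are illustrative assumptions, not the thesis implementation.

```python
# Toy sketch of per-channel radial distortion correction. The standard
# first-order Brown/Zhang-style model distorts a normalized point by
# r' = r * (1 + k1 * r^2); dividing by that scale is a common first-order
# approximation of the inverse. Coefficients are made up.

def undistort_point(x, y, k1):
    """Approximate first-order radial undistortion of a normalized point."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x / scale, y / scale

# Hypothetical per-channel coefficients (chromatic differences in k1).
K1 = {"red": 0.12, "green": 0.10, "blue": 0.08}

def undistort_pixel_per_channel(x, y):
    """Undistort one pixel position separately for each color channel."""
    return {ch: undistort_point(x, y, k1) for ch, k1 in K1.items()}

corrected = undistort_pixel_per_channel(0.4, 0.3)
# Channels with a larger k1 are pulled further toward the image center.
```

In a full pipeline, each channel's corrected coordinate grid would be used to resample that channel before merging the three channels back into one image.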
Abstract:
Transit times of a bolus through an organ can provide valuable information for researchers, technicians and clinicians. Current clinical routines for blood flow measurement involve the use of an ultrasonic flow probe. Although this clinical flow probe has an accuracy of ±10%, its downsides include additional time and equipment as well as interruption of the surgical workflow; rupturing the cerebral vessel due to mechanical stress is possible as well. Thus, quantitative fluorescence angiography would be advantageous as a non-invasive method for cerebral blood flow evaluation, monitoring the temporal propagation of an injected indicator at two distinct locations. However, current methods extracting the blood's transit time from two discrete-time dilution curves show a low accuracy and thus cannot be used for clinical studies. Therefore, the goal of this thesis is the evaluation of new methods ascertaining the time difference between two indicator dilution curves using four mathematical models. The target is to achieve a relative mean temporal error of less than one camera frame. In the first part of the thesis, datasets of two corresponding in silico generated indicator dilution curves with a ground truth of the transit time are created. To obtain one dataset, one dilution curve is duplicated, followed by subsampling, delaying and applying noise. In the second part, the two corresponding curves of one dataset are cut and fitted by a mathematical model before the time difference is ascertained using a feature. Overall, 574 combinations of mathematical models, features and methods to cut the discrete-time dilution curves before fitting were compared for seven different noise levels and two frame rates. The evaluation showed the local density random walk (LDRW) model and the gamma-variate model to be superior to the monoexponential model and the parabola model.
The LDRW model in combination with the cross-correlation of the first derivative as a feature showed the best results overall, after cutting the indicator dilution curves before the rise on the ascending side and after 40% of the respective peak on the descending side. For a frame rate of 25 fps, the regarded noise levels SNR ∈ [20, 8] dB showed a mean temporal error of µ ∈ [0.25, 0.78] frames, respectively. At a frame rate of 60 fps, the mean temporal error using the same SNR levels was evaluated to µ ∈ [0.43, 1.41] frames, with a mean temporal error of less than one frame only for SNR ≥ 12 dB.
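The core of the transit-time extraction above can be sketched in a few lines: two gamma-variate dilution curves separated by a known delay, with the lag recovered by cross-correlating the first derivatives. All parameters (gamma-variate shape, frame rate, delay) are illustrative, and the noiseless case is shown.

```python
import math

# Two in silico gamma-variate dilution curves with a known frame delay;
# the delay is recovered from the cross-correlation of the derivatives.

def gamma_variate(t, t0=1.0, alpha=3.0, beta=1.5):
    """Classic gamma-variate indicator dilution model (illustrative params)."""
    if t <= t0:
        return 0.0
    return ((t - t0) ** alpha) * math.exp(-(t - t0) / beta)

FPS = 25           # camera frame rate
TRUE_DELAY = 8     # ground-truth delay in frames
t = [i / FPS for i in range(500)]
c1 = [gamma_variate(ti) for ti in t]                   # proximal ROI
c2 = [gamma_variate(ti - TRUE_DELAY / FPS) for ti in t]  # distal ROI

def derivative(c):
    """Discrete first derivative of a sampled curve."""
    return [b - a for a, b in zip(c, c[1:])]

def best_lag(a, b, max_lag=50):
    """Lag (in frames) maximizing the cross-correlation of a and b."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
    return max(range(max_lag + 1), key=corr)

print(best_lag(derivative(c1), derivative(c2)))  # recovers the 8-frame delay
```

With noise added and the curves fitted by the LDRW or gamma-variate model first, as in the thesis, the same cross-correlation step would operate on the fitted (smooth) derivatives instead of the raw samples.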
Abstract:
Prediction of the post-operative face after craniofacial surgery is an important tool to guide the patient's decision whether to undergo surgery. Hereby, neural networks might provide an inexpensive and fast means to predict the post-operative face without the need for the expensive tomography or 3D scans required by existing methods for surgery prediction. Ideally, these neural networks would be able to predict the post-operative face based on images of the patient and a virtual plan describing the craniofacial surgery. Neural networks solve such tasks by learning parameterized models directly from data. However, training neural networks to predict post-operative faces in a supervised manner would be infeasible, since paired data of pre- and corresponding post-operative measurements are only sparsely available and often comprise a large time gap of several months between measurements. On the other hand, the invention of CycleGANs has provided a means to map measurements of one domain to measurements of another domain without requiring paired data for training. This mapping can be derived by enforcing the domain-specific statistics in the predicted measurement and penalizing the reconstruction loss after back-and-forth translation between the two domains. An application of CycleGANs to predict post-operative faces in a clinical setup would require accurate and realistic projections of varying virtual surgery plans onto the pre-operative face. To analyse the suitability of CycleGANs to meet these requirements, a strongly simplified setup was proposed in this work. Hereby, a CycleGAN was developed to predict the outcome after applying four different virtual plans of a modified face in 3D space to a 2D image of a face. For simplification, these virtual plans describing a modification were represented by a statistical model of a 3D face and were not required to reflect clinical surgery plans.
After teaching a CycleGAN to modify an image based on a virtual 3D plan, the resulting images were shown to ten volunteers who were asked to recognize the applied modification in the image and to judge whether the implementation of the modification was realistic or not. In total, the volunteers correctly recognized the applied modifications in 47.33% of the images, of which 18.0% were judged to be realistic as well. Additionally, the preliminary results of this work suggested the ability of the proposed CycleGAN to interpolate between virtual plans describing a modified face. In conclusion, the proposed CycleGAN showed its potential to predict realistically modified faces that satisfied the requirements of the strongly simplified setup in this work. On the other hand, the robustness of the implemented CycleGAN was low, i.e. the applied modifications were often not recognizable or not realistic. The results of this work offer an initial attempt to analyze the suitability of CycleGANs to predict post-operative faces.
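The cycle-consistency signal mentioned above (penalizing the reconstruction error after back-and-forth translation) can be shown with a deliberately tiny toy: the two "generators" are reduced to additive shifts on scalars. Everything here is illustrative, not the thesis setup.

```python
# Toy illustration of the CycleGAN cycle-consistency loss: map a sample
# to the other domain and back, and penalize the L1 reconstruction error.
# The "generators" are fixed additive shifts here, purely for illustration.

def G(x):   # "generator" pre-op -> post-op (here: a fixed shift)
    return x + 2.0

def F(y):   # "generator" post-op -> pre-op
    return y - 2.0

def cycle_loss(samples, fwd=G, back=F):
    """L1 cycle-consistency: mean |back(fwd(x)) - x| over the batch."""
    return sum(abs(back(fwd(x)) - x) for x in samples) / len(samples)

batch = [0.1, 0.5, 0.9]
# Perfectly inverse generators give (numerically) zero cycle loss ...
print(cycle_loss(batch))
# ... while a mismatched inverse is penalized.
def F_bad(y):
    return y - 1.5
print(cycle_loss(batch, fwd=G, back=F_bad))
```

In a real CycleGAN this loss is added to the adversarial losses of both domains and backpropagated through both generator networks; the toy only isolates the reconstruction term.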
Abstract:
Atrial flutter (AFL) is a common heart rhythm disorder, which is characterised by regularly propagating electrical signals and self-sustained electrophysiological mechanisms. Since AFL mechanisms are usually discriminated from invasive intra-cardiac signals, enabling a non-invasive categorisation of driving mechanisms prior to the invasive procedure would greatly benefit the ablation strategy. In the present work, various image and video classification approaches are implemented and evaluated in order to discriminate distinguishable electrophysiological mechanisms sustaining AFL. Therefore, the cardiac excitation of 20 different AFL scenarios was simulated on eight volumetric bi-atrial anatomies. Solving the forward problem of electrocardiography, the body surface potential maps (BSPMs) were calculated and projected onto eight triangulated torso meshes. By implementing 124 atrial rotations within the different torso models, a database comprising 153,760 simulations was completed. Finally, by further processing the generated 12-lead electrocardiograms (ECGs) and spatially down-sampling the obtained BSPMs, a consistent ground truth for AFL perpetuation mechanisms was established. In the first part of this work, the resulting recurrence plots (RPs) and distance plots (DPs) were discriminated via the atlas and K-nearest-neighbour (KNN) classification approaches as well as by two residual convolutional neural networks (CNNs). Moreover, for each approach the ascription into 20 AFL mechanisms and five macro groups was performed independently. Hit rates of 21% for the assignment into 20 AFL scenarios and 40% for the macro groups were achieved. Focused on the classification of the BSPM video sequences, the second part of this work contained the assignments based on a three-dimensional residual network (ResNet).
This resulted in average accuracy values of 57% for the 20 AFL mechanisms and 67% for the simplified macro-group ascription task. Thus, BSPM-based categorisation has been shown to be an effective method to discriminate distinguishable AFL electrophysiological mechanisms in a non-invasive in silico study. This could help to delineate the ablation strategy and to reduce the time and resources needed for invasive cardiac mapping.
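The recurrence plots used as classifier input above are derived from a signal as a binary matrix marking which pairs of time points are closer than a threshold. The sketch below, with an illustrative signal and threshold, shows how the periodicity of a regular rhythm produces the characteristic diagonal structure.

```python
import math

# Recurrence plot (RP) of a 1-D signal: RP[i][j] = 1 if the samples at
# times i and j are closer than a threshold eps, else 0.

def recurrence_plot(signal, eps):
    """Binary recurrence matrix of a sampled signal."""
    n = len(signal)
    return [[1 if abs(signal[i] - signal[j]) < eps else 0
             for j in range(n)] for i in range(n)]

# A periodic signal (loosely standing in for regular flutter activity)
# with period 20 samples:
x = [math.sin(2 * math.pi * i / 20) for i in range(60)]
rp = recurrence_plot(x, eps=0.1)

# The main diagonal is always recurrent ...
print(all(rp[i][i] == 1 for i in range(60)))
# ... and the period appears as a diagonal line 20 samples off-axis.
print(all(rp[i][i + 20] == 1 for i in range(40)))
```

In practice the RP is usually computed on delay-embedded state vectors rather than raw samples, and the resulting image is what the atlas, KNN, or CNN classifiers consume.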
Abstract:
The electrocardiogram (ECG) is a non-invasive and cost-effective measurement method for the examination and monitoring of patients with cardiac disorders such as atrial flutter or ischemia. The increasing number of heart patients worldwide and future telemedical systems will further increase the need for automated and validated ECG analysis tools. In order to use machine learning techniques for ECG-based detection and classification of heart disease, these must be validated with reference data that have a comprehensible ground truth. The aim of this project is therefore to simulate, using an electrophysiological model, 10,000 physiological and pathological ECG time signals labeled with the underlying heart disease. In order for the signals generated in silico to be indistinguishable from clinically recorded ECGs, the time intervals and amplitudes of the individual ECG segments must, among other things, have the same statistical distribution as the actual data. Firstly, an existing shape model for the left and right ventricle was prepared so that the simulation of transmembrane voltages and a forward calculation for the extraction of a 12-channel surface ECG can be performed. Then the model parameters and their variability are defined as input data for a sensitivity analysis. We used the Bayesian optimisation algorithm to compare the modelled and clinical signals and to gain better insight into the resulting extracted ECG.
Abstract:
The study of different atrial flutter mechanisms remains a very complex subject. Without the use of invasive mapping techniques, i.e. just by observing the 12-lead ECG signals, it is impossible to differentiate between atrial flutter mechanisms. A more sophisticated approach, a radial basis neural network classifier, is implemented in this thesis to classify atrial flutter signals according to their mechanism. However, in order to obtain a good classifier, two important aspects need to be considered: a large amount of data which, at the same time, does not cause overfitting. Data from previous studies were used as a benchmark to assess the performance of the classifier after enlarging the available dataset. One way to feed the classifier new data is to use data augmentation methods. In our study, we simulated different rotations around all 3 axes of the atrial model to generate new 12-lead ECG signals. We also investigated the potential problem of overfitting in the process. We started with a correlation analysis of the ECG signals to get an idea of how much the signals could change at each rotation. Here, the signals between the initial position and each of the rotated positions were compared. We found that within a range of ±10°, the correlation coefficients were in most cases higher than 0.75, which might not be useful for machine learning applications. We implemented different scenarios to investigate which train and test dataset division would improve the classifier accuracy or trigger an undesired overfitting problem. We found that adding rotations of the atrial model as a means of data augmentation improved the performance of the classifier for some mechanisms. This, however, was only valid if a part of the atrial model's dataset was also used for training. From this, we learned that we could achieve an individual increase of 12-25% in accuracy using atrial models whose data was partially included in the training set.
The initial idea was to train the classifier with some atrial models and test it with 'unseen' atrial models. Yet, we noticed that the classifier did not perform as well on unknown atrial models, as the observed accuracies were lower than the benchmark. We concluded that augmenting the dataset of only 2 atrial models was not enough to improve the overall classifier accuracy, and that more atrial geometries were needed to investigate the possible improvements they could bring.
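The augmentation-and-correlation step above can be sketched by rotating a source vector and comparing the projection onto a fixed "lead" direction via Pearson correlation. The dipole trajectory, lead direction, and angles are illustrative assumptions, not the simulation setup.

```python
import math

# Rotating the source (here: a rotating unit dipole in the x-y plane)
# changes its projection onto a fixed lead direction; small rotations
# leave the projected signal highly correlated.

def rotate_z(v, deg):
    """Rotate a 3-D vector around the z-axis by `deg` degrees."""
    a = math.radians(deg)
    x, y, z = v
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a), z)

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

lead = (1.0, 0.0, 0.0)  # fixed "lead" direction (illustrative)
dipoles = [(math.cos(t / 10), math.sin(t / 10), 0.0) for t in range(100)]

def lead_signal(deg):
    """Projected signal after rotating the source by `deg` degrees."""
    return [sum(l * c for l, c in zip(lead, rotate_z(d, deg)))
            for d in dipoles]

# Consistent with the >0.75 correlation reported within +/-10 degrees:
print(pearson(lead_signal(0), lead_signal(10)) > 0.75)
```

In the actual study the rotation is applied to the full atrial model inside the torso, and the correlation is computed per ECG lead, but the underlying geometric effect is the same.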
Abstract:
Bypass surgery is a means to increase blood flow in areas with too little perfusion. The failure rate of such interventions may be reduced by assessing blood flow during the procedure [1]. A qualitative method for such measurements is indocyanine green (ICG) fluorescence angiography (FA). A quantitative measurement using this technique is not yet available. For ICG-FA, a bolus of ICG is injected into the vascular system and its movement is recorded with an infrared video camera. The evaluation of these videos to estimate the speed of the bolus yielded higher speeds than reference measurements did [2]. The factor describing this difference is called the k-factor. This work investigates the k-factor using a flow phantom. A total of 235 experiments were performed, using glass tubes with diameters from 2.4 to 4.0 mm and constant flows in the range of 24-302 ml/min. A blood analogue containing an ICG bolus was used. During the experiments, the volume flow was estimated using both an ultrasonic probe and an imaging system resembling an ICG-FA system. The k-factors were calculated for each experiment. Contrary to our expectations, no constant k-factors could be measured for a given volume flow at a given tube diameter. Instead, the values ranged from 1.91 to 2.01 at best and from 0.98 to 1.17 at worst. The inconsistent shape of the ICG bolus within the experiments is suspected to be the main cause of the deviation of the factors. A tendency of the k-factors to decrease at higher flow velocities could be established.
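The k-factor computation above reduces to two velocity estimates and their ratio; the sketch below uses illustrative numbers (ROI spacing, transit time, reference velocity), not measured data from the experiments.

```python
# Minimal sketch of a k-factor: the bolus velocity estimated from the
# fluorescence video (distance between two ROIs divided by the bolus
# transit time) is divided by the reference velocity from the
# ultrasonic flow probe. All numbers are illustrative.

def velocity_from_video(distance_m, transit_time_s):
    """Bolus velocity between two ROIs along the tube [m/s]."""
    return distance_m / transit_time_s

def k_factor(video_velocity, reference_velocity):
    """Ratio of video-based to reference velocity."""
    return video_velocity / reference_velocity

v_video = velocity_from_video(0.05, 0.131)  # 5 cm ROI spacing (assumed)
v_ref = 0.20                                # ultrasonic reference [m/s]
k = k_factor(v_video, v_ref)
print(round(k, 2))  # a value near the upper range reported above
```

A k-factor of 1 would mean the video-based estimate matches the reference; values around 2, as observed at best in this work, mean the video systematically overestimates the velocity by a factor of two.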
Abstract:
Cardiac arrhythmias are a serious concern for both the individual patient's health and society as a whole. Treatment and diagnosis of cardiac arrhythmias and fibrosis rely heavily on voltage mapping procedures. Even though voltage mapping is widely used, the factors influencing mapping of the atria are numerous and not fully understood. Multiple studies have stressed the importance of revising the universal voltage threshold for the classification of diseased heart tissue. In this thesis, two in silico studies are conducted by means of bidomain simulations with a model of atrial tissue. For the first part, a pair of variably sized electrodes is placed on the tissue and the resulting signals are compared in order to investigate the effects of electrode size and shape on the recorded unipolar and bipolar electrograms. Other influencing factors like inter-electrode distance (IED), wavefront angle and conduction velocity are kept constant. It is assessed how accurately the measurements represent the extracellular potential for two different electrode conductivities. In addition, two models of fibrotic tissue (epicardial and transmural) are simulated with the same setups. The two main characteristics investigated were bipolar peak-to-peak voltage and local activation time. We found various averaging effects in the electrodes that affected the bipolar signals differently depending on electrode shape. The highest amplitudes were 8.6 mV, recorded with the smallest electrode (0.2 mm × 0.2 mm × 0.2 mm). Electrodes that increased in length showed a linear decrease in voltage of ca. 1 mV/mm. If the electrodes increased in length and width simultaneously, the drop was quadratic and steeper. Bipolar amplitude decreased exponentially with respect to size when cubic electrodes were used.
Our study suggests that voltage averaging effects in the wavefront direction and in the direction normal to the tissue are responsible for a decrease in unipolar amplitudes and thus change the bipolar amplitude as well. For the second study, a lasso catheter model is placed on a patch of atrial tissue. In multiple simulations, the positions of a pair of 1 mm cylindrical electrodes are shifted along the lasso. The resulting changes in wavefront angle and inter-electrode distance led to changes in bipolar peak-to-peak voltage. When the angle between the electrode pair and the wavefront approached 0°, the bipolar peak-to-peak voltage decreased towards 0 mV. The conducted studies improved our understanding of electrogram (EGM) behaviour when electrode sizes, shapes and configurations are varied. We have shown these variations to be significant for EGM interpretation. Because of the variety of catheters in clinical use, the specific nature of the EGM-geometry relationship needs to be investigated further.
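The two characteristics analysed above can be computed from a pair of unipolar electrograms in a few lines: the bipolar signal is their difference, the peak-to-peak voltage is its max minus min, and the local activation time is taken here as the steepest negative unipolar slope. The waveform below is a synthetic placeholder, not simulation output.

```python
import math

# Toy unipolar electrograms, their bipolar difference, and the two
# characteristics: bipolar peak-to-peak voltage and local activation time.

def unipolar(t, t_act):
    """Toy biphasic unipolar EGM centred on the activation time t_act."""
    x = t - t_act
    return -x * math.exp(-x * x / 2.0)

DT = 0.1  # sample interval [ms]
t = [i * DT for i in range(200)]
u1 = [unipolar(ti, 8.0) for ti in t]  # proximal electrode
u2 = [unipolar(ti, 9.0) for ti in t]  # distal electrode, activated later

# Bipolar EGM = difference of the two unipolar EGMs.
bipolar = [a - b for a, b in zip(u1, u2)]
p2p = max(bipolar) - min(bipolar)

# Local activation time: steepest downstroke of the unipolar signal.
slopes = [b - a for a, b in zip(u1, u1[1:])]
lat = t[slopes.index(min(slopes))]
print(p2p, lat)
```

Shrinking the activation-time difference between the two electrodes (the analogue of a wavefront parallel to the electrode pair) drives the bipolar difference, and hence the peak-to-peak voltage, towards zero.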
Abstract:
As direct activators of the contractile apparatus of cardiac myocytes, calcium ions have a strong impact on the tension development of the heart. Therefore, the standard procedure for modelling the electromechanical coupling is based on the transfer of the calcium transient from the cell model to the force model. As a consequence, the coupling of various models can lead to significantly different trajectories of active tension due to diverging implementations of the calcium dynamics. As this phenomenon is not to be expected in a healthy human heart, the aim of this thesis was to generate standardized tension development in coupled force and cell models for atria and ventricles. The ventricular cell models according to O'Hara et al. [1], Tomek et al. [2] and Ten Tusscher and Panfilov [3] as well as the atrial models following Courtemanche et al. [4], Koivumäki et al. [5] and Maleckar et al. [6] were considered. The calcium sensitivity of the parameters of the force models according to Land et al. [7, 8] was evaluated by means of a sensitivity analysis, and the parameters were categorized by their influence on the active tension. Based on the findings obtained, a parameter optimization of the Land models was developed. The results showed that standardized tension development could only be achieved when coupling selected models. For the cell model according to Courtemanche et al. and all considered cell models of the ventricle, the optimization using constant stretches in the interval λ ∈ [0.85, 1.2] gave convincing results with an error ≤ 50%. The error refers to the average relative error of each considered characteristic of the active tension. With the use of time-variable stretches, the optimization did not yield satisfactory results so far, because only the model of Courtemanche et al. was found to be robust to changes in stretch, with an error of 40.4%. It could also be concluded that calcium transients with unusual behavior hamper the parameter optimization.
The limitations of the optimization were confirmed by tissue simulations. With the simulations, an alternative method for the re-parameterization of the force models was also investigated. However, the considered scaling of the parameter Tref showed a second contraction in the atrium and a maximum force of ≈ 320 kPa in the ventricle. Thus, the optimization based on single cells remained the better method for generating physiologically justified tension development. Furthermore, the implementations of the cell models according to Courtemanche et al. and O'Hara et al. were adapted to take into account the sarcomere-length-dependent calcium binding to Troponin C (TnC) described in Land et al. To re-determine the calcium-sensitive parameters, parameter estimation methods were developed based on the previously designed optimization. It was shown that the simulated intracellular calcium concentration of the rescaled feedback systems, for varying sarcomere lengths, partly behaves contrary to experimental findings. This might be explained by the insufficiently detailed implementation of the cell models with respect to the calcium handling.
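The re-parameterization idea above can be reduced to a toy: tune the calcium sensitivity (here a single Hill half-saturation Ca50) of a minimal force model so that the peak active tension matches a target. The model and all numbers are illustrative and far simpler than the Land model.

```python
# Toy calibration of a force model's calcium sensitivity: a grid search
# over the Hill half-saturation Ca50 so that the steady-state tension at
# the calcium-transient peak matches a target value. Illustrative only.

def peak_tension(ca_peak, ca50, t_ref=120.0, h=4.0):
    """Hill-type steady-state active tension [kPa] at peak calcium."""
    return t_ref * ca_peak ** h / (ca50 ** h + ca_peak ** h)

TARGET = 60.0   # desired peak tension [kPa] (assumed)
CA_PEAK = 0.6   # peak of the cell model's calcium transient [uM] (assumed)

# Grid search over Ca50 for the smallest error to the target tension.
grid = [0.2 + 0.01 * i for i in range(81)]
best = min(grid, key=lambda c: abs(peak_tension(CA_PEAK, c) - TARGET))
print(round(best, 2))
```

The actual optimization in the thesis fits several characteristics of the full tension time course (not just the peak) and does so per cell model, but the principle of adjusting calcium-sensitive parameters against target features is the same.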
Abstract:
Patients experiencing respiratory disorders often need to be mechanically ventilated to ensure sufficient gas exchange. Adequate lung monitoring, including regional lung ventilation and perfusion, is believed to support a proper diagnosis and reduce ventilator-associated lung injuries. Electrical Impedance Tomography (EIT) is a well-established modality used in clinical practice for visualizing regional ventilation distributions. As EIT is a non-invasive, portable, real-time system, the possibility of monitoring the regional perfusion distribution with it has gained interest over the past years. First studies have shown very promising results when combining the EIT technique with the injection of a saline solution indicator. For its clinical implementation, further research is necessary to validate the obtained distribution, prove the robustness of the applied methods, and gain spatial resolution. This work deals with adapting EIT reconstruction algorithms for the estimation of spatial pulmonary perfusion. The current algorithm shifts the perfusion distribution towards central and ventral parts of the torso, leading to a poor spatial resolution. To avoid reconstruction biases in pulmonary perfusion estimation, different approaches for the reconstruction of the conductivity distributions are studied on two different datasets: a simulation study and a preclinical study carried out on pigs. First, three alternative methods for calculating the Jacobian matrix are studied. The Jacobian matrix is calculated assuming an inhomogeneous conductivity distribution, which better fits the anatomical torso boundaries. Additionally, a configuration updating the conductivity distribution for each time step and one normalizing the sensitivity of the Jacobian matrix were considered.
Secondly, four different regularization terms are compared: the classical regularization approaches of Tikhonov 0th and Tikhonov 2nd order, a regularization including temporal information, and a novel approach combining the Tikhonov 0th- and 2nd-order approaches based on the EIT sensitivity. Furthermore, an approach focused on combining images obtained from different smoothing algorithms based on the sensitivity of the Jacobian matrix is proposed. The results show that assuming an inhomogeneous conductivity distribution generally leads to an improvement in the spatial perfusion distribution. Regarding the regularization term, it was shown that imposing less spatial smoothing in central parts of the torso should be considered for obtaining a better resolution of the images. These positive results suggest the necessity of considering a different EIT reconstruction algorithm for monitoring pulmonary perfusion. Summarizing, the approaches proposed in this work, which include a priori information, lead to a better estimation of the spatial perfusion distribution and underline the great potential of EIT for lung monitoring.
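The regularized linear reconstruction underlying the approaches compared above has a compact closed form: conductivity changes x are obtained from voltage changes b via x = (JᵀJ + λR)⁻¹Jᵀb, where R = I corresponds to Tikhonov 0th order. The 2×2 system below is illustrative; real EIT Jacobians are far larger and severely ill-conditioned.

```python
# Minimal numeric sketch of Tikhonov-0 regularized EIT reconstruction
# for a toy 3-measurement, 2-pixel problem. Values are illustrative.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def tikhonov0(J, b, lam):
    """x = (J^T J + lam * I)^-1 J^T b for a 2-column Jacobian J."""
    JtJ = [[sum(J[k][i] * J[k][j] for k in range(len(J)))
            for j in range(2)] for i in range(2)]
    JtJ[0][0] += lam
    JtJ[1][1] += lam
    Jtb = [sum(J[k][i] * b[k] for k in range(len(J))) for i in range(2)]
    return solve2(JtJ, Jtb)

J = [[1.0, 0.2], [0.1, 0.9], [0.3, 0.4]]  # toy sensitivity (Jacobian)
b = [1.2, 1.0, 0.7]                        # measured voltage changes

x_weak = tikhonov0(J, b, lam=0.01)
x_strong = tikhonov0(J, b, lam=10.0)
# Stronger regularization shrinks the solution toward zero.
```

Tikhonov 2nd order replaces I with a discrete Laplacian R, penalizing spatial roughness instead of amplitude, and spatially varying λ (less smoothing centrally, as proposed above) amounts to a diagonal weighting of R.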
Abstract:
Under atrial fibrillation (AF), cardiac tissue undergoes electrophysiological and structural remodeling. Structural remodeling is characterized by the formation of fibrotic tissue, and atrial fibrosis plays a role in sustaining the arrhythmia; thus, one could say that "AF begets AF". Dealing with such a high level of complexity makes understanding the mechanisms underlying AF quite a challenging task. This work focuses on the effect of fibrosis density and transmurality on electrogram morphology and atrial fibrillation dynamics. For that purpose, meshes of atrial substrate were simulated. Each mesh consisted of an extracellular bath, intracellular tissue, a fibrotic patch and a grid of sixteen electrodes. Density and transmurality of the fibrotic tissue were varied, so that in the end a total of 9 meshes was generated. The atrial substrates were electrically stimulated with two monophasic square pulses. The resulting bipolar electrograms were calculated for 3 bipolar orientations: perpendicular, diagonal and parallel. The intracardiac atrial electrograms were then analysed and compared according to 4 parameters: peak-to-peak amplitude, non-linear energy operator (NLEO) active segment duration, sample entropy and spectral entropy. The goal was to investigate whether different fibrosis densities and transmuralities can be distinguished by analysing electrogram morphology. Additionally, this thesis aimed to analyse the effect of varying those fibrotic tissue features on AF dynamics. The results showed that increasing fibrosis density and transmurality slowed down wave propagation across the tissue but did not initiate reentry. It was also possible to distinguish certain transmuralities, namely 0.5 mm and 2 mm, relying on peak-to-peak amplitude, sample entropy or spectral entropy. Additionally, the results highlighted that transmural meshes with 40% density had longer active segments compared to the rest of the meshes.
Detailed results and limitations of the study will be explained in further sections.
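One of the four electrogram features used above, the non-linear energy operator (NLEO), has the simple form ψ[n] = x[n]² − x[n−1]·x[n+1]; it responds to both amplitude and frequency and is commonly used to delineate active segments. The test signal below is illustrative, not simulation output.

```python
import math

# NLEO of a sampled signal: psi[n] = x[n]^2 - x[n-1] * x[n+1].

def nleo(x):
    """Non-linear energy operator, defined for the interior samples."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# Silence, a burst of activity, then silence again (toy electrogram):
signal = ([0.0] * 50
          + [math.sin(2 * math.pi * 0.2 * n) for n in range(50)]
          + [0.0] * 50)
energy = nleo(signal)

# The NLEO is (near) zero outside the burst and positive inside it;
# thresholding it yields the active-segment duration.
active = [n for n, e in enumerate(energy, start=1) if e > 0.01]
print(active[0], active[-1])  # approximate burst boundaries
```

For a pure sinusoid sin(ωn) the NLEO is the constant sin²(ω), which is why the operator cleanly separates the oscillatory burst from the flat baseline in this toy example.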
Abstract:
Classical myoelectric control concepts for active upper-limb prostheses typically use one or two electrodes on the forearm. The shift to advanced approaches based on pattern recognition requires a larger number of electrodes to achieve a good classification rate for hand gestures. At the same time, however, the available space in the prosthesis socket is strongly limited. Since previous publications have largely focused on researching powerful features and algorithms for pattern recognition, this work performs a quantitative analysis of the information content of the EMG signals required for class discrimination as a function of the number of electrodes. For this purpose, an experiment with 15 different hand gestures was conducted on healthy, non-amputated subjects. The EMG was recorded with 16 bipolar electrodes on the forearm. The performance of the feature space with five time-domain features was evaluated, after dimensionality reduction by PCA, LDA and mRMR, by determining the class separability with three cluster metrics. In the course of this, a modification of the Davies-Bouldin index was proposed to improve its validity for non-spherical clusters: variance within the clusters that is not relevant for class separation is excluded from the index calculation by projecting the data points. The evaluation was performed for all possible combinations of the 16 electrodes. The results show a clear increase in class separability when the number of electrodes is increased up to four, with further increases of 20% to 25% when using eight and twelve electrodes. Beyond that, the gain is negligible. Furthermore, the results clearly show that, even when using eight or more EMG channels, the class separability is strongly influenced by the exact positioning of the electrodes.
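The cluster-separability assessment described above builds on the classical Davies-Bouldin index (lower values mean better separated clusters); the sketch below computes it for two toy 2-D classes. The proposed projection-based modification is not reproduced here, and the data are illustrative.

```python
import math

# Classical Davies-Bouldin index for a list of clusters of 2-D points.

def centroid(pts):
    """Component-wise mean of a list of points."""
    return [sum(c) / len(pts) for c in zip(*pts)]

def scatter(pts, mu):
    """Mean distance of the points to their centroid."""
    return sum(math.dist(p, mu) for p in pts) / len(pts)

def davies_bouldin(clusters):
    """DB index: mean over clusters of the worst similarity ratio."""
    mus = [centroid(c) for c in clusters]
    s = [scatter(c, m) for c, m in zip(clusters, mus)]
    k = len(clusters)
    return sum(max((s[i] + s[j]) / math.dist(mus[i], mus[j])
                   for j in range(k) if j != i) for i in range(k)) / k

tight = [[(0, 0), (0.1, 0), (0, 0.1)], [(5, 5), (5.1, 5), (5, 5.1)]]
overlapping = [[(0, 0), (1, 0), (0, 1)], [(1, 1), (2, 1), (1, 2)]]
# Well-separated gesture classes score a much lower (better) DB index.
print(davies_bouldin(tight) < davies_bouldin(overlapping))
```

The modification proposed in the thesis would, before computing the scatter, project out within-cluster variance directions that do not contribute to class separation, so that elongated (non-spherical) clusters are not unfairly penalized.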
Abstract:
Atrial fibrillation (AF) is the most common cardiac arrhythmia in clinical practice and one of the leading causes of hospitalization and death. Rotational activities (rotors) and focal activities (focal sources) are considered to drive and sustain AF. Identifying the presence of these mechanisms is fundamental for planning the best treatment for each patient. Commonly, AF is treated through ablation of the affected area guided by intracardiac catheters. Using a non-invasive method like the electrocardiogram (ECG) would simplify the characterization of the mechanism and optimize the identification process. The ECG is widely used in clinical practice and is therefore a well-suited tool for early diagnosis and the arrangement of targeted treatment. Using the ECG to determine the AF mechanism would enable physicians to directly choose the right approach for therapy and further examination. This project analyzed 12-lead ECG signals obtained by simulating rotors and focal sources driving AF on a computer model of the atria. The two mechanisms were simulated in exactly the same position and with the same basic cycle length (bcl), making the results comparable and ensuring that the differences in the signals arise solely from the different mechanisms. In total, 31 atrial scenarios were simulated, each with a unique stimulation point and bcl. Each scenario was projected onto 8 different torso geometries, resulting in a total number of 248 torso scenarios. For each torso scenario, the 12-lead ECG was obtained and analyzed with several signal processing methods, resulting in 182 features. With alpha = 0.01, 77.47% of the features used in this work showed statistically significant differences between focal sources and rotors. Furthermore, the features were analyzed using the receiver operating characteristic (ROC) curve, and a decision tree classifier was implemented for multi-feature classification.
These methods identified features that perform especially well in discriminating between rotors and focal sources. According to the ROC analysis, the most powerful features, with AUC values between 0.9111 and 0.9297, were the standard deviation of the voltage amplitude of the vector combination of leads I, aVF, and V1; the mean organization index over the 12 leads; and the standard deviation of the R-value from PC4. Decision tree classification achieved 89.92% average accuracy on a pre-fixed test set over 100 trials. The methods and features presented in this thesis can potentially be used to determine the mechanism driving AF. This work demonstrated that the introduced methods can reliably distinguish the simplest cases of AF mechanisms. Based on these findings, it might be possible to non-invasively characterize the mechanism of AF in patients, thereby supporting physicians in treating this arrhythmia.
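The per-feature ROC ranking can be illustrated with a minimal AUC computation. This sketch makes no assumptions about the thesis feature set; it only shows the standard Mann-Whitney formulation of the AUC, i.e. the probability that a randomly drawn sample of one class scores higher on the feature than a randomly drawn sample of the other class.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC of a single scalar feature as a two-class discriminator,
    computed via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive sample scores
    higher, counting ties as one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 indicates a feature with no discriminative power, while values approaching 1.0 (as for the best features reported above) indicate near-perfect separation of the two mechanisms by that feature alone.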
Abstract:
Dilated cardiomyopathy is a cardiac abnormality affecting millions of people worldwide, whose complications stem from the numerous modifications that occur in the structure and function of the heart. The many aspects that have to be considered make this disease extremely hard to study, especially in vivo or even in vitro. In this context, computational cardiology has become a crucial part of modern medicine, enabling the understanding of complex pathologies and fostering scientific breakthroughs. In this light, and in order to gain insight into dilated cardiomyopathy, a detailed computational model of a failing heart is presented in this work. Both electrophysiology and mechanics were considered in a three-dimensional model of the whole heart, in which diverse features characteristic of the disease, such as structural modifications, fiber reorganization, cellular alterations and other variables, were adopted and assessed. The results obtained from analysing these variables individually and in combination show that cellular alterations, especially those affecting the electrophysiology, are mainly responsible for the poor function of the heart in dilated cardiomyopathy, whereas multi-scale changes such as fiber reorganization and cellular uncoupling have little effect on the mechanical behaviour. Structural remodeling and modifications of the passive properties and the circulatory system are key aspects to take into account; together with the cellular remodeling, they constitute the features that need to be included in a computational model of dilated cardiomyopathy.
Abstract:
During cerebral revascularization surgery, it is imperative to examine the perfusion of the treated region to protect patients from fatal consequences; this is done by measuring the volume flow in individual blood vessels. Weichelt et al. suggested quantifying the volume flow from contact-free recorded fluorescence angiography video data by multiplying the vessel cross section by the observed fluorophore velocity. Compared to reference measurements, the method overestimates the volume flow. Depending on the vessel diameter d, the deviations range from 7% (k = 1.07, d = 1.6 mm) to 58% (k = 1.58, d = 4 mm) [1]. The observed deviations are investigated in recent research. There is a flow velocity profile over the vessel cross section, and due to radiative transfer in turbid media, varying amounts of the intensity contributing to the video data originate from different depths within the vessel. These varying contributions should be captured in an optical probability density function. One approach therefore integrates the local relative blood velocity, weighted by this optical probability density function, over the vessel cross section to approximate k. If the deviations can be explained by a combination of information depth and local blood velocity, the approximated k should match the observed ones. In previous work, the optical weighting was obtained by applying a Monte Carlo Multi Layer model and analyzing the deepest penetration depth of each photon. This implies many assumptions, especially regarding model geometry, information source and illumination modelling. The approximated k did not match the observations [2]. This work investigates the influence of using optical weights from a Fluorescence Multi Cylinder Monte Carlo simulation instead of a Monte Carlo Multi Layer model, in order to assess the validity of the assumptions made when using the optical weighting factors from Monte Carlo Multi Layer. Three aspects of the optical model were reimplemented to obtain the optical weights:
1. the fluorescence location of each photon was assumed to be the source of the information carried by that photon, instead of the deepest penetration location;
2. the Multi Layer geometry was changed to a Multi Cylinder geometry;
3. homogeneous illumination was simulated instead of single-point illumination.
It was found that there are clear differences between the k-factors approximated from the optical weights of the Fluorescence Multi Cylinder Monte Carlo model and those from the Monte Carlo Multi Layer model. The deviations arising from model geometry, information source interpretation and illumination show a root mean square error of up to 38%. The assumptions made in previous work are not met.
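The weighting approach can be sketched in a strongly simplified one-dimensional form. Everything here is a hypothetical stand-in, not the thesis model: a parabolic (Poiseuille) velocity profile along the vessel depth and an exponential decay replacing the Monte Carlo derived optical probability density function. The sketch only illustrates how k arises as the ratio of the optically weighted mean velocity to the true cross-sectional mean.

```python
import numpy as np

def k_factor(d_mm, mu_per_mm, n=10_000):
    """Toy 1-D approximation of the correction factor k.
    d_mm       -- vessel diameter (depth extent of the lumen)
    mu_per_mm  -- decay constant of the assumed exponential optical
                  weight w(z) = exp(-mu * z), a stand-in for the
                  information-depth pdf from the Monte Carlo model."""
    z = np.linspace(0.0, d_mm, n)              # depth through the vessel
    v = 1.0 - (2.0 * z / d_mm - 1.0) ** 2      # parabolic profile, v_max = 1
    w = np.exp(-mu_per_mm * z)                 # assumed optical weighting
    v_weighted = np.sum(w * v) / np.sum(w)     # optically observed velocity
    v_mean = np.mean(v)                        # true mean velocity (2/3 v_max)
    return v_weighted / v_mean
```

With a uniform weight (mu = 0) the observed velocity equals the true mean and k = 1; any non-uniform weight shifts k away from 1. The sign and magnitude of that shift depend entirely on the assumed weight, which is precisely why the choice of Monte Carlo geometry and information-source interpretation matters for the approximated k.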
Abstract:
Atrial fibrillation (AF) is the most prevalent cardiac arrhythmia, affecting up to 2% of the general population. Recent studies have shown that unhealthy tissue substrate (e.g. fibrotic tissue) can be responsible for initiating and maintaining this cardiopathology. Various techniques are available to assess atrial anatomy (CT scans), detect fibrosis (LGE MRI) and provide insights into a patient's electrophysiology (electro-anatomical maps). The aim of this work is to present a way to co-register all of this information on a single map, as well as to develop a user-friendly interface that allows this process to be carried out easily. To perform the co-registration of these maps, a pre-alignment of both maps is needed. This can be done fully automatically using a PCA method or manually, by choosing landmarks that correspond to the same locations in both anatomies. From this starting point, the source map is iteratively deformed to match the target geometry within a tolerance, yielding a map that contains all the information from the multimodal datasets of a patient. Once this result map is obtained, mesh processing is performed in order to build a tetrahedral model and attach the information needed to run simulations. Through this process, a volumetric model is achieved in which the fiber orientations are included. Consequently, together with the fibrotic tissue already present in the original MRI map and a smoothed LAT signal incorporated through the co-registration method, it is possible to carry out personalized simulations. Finally, in-silico electrograms (EGMs) are computed and analyzed. Regarding the different pre-alignment methods, it was verified that establishing five landmarks in the atrial anatomy as a reference provided the best results. Moreover, the simulation results confirmed that the patient-specific model was accurately built and corresponded adequately to the real physiological situation of the patient. In addition, the EGMs revealed fragmentation as well as a significant reduction in amplitude in areas of fibrotic tissue, in agreement with the literature.
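The fully automatic PCA pre-alignment step can be sketched as follows. This is a generic point-cloud version under simplifying assumptions, not the interface's actual implementation: it rotates and translates the source anatomy so that its principal axes coincide with those of the target, resolving the eigenvector sign ambiguity only up to a right-handed frame (a real pipeline would additionally need to disambiguate anatomical orientation, which is one reason the landmark-based variant performed better).

```python
import numpy as np

def pca_prealign(source, target):
    """Rigidly pre-align an (N, 3) source point cloud to a target by
    matching centroids and principal axes of the point covariance."""
    def frame(P):
        c = P.mean(axis=0)
        # eigenvectors of the 3x3 covariance, sorted by decreasing variance
        vals, vecs = np.linalg.eigh(np.cov((P - c).T))
        vecs = vecs[:, np.argsort(vals)[::-1]]
        if np.linalg.det(vecs) < 0:          # enforce a right-handed frame
            vecs[:, -1] *= -1
        return c, vecs

    c_s, R_s = frame(source)
    c_t, R_t = frame(target)
    R = R_t @ R_s.T                           # rotate source axes onto target axes
    return (source - c_s) @ R.T + c_t
```

After this rigid pre-alignment, an iterative non-rigid deformation (as described above) can close the remaining gap between the two anatomies.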