Abstract:
Supervised training of neural networks requires large, diverse and well annotated data sets. In the medical field, this is often difficult to achieve due to constraints in time, expert knowledge and prevalence of an event. Artificial data augmentation can help to prevent overfitting and improve the detection of rare events as well as overall performance. However, most augmentation techniques use purely spatial transformations, which are not sufficient for video data with temporal correlations. In this paper, we present a novel methodology for workflow augmentation and demonstrate its benefit for event recognition in cataract surgery. The proposed approach increases the frequency of event alternation by creating artificial videos. The original video is split into event segments and a workflow graph is extracted from the original annotations. Finally, the segments are assembled into new videos based on the workflow graph. Compared to the original videos, the frequency of event alternation in the augmented cataract surgery videos increased by 26%. Further, a 3% higher classification accuracy and a 7.8% higher precision were achieved compared to a state-of-the-art approach. Our approach is particularly helpful to increase the occurrence of rare but important events and can be applied to a large variety of use cases.
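As a rough illustration of the augmentation idea described above (not the authors' implementation), the sketch below assembles an artificial event sequence by walking a workflow graph extracted from annotations and sampling one original segment per visited event; all names and data are hypothetical placeholders.

```python
import random

def augment_workflow(segments, transitions, n_events=20, seed=0):
    """Assemble an artificial surgery 'video' as a sequence of event segments.

    segments    -- dict mapping event label -> list of original video segments
    transitions -- dict mapping event label -> labels that may follow it
                   (the workflow graph extracted from the original annotations)
    """
    rng = random.Random(seed)
    label = rng.choice(list(segments))                 # start at a random event type
    sequence = []
    for _ in range(n_events):
        sequence.append(rng.choice(segments[label]))   # pick one segment of this event
        followers = transitions.get(label, [])
        if not followers:                              # dead end in the workflow graph
            break
        label = rng.choice(followers)                  # walk to the next event type
    return sequence

# toy example: two event types, alternating more often than in an original video
segments = {"idle": ["idle_clip_1", "idle_clip_2"], "incision": ["incision_clip_1"]}
transitions = {"idle": ["incision", "idle"], "incision": ["idle"]}
print(augment_workflow(segments, transitions, n_events=6))
```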
Abstract:
BACKGROUND AND OBJECTIVE: Cardiac electrophysiology is a medical specialty with a long and rich tradition of computational modeling. Nevertheless, no community standard for cardiac electrophysiology simulation software has evolved yet. Here, we present the openCARP simulation environment as one solution that could foster the needs of large parts of this community. METHODS AND RESULTS: openCARP and the Python-based carputils framework allow developing and sharing simulation pipelines which automate in silico experiments including all modeling and simulation steps to increase reproducibility and productivity. The continuously expanding openCARP user community is supported by tailored infrastructure. Documentation and training material facilitate access to this complementary research tool for new users. After a brief historic review, this paper summarizes requirements for a high-usability electrophysiology simulator and describes how openCARP fulfills them. We introduce the openCARP modeling workflow in a multi-scale example of atrial fibrillation simulations on single cell, tissue, organ and body level and finally outline future development potential. CONCLUSION: As an open simulator, openCARP can advance the computational cardiac electrophysiology field by making state-of-the-art simulations accessible. In combination with the carputils framework, it offers a tailored software solution for the scientific community and contributes towards increasing use, transparency, standardization and reproducibility of in silico experiments.
Abstract:
The arrhythmogenesis of atrial fibrillation is associated with the presence of fibrotic atrial tissue. Not only fibrosis but also physiological anatomical variability of the atria and the thorax reflect in altered morphology of the P wave in the 12-lead electrocardiogram (ECG). Distinguishing between the effects on the P wave induced by local atrial substrate changes and those caused by healthy anatomical variations is important to gauge the potential of the 12-lead ECG as a non-invasive and cost-effective tool for the early detection of fibrotic atrial cardiomyopathy to stratify atrial fibrillation propensity. In this work, we realized 54,000 combinations of different atria and thorax geometries from statistical shape models capturing anatomical variability in the general population. For each atrial model, 10 different volume fractions (0-45%) were defined as fibrotic. Electrophysiological simulations in sinus rhythm were conducted for each model combination and the respective 12-lead ECGs were computed. P wave features (duration, amplitude, dispersion, terminal force in V1) were extracted and compared between the healthy and the diseased model cohorts. All investigated feature values systematically increased or decreased with the left atrial volume fraction covered by fibrotic tissue; however, value ranges overlapped between the healthy and the diseased cohort. Using all extracted P wave features as input values, the amount of the fibrotic left atrial volume fraction was estimated by a neural network with an absolute root mean square error of 8.78%. Our simulation results suggest that although all investigated P wave features highly vary for different anatomical properties, the combination of these features can contribute to non-invasively estimate the volume fraction of atrial fibrosis using ECG-based machine learning approaches.
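The following sketch shows, on purely synthetic stand-in data, how a small regression network could map P wave features to an estimated fibrotic volume fraction; the feature values, network size and target relation are illustrative assumptions, not the study's data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# synthetic stand-ins for the extracted P wave features:
# duration [ms], amplitude [mV], dispersion [ms], terminal force in V1 [mV*ms]
X = rng.normal(size=(n, 4))
# synthetic target: fibrotic left atrial volume fraction in percent (0-45 %)
y = np.clip(20 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=5, size=n), 0, 45)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
model.fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"RMSE of estimated fibrotic volume fraction: {rmse:.2f} %")
```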
Abstract:
AIMS: The treatment of atrial fibrillation beyond pulmonary vein isolation has remained an unsolved challenge. Targeting regions identified by different substrate mapping approaches for ablation resulted in ambiguous outcomes. With the effective refractory period being a fundamental prerequisite for the maintenance of fibrillatory conduction, this study aims at estimating the effective refractory period with clinically available measurements. METHODS AND RESULTS: A set of 240 simulations in a spherical model of the left atrium with varying model initialization, combination of cellular refractory properties, and size of a region of lowered effective refractory period was implemented to analyse the capabilities and limitations of cycle length mapping. The minimum observed cycle length and the 25% quantile were compared to the underlying effective refractory period. The density of phase singularities was used as a measure for the complexity of the excitation pattern. Finally, we employed the method in a clinical test of concept including five patients. Areas of lowered effective refractory period could be distinguished from their surroundings in simulated scenarios with successfully induced multi-wavelet re-entry. Larger areas and higher gradients in effective refractory period as well as complex activation patterns favour the method. The 25% quantile of cycle lengths in patients with persistent atrial fibrillation was found to range from 85 to 190 ms. CONCLUSION: Cycle length mapping is capable of highlighting regions of pathologic refractory properties. In combination with complementary substrate mapping approaches, the method fosters confidence to enhance the treatment of atrial fibrillation beyond pulmonary vein isolation particularly in patients with complex activation patterns.
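A minimal sketch of the cycle length features named above (minimum and 25% quantile of observed cycle lengths at one electrode), using made-up activation times rather than clinical or simulated data:

```python
import numpy as np

def cycle_length_features(activation_times_ms):
    """Per-electrode cycle lengths from consecutive local activation times."""
    cl = np.diff(np.sort(activation_times_ms))
    return cl.min(), np.quantile(cl, 0.25)

# toy example: one electrode observing irregular fibrillatory activations
lats = np.cumsum(np.random.default_rng(1).uniform(120, 220, size=50))
cl_min, cl_q25 = cycle_length_features(lats)
print(f"minimum CL: {cl_min:.0f} ms, 25% quantile: {cl_q25:.0f} ms")
```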
Abstract:
Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal to ensure that software-driven research is valid, reproducible and sustainable, and that software is recognized as a first class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.
Abstract:
OBJECTIVE: Atrial flutter (AFl) is a common arrhythmia that can be categorized according to different self-sustained electrophysiological mechanisms. The non-invasive discrimination of such mechanisms would greatly benefit ablative methods for AFl therapy as the driving mechanisms would be described prior to the invasive procedure, helping to guide ablation. In the present work, we sought to implement recurrence quantification analysis (RQA) on 12-lead ECG signals from a computational framework to discriminate different electrophysiological mechanisms sustaining AFl. METHODS: 20 different AFl mechanisms were generated in 8 atrial models and were propagated into 8 torso models via forward solution, resulting in 1,256 sets of 12-lead ECG signals. Principal component analysis was applied on the 12-lead ECGs, and six RQA-based features were extracted from the most significant principal component scores in two different approaches: individual component RQA and spatial reduced RQA. RESULTS: In both approaches, RQA-based features were significantly sensitive to the dynamic structures underlying different AFl mechanisms. A hit rate as high as 67.7% was achieved when discriminating the 20 AFl mechanisms. RQA-based features estimated for a clinical sample suggested high agreement with the results found in the computational framework. CONCLUSION: RQA has been shown to be an effective method to distinguish different AFl electrophysiological mechanisms in a non-invasive computational framework. A clinical 12-lead ECG used as proof of concept showed the value of both the simulations and the methods. SIGNIFICANCE: The non-invasive discrimination of AFl mechanisms helps to delineate the ablation strategy, reducing time and resources required to conduct invasive cardiac mapping and ablation procedures.
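To make the RQA step more concrete, the sketch below computes two common recurrence features (recurrence rate and determinism) for a one-dimensional principal component score series; the threshold choice, minimum line length and input signal are illustrative assumptions and not the settings used in the study.

```python
import numpy as np

def rqa_features(x, eps_quantile=0.1, l_min=2):
    """Recurrence rate and determinism of a principal component score series x."""
    d = np.abs(x[:, None] - x[None, :])        # distance matrix of the 1-D series
    eps = np.quantile(d, eps_quantile)         # recurrence threshold
    R = (d <= eps).astype(int)
    recurrence_rate = R.mean()
    # determinism: fraction of recurrent points lying on diagonal lines >= l_min
    diag_points = 0
    for k in range(1, len(x)):
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:
            if v:
                run += 1
            else:
                if run >= l_min:
                    diag_points += run
                run = 0
    determinism = 2 * diag_points / max(R.sum() - np.trace(R), 1)
    return recurrence_rate, determinism

x = np.sin(np.linspace(0, 20 * np.pi, 400))    # stand-in for a PCA score of the ECG
print(rqa_features(x))
```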
Abstract:
Acute ischemic stroke is a major health problem with a high mortality rate and a high risk for permanent disabilities. Selective brain hypothermia has the neuroprotective potential to possibly lower cerebral harm. A recently developed catheter system makes it possible to combine endovascular blood cooling and thrombectomy using the same endovascular access. By using the penumbral perfusion via leptomeningeal collaterals, the catheter aims at enabling a cold reperfusion, which mitigates the risk of a reperfusion injury. However, cerebral circulation is highly patient-specific and can vary greatly. Since direct measurement of remaining perfusion and temperature decrease induced by the catheter is not possible without additional harm to the patient, computational modeling provides an alternative to gain knowledge about the resulting cerebral temperature decrease. In this work, we present a brain temperature model with a realistic division into gray and white matter and consideration of spatially resolved perfusion. Furthermore, it includes a detailed anatomy of the cerebral circulation with the possibility of personalization based on real patient anatomy. To evaluate the catheter performance in terms of cold reperfusion and to analyze its general performance, we calculated the decrease in brain temperature in case of a large vessel occlusion in the middle cerebral artery (MCA) for different scenarios of cerebral arterial anatomy. Congenital arterial variations in the circle of Willis had a distinct influence on the cooling effect and the resulting spatial temperature distribution before vessel recanalization. Independent of the branching configurations, the model predicted a cold reperfusion due to a strong temperature decrease after recanalization (1.4-2.2 °C after 25 min of cooling, recanalization after 20 min of cooling). Our model illustrates the effectiveness of endovascular cooling in combination with mechanical thrombectomy and its results serve as an adequate substitute for temperature measurement in a clinical setting in the absence of direct intraparenchymal temperature probes.
Abstract:
The contraction of the human heart is a complex process as a consequence of the interaction of internal and external forces. In current clinical routine, the resulting deformation can be imaged during an entire heart beat. However, the active tension development cannot be measured in vivo but may provide valuable diagnostic information. In this work, we present a novel numerical method for solving an inverse problem of cardiac biomechanics-estimating the dynamic active tension field, provided the motion of the myocardial wall is known. This ill-posed non-linear problem is solved using second order Tikhonov regularization in space and time. We conducted a sensitivity analysis by varying the fiber orientation in the range of measurement accuracy. To achieve RMSE <20% of the maximal tension, the fiber orientation needs to be provided with an accuracy of 10°. Also, variation was added to the deformation data in the range of segmentation accuracy. Here, imposing temporal regularization led to an eightfold decrease in the error down to 12%. Furthermore, non-contracting regions representing myocardial infarct scars were introduced in the left ventricle and could be identified accurately in the inverse solution (sensitivity >0.95). The results obtained with non-matching input data are promising and indicate directions for further improvement of the method. In future, this method will be extended to estimate the active tension field based on motion data from clinical images, which could provide important insights in terms of a new diagnostic tool for the identification and treatment of diseased heart tissue.
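The structure of the regularized estimator can be illustrated with a toy linear example; the sketch below solves a second-order Tikhonov problem in one dimension and is only meant to convey the form of the objective, not the cardiac-specific spatio-temporal formulation of the paper. All quantities are invented stand-ins.

```python
import numpy as np

def tikhonov_second_order(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||L x||^2 with L the second-difference operator."""
    n = A.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)        # discrete second derivative, shape (n-2, n)
    lhs = A.T @ A + lam * (L.T @ L)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# toy 1-D example: recover a smooth 'tension' profile from noisy indirect observations
rng = np.random.default_rng(0)
n = 50
x_true = np.sin(np.linspace(0, np.pi, n))
A = rng.normal(size=(40, n)) / np.sqrt(n)
b = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = tikhonov_second_order(A, b, lam=1e-1)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```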
Abstract:
BACKGROUND: Atrial fibrillation (AF) is the most common supraventricular arrhythmia, characterized by disorganized atrial electrical activity, maintained by localized arrhythmogenic atrial drivers. Pulmonary vein isolation (PVI) allows exclusion of PV-related drivers. However, PVI is less effective in patients with additional extra-PV arrhythmogenic drivers. OBJECTIVES: To discriminate whether AF drivers are located near the PVs vs extra-PV regions using the noninvasive 12-lead electrocardiogram (ECG) in a computational and clinical framework, and to computationally predict the acute success of PVI in these cohorts. METHODS: AF drivers were induced in 2 computerized atrial models and combined with 8 torso models, resulting in 1128 12-lead ECGs (80 ECGs with AF drivers located in the PVs and 1048 in extra-PV areas). A total of 103 features were extracted from the signals. A binary decision tree classifier was trained on the simulated data and evaluated using hold-out cross-validation. The PVs were subsequently isolated in the models to assess PVI success. Finally, the classifier was tested on a clinical dataset (46 patients: 23 with PV-dependent AF and 23 with additional extra-PV sources). RESULTS: The classifier yielded 82.6% specificity and 73.9% sensitivity for detecting PV drivers on the clinical data. Consistency analysis on the 46 patients resulted in a 93.5% match of results. Applying PVI to the simulated AF cases terminated AF in 100% of the cases in the PV class. CONCLUSION: Machine learning-based classification of the 12-lead ECG allows discrimination between patients with PV drivers and those with extra-PV drivers of AF. The novel algorithm may aid in identifying patients with high acute success rates for PVI.
Abstract:
Background: Hypertrophic cardiomyopathy (HCM) is typically caused by mutations in sarcomeric genes leading to cardiomyocyte disarray, replacement fibrosis, impaired contractility, and elevated filling pressures. These varying tissue properties are associated with certain strain patterns that may allow to establish a diagnosis by means of non-invasive imaging without the necessity of harmful myocardial biopsies or contrast agent application. With a numerical study, we aim to answer how the variability in each of these mechanisms contributes to altered mechanics of the left ventricle (LV) and whether the deformation obtained in in-silico experiments is comparable to values reported from clinical measurements. Methods: We conducted an in-silico sensitivity study on physiological and pathological mechanisms potentially underlying the clinical HCM phenotype. The deformation of the four-chamber heart models was simulated using a finite-element mechanical solver with a sliding boundary condition to mimic the tissue surrounding the heart. Furthermore, a closed-loop circulatory model delivered the pressure values acting on the endocardium. Deformation measures and mechanical behavior of the heart models were evaluated globally and regionally. Results: Hypertrophy of the LV affected the course of strain, strain rate, and wall thickening: the root-mean-squared difference of the wall thickening between control (mean thickness 10 mm) and hypertrophic geometries (17 mm) was >10%. A reduction of active force development by 40% led to less overall deformation: maximal radial strain reduced from 26 to 21%. A fivefold increase in tissue stiffness caused a more homogeneous distribution of the strain values among 17 heart segments. Fiber disarray led to minor changes in the circumferential and radial strain. A combination of pathological mechanisms led to reduced and slower deformation of the LV and halved the longitudinal shortening of the left atrium (LA). Conclusions: This study uses a computer model to determine the changes in LV deformation caused by pathological mechanisms that are presumed to underlie HCM. This knowledge can complement imaging-derived information to obtain a more accurate diagnosis of HCM.
Abstract:
The term "In Silico Trial" indicates the use of computer modelling and simulation to evaluate the safety and efficacy of a medical product, whether a drug, a medical device, a diagnostic product or an advanced therapy medicinal product. Predictive models are positioned as new methodologies for the development and the regulatory evaluation of medical products. New methodologies are qualified by regulators such as FDA and EMA through formal processes, where a first step is the definition of the Context of Use (CoU), which is a concise description of how the new methodology is intended to be used in the development and regulatory assessment process. As In Silico Trials are a disruptively innovative class of new methodologies, it is important to have a list of possible CoUs highlighting potential applications for the development of the relative regulatory science. This review paper presents the result of a consensus process that took place in the InSilicoWorld Community of Practice, an online forum for experts in in silico medicine. The experts involved identified 46 descriptions of possible CoUs which were organised into a candidate taxonomy of nine CoU categories. Examples of 31 CoUs were identified in the available literature; the remaining 15 should, for now, be considered speculative.
Abstract:
In both clinical and computational studies, different pacing protocols are used to induce arrhythmia and non-inducibility is often considered as the endpoint of treatment. The need for a standardized methodology is urgent since the choice of the protocol used to induce arrhythmia could lead to contrasting results, e.g., in assessing atrial fibrillation (AF) vulnerability. Therefore, we propose a novel method, pacing at the end of the effective refractory period (PEERP), and compare it to state-of-the-art protocols, such as phase singularity distribution (PSD) and rapid pacing (RP), in a computational study. All methods were tested by pacing from evenly distributed endocardial points at 1 cm inter-point distance in two bi-atrial geometries. Seven different atrial models were implemented: five cases without specific AF-induced remodeling but with decreasing global conduction velocity and two persistent AF cases with an increasing amount of fibrosis resembling different substrate remodeling stages. Compared with PSD and RP, PEERP induced a larger variety of arrhythmia complexity requiring, on average, only 2.7 extra-stimuli and 3 s of simulation time to initiate reentry. Moreover, PEERP and PSD were the protocols which unveiled a larger number of areas vulnerable to sustain stable long-living reentries compared to RP. Finally, PEERP can foster standardization and reproducibility, since, in contrast to the other protocols, it is a parameter-free method. Furthermore, we discuss its clinical applicability. We conclude that the choice of the inducing protocol has an influence on both initiation and maintenance of AF and we propose and provide PEERP as a reproducible method to assess arrhythmia vulnerability.
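A schematic sketch of the PEERP idea is given below, expressed as Python around two hypothetical simulator callbacks (a capture test and a reentry check); the coupling intervals, step size and stopping criteria are placeholders chosen for illustration, not the protocol's actual settings.

```python
def peerp_pacing(capture, induces_reentry, cl0=500, step=10, max_extra=5):
    """Deliver each extra stimulus at the end of the effective refractory period.

    capture(interval)        -- True if a stimulus at this coupling interval captures
    induces_reentry(history) -- True if the paced sequence has started a reentry
    """
    history, interval = [], cl0
    for _ in range(max_extra):
        # shorten the coupling interval until capture would be lost, then stop there:
        while interval > step and capture(interval - step):
            interval -= step
        history.append(interval)          # stimulus right at the end of the ERP
        if induces_reentry(history):
            return history                # arrhythmia induced
    return None                           # not inducible from this pacing site

# toy stand-ins for the simulator callbacks (purely illustrative)
print(peerp_pacing(lambda i: i >= 200, lambda h: len(h) >= 3))
```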
Abstract:
Graphical visualization systems are a common clinical tool for displaying digital images and three-dimensional volumetric data. These systems provide a broad spectrum of information to support physicians in their clinical routine. For example, the field of radiology enjoys unrestricted options for interaction with the data, since information is pre-recorded and available entirely in digital form. However, some fields, such as microsurgery, do not benefit from this yet. Microscopes, endoscopes, and laparoscopes show the surgical site as it is. To allow free data manipulation and information fusion, 3D digitization of surgical sites is required. We aimed to find the number of cameras needed to add this functionality to surgical microscopes. For this, we performed in silico simulations of the 3D reconstruction of representative models of microsurgical sites with different numbers of cameras in narrow-baseline setups. Our results show that eight independent camera views are preferable, while at least four are necessary for a digital surgical site. In most cases, eight cameras allow the reconstruction of over 99% of the visible part. With four cameras, still over 95% can be achieved. This answers one of the key questions for the development of a prototype microscope. In future, such a system can provide functionality which is unattainable today.
Abstract:
Background: Rate-varying S1S2 stimulation protocols can be used for restitution studies to characterize atrial substrate, ionic remodeling, and atrial fibrillation risk. Clinical restitution studies with numerous patients create large amounts of these data. Thus, an automated pipeline to evaluate clinically acquired S1S2 stimulation protocol data necessitates consistent, robust, reproducible, and precise evaluation of local activation times, electrogram amplitude, and conduction velocity. Here, we present the CVAR-Seg pipeline, developed focusing on three challenges: (i) No previous knowledge of the stimulation parameters is available, thus, arbitrary protocols are supported. (ii) The pipeline remains robust under different noise conditions. (iii) The pipeline supports segmentation of atrial activities in close temporal proximity to the stimulation artifact, which is challenging due to larger amplitude and slope of the stimulus compared to the atrial activity. Methods and Results: The S1 basic cycle length was estimated by time interval detection. Stimulation time windows were segmented by detecting synchronous peaks in different channels surpassing an amplitude threshold and identifying time intervals between detected stimuli. Elimination of the stimulation artifact by a matched filter allowed detection of local activation times in temporal proximity. A non-linear signal energy operator was used to segment periods of atrial activity. Geodesic and Euclidean inter-electrode distances allowed approximation of conduction velocity. The automatic segmentation performance of the CVAR-Seg pipeline was evaluated on 37 synthetic datasets with decreasing signal-to-noise ratios. Noise was modeled by reconstructing the frequency spectrum of clinical noise. The pipeline retained a median local activation time error below a single sample (1 ms) for signal-to-noise ratios as low as 0 dB representing a high clinical noise level. As a proof of concept, the pipeline was tested on a CARTO case of a paroxysmal atrial fibrillation patient and yielded plausible restitution curves for conduction speed and amplitude. Conclusion: The proposed openly available CVAR-Seg pipeline promises fast, fully automated, robust, and accurate evaluations of atrial signals even with low signal-to-noise ratios. This is achieved by solving the proximity problem of stimulation and atrial activity to enable standardized evaluation without introducing human bias for large data sets.
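One building block mentioned above, the non-linear signal energy operator used to segment periods of atrial activity, can be sketched as a Teager-style operator on a toy signal; the smoothing window and threshold factor below are illustrative assumptions, not the pipeline's parameters.

```python
import numpy as np

def nleo(x):
    """Non-linear (Teager) energy operator: high where amplitude and frequency are high."""
    e = np.zeros_like(x, dtype=float)
    e[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return e

def segment_activity(x, fs, threshold_factor=5.0):
    """Return a boolean mask of samples considered atrial activity."""
    e = nleo(x)
    win = int(0.01 * fs)                              # 10 ms moving average
    smooth = np.convolve(e, np.ones(win) / win, mode="same")
    return smooth > threshold_factor * np.median(np.abs(smooth))

fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 100 * t)   # toy atrial deflection
sig += 0.01 * np.random.default_rng(0).normal(size=t.size)            # additive noise
mask = segment_activity(sig, fs)
print("activity from", t[mask].min(), "to", t[mask].max(), "s")
```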
Abstract:
The vascular function of a vessel can be qualitatively and intraoperatively checked by recording the blood dynamics inside the vessel via fluorescence angiography (FA). Although FA is the state of the art in proving the existence of blood flow during interventions such as bypass surgery, it still lacks a quantitative blood flow measurement that could decrease the recurrence rate and postsurgical mortality. Previous approaches show that the measured flow has a significant deviation compared to the gold standard reference (ultrasonic flow meter). In order to systematically address the possible sources of error, we investigated the error in transit time measurement of an indicator. Obtaining in vivo indicator dilution curves with a known ground truth is complex and often not possible. Further, the error in transit time measurement should be quantified and reduced. To tackle both issues, we first computed many diverse indicator dilution curves using an in silico simulation of the indicator’s flow. Second, we post-processed these curves to mimic measured signals. Finally, we fitted mathematical models (parabola, gamma variate, local density random walk, and mono-exponential model) to re-continualize the obtained discrete indicator dilution curves and calculate the time delay of two analytical functions. This re-continualization showed an increase in the temporal accuracy up to a sub-sample accuracy. Thereby, the Local Density Random Walk (LDRW) model performed best using the cross-correlation of the first derivative of both indicator curves, cutting the data at 40% of the peak intensity. The error in frames depends on the noise level and is, for a signal-to-noise ratio (SNR) of 20 dB and a sampling rate of fs = 60 Hz, 0.25 (±0.18) · fs^-1, so this error is smaller than the distance between two consecutive samples. The accurate determination of the transit time and the quantification of the error allow the calculation of the error propagation onto the flow measurement. Both can assist surgeons as an intraoperative quality check and thereby reduce the recurrence rate and post-surgical mortality.
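A simplified, sample-accurate variant of the transit time estimation (cross-correlating the first derivatives of two indicator dilution curves) might look as follows; the gamma-variate toy curves and their parameters are invented, and the sub-sample refinement via model fitting described above is omitted.

```python
import numpy as np

def transit_time(curve_a, curve_b, fs):
    """Time delay of curve_b relative to curve_a from the cross-correlation
    of their first derivatives (coarse, sample-accurate variant)."""
    da, db = np.gradient(curve_a), np.gradient(curve_b)
    xcorr = np.correlate(db - db.mean(), da - da.mean(), mode="full")
    lag = np.argmax(xcorr) - (len(da) - 1)
    return lag / fs

fs = 60.0
t = np.arange(0, 10, 1 / fs)
# toy gamma-variate indicator dilution curve starting at time t0
gamma = lambda t0: np.clip(t - t0, 0, None) ** 2 * np.exp(-np.clip(t - t0, 0, None) / 0.8)
print(f"estimated delay: {transit_time(gamma(1.0), gamma(1.5), fs):.3f} s")  # true delay 0.5 s
```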
Abstract:
Mathematical models of the human heart are evolving to become a cornerstone of precision medicine and support clinical decision making by providing a powerful tool to understand the mechanisms underlying pathophysiological conditions. In this study, we present a detailed mathematical description of a fully coupled multi-scale model of the human heart, including electrophysiology, mechanics, and a closed-loop model of circulation. State-of-the-art models based on human physiology are used to describe membrane kinetics, excitation-contraction coupling and active tension generation in the atria and the ventricles. Furthermore, we highlight ways to adapt this framework to patient-specific measurements to build digital twins. The validity of the model is demonstrated through simulations on a personalized whole heart geometry based on magnetic resonance imaging data of a healthy volunteer. Additionally, the fully coupled model was employed to evaluate the effects of a typical atrial ablation scar on the cardiovascular system. With this work, we provide an adaptable multi-scale model that allows a comprehensive personalization from ion channels to the organ level enabling digital twin modeling.
Abstract:
In patients with atrial fibrillation, intracardiac electrogram signal amplitude is known to decrease with increased structural tissue remodeling, referred to as fibrosis. In addition to the isolation of the pulmonary veins, fibrotic sites are considered a suitable target for catheter ablation. However, it remains an open challenge to find fibrotic areas and to differentiate their density and transmurality. This study aims to identify the volume fraction and transmurality of fibrosis in the atrial substrate. Simulated cardiac electrograms, combined with a generalized model of clinical noise, reproduce clinically measured signals. Our hybrid dataset approach combines simulated and clinical electrograms to train a decision tree classifier to characterize the fibrotic atrial substrate. This approach captures different dynamics of the electrical propagation reflected in healthy electrogram morphology and synergistically combines it with synthetic fibrotic electrograms from experiments. The machine learning algorithm was tested on five patients and compared against clinical voltage maps as a proof of concept, distinguishing non-fibrotic from fibrotic tissue and characterizing the patient's fibrotic tissue in terms of density and transmurality. The proposed approach can be used to overcome a single voltage cut-off value to identify fibrotic tissue and guide ablation targeting fibrotic areas.
Abstract:
Aims Atrial cardiomyopathy (ACM) is associated with new-onset atrial fibrillation, arrhythmia recurrence after pulmonary vein isolation (PVI) and increased risk for stroke. At present, diagnosis of ACM is feasible by endocardial contact mapping of left atrial (LA) low-voltage substrate (LVS) or late gadolinium-enhanced magnetic resonance imaging, but their complexity limits a widespread use. The aim of this study was to assess non-invasive body surface electrocardiographic imaging (ECGI) as a novel clinical tool for diagnosis of ACM compared with endocardial mapping. Methods and results Thirty-nine consecutive patients (66 ± 9 years, 85% male) presenting for their first PVI for persistent atrial fibrillation underwent ECGI in sinus rhythm using a 252-electrode-array mapping system. Subsequently, high-density LA voltage and biatrial activation maps (mean 2090 ± 488 sites) were acquired in sinus rhythm prior to PVI. Freedom from arrhythmia recurrence was assessed within 12 months follow-up. Increased duration of total atrial conduction time (TACT) in ECGI was associated with both increased atrial activation time and extent of LA-LVS in endocardial contact mapping (r = 0.77 and r = 0.66, P < 0.0001 respectively). Atrial cardiomyopathy was found in 23 (59%) patients. A TACT value of 148 ms identified ACM with 91.3% sensitivity and 93.7% specificity. Arrhythmia recurrence occurred in 15 (38%) patients during a follow-up of 389 ± 55 days. Freedom from arrhythmia was significantly higher in patients with a TACT <148 ms compared with patients with a TACT ≥148 ms (82.4% vs. 45.5%, P = 0.019). Conclusion Analysis of TACT in non-invasive ECGI allows diagnosis of patients with ACM, which is associated with a significantly increased risk for arrhythmia recurrence following PVI.
Abstract:
Ventricular coordinates are widely used as a versatile tool for various applications that benefit from a description of local position within the heart. However, the practical usefulness of ventricular coordinates is determined by their ability to meet application-specific requirements. For regression-based estimation of biventricular position, for example, a symmetric definition of coordinate directions in both ventricles is important. For the transfer of data between different hearts as another use case, the consistency of coordinate values across different geometries is particularly relevant. To meet these requirements, we compare different approaches to compute coordinates and present Cobiveco, a symmetric, consistent and intuitive biventricular coordinate system that builds upon existing coordinate systems, but overcomes some of their limitations. A novel one-way transfer error is introduced to assess the consistency of the coordinates. Normalized distances along bijective trajectories between two boundaries were found to be superior to solutions of Laplace’s equation for defining coordinate values, as they show better linearity in space. Evaluation of transfer and linearity errors on 36 patient geometries revealed a more than 4-fold improvement compared to a state-of-the-art method. Finally, we show two application examples underlining the relevance for cardiac data processing. Cobiveco MATLAB code is available under a permissive open-source license.
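The coordinate definition preferred in this work, normalized distances between two boundaries rather than a solution of Laplace's equation, reduces in its simplest form to the ratio sketched below; this is only an illustration on a straight line, whereas the actual coordinates are computed along trajectories on tetrahedral meshes.

```python
import numpy as np

def normalized_coordinate(dist_to_boundary_a, dist_to_boundary_b):
    """Coordinate value in [0, 1]: 0 on boundary A, 1 on boundary B.

    Uses normalized distances along (approximated) trajectories between the two
    boundaries instead of a Laplace solution, which improves linearity in space.
    """
    da = np.asarray(dist_to_boundary_a, dtype=float)
    db = np.asarray(dist_to_boundary_b, dtype=float)
    return da / (da + db)

# toy example: points on a line between boundary A (x=0) and boundary B (x=10)
x = np.linspace(0, 10, 6)
print(normalized_coordinate(x, 10 - x))    # -> 0.0, 0.2, ..., 1.0
```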
Abstract:
The ECG is one of the most commonly used non-invasive tools to gain insights into the electrical functioning of the heart. It has been crucial as a foundation in the creation and validation of in silico models describing the underlying electrophysiological processes. However, so far, the contraction of the heart and its influences on the ECG have mainly been overlooked in in silico models. As the heart contracts and moves, so do the electrical sources within the heart responsible for the signal on the body surface, thus potentially altering the ECG. To illuminate these aspects, we developed a human 4-chamber electro-mechanically coupled whole heart in silico model and embedded it within a torso model. Our model faithfully reproduces measured 12-lead ECG traces, circulatory characteristics, as well as physiological ventricular rotation and atrioventricular valve plane displacement. We compare our dynamic model to three non-deforming ones in terms of standard clinically used ECG leads (Einthoven and Wilson) and body surface potential maps (BSPM). The non-deforming models consider the heart at its ventricular end-diastatic, end-diastolic and end-systolic states. The standard leads show negligible differences during P-Wave and QRS-Complex, yet during T-Wave the leads closest to the heart show prominent differences in amplitude. When looking at the BSPM, there are no notable differences during the P-Wave, but effects of cardiac motion can be observed already during the QRS-Complex, increasing further during the T-Wave. We conclude that for the modeling of activation (P-Wave/QRS-Complex), the associated effort of simulating a complete electro-mechanical approach is not worth the computational cost. But when looking at ventricular repolarization (T-Wave) in standard leads as well as BSPM, there are areas where the signal can be influenced by cardiac motion to an extent that should not be ignored.
Abstract:
We aim to provide a critical appraisal of basic concepts underlying signal recording and processing technologies applied for (i) atrial fibrillation (AF) mapping to unravel AF mechanisms and/or identify target sites for AF therapy and (ii) AF detection, to optimize usage of technologies, stimulate research aimed at closing knowledge gaps, and develop ideal AF recording and processing technologies. Recording and processing techniques for assessment of electrical activity during AF essential for diagnosis and guiding ablative therapy including body surface electrocardiograms (ECG) and endo- or epicardial electrograms (EGM) are evaluated. We discuss (i) differences in uni-, bi-, and multi-polar (omnipolar/Laplacian) recording modes, (ii) impact of recording technologies on EGM morphology, (iii) global or local mapping using various types of EGM involving signal processing techniques including isochronal-, voltage-, fractionation-, dipole density-, and rotor mapping, enabling derivation of parameters like atrial rate, entropy, conduction velocity/direction, (iv) value of epicardial and optical mapping, (v) AF detection by cardiac implantable electronic devices containing various detection algorithms applicable to stored EGMs, and (vi) contribution of machine learning (ML) to further improvement of signal processing technologies. Recording and processing of EGM (or ECG) are the cornerstones of (body surface) mapping of AF. Currently available AF recording and processing technologies are mainly restricted to specific applications or have technological limitations. Improvements in AF mapping by obtaining highest fidelity source signals (e.g. catheter–electrode combinations) for signal processing (e.g. filtering, digitization, and noise elimination) are of utmost importance. Novel acquisition instruments (multi-polar catheters combined with improved physical modelling and ML techniques) will enable enhanced and automated interpretation of EGM recordings in the near future.
Abstract:
Atrial flutter (AFL) is a common atrial arrhythmia typically characterized by electrical activity propagating around specific anatomical regions. It is usually treated with catheter ablation. However, the identification of rotational activities is not straightforward, and requires an intense effort during the first phase of the electrophysiological (EP) study, i.e., the mapping phase, in which an anatomical 3D model is built and electrograms (EGMs) are recorded. In this study, we modeled the electrical propagation pattern of AFL (measured during mapping) using network theory (NT), a well-known field of research from the computer science domain. The main advantage of NT is the large number of available algorithms that can efficiently analyze the network. Using directed network mapping, we employed a cycle-finding algorithm to detect all cycles in the network, resembling the main propagation pattern of AFL. The method was tested on two subjects in sinus rhythm, six in an experimental model of in-silico simulations, and 10 subjects diagnosed with AFL who underwent a catheter ablation. The algorithm correctly detected the electrical propagation of both sinus rhythm cases and in-silico simulations. Regarding the AFL cases, arrhythmia mechanisms were either totally or partially identified in most of the cases (8 out of 10), i.e., cycles around the mitral valve, tricuspid valve and figure-of-eight reentries. The other two cases presented a poor mapping quality or a major complexity related to previous ablations, large areas of fibrotic tissue, etc. Directed network mapping represents an innovative tool that showed promising results in identifying AFL mechanisms in an automatic fashion. Further investigations are needed to assess the reliability of the method in different clinical scenarios.
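Once the propagation pattern is encoded as a directed graph, cycle detection can rely on standard network algorithms; a toy example using networkx (not the study's code, with an invented graph) is shown below.

```python
import networkx as nx

# toy directed network: nodes are mapping sites, edges point from earlier to later
# local activation, forming one closed loop (e.g. around a valve) plus a passive branch
G = nx.DiGraph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1),   # re-entrant loop
                  (2, 5), (5, 6)])                  # passive branch

# enumerate all simple cycles; here the only cycle is the loop through nodes 1-2-3-4
cycles = list(nx.simple_cycles(G))
print("detected cycles:", cycles)
```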
Abstract:
The human heart is a masterpiece of the highest complexity coordinating multi-physics aspects on a multi-scale range. Thus, modeling the cardiac function to reproduce physiological characteristics and diseases remains challenging. Especially the complex simulation of the blood's hemodynamics and its interaction with the myocardial tissue requires a high accuracy of the underlying computational models and solvers. These demanding aspects make whole-heart fully-coupled simulations computationally highly expensive and call for simpler but still accurate models. While the mechanical deformation during the heart cycle drives the blood flow, less is known about the feedback of the blood flow onto the myocardial tissue. To solve the fluid-structure interaction problem, we suggest a cycle-to-cycle coupling of the structural deformation and the fluid dynamics. In a first step, the displacement of the endocardial wall in the mechanical simulation serves as a unidirectional boundary condition for the fluid simulation. After a complete heart cycle of fluid simulation, a spatially resolved pressure factor (PF) is extracted and returned to the next iteration of the solid mechanical simulation, closing the loop of the iterative coupling procedure. All simulations were performed on an individualized whole heart geometry. The effect of the sequential coupling was assessed by global measures such as the change in deformation and, as an example of diagnostically relevant information, the particle residence time. The mechanical displacement was up to 2 mm after the first iteration. In the second iteration, the deviation was in the sub-millimeter range, implying that already one iteration of the proposed cycle-to-cycle coupling is sufficient to converge to a coupled limit cycle. Cycle-to-cycle coupling between cardiac mechanics and fluid dynamics can be a promising approach to account for fluid-structure interaction with low computational effort. In an individualized healthy whole-heart model, one iteration sufficed to obtain converged and physiologically plausible results.
Abstract:
Electrical impedance tomography is clinically used to trace ventilation related changes in electrical conductivity of lung tissue. Estimating regional pulmonary perfusion using electrical impedance tomography is still a matter of research. To support clinical decision making, reliable bedside information of pulmonary perfusion is needed. We introduce a method to robustly detect pulmonary perfusion based on indicator-enhanced electrical impedance tomography and validate it by dynamic multidetector computed tomography in two experimental models of acute respiratory distress syndrome. The acute injury was induced in a sublobar segment of the right lung by saline lavage or endotoxin instillation in eight anesthetized mechanically ventilated pigs. For electrical impedance tomography measurements, a conductive bolus (10% saline solution) was injected into the right ventricle during breath hold. Electrical impedance tomography perfusion images were reconstructed by linear and normalized Gauss-Newton reconstruction on a finite element mesh with subsequent element-wise signal and feature analysis. An iodinated contrast agent was used to compute pulmonary blood flow via dynamic multidetector computed tomography. Spatial perfusion was estimated based on first-pass indicator dilution for both electrical impedance and multidetector computed tomography and compared by Pearson correlation and Bland-Altman analysis. Strong correlation was found in dorsoventral (r = 0.92) and in right-to-left directions (r = 0.85) with good limits of agreement of 8.74% in eight lung segments. With a robust electrical impedance tomography perfusion estimation method, we found strong agreement between multidetector computed and electrical impedance tomography perfusion in healthy and regionally injured lungs and demonstrated feasibility of electrical impedance tomography perfusion imaging.
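The agreement analysis named above can be reproduced in outline as follows; the toy perfusion values are invented and serve only to show the Bland-Altman computation of bias and limits of agreement.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two perfusion estimates."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# toy example: regional perfusion fractions (%) from EIT vs. dynamic CT in 8 segments
eit = [10, 12, 15, 18, 11, 9, 13, 12]
mdct = [11, 13, 14, 19, 12, 8, 12, 11]
print(bland_altman(eit, mdct))
```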
Abstract:
The electrocardiogram (ECG) is a standard cost-efficient and non-invasive tool for the early detection of various cardiac diseases. Quantifying different timing and amplitude features of and in between the single ECG waveforms can reveal important information about the underlying (dys-)function of the heart. Determining these features requires the detection of fiducial points that mark the on- and offset as well as the peak of each ECG waveform (P wave, QRS complex, T wave). Manually setting these points is time-consuming and requires a physician’s expert knowledge. Therefore, the highly modular ECGdeli toolbox for MATLAB was developed, which is capable of filtering clinically recorded 12-lead ECG signals and detecting the fiducial points, also called delineation. It is one of the few open toolboxes offering ECG delineation for P waves, T waves and QRS complexes. The algorithms provided were evaluated with the QT database, an ECG database comprising 105 signals with fiducial points annotated by clinicians. The median difference between the fiducial points set by the boundary detection algorithm and the clinical annotations serving as a ground truth is less than 4 samples (16 ms) for the P wave and the QRS complex markers.
Abstract:
Computer modeling of the electrophysiology of the heart has undergone significant progress. A healthy heart can be modeled starting from the ion channels via the spread of a depolarization wave on a realistic geometry of the human heart up to the potentials on the body surface and the ECG. Research is advancing regarding modeling diseases of the heart. This article reviews progress in calculating and analyzing the corresponding electrocardiogram (ECG) from simulated depolarization and repolarization waves. First, we describe modeling of the P-wave, the QRS complex and the T-wave of a healthy heart. Then, both the modeling and the corresponding ECGs of several important diseases and arrhythmias are delineated: ischemia and infarction, ectopic beats and extrasystoles, ventricular tachycardia, bundle branch blocks, atrial tachycardia, flutter and fibrillation, genetic diseases and channelopathies, imbalance of electrolytes and drug-induced changes. Finally, we outline the potential impact of computer modeling on ECG interpretation. Computer modeling can contribute to a better comprehension of the relation between features in the ECG and the underlying cardiac condition and disease. It can pave the way for a quantitative analysis of the ECG and can support the cardiologist in identifying events or non-invasively localizing diseased areas. Finally, it can deliver very large databases of reliably labeled ECGs as training data for machine learning.
Abstract:
Large-scale electrophysiological simulations to obtain electrocardiograms (ECG) carry the potential to produce extensive datasets for training of machine learning classifiers to, e.g., discriminate between different cardiac pathologies. The adoption of simulations for these purposes is limited due to a lack of ready-to-use models covering atrial anatomical variability. We built a bi-atrial statistical shape model (SSM) of the endocardial wall based on 47 segmented human CT and MRI datasets using Gaussian process morphable models. Generalization, specificity, and compactness metrics were evaluated. The SSM was applied to simulate atrial ECGs in 100 random volumetric instances. The first eigenmode of our SSM reflects a change of the total volume of both atria, the second the asymmetry between left vs. right atrial volume, the third a change in the prominence of the atrial appendages. The SSM is capable of generalizing well to unseen geometries and 95% of the total shape variance is covered by its first 24 eigenvectors. The P waves in the 12-lead ECG of 100 random instances showed a duration of 109.7 ± 12.2 ms in accordance with large cohort studies. The novel bi-atrial SSM itself as well as 100 exemplary instances with rule-based augmentation of atrial wall thickness, fiber orientation, inter-atrial bridges and tags for anatomical structures have been made publicly available. This novel, openly available bi-atrial SSM can in future be employed to generate large sets of realistic atrial geometries as a basis for in silico big data approaches.
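The way instances are drawn from such a statistical shape model, and how the 95% variance criterion translates into a number of eigenmodes, can be sketched generically as below; the mean shape, eigenvectors and eigenvalues are random stand-ins, not the published model.

```python
import numpy as np

def ssm_instance(mean_shape, eigvecs, eigvals, coeffs):
    """Sample a shape from a statistical shape model: mean plus weighted eigenmodes.

    mean_shape -- (3*n_points,) mean shape vector
    eigvecs    -- (3*n_points, n_modes) orthonormal eigenvectors
    eigvals    -- (n_modes,) variances of the modes
    coeffs     -- (n_modes,) coefficients in units of standard deviations
    """
    return mean_shape + eigvecs @ (np.asarray(coeffs) * np.sqrt(eigvals))

def modes_for_variance(eigvals, fraction=0.95):
    """Number of leading eigenmodes needed to cover a given fraction of total variance."""
    cum = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(cum, fraction) + 1)

# toy model: 3 points in 3-D (9 coordinates), 3 modes
rng = np.random.default_rng(0)
mean_shape = np.zeros(9)
eigvecs = np.linalg.qr(rng.normal(size=(9, 3)))[0]
eigvals = np.array([4.0, 1.0, 0.25])
print(ssm_instance(mean_shape, eigvecs, eigvals, coeffs=[1.0, -0.5, 0.0]).round(2))
print("modes covering 95% of variance:", modes_for_variance(eigvals, 0.95))
```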
Abstract:
Background In acute respiratory distress syndrome (ARDS), non-ventilated perfused regions coexist with non-perfused ventilated regions within lungs. The number of unmatched regions might reflect ARDS severity and affect the risk of ventilation-induced lung injury. Despite pathophysiological relevance, unmatched ventilation and perfusion are not routinely assessed at the bedside. The aims of this study were to quantify unmatched ventilation and perfusion at the bedside by electrical impedance tomography (EIT) investigating their association with mortality in patients with ARDS and to explore the effects of positive end-expiratory pressure (PEEP) on unmatched ventilation and perfusion in subgroups of patients with different ARDS severity based on PaO2/FiO2 and compliance. Methods Prospective observational study in 50 patients with mild (36%), moderate (46%), and severe (18%) ARDS under clinical ventilation settings. EIT was applied to measure the regional distribution of ventilation and perfusion using central venous bolus of saline 5% during end-inspiratory pause. We defined unmatched units as the percentage of only ventilated units plus the percentage of only perfused units. Results Percentage of unmatched units was significantly higher in non-survivors compared to survivors (32 [27-47]% vs. 21 [17-27]%, p < 0.001). Percentage of unmatched units was an independent predictor of mortality (OR 1.22, 95% CI 1.07-1.39, p = 0.004) with an area under the ROC curve of 0.88 (95% CI 0.79-0.97, p < 0.001). The percentage of ventilation to the ventral region of the lung was higher than the percentage of ventilation to the dorsal region (32 [27-38]% vs. 18 [13-21]%, p < 0.001), while the opposite was true for perfusion (28 [22-38]% vs. 36 [32-44]%, p < 0.001). Higher percentage of only perfused units was correlated with lower dorsal ventilation (r = -0.486, p < 0.001) and with lower PaO2/FiO2 ratio (r = -0.293, p = 0.039). Conclusions EIT allows bedside assessment of unmatched ventilation and perfusion in mechanically ventilated patients with ARDS. Measurement of unmatched units could identify patients at higher risk of death and could guide personalized treatment.
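A minimal sketch of the unmatched-units measure (percentage of only-ventilated plus only-perfused units), assuming ventilation and perfusion have already been thresholded into boolean masks; the toy masks below are invented.

```python
import numpy as np

def unmatched_units_percentage(ventilated, perfused):
    """Percentage of only-ventilated plus only-perfused lung units (boolean masks)."""
    ventilated = np.asarray(ventilated, bool)
    perfused = np.asarray(perfused, bool)
    lung = ventilated | perfused                   # units belonging to the lung region
    only_v = (ventilated & ~perfused).sum()        # ventilated but not perfused
    only_q = (perfused & ~ventilated).sum()        # perfused but not ventilated
    return 100.0 * (only_v + only_q) / lung.sum()

# toy 1-D 'lung' of 10 units
ventilated = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0]
perfused   = [1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
print(f"{unmatched_units_percentage(ventilated, perfused):.0f} % unmatched units")
```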
Abstract:
Cranio-maxillofacial surgery often alters the aesthetics of the face which can be a heavy burden for patients to decide whether or not to undergo surgery. Today, physicians can predict the post-operative face using surgery planning tools to support the patient’s decision-making. While these planning tools allow a simulation of the post-operative face, the facial texture must usually be captured by another 3D texture scan and subsequently mapped on the simulated face. This approach often results in face predictions that do not appear realistic or lively looking and are therefore ill-suited to guide the patient’s decision-making. Instead, we propose a method using a generative adversarial network to modify a facial image according to a 3D soft-tissue estimation of the post-operative face. To circumvent the lack of available data pairs between pre- and post-operative measurements we propose a semi-supervised training strategy using cycle losses that only requires paired open-source data of images and 3D surfaces of the face’s shape. After training on “in-the-wild” images we show that our model can realistically manipulate local regions of a face in a 2D image based on a modified 3D shape. We then test our model on four clinical examples where we predict the post-operative face according to a 3D soft-tissue prediction of surgery outcome, which was simulated by a surgery planning tool. As a result, we aim to demonstrate the potential of our approach to predict realistic post-operative images of faces without the need of paired clinical data, physical models, or 3D texture scans.
Abstract:
During atrial fibrillation, cardiac tissue undergoes different remodeling processes at different scales from the molecular level to the tissue level. One central player that contributes to both electrical and structural remodeling is the myofibroblast. Based on recent experimental evidence on myofibroblasts’ ability to contract, we extended a biophysical myofibroblast model with Ca2+ handling components and studied the effect on cellular and tissue electrophysiology. Using genetic algorithms, we fitted the myofibroblast model parameters to the existing in vitro data. In silico experiments showed that Ca2+ currents can explain the experimentally observed variability regarding the myofibroblast resting membrane potential. The presence of an L-type Ca2+ current can trigger automaticity in the myofibroblast with a cycle length of 799.9 ms. Myocyte action potentials were prolonged when coupled to myofibroblasts with Ca2+ handling machinery. Different spatial myofibroblast distribution patterns increased the vulnerable window to induce arrhythmia from 12 ms in non-fibrotic tissue to 22 ± 2.5 ms and altered the reentry dynamics. Our findings suggest that Ca2+ handling can considerably affect myofibroblast electrophysiology and alter the electrical propagation in atrial tissue composed of myocytes coupled with myofibroblasts. These findings can inform experimental validation experiments to further elucidate the role of myofibroblast Ca2+ handling in atrial arrhythmogenesis.
Abstract:
This contribution is part of a project concerning the creation of an artificial dataset comprising 3D head scans of craniosynostosis patients for a deep-learning-based classification. To conform to real data, both head and neck are required in the 3D scans. However, during patient recording, the neck is often covered by medical staff. Simply pasting an arbitrary neck leaves large gaps in the 3D mesh. We therefore use a publicly available statistical shape model (SSM) for neck reconstruction. However, most SSMs of the head are constructed using healthy subjects, so the full head reconstruction loses the craniosynostosis-specific head shape. We propose a method to recover the neck while keeping the pathological head shape intact, using a Laplace-Beltrami-based refinement step to deform the posterior mean shape of the full head model towards the pathological head. The artificial neck is created using the publicly available Liverpool-York-Model. We apply our method to construct artificial necks for head scans of 50 scaphocephaly patients. Our method reduces the mean vertex correspondence error by approximately 1.3 mm compared to the ordinary posterior mean shape, preserves the pathological head shape, and creates a continuous transition between neck and head. The presented method showed good results for reconstructing a plausible neck for craniosynostosis patients. Since it generalizes easily, it might also be applicable to other pathological shapes.
Abstract:
Clinical and computational studies highlighted the role of atrial anatomy for atrial fibrillation vulnerability. However, personalized computational models are often generated from electroanatomical maps, which might lack important anatomical structures like the appendages, or from imaging data which are potentially affected by segmentation uncertainty. A bi-atrial statistical shape model (SSM) covering relevant structures for electrophysiological simulations was shown to cover atrial shape variability. We hypothesized that it could, therefore, also be used to infer the shape of missing structures and deliver ready-to-use models to assess atrial fibrillation vulnerability in silico. We implemented a highly automatized pipeline to generate a personalized computational model by fitting the SSM to the clinically acquired geometries. We applied our framework to a geometry coming from an electroanatomical map and one derived from magnetic resonance images (MRI). Only landmarks belonging to the left atrium and no information from the right atrium were used in the fitting process. The left atrium surface-to-surface distance between the electroanatomical map and a fitted instance of the SSM was 2.26 ± 1.95 mm. The distance between the MRI segmentation and the SSM was 2.07 ± 1.56 mm and 3.59 ± 2.84 mm in the left and right atrium, respectively. Our semi-automatic pipeline provides ready-to-use personalized computational models representing the original anatomy well by fitting an SSM. We were able to infer the shape of the right atrium even in the case of using information only from the left atrium.
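The surface-to-surface distance used to assess the fit can be approximated by nearest-neighbour queries between point clouds; the sketch below uses random stand-in geometries, not the clinical data or the actual fitting pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_to_surface_distance(points_a, points_b):
    """Mean and standard deviation of nearest-neighbour distances from surface A to B."""
    d, _ = cKDTree(points_b).query(points_a)
    return d.mean(), d.std()

rng = np.random.default_rng(0)
clinical = rng.normal(size=(5000, 3)) * 20.0                       # stand-in anatomy [mm]
fitted = clinical + rng.normal(scale=1.5, size=clinical.shape)     # stand-in fitted SSM
print("distance: %.2f +/- %.2f mm" % surface_to_surface_distance(clinical, fitted))
```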
Abstract:
Individualized computer models of the geometry of the human heart are often based on magnetic resonance images (MRI) or computed tomography (CT) scans. The stress distribution in the imaged state cannot be measured but needs to be estimated from the segmented geometry, e.g. by an iterative algorithm. As the convergence of this algorithm depends on different geometrical conditions, we systematically studied their influence. Besides various shape alterations, we investigated the chamber volume, as well as the effect of material parameters. We found a marked influence of passive material parameters: increasing the model stiffness by a factor of ten halved the residual norm in the first iteration. Flat and concave areas led to a reduced robustness and convergence rate of the unloading algorithm. With this study, the geometric effects and modeling aspects governing the unloading algorithm’s convergence are identified and can be used as a basis for further improvement.
Abstract:
Mitral regurgitation alters the flow conditions in the left ventricle. To account for quantitative changes and to investigate the behavior of different flow components, a realistic computational model of the whole human heart was employed in this study. While performing fluid dynamics simulations, a scalar transport equation was solved to analyze vortex formation and ventricular wash-out for different regurgitation severities. Additionally, a particle tracking algorithm was implemented to visualize single components of the blood flow. We confirmed a significantly lowered volume of the direct flow component as well as a higher vorticity in the diseased case.
Abstract:
Today a variety of models describe the physiological behavior of the heart on a cellular level. The intracellular calcium concentration plays an important role, since it is the main driver for the active contraction of the heart. Due to different implementations of the calcium dynamics, simulating cardiac electromechanics can lead to severely different behaviors of the active tension when coupling the same tension model with different electrophysiological models. To handle these variations, we present an optimization tool that adapts the parameters of the most recent, human-based tension model. The goal is to generate a physiologically valid tension development when coupled to an electrophysiological cellular model, independent of the specifics of that model's calcium transient. In this work, we focus on a ventricular cell model. In order to identify the calcium-sensitive parameters, a sensitivity analysis of the tension model was carried out. In a further step, the cell model was adapted to reproduce the sarcomere length-dependent behavior of troponin C. With a maximum relative deviation of 20.3% per defined characteristic of the tension development, satisfactory results could be obtained for isometric twitch tension. Considering the length-dependent troponin handling, physiological behavior could be reproduced. In conclusion, we propose an algorithm to adapt the tension development model to any calcium transient input to achieve a physiologically valid active contraction on a cellular level. As a proof of concept, the algorithm is successfully applied to one of the most recent human ventricular cell models. This is an important step towards fully coupled electromechanical heart models, which are a valuable tool in personalized health care.
Abstract:
In order to be used in a clinical context, numerical simulation tools have to strike a balance between accuracy and low computational effort. For reproducing the pumping function of the human heart numerically, the physical domains of cardiac continuum mechanics and fluid dynamics have a significant relevance. In this context, fluid-structure interaction between the heart muscle and the blood flow is particularly important: Myocardial tension development and wall deformation drive the blood flow. However, the degree to which the blood flow has a retrograde effect on the cardiac mechanics in this multi-physics problem remains unclear up to now. To address this question, we implemented a cycle-to-cycle coupling based on a finite element model of a patient-specific whole heart geometry. The deformation of the cardiac wall over one heart cycle was computed using our mechanical simulation framework. A closed-loop circulatory system model as part of the simulation delivered the chamber pressures. The displacement of the endocardial surfaces and the pressure courses of one cycle were used as boundary conditions for the fluid solver. After solving the Navier-Stokes equations, the relative pressure was extracted for all endocardial wall elements from the three-dimensional pressure field. These local pressure deviations were subsequently returned to the next iteration of the continuum mechanical simulation, thus closing the loop of the iterative coupling procedure. Following this sequential coupling approach, we simulated three iterations of mechanical and fluid simulations. To characterize the convergence, we evaluated the time course of the normalized pressure field as well as the Euclidean distance between nodes of the mechanical simulation in subsequent iterations. For the left ventricle (LV), the maximal Euclidean distance of all endocardial wall nodes was smaller than 2 mm between the first and second iteration. The maximal distance between the second and third iteration was 70 μm, thus the limit of necessary cycles was already reached after two iterations. In future work, this iterative coupling approach will have to prove its ability to deliver physiologically accurate results also for diseased heart models. Altogether, the sequential coupling approach with its low computational effort delivered promising results for modeling fluid-structure interaction in cardiac simulations.
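The convergence criterion used above, the maximal Euclidean distance between corresponding endocardial nodes of two subsequent coupling iterations, amounts to a one-line NumPy computation. A minimal sketch with assumed array names and a toy example:

```python
import numpy as np

def max_node_distance(nodes_prev, nodes_curr):
    """Maximal Euclidean distance between corresponding nodes ((n, 3) arrays)
    of two subsequent coupling iterations."""
    return np.max(np.linalg.norm(nodes_curr - nodes_prev, axis=1))

# Toy example: stop the coupling once the change drops below an assumed 0.1 mm
prev_nodes = np.zeros((100, 3))
curr_nodes = prev_nodes + 0.05            # uniform 0.05 mm shift
converged = max_node_distance(prev_nodes, curr_nodes) < 0.1   # True here
```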
Abstract:
Patients suffering from a cerebrovascular disease, which causes hypoperfusion of the brain, can undergo revascularization surgery as treatment. It is often performed as an open surgery and its goal is to restore the vascular function, in particular the flow of blood. Therefore, an anastomosis (connection of arteries) is installed to augment flow into a hypoperfused area. Complications occur in approximately 10% of the cases, partly related to an insufficient flow augmentation. Hence, the blood flow should be checked intraoperatively to assess the intervention's quality and intervene rapidly to prevent a negative patient outcome. The current state-of-the-art measurement device is the ultrasonic transit time flow probe. It provides a quantitative flow value but needs to be placed around the vessel. This is cumbersome and holds the risk of contamination, vessel compromise, and rupture. An alternative method is indocyanine green (ICG) fluorescence angiography (FA), which is a camera-based method. It is the state-of-the-art method for high-resolution anatomical visualization and it is able to provide the surgeon with qualitative functional imaging of vessels in the field of view. Approaches to quantify the blood flow via ICG FA have failed to obtain trustworthy flow values so far. This thesis analyzes and improves the capability of ICG FA to provide quantitative values by (1) clarifying how accurate the measurement can be, (2) proposing methods to improve the accuracy, (3) deriving the existence of a systematic error, (4) proposing a method to compensate for the systematic error, (5) providing an end-to-end workflow from video data input to flow value output, and (6) validating the proposed methods and the workflow in an ex vivo and in vivo study. The proposed measurement in this thesis is based on the mean transit time theorem for single-input, single-output systems. To calculate the flow, the transit time of a bolus over a certain distance and the cross-sectional area of the vessel need to be obtained. Methods were developed to obtain the blood volume flow, and to identify and quantify the sources of errors in this measurement. The statistical errors in measuring the transit distance and transit time of the ICG bolus as well as the cross-sectional area of the vessel are often neglected in research and thus were quantified in this thesis using in silico models. This analysis revealed that the error is too large and requires methods to reduce it.....
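The measurement principle rests on the mean transit time relation: with transit distance Δs, transit time Δt and vessel cross-sectional area A, the volume flow is Q = A·Δs/Δt. A minimal sketch of this relation with illustrative numbers (not the thesis workflow):

```python
import numpy as np

def volume_flow(area_mm2, transit_distance_mm, transit_time_s):
    """Blood volume flow in ml/min from bolus transit and vessel cross-section.
    Conversion: 1 mm^3/s = 60/1000 ml/min."""
    flow_mm3_per_s = area_mm2 * transit_distance_mm / transit_time_s
    return flow_mm3_per_s * 60.0 / 1000.0

# Illustrative numbers only: 1.5 mm vessel diameter, 10 mm transit distance, 0.4 s transit time
area = np.pi * (1.5 / 2.0) ** 2
print(volume_flow(area, 10.0, 0.4))   # about 2.65 ml/min
```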
Abstract:
In many patients suffering from severely impaired gas exchange of the lungs, regional pulmonary ventilation and perfusion are not aligned. Especially if patients suffer from the acute respiratory distress syndrome, very heterogeneous distributions of ventilation and perfusion are observed, and patients need to be artificially ventilated and monitored in an intensive care unit in order to ensure sufficient gas exchange. In severely injured lungs, it is very challenging to find an optimal trade-off between recruiting collapsed regions by applying high pressures and volumes, while protecting the lung from further damage caused by the externally applied pressure. In order to ensure lung-protective ventilation and to optimize and support clinical decision making, a growing need for bedside monitoring of regional lung ventilation, as well as regional perfusion, has been reported. Electrical Impedance Tomography (EIT) is a non-invasive, radiation-free and portable system, which has raised interest especially among physicians treating critically ill patients in ICUs. It provides high temporal sampling and a functional spatial resolution, which allows visualizing and monitoring dynamic (patho-)physiological processes. Medical EIT research has mainly focused on estimating spatial ventilation distributions, and commercially available systems have proven that EIT is a valuable extension for clinical decision making during mechanical ventilation. Estimating pulmonary perfusion with EIT has nevertheless not been established yet and might represent the missing link to enable the analysis of pulmonary gas exchange at the bedside. Though some publications have shown the feasibility in principle of indicator-enhanced EIT to estimate spatial distributions of pulmonary blood flow, the methods need to be optimized and validated against gold standards of pulmonary perfusion monitoring. Additionally, further research is needed to understand the underlying physiological information of EIT perfusion estimations. With this thesis, we aim to contribute to the question of whether EIT can be applied clinically to provide spatial information of pulmonary blood flow alongside regional ventilation to potentially assess pulmonary gas exchange at the bedside. Spatial distributions of perfusion were estimated by injecting a conductive saline indicator bolus to trace the passage of the indicator during its progression through the vascular system of the lungs. We developed and compared different dynamic EIT reconstruction methods as well as perfusion parameter estimations to be able to robustly assess pulmonary blood flow. The estimated regional EIT perfusion distributions were validated against gold-standard lung perfusion measurement techniques. A first validation was conducted using data of an experimental animal study, where multidetector computed tomography was used as the comparative lung perfusion measure. In addition, a comprehensive preclinical animal study was conducted to investigate pulmonary perfusion with indicator-enhanced EIT and Positron Emission Tomography during multiple different experimental states. Besides a thorough method comparison, we aimed to investigate the clinical applicability of the indicator-enhanced EIT perfusion measurement by, above all, analyzing the minimal indicator concentration that allows robust perfusion estimation and presents no harm to the patient.
Besides the experimental validation studies, we conducted two in-silico investigations to first evaluate the sensitivity of EIT to the passage of a conductive indicator through the lungs against severely heterogeneous pulmonary backgrounds. Second, we studied the physiological contributors to the reconstructed EIT perfusion image to find basic limitations of the method. We concluded that pulmonary perfusion estimation based on indicator-enhanced EIT shows great potential to be applied in clinical practice, since we were able to validate it against two established perfusion measurement techniques and provided valuable information about the physiological contributors to the estimated EIT perfusion distributions.
Abstract:
The electrocardiogram (ECG) is the standard device for measuring the electrical activity of the heart. It is highly available and allows for quick, inexpensive, and non-invasive monitoring. This is especially important for the diagnosis of cardiovascular disease (CVD), which is one of the major concerns for the health care system in Europe. CVD causes costs of €210 billion and is responsible for 3.9 million deaths (45% of all deaths) a year. Apart from risk factors, chronic kidney disease (CKD) and structural changes in the heart tissue are underlying pathologies causing CVD. Both diseases can lead to life-threatening arrhythmia. This is why the following two pathologies connected to CVD are focused on in this thesis: electrolyte imbalances in CKD patients and ectopic foci in the ventricles autonomously triggering an excitation. In both cases, the overall goal is to develop methods with the help of simulated signals supporting diagnosis. In the first project, ECG simulations are used to optimize a signal processing workflow for an ECG-based estimation of blood potassium concentration ([K+]b) and blood calcium concentration ([Ca2+]b). The findings from the simulation studies are incorporated into two [K+]b estimation methods which are evaluated on patient data. Mean absolute estimation errors were 0.37 mmol/l for a patient-specific approach and 0.48 mmol/l for a global approach with patient-specific adjustment. Advantages compared to existing approaches are extensively discussed. All algorithms important for the signal processing workflow are published under an open source license. The second project aims at estimating the location of ectopic foci with the surface ECG without knowing the individual geometry of the patient. 1,766,406 simulated ECG signals (body surface potential maps (BSPMs)) are utilized to train two convolutional neural networks (CNNs): The first estimates the start and end of the depolarization, the second uses the depolarization part of the BSPM to localize the excitation origin. This CNN is designed to be able to show multiple solutions in the case of several possible excitation origins. The smallest median localization errors were 1.54 mm on the test set for the simulated data and 37 mm for the patient data. Hence, the combination of the two CNNs yields a reliable method for the localization of ectopic foci on simulated and on patient data, although patient signals were not used during training. The results from the two projects demonstrate how simulated data can be used to develop and improve adequate ECG signal processing methods and how diagnosis can be supported. Furthermore, the potential of the combination of simulations and CNNs for overcoming the problem of unavailable clinical datasets as well as for finding estimation models that are valid for different patients is demonstrated. The proposed methods can be used to accelerate diagnosis and are therefore likely to improve patient outcome.
Abstract:
The human heart is a complex organ involving the interaction of different phenomena. On the one hand, blood flows through the heart chambers and applies pressure on the inner surfaces. On the other hand, the heart is surrounded by a pericardial sac, which influences the chamber motion. Furthermore, electrical waves propagate through the heart tissue to activate a contraction force, which, added to the passive force, works against the chamber pressure and results in the deformation of the myocardium. In the last decades, advanced computational models of the heart phenomena have been created and included in simulation frameworks to study the human heartbeat. ....
Abstract:
Atrial fibrillation (AF) is the most common supraventricular arrhythmia in clinical practice. There is increasing evidence from a mechanistic point of view that pathological atrial substrate (fibrosis) plays a central role in the maintenance and perpetuation of AF. AF is treated by ablation of fibrotic substrate. However, detection of such substrate is an ongoing challenge as demonstrated by poor clinical ablation outcomes. Therefore, the main topic of this work is the characterization of atrial substrate. Determining signal characteristics at fibrotic substrate sites could make detection and subsequent ablation of such sites easier in the future. Additionally, understanding how these sites uphold AF can increase the positive outcome of AF ablation procedures. Lastly, restitution information could be a further tool for substrate characterization that could help to distinguish pathological from non-pathological sites and therefore further improve ablation outcome. In this thesis two approaches for substrate characterization are presented. Firstly, substrate was characterized by proposing electrogram characteristics that defined sites maintaining AF, which after ablation terminated AF. This study was performed on 21 patients in whom low-voltage-guided ablation after pulmonary vein isolation terminated clinical persistent AF. Successful termination sites of AF displayed distinct electrogram patterns with short local cycle lengths that included fractionated and low-voltage potentials that were locally highly consistent and covered a majority of the local AF cycle length. Most of these areas also exhibited pathologic delayed atrial late potentials and fractionated electrograms in sinus rhythm. Secondly, restitution information of local amplitude and local conduction velocity (CV) was acquired and used to infer information on the underlying substrate. Restitution data were gained from 22 AF patients from two clinics by using an S1S2 protocol with pacing intervals between 180 ms and 500 ms. To obtain restitution data from the patient group, an automated algorithm capable of reading, segmenting, and analyzing large amounts of stimulation protocol data had to be developed. This algorithm was developed as part of this work and is called CVAR-Seg. The CVAR-Seg algorithm provided noise-robust signal segmentation up to noise levels far exceeding expected clinical noise levels. CVAR-Seg was released as open source to the community and, due to its modular arrangement, enables easy replacement of each of the single process steps by alternative methods according to the user's needs. Additionally, a novel method called the inverse double ellipse method was established to determine local CV within the scope of this study. This inverse double ellipse method estimated CV, fiber orientation and anisotropy factor from any electrode arrangement and reproduced in-silico CV, fiber orientation and CV anisotropy more accurately and more robustly than the current state-of-the-art method. Furthermore, the method proved to be real-time capable and thus a valid consideration to implement in clinical electrophysiology systems. This would enable instantaneous localized measurement of atrial substrate information, yielding a CV map, an anisotropy ratio map, and a fiber map simultaneously during one mapping procedure. Restitution information of the patient cohort was evaluated using the CVAR-Seg pipeline and the inverse double ellipse method to acquire amplitude and CV restitution curves.
Restitution curves were fitted using a mono-exponential function. The fit parameters representing the restitution curves were used to discern differences in restitution properties between pathological and non-pathological substrate. The result was that clinically defined low voltage (LV) zones were characterized by a reduced amplitude asymptote and a steep decay with increased pacing rate, whereas CV curves showed a reduced CV asymptote and a high range of decay values. Moreover, restitution differences within the atrial body at the posterior and anterior wall were compared, since literature reports revealed inconclusive results. In this work, the posterior atrial wall was found to contain amplitude and CV restitution curves with higher asymptote and more moderate curvature than the anterior atrial wall. To move beyond the empirically described manually chosen threshold used currently, the parameter space spanned by the fit parameters of the amplitude and CV restitution curves was searched for naturally occurring clusters. While clusters were present, their inadequate separation from each other indicated a continuous progression of the amplitude curves as well as the CV curves with the level of the substrate pathology. Lastly, an easier and faster method to acquire restitution data was proposed that is based on acquisition of the maximum slope and provides comparable information content to a full restitution curve. This work presents two novel methods, the CVAR-Seg algorithm and the inverse double ellipse fit that expedite and refine evaluation of S1S2 protocols and estimation of local CV. Furthermore, this work defines characteristics of pathological tissue that help identify sources of arrhythmia. Thus, this work may help to improve the therapy of AF in the future.
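The mono-exponential restitution fit mentioned above can, in principle, be reproduced with a standard nonlinear least-squares call; the parameterization y(CI) = a − b·exp(−CI/τ) and the synthetic S1S2 data below are assumptions for illustration, not the study's implementation.

```python
# Sketch: fit a mono-exponential restitution curve to (coupling interval, value) pairs.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(x, a, b, tau):
    """Assumed parameterization: asymptote a, decay amplitude b, time constant tau."""
    return a - b * np.exp(-x / tau)

# Synthetic S1S2 data: coupling intervals in ms, e.g. normalized amplitude or CV values
x = np.array([180, 220, 260, 300, 350, 400, 450, 500], dtype=float)
y = mono_exp(x, a=1.0, b=0.8, tau=120.0) + 0.02 * np.random.default_rng(1).normal(size=x.size)

popt, pcov = curve_fit(mono_exp, x, y, p0=[1.0, 1.0, 100.0])
a_fit, b_fit, tau_fit = popt   # fit parameters describing the restitution curve
```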
Abstract:
The electrocardiogram (ECG) in general, and the 12-lead ECG in particular, is one of the most common and widely available digital devices found in clinical facilities to measure the electrical activity of the heart. Therefore, it is considered the gold standard tool for this purpose. It is an inexpensive and non-invasive monitoring device that allows for rapid diagnosis of cardiovascular diseases (CVD). Among the most common CVDs are atrial fibrillation (AFib or AF) and atrial flutter (AFlut or AFl). These two arrhythmias play a central role in the world's healthcare systems, being among the main reasons for hospitalization and responsible for very high costs in all countries. Moreover, even if they are not a direct cause of death, they can lead to multiple complications up to heart failure. For the reasons mentioned above, AFib and AFlut are the focus of this thesis. The content of this thesis is divided into two projects. The overall goal is to develop methods with the help of biosignal processing, electrophysiological simulations, and machine learning to characterize the arrhythmia, support diagnosis, and predict complications or therapy outcomes. In the first project, in silico 12-lead ECGs produced from simulations at multiscale level are used to develop two signal processing algorithms for the characterization of several AFlut mechanisms: individual component and spatial reduced recurrence quantification analysis (icRQA and srRQA, respectively). Moreover, an analysis of the influence that the atrial and torso models have on the cardiac simulation results, and thus on the resulting ECGs, is described. The findings from these two previous analyses are incorporated into the final study of the project: hybrid (in silico plus clinical data) and feature-based machine learning discrimination of three main AFlut categories (cavotricuspid isthmus-dependent, peri-mitral, and other left atrium AFlut classes). The two RQA algorithms allowed us to extract relevant features for AFlut differentiation. The analysis of the models' influence suggested that many atrial geometries should be used in the computational framework to avoid overfitting, which would render such in silico data unusable in clinical practice. The final hybrid classifier demonstrated how an automatic and non-invasive discrimination of different AFlut mechanisms is possible using appropriate features and computational simulations while taking into account the findings of the previous studies. The second project aims at estimating the location of AFib drivers with the surface ECG. Rotors and focal sources are simulated and considered as AFib drivers. A machine learning approach trained only on in silico 12-lead ECGs is implemented to discriminate between AFib drivers located near the pulmonary veins (PVs) and those in extra-PV atrial areas. Moreover, the success of acute AFib termination by ablation procedure is studied and linked to the clinical relevance that such a classifier may have in clinical practice. The last study of this second project aims at the prediction of one of the AFib complications (i.e., heart failure) using clinical single-lead ECG signals. Machine learning enabled the identification of AFib drivers located near the PVs, also suggesting that PV isolation (PVI) is the most suitable therapy to terminate the arrhythmia in such cases. On the contrary, when proceeding with PVI for AFib drivers located outside the PV areas, the arrhythmia did not terminate.
In these cases, physicians should plan further ablation procedures. Moreover, the use of a classifier trained only on simulated data and proving effective on clinical test data may open the door to the use of in silico data for machine learning. To conclude, the successful prediction of AFib-induced heart failure has proven the existence of a link between some AFib cases and this serious complication, thus providing physicians with a tool to recognize when urgent action is needed to reduce patient safety risks. In all studies in which in silico ECGs are used to develop and tune the machine learning algorithms, tests on clinical data are performed to demonstrate the real applicability of these methods in healthcare. Advantages compared to existing approaches are discussed, and all the studies have been published, or are under review, in peer-reviewed journals or in a conference proceeding. The results from the two projects demonstrate how simulated data can be used to develop and improve adequate ECG signal processing methods and how diagnosis and therapy planning can be supported. Furthermore, the potential of the combination of simulations and machine learning for overcoming the problem of clinical data not being available at large scale is demonstrated. The proposed methods can be used to support ablation procedure planning, arrhythmia diagnosis, complication prediction, and invasive procedure time reduction, and are therefore likely to improve patient outcome.
Abstract:
Making the most accurate possible prediction of a real process with a model is currently a major challenge in all areas of medicine. This also applies to cardiology [1]. One of the main motivations driving the development of computer-aided modeling and simulation in cardiology is the possibility of testing tailored therapies virtually before applying them to patients [2]. Another is, for example, the possibility of merging experimental and clinical data [1], which may have been collected in vivo, in vitro, or ex vivo. With a suitable quantitative analysis, accurate in silico models can be created in this way [1]. Highly developed models allow the observation of any process of interest without being bound by ethical constraints. Successful simulations thus allow predictions of possible disease progressions. Patient-specific computer simulations go one step further: they attempt to create a digital twin with the help of patient data. For example, a three-dimensional mechanical model of the heart can be created from magnetic resonance imaging (MRI) data. With additional electrocardiogram (ECG) and 4D flow MRI data, the computer model can be further personalized and thus allows first conclusions about diseases [2]. Such a patient-specific computer model of the heart is also being developed, among others, at the Institute of Biomedical Engineering (IBT) of the Karlsruhe Institute of Technology (KIT). This work is intended to contribute to heart valve modeling. The following section describes the current state of the model and finally presents the goals of this work.
Abstract:
Although the 3-D anatomy of the heart can be obtained from computed tomography or magnetic resonance imaging, the fiber orientation of cardiac muscle, which significantly affects the electrophysiological characteristics of the heart, can hardly be obtained in vivo. In this work, a highly automated pipeline for annotating fibers and anatomical regions in both atria is introduced. Using k-means clustering, openings of the atria (e.g., pulmonary veins, mitral valve) were identified and given as boundary conditions for solving nine Laplace problems with Dirichlet boundary conditions. A rule-based method was used to derive the fiber orientation and the annotation of the anatomical regions from the Laplace solutions. Three atrial models from different sources were used to benchmark the pipeline. The calculated fiber arrangement was regionally compared by visual inspection and faithfully reproduced clinical and experimental data from the literature. Furthermore, a Gaussian process together with a rule-based method is presented that can automatically fit a right atrium to a given real left atrium. Finally, the local activation time maps computed from simulations during sinus rhythm were compared to assess the influence of different combinations of left and right atria. It could be seen that the fast conduction regions and interatrial bridges have a strong influence on the propagation of electrical stimuli.
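At the core of such Laplace-Dirichlet rule-based pipelines is the repeated solution of a Laplace problem with Dirichlet boundary conditions on the atrial mesh; the gradients of these solutions are then turned into fiber directions by rules. A simplified sketch using a combinatorial graph Laplacian (the actual pipeline may use finite element operators) is shown below; names and the toy mesh are assumptions.

```python
# Sketch: solve one Laplace problem with Dirichlet boundary conditions on a mesh graph.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_laplace(n_nodes, edges, dirichlet_idx, dirichlet_val):
    """edges: (m, 2) array of node index pairs; returns the scalar field on all nodes."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    A = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                      shape=(n_nodes, n_nodes)).tocsr()
    L = (sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsr()   # graph Laplacian
    free = np.setdiff1d(np.arange(n_nodes), dirichlet_idx)
    u = np.zeros(n_nodes)
    u[dirichlet_idx] = dirichlet_val
    rhs = -L[free][:, dirichlet_idx] @ u[dirichlet_idx]
    u[free] = spla.spsolve(L[free][:, free].tocsc(), rhs)           # solve interior nodes
    return u

# Toy example: a chain of 5 nodes with u = 0 at one end and u = 1 at the other
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
u = solve_laplace(5, edges, dirichlet_idx=np.array([0, 4]), dirichlet_val=np.array([0.0, 1.0]))
# The gradient of such solutions is subsequently used to derive local fiber directions.
```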
Abstract:
Classic otoscopes are modern diagnostic instruments that allow users to examine and evaluate human or animal ears, including their tympanic membranes. In the last years and along with new technologies like the development of the laser after 1960, traditional otoscopes were further improved by the introduction of and combination with other optical systems and technologies. One advance in otoscope technology also involved the introduction/functional integration of optical alignments and system components that were initially used in (surgical) endoscopes, and which gave rise to new devices called “otoendoscopes” or (sometimes) “telescope otoendoscopes” (Olympus America, 2012) – a device classification that fuses and unites classic otoscopes with systematic concepts of endoscopic systems. Those otoscope setups offer distinct advantages compared to traditional otoscopes, like a larger Field of View (FOV) and Depth of Field (DOF), tailored (variable) optical magnifications, refocus abilities for an extended focus/object distance range in front of the otoscope (by simple axial shifts of system components like the ocular), or an additional exit pupil stability (i.e., improved lateral tolerances in case of human eye/head movements). This master’s thesis aims to pick up an older, already drafted HEINE visual otoendoscope design and carry out a feasibility study regarding the realizability of a hybrid otoendoscope: a device which features not only the “classic” visual system part (to be used by the examiner’s eye) and merges in components from endoscopic system concepts, but additionally contains a second, digital beam path for a simultaneous camera-based examination. Based on the already existing (visual-only) otoendoscope design, such a hybridized system offers further possibilities to create, share and archive image data in parallel to the common, visual in-ear examination and generates additional data for, e.g., digital patient recordings during a regular examination scenario. At the end, the thesis’ development project outlines possible design limitations, conceptual tradeoffs, and provides answers to three hypothetical questions: 1. Is the system design to be realized with a simple (optical) fix-focus design approach? 2. Does the system solution appropriately yield the desired simultaneous and synchronized observation of the same in-ear plane by the eye and the camera chip? 3. Are exactly all given/applied (optical) design requirements fulfilled and covered by both beam paths simultaneously? To approach the realization of this new and unique hybrid otoendoscope concept, different previously unknown design requirements (prospective benchmarks) for the new prototype’s FOV diameter, its resolution and its DOF capabilities are first derived and set according to analyzed reference data of comparable, already existing commercial oto- and otoendoscope solutions. The thesis project then continues with the modification of HEINE’s existing visual otoendoscope layout and the optics design of the new hybridized otoendoscope system, such that, at the project’s end, the assembled prototype can finally be tested, evaluated and validated via the defined (contrast) evaluation methods and with respect to the settled design specifications.
The determination of the design requirements is based on different image characterization methods (pixel data evaluation and histogram/edge contrast criteria) which are accordingly developed and introduced to characterize the commercial reference devices with respect to their imaging performance of defined spatial structures in front of the otoscope tips. The FOV size determination is done via a pixel-based evaluation of the image (area) with reference to the overall image size and a given spatial frequency component on the test chart. The contrast criteria, relevant for the benchmarking of the devices’ resolution and DOF performances, are based on the assessment of spatially aligned, rectangular structures on a USAF 1951 resolution test chart whose average pixel (area) gray scale values (histogram contrast criterion) or gray scale line profiles of the edges between two adjacent rectangles (edge contrast criterion) yield the respective contrast characteristics. As for the project results, the eventually optimized and assembled system solution could not be realized via a fix-focus design approach since the performance requirements could, during the optical system simulation, not be fulfilled for the given otoendoscope design concept. Instead, a suitable system solution was found via a fixed intermediate image (plane) position buried inside the system, and whose information content changes, depending on the axial shifts (axial positioning) of the otoendoscope’s relay group, thus forwarding the addressed object plane information to the static intermediate image plane. This intermediate image plane is optically accessed by both beam paths, such that a synchronized observation of the same object plane is guaranteed and the hybrid otoendoscope’s functionality is realized. However, the characterization results show that, although the “beam path synchronization” (device hybridization) was achieved, the prototype’s performance only satisfies one of three optic design requirements (the FOV requirement), while the required resolution performances and DOF depths could not entirely be fulfilled. Those device benchmarks result, additionally supported by simulated opto-mechanical tolerances of the designed camera objective, primarily from given (project-related) design and manufacturing constraints, such that, e.g., for the digital beam path, more than 80 % of the 100 simulated, randomly manufactured and assembled optical alignments (Monte Carlo systems) did not yield sufficient resolution and DOF outcomes. Nonetheless, the basic device functionality proved to be feasible/realizable, and was, in principle, implemented, such that – for the project outlook – the hybrid otoendoscope prototype concept needs to be refined via the adjustment of the underlying (optics) design parameters and requirements. Additionally, prospective researchers will also have to address, contact and question the actual users directly to clarify further, central improvement topics which primarily involve the device’s usability, the refined and user-oriented parameter weighting of the device’s (optical) design concept, and the device’s (optical) calibration process (which, so far and as hypothetically proposed, could not sufficiently be achieved with reference to the eye’s retinal image only).
Abstract:
The availability of wearable medical devices on the market is rapidly increasing, confirming the trend towards unobtrusive wearable home care, but bulky and obtrusive cardiorespiratory monitoring is still the gold standard. A possible explanation could be the lack of knowledge regarding the best placement on the body of new multimodal sensor patches. This research project aims to contribute to the field of multimodal wearable monitoring devices by targeting this lack of knowledge regarding their optimal placement on the body. Therefore, the best position for each measured signal in terms of signal quality is assessed. The studied signals are ECG, IP, and heart and lung sounds. This project proposes a performance mapping of the upper body through a study based on a sensor patch prototype designed by the TU Berlin EMSP team. A study including fifteen subjects and comparing nine upper body patch positions was conducted. The influence of movement on all signals measured by the patch device was also studied. Six minutes of data were retrieved for each position, meaning a total of 54 minutes of raw data per subject. The quality of each signal at each position and for phases with and without movement was then quantified via defined performance metrics. Their condensation to one metric per signal then enabled a first verdict about the optimal positioning for each assessed biosignal. It was found that the ECG signal performed best on the lower left sternal border in all cases, while the IP's maximum quality was reached between the 1st and 2nd ICS near the right midclavicular line without movement and on the least impacted position in the presence of movement. The stethoscope best detected heart sounds in direct heart proximity, and respiratory sounds were best monitored on the upper right part of the chest. The IMU signals were all best on the lower chest positions in the presence of movement, as these positions are the least affected. Without movement, heart sounds could best be measured on the left side either directly near the heart (AccZ and GyrY) or at a lower point (GyrX), while the respiratory vibrations were best detected on the sternum for the AccZ and on the right and left lower chest for the GyrX and GyrY signals, respectively. These results indicate that there is no universal best position for all measured signals, but that the placement of such patch devices should be application-specific, determined by the most important signals in each case.
Abstract:
According to the World Health Organization (WHO), cardiovascular diseases (CVDs) are the leading cause of death globally. In clinical practice, cardiac ultrasound (US) imaging is widely used by cardiologists to determine clinically relevant parameters such as ejection fraction (EF) or the thickness of the myocardium for the diagnosis of CVDs. Measuring these parameters manually is tedious and error-prone. Thus, in this thesis, a deep learning approach for left ventricle (LV) segmentation in three-dimensional (3D) transesophageal echocardiography (TEE) images is developed and the effect of incorporating prior anatomical information on the segmentation result is investigated. This thesis is one of the first studies to demonstrate TEE image segmentation with the U-Net convolutional neural network architecture. To this end, the hyperparameters of the segmentation pipeline are optimized using a random search. The main contribution of this thesis is the evaluation of the effect of global image rotations on the segmentation results. Firstly, it is shown that a normalization of the scan angle (a global rotation around the z-axis) of TEE images can improve the segmentation results significantly for images with a scan angle larger than 60°. Secondly, it is shown that the LV Dice Similarity Coefficient (DSC) can be significantly improved if the images are rotated based on prior knowledge about anatomical structures. The network's performance is thereby robust with respect to imprecisely defined axes if the standard deviation of the angles at which the rotation matrix is tilted is less than 15°. Furthermore, a convolutional neural network is trained on images that are cropped around a given region of interest. It is shown that for some parameter sets, inference time was 1.5 times faster without a decrease in DSC, but for other settings, the cropping of images affected the quality of the LV segmentation negatively. The best experimental setup in this thesis achieved a DSC of 0.927, a Hausdorff Distance (HD) of 0.115, and a Normalized Surface Distance (NSD) of 0.969 for the left ventricular cavity on the test set. For this, the network was trained on 184 manually annotated and 982 automatically annotated images of 84 patients.
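The Dice similarity coefficient used as the main metric above is DSC = 2|A∩B| / (|A| + |B|) for binary masks A and B; a small sketch (mask shapes are arbitrary assumptions):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks of any shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example with two overlapping 3D masks
a = np.zeros((10, 10, 10), dtype=bool); a[2:7, 2:7, 2:7] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:8, 3:8, 3:8] = True
print(dice(a, b))   # 0.512 for this example
```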
Abstract:
Today, most people do not know shape memory alloys (SMAs), even though they have many use cases, e.g., in the medical field, aerospace, and the automotive industry. This research project investigates the possibility of using miniaturized SMA engines for drilling in medical applications. Specifically, it modifies an existing concept to meet the technical requirements for drilling in medical applications. In this context, an SMA engine is defined as an engine that transforms heat through SMAs into the mechanical rotation of a shaft. To test the hypothesis that miniaturized SMA engines can be used for drilling in medical applications, an evaluation of already existing engine concepts was made. This evaluation helps to select the most promising engine concept to meet the requirements. Thereafter, a mathematical model of the engine's specifications was created. In addition to the mathematical model, experiments with a prototype were conducted to determine the engine's real behaviour. The results of the calculations refute the hypothesis of a higher efficiency, because they indicate an efficiency decrease of 88% compared to common electrical micro motors. On the other hand, the results indicate that a miniaturized SMA engine has a huge advantage in high torque compared to electronic micro motors. The research project indicates that the electrical connection and the wire's force transmission are the main challenges to focus on. On the one hand, the Nitinol wire is pulled out of the electrical clamping connection because of the tensile stress required to stretch it. On the other hand, the Nitinol wire must be guided around the shaft because it tends to shift in the axial direction. Even though, due to these challenges, a working prototype could not be built in this research project, solutions to overcome them were selected methodically. The results suggest that an SMA engine can be designed for drilling in medical applications, but cannot supersede electronic micro motors. The downsizing of the modified SMA engine is limited because of the mechanical parts that are needed to transmit the wire's contraction to the rotation of a shaft. Furthermore, further downsizing lowers the output power specifications because of their dependency on the distance between the wire and the shaft axis. Regardless, this research reveals that, because of the high torque SMA engines provide compared to electric micro motors, future research can investigate their use for torque-dependent applications, e.g., a grabbing tool.
Abstract:
Ventricular ectopic beats are caused by foci that spontaneously depolarize in the ventricles and may induce life-threatening tachy-arrhythmia. The common clinical therapy is to localize the focus and then apply catheter ablation to it. Currently used localization methods are either invasive or require information about the patient-specific heart geometries that are costly to generate. Therefore, in order to localize the ectopic foci non-invasively and without the requirement of patient-specific geometries, a deep learning approach is proposed in this work. This approach is trained on a simulated dataset containing 1.8 million body surface potential maps (BSPMs) of ventricular ectopic beats. BSPMs are electrocardiogram (ECG) signals that are recorded by a matrix of electrodes on the patient's torso. The proposed localization approach is threefold. After preprocessing of the BSPMs, a convolutional neural network (CNN) is trained to estimate the beginning and end of the activation window. Based on these predictions, the BSPMs are clipped and resampled to a fixed length. They are then passed into a localization neural network (NN) which is trained to predict the coordinates of the ectopic foci. For the localization approach, three different NNs are compared that mainly differ by their task formulation. One is formulated as a multi-task problem combining a regression with two binary classifications, the second as a fuzzy classification over barycentric coordinates and the third combines two binary classifications with a fuzzy classification over combined barycentric coordinates. The results on the simulation dataset are precise: the NN to predict the beginning and end of the activation window achieves results with a mean test error smaller than 2 ms. The results of that network on the clinical dataset are plausible, but the ground truth is not known. All three localization networks achieve a mean geodesic validation error smaller than 2 mm. As they stand, the results delivered by the localization NNs on a clinical dataset are not sufficiently accurate, but approaches to solve this in future work are proposed. Since the application of (deep learning) algorithms in medicine poses many safety requirements, an approach for the explanation of the implemented NNs was applied: saliency maps, which quantify the influence of the input features with respect to a specified NN output. The results conform to human intuition: the information of electrodes on the front of the torso is more influential to the NNs' predictions than the information of electrodes on the back of the torso. Additionally, the information contained in the first time samples of the BSPMs affects the localization predictions more than that of the later time samples. Therefore, this work should provide a suitable starting point for further investigations of the explainability of the proposed localization approach. To summarize, the proposed deep learning approach delivered precise results for the localization of ventricular ectopic foci on a large simulated dataset. The implemented NNs were only trained on simulated data and their application to a clinical dataset revealed that the current NNs have to be further optimized for clinical data. However, the algorithm does not require patient-specific heart geometries, which is a novelty in this field and overcomes shortcomings of existing methods. For the first time, an explainability method was applied to an NN-based localization approach.
The findings complied with human intuition: the first time samples and the electrodes on the front of the torso are most influential for the NNs' predictions. The combination of NNs and explainability methods enables the development of an accurate and non-invasive localization procedure that accelerates the treatment of the patient and improves the outcome.
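Gradient-based saliency maps of the kind described here are commonly computed as the absolute gradient of one network output with respect to the input; a hedged PyTorch sketch follows, where the model and the BSPM tensor layout are placeholders rather than the networks of this work.

```python
# Sketch: gradient-based saliency map for an arbitrary localization network.
# `model` and the BSPM tensor layout (1, electrodes, time) are assumptions.
import torch

def saliency_map(model, bspm, output_index=0):
    """Absolute gradient of one output coordinate w.r.t. the input BSPM."""
    model.eval()
    x = bspm.clone().detach().requires_grad_(True)
    out = model(x)
    out.flatten()[output_index].backward()     # gradient of one predicted coordinate
    return x.grad.abs().squeeze(0)             # saliency per electrode and time sample

# Usage idea: average the saliency over time to rank electrodes by influence,
# e.g. saliency_map(model, bspm).mean(dim=-1)
```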
Abstract:
An electrophysiological study supports the diagnosis of cardiac arrhythmias. The measured intracardiac electrograms provide information about the excitation propagation and the tissue. A distinction is made between unipolar electrograms (uEGMs) and bipolar electrograms (bEGMs). For uEGMs, the potential difference between an electrode inside the heart and a reference electrode outside the heart is measured. For bEGMs, the potential difference between two electrodes inside the heart is measured. Currently, mainly bEGMs are used in clinical practice, since they have several advantages over uEGMs. Previous work showed for certain aspects that uEGMs and bEGMs can provide different information. In this work, the morphology of both electrogram types was examined for different wave propagation phenomena. Based on the characteristics of the morphology, the two electrogram types were then compared. The wave propagation phenomena were the healthy reference, slow-conducting areas, lines of block, and wavefront collisions. The data used were clinical measurements in the left atrium of patients with atrial flutter. First, the segments of the uEGMs and bEGMs containing activity were detected and extracted using the nonlinear energy operator. For valid active segments, features as well as cross-correlations were computed to compare the morphology. Finally, the detected morphologies were divided into groups using clustering algorithms. The results showed that the morphology of the electrograms differed between the wave propagation phenomena. However, the transitions of the features were gradual and the feature ranges overlapped for the different wave propagation phenomena. The healthy reference mostly differed from the other wave propagation phenomena. The slow-conducting areas, the wavefront collisions, and the lines of block could not be clearly separated from each other based on the features. The comparison of the uEGMs and the bEGMs revealed differences in the characteristics. This confirmed that uEGMs and bEGMs contain different information. With the help of clustering, different morphologies could be grouped and typical morphologies identified. The computed clusters did not coincide with the wave propagation phenomena.
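The nonlinear energy operator (NLEO) used for activity detection is ψ[n] = x[n]² − x[n−1]·x[n+1]; active segments are typically found by thresholding a smoothed version of its output. A minimal sketch, where the window length and threshold are assumptions:

```python
import numpy as np

def nleo(x):
    """Nonlinear (Teager) energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def active_segments(signal, fs, win_ms=20, rel_threshold=0.1):
    """Boolean mask of 'active' samples derived from the smoothed NLEO output."""
    psi = np.abs(nleo(signal))
    win = max(1, int(win_ms * 1e-3 * fs))
    smoothed = np.convolve(psi, np.ones(win) / win, mode="same")
    return smoothed > rel_threshold * smoothed.max()
```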
Abstract:
Atrial fibrillation is the most common sustained type of arrhythmia in adults and affected more than 43 million people worldwide in 2016 [1]. Current diagnostic and therapeutic methods include performing a catheter procedure to gain insights into the state of the tissue, which support the physician in planning the ablation therapy. For this purpose, the atrial tissue is stimulated extracellularly with a biphasic pulse, with the result that a depolarization wave propagates over the left atrium. Subsequently, the activation times of the tissue are recorded via the measurement electrodes of the catheter. In this work, a fast, simplified model, the so-called fast marching method (FMM), was used to model the depolarization wave; it reproduces the anisotropic propagation of the wavefront with an eikonal model. In a simulation study, local activation times (LATs) acquired in silico were fitted to the LATs computed by the FMM using a nonlinear, iterative trust-region-reflective algorithm, which enabled a reconstruction of the tissue properties. The goal of this work is the determination of the conduction velocity, the anisotropy ratio, and the fiber direction of the tissue. For this purpose, the robustness and precision of the parameter estimation using the FMM were analyzed and evaluated. The accuracy of the FMM was first evaluated by varying the grid resolution of the FMM and the initial values of the optimization algorithm. It was shown that the grid resolution of the FMM correlates with the simulation time and the parameter quality, so that an optimal compromise between parameter quality and computation time was derived from these results. Furthermore, the initial value of the optimization algorithm correlates with the value of the cost function. It was found that initial values below the underlying tissue properties produce a smaller error in the parameter estimation. Subsequently, the robustness of the FMM was assessed in a simulation study with different tissue and conduction properties. Across all defined cases and without preprocessing, the longitudinal CV was estimated with a maximum deviation of 170 mm and the anisotropy factor with a maximum deviation of 1 from the ground truth. Nevertheless, this work showed that the FMM enables a robust and precise reconstruction of the tissue properties.
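The parameter estimation described above, fitting eikonal (fast marching) activation times to measured LATs with a trust-region-reflective least-squares solver, could in outline look like the following sketch; `eikonal_lat` is a placeholder for the forward model, and the bounds and start values are assumptions, not the values used in this work.

```python
# Sketch: reconstruct conduction velocity, anisotropy ratio and fiber angle by
# fitting simulated to measured local activation times (LATs).
# `eikonal_lat(params, electrode_positions)` is a PLACEHOLDER for a fast
# marching / eikonal forward model and must be supplied by the user.
import numpy as np
from scipy.optimize import least_squares

def fit_tissue_parameters(measured_lats, electrode_positions, eikonal_lat,
                          x0=(0.5, 2.0, 0.0)):
    """x = (longitudinal CV in m/s, anisotropy ratio, fiber angle in rad); all assumed."""
    def residuals(x):
        return eikonal_lat(x, electrode_positions) - measured_lats

    result = least_squares(
        residuals, x0,
        bounds=([0.1, 1.0, -np.pi / 2], [2.0, 10.0, np.pi / 2]),
        method="trf",                  # trust-region-reflective
    )
    return result.x
```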
Abstract:
Atrial fibrillation (AFib) is one of the most common cardiac arrhythmias. Even though AFib is not a direct cause of death, it can cause fatal consequences, such as stroke or heart attack, and it affects the patient's quality of life considerably. Two mechanisms identified as atrial fibrillation drivers that will be studied in this work are focal and rotational activities in the atria. Currently, AFib is treated through ablation of the affected tissue. This process often involves complex procedures to locate the cause of AFib. Localizing and characterizing the driving mechanism of AFib beforehand could help physicians to accelerate the catheter ablation, thus reducing both costs and the risk of complications during the process. This project analyzed 12-lead ECG signals obtained by simulating rotors and focal sources driving AFib on a computer model of the atria. We started with two atrial models and eight torso geometries to simulate AFib drivers. Throughout the analysis process, we utilized several signal processing methods, a greedy forward feature selection, a randomized cross-validation technique, and a decision tree as well as a neural network as classifiers. For the localization, we focused on three areas of the atria: the pulmonary veins (PV), the non-PV areas of the left atrium (LA), and the right atrium (RA). Our final and most important classifier was a hierarchical classifier considering both drivers and all three locations, which averaged 57% accuracy. It proved particularly strong in discerning rotational activities in the PV. The findings showed that rotors and focal sources can be distinguished precisely and rotors can be located fairly well, whereas focal sources prove more complicated to localize when using a machine learning approach on associated 12-lead electrocardiogram (ECG) signals. The features and methods presented in this work can be used to ultimately help physicians classify a given AFib case in clinical practice with little risk and time.
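Greedy forward feature selection with a decision tree and randomized cross-validation, as mentioned above, can be sketched in a few lines of scikit-learn; the stopping rule, split settings and classifier parameters below are assumptions, not the project's configuration.

```python
# Sketch of greedy forward feature selection with a decision tree classifier
# and randomized (shuffle-split) cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score, ShuffleSplit
from sklearn.tree import DecisionTreeClassifier

def greedy_forward_selection(X, y, max_features=10):
    """Iteratively add the feature that most improves cross-validated accuracy."""
    selected, best_scores = [], []
    remaining = list(range(X.shape[1]))
    cv = ShuffleSplit(n_splits=10, test_size=0.25, random_state=0)
    while remaining and len(selected) < max_features:
        scores = {
            f: cross_val_score(DecisionTreeClassifier(random_state=0),
                               X[:, selected + [f]], y, cv=cv).mean()
            for f in remaining
        }
        best = max(scores, key=scores.get)
        if best_scores and scores[best] <= best_scores[-1]:
            break                      # stop once no remaining feature improves the score
        selected.append(best)
        remaining.remove(best)
        best_scores.append(scores[best])
    return selected, best_scores
```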
Abstract:
Optical coherence tomography angiography (OCT-A) is a non-invasive imaging technique that visualizes perfused vessels in vivo. In ophthalmology, it enables the physician to monitor diseases such as age-related macular degeneration and diabetic retinopathy, since these can induce pathological changes in the blood vessel structures of the retina. The assessment of the condition of the blood vessels is usually performed by visual inspection of the en face OCT-A images. However, this examination is subject to the subjective impression of the physician and can be impaired by the complex structures within the retina. For this reason, there is great interest in automatically extracting vessel parameters that allow an objective quantification of the vessel structures. Suitable parameters could simplify the diagnosis and improve the treatment over the course of therapy. Therefore, this thesis presents an algorithm that automatically computes the diameters of the vessels in en face OCT-A images. To this end, the en face OCT-A image is first segmented by applying the vessel detector called Optimally Oriented Flux (OOF) and identifying connected structures. After distinguishing between the blood vessels and the background, the centerline of the individual vessels is extracted using a thinning algorithm. In addition, invalid pixels are detected and the centerline is extended to the ends of the vessels. Finally, the diameter of the vessels is calculated either by expanding a circle at the pixels of the centerline or by computing the distance between the two vessel edges along the direction orthogonal to the structure. The algorithm was tested on in vivo en face OCT-A images, so that no reference data were available and no complete validation could be performed. Instead, a plausibility test was carried out by comparing the diameters of two different layers of the retina (the superficial vascular complex (SVC) and the deep vascular complex (DVC)), which exhibit different vessel structures. Furthermore, the reproducibility of the mean vessel diameter, the vessel density, and the total vessel diameters was investigated for en face OCT-A images of the same eye acquired a few minutes apart....
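The circle-expansion diameter estimate described above is closely related to reading the Euclidean distance transform of the vessel mask at the centerline pixels; a simplified sketch using scipy/scikit-image (an approximation of the idea, not the thesis code):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def centerline_diameters(vessel_mask, pixel_size_um=1.0):
    """Approximate local vessel diameters (in micrometers) along the centerline.

    vessel_mask : 2D boolean array, True inside the segmented vessels
    """
    dist = distance_transform_edt(vessel_mask)   # distance to nearest background pixel
    centerline = skeletonize(vessel_mask)        # 1-pixel-wide centerline
    radii = dist[centerline]                     # radius estimate at each centerline pixel
    return 2.0 * radii * pixel_size_um           # diameter per centerline pixel

# Toy example: a horizontal bar of width 5 pixels
mask = np.zeros((20, 40), dtype=bool)
mask[8:13, 5:35] = True
print(centerline_diameters(mask).mean())         # roughly the bar width (EDT is ~1 px coarse)
```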
Abstract:
Craniosynostosis is defined as the premature fusion of one or more cranial sutures in an infant skull. The prevalence of craniosynostosis is 3.14 to 6 per 10,000 live births. This head deformity results in cranial malformation and can lead to facial asymmetry, as well as functional consequences such as increased intracranial pressure, deafness, visual impairment, and cognitive deficits. Currently, computed tomography (CT) is the primary imaging technique used in craniosynostosis diagnosis. Although CT is an accurate diagnostic tool, it exposes the infant to ionizing radiation. Therefore, we investigated a new, automated, radiation-free, and patient-friendly classification method without CT scans using a CNN. For the classification of craniosynostosis, this project utilized a pretrained ResNet18 network. The images to be classified, based on the imbalanced real data set, were generated from the 3D triangulated surface meshes provided by a 3D photogrammetry scanner at Heidelberg University Hospital. Apart from the real data set, a synthetic data set was created using a statistical shape model. In this project, the cranial information of the 3D head surface was visualized through 2D distance maps. To generate them, we defined an origin, extracted the distances with a triangle-ray intersection algorithm, and assembled the 224×224 pixel images. We conducted a proof-of-concept study on synthetic data and a study on the influence of noise on landmarks, different map representations, as well as scaling. We report that the proof-of-concept study was performed successfully and that the results on real data show an average accuracy of 83.5%. The best approach on the real data used the second type of 2D, non-scaled distance maps. However, the results may be distorted due to overfitting and the small data set. Future studies could include hyperparameter optimization to avoid overfitting and the use of a larger and balanced data set.
Abstract:
Operations on the human eye require distance measurements, for example the eye length from the corneal surface to the retina, the width of the cornea, the curvature of cornea and lens, and the distance between cornea and lens. These parameters can be measured with a Swept-Source Optical Coherence Tomography (SS-OCT) system, which comprises an optical interferometer and a sweeping light source. The price of commercial SS-OCT systems is high, which has limited their widespread use. Replacing the light source with a wavelength-tunable Vertical-Cavity Surface-Emitting Laser (VCSEL) is therefore an interesting alternative: a VCSEL costs a few hundred euros compared with tens of thousands of euros. Besides its price, a current-tuned VCSEL as light source has several further advantages. First, it is safe, since no additional heating setup is required. Second, it is small, so the system can be miniaturized. In this thesis project, the feasibility of a current-tuned VCSEL is investigated. The coherence length of a VCSEL is usually a few meters, which fulfills the requirement for eye geometry. The tuning range of the VCSEL is estimated with a spectrometer while the current is swept. Due to the large integration time, a broader envelope is formed, and the FWHM of the measured spectrum gives a first estimate of the bandwidth. A free-space Michelson interferometer is set up in order to find the best driving function. Since the relationship between the emitted wavelength of the VCSEL and the injected current is nonlinear, driving currents with varying curvature are applied and the interference signal from one fixed position is recorded for each of them. The reflection peak is then calculated in order to find the driving current yielding the highest peak. After finding the best driving current, the interference signal is measured at four different path length positions. These data are used to calculate the change of the axial resolution with respect to the path length difference as well as the phase stability. One article reported that the spatial mode of a VCSEL might split in the far field when the VCSEL is driven with different constant currents. If this effect actually occurred, it would seriously affect the coupling efficiency of the light into a fiber in the future. Therefore, another measurement is carried out by splitting the light into two paths: the first is measured directly and the second passes through a pinhole before being measured. The ratio of both is calculated to determine whether it is constant. The results suggest that there is no mode-splitting effect.
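To illustrate how a reflection peak and an axial-resolution estimate can be read from a recorded interference signal, the following numpy sketch (illustrative only; the sampling parameters and the synthetic fringe are assumptions) computes an A-scan as the magnitude of the Fourier transform of the fringe and reports the strongest peak and its full width at half maximum.

```python
import numpy as np

def a_scan(fringe: np.ndarray):
    """Return the A-scan magnitude, the index of the strongest reflection peak,
    and a crude FWHM of that peak (in bins) as a proxy for axial resolution."""
    fringe = fringe - fringe.mean()            # remove the DC background
    window = np.hanning(len(fringe))           # suppress spectral leakage
    magnitude = np.abs(np.fft.rfft(fringe * window))
    peak = int(np.argmax(magnitude[1:])) + 1   # skip the residual DC bin
    half = magnitude[peak] / 2
    above = np.flatnonzero(magnitude >= half)
    fwhm_bins = above.max() - above.min() + 1
    return magnitude, peak, fwhm_bins

# synthetic fringe from a single reflector
k = np.linspace(0, 1, 2048)                    # swept wavenumber axis (a.u.)
signal = 1.0 + 0.5 * np.cos(2 * np.pi * 150 * k)
_, peak_bin, fwhm = a_scan(signal)
print(peak_bin, fwhm)                          # peak near bin 150
```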
Abstract:
Optical Coherence Tomography Angiography (OCTA) is a promising imaging tool for disease diagnosis. Many studies have used quantification methods for an objective assessment of OCTA data. However, the loss of information in OCTA images due to the segmentation of vessels has not been explored. This thesis therefore uses Differential Box Counting (DBC), a texture-based quantification tool that does not require segmentation, to assess the loss of information due to segmentation. It investigates the differences in P-values and ICC values between three different fractal dimension measurements: the box counting dimension, the information dimension, and differential box counting. The box counting and information dimension measurements require a segmentation preprocessing step. The data comprise OCTA images of 9 valid eyes at two time points spaced closely enough that no changes to the vasculature are expected. All fractal dimension methods, including the non-segmentation-based one, were found to be statistically significant, although the non-segmentation-based algorithms were found to have higher ICC values. DBC is thought to be more sensitive to changes in the OCTA images than the segmentation-based algorithms. Thus, variances in the OCTA images as well as artefacts in the OCTA image affect the non-segmentation-based measurements more than the segmentation-based measurements.
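The differential box counting measure used above can be summarized in a few lines; the sketch below is a straightforward reference implementation of the standard DBC formulation (box sizes and the test image are illustrative), not the thesis code.

```python
import numpy as np

def differential_box_counting(image: np.ndarray, box_sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the fractal dimension of a grayscale image with differential
    box counting: for each box size s, the intensity range inside every s x s
    block is converted into a box count, and the dimension is the slope of
    log(N) over log(1/s)."""
    image = image.astype(float)
    gmax = image.max() if image.max() > 0 else 1.0
    counts = []
    for s in box_sizes:
        h = gmax * s / image.shape[0]          # box height in intensity units
        n = 0
        for i in range(0, image.shape[0] - s + 1, s):
            for j in range(0, image.shape[1] - s + 1, s):
                block = image[i:i + s, j:j + s]
                n += int(np.ceil(block.max() / h) - np.ceil(block.min() / h)) + 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
print(differential_box_counting(rng.random((256, 256)) * 255))
```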
Abstract:
Computer models can contribute to a better understanding of cardiac physiology and thereby help to find suitable therapies or to improve existing ones. These models can furthermore be used to investigate the manifold interactions between the heart and the circulatory system. At the Karlsruhe Institute of Technology (KIT), several such models have already been developed that simulate different mechanisms of cardiac function, for example deformation, blood flow, or electrical activation and repolarization. To represent the physiological conditions approximately realistically, it is necessary to understand how these models interact. The present work deals with models of mechanics and of fluid dynamics. The starting point was the following: in the existing parametric circulation model (mechanics), the mitral valve implementation is based on parameters that were originally optimized for the aortic valve. The purpose of this work was to adapt the mathematical description of the aortic valve to the mitral valve. To this end, a simplified model of the circulatory system was implemented comprising the left atrium, the mitral valve and the left ventricle. The pressure course in the left ventricle was reproduced with the mechanical simulation while prescribing different atrial pressures. Subsequently, the results of the mechanical simulation were used as the basis for the fluid dynamic simulation. By comparing the pressure results of the two simulations, the valve model was evaluated and the parameters of the mitral valve description were optimized. The underlying assumption was that the fluid dynamic simulation represents the valve behaviour approximately realistically. The results show that changing the mitral valve implementation did influence the mechanics, although less than expected. These changes had a considerably stronger influence on the results of the fluid dynamic simulation. It can be concluded that adapting the mitral valve implementation has substantially improved the interplay of the two simulations. Even though these models cannot fully describe the effects and interactions of the cardiovascular system, this work has contributed to aligning the mechanical and the fluid dynamic model and to better understanding their interaction.
Abstract:
Diabetes Mellitus is one of the rising chronic diseases of our time. The need for non-invasive glucose sensors compatible with continuous monitoring of glucose levels, and thus close monitoring of Diabetes Mellitus, is difficult to fulfill. This thesis aims to investigate a specific approach that could enable non-invasive glucose sensing in the long term. We do so by designing, constructing, and characterizing an optical detection system to perform non-invasive spectroscopy of tissue. The system employs a quantum cascade laser as a mid-infrared excitation source, responsible for inducing a thermal change in the specimen under investigation. This thermal change varies the thermal radiation emitted by the specimen, which an infrared sensor subsequently detects. The temperature increase experienced by the sample is due to the radiationless relaxation of the previously excited vibrational states, which are unique to each specimen and thus offer the possibility to identify molecules with high specificity. The system's performance is tested on several phantoms. All results show a high signal-to-noise ratio (SNR), with the signals of interest being three to four orders of magnitude larger than the dark noise. The spectra show robustness against external and internal factors, with deviations from the mean of less than 2%, a value sufficiently low for the pursued application. Furthermore, the acquisition of spectra at the fastest scanning speed available is demonstrated. Another feature that we evaluate is the linearity of the signal's amplitude with the analyte's concentration. The preliminary study of the method's linearity is promising, but further tests are necessary for a solid validation of the implemented system as a spectrometer.
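The two reported evaluations, the SNR relative to the dark noise and the linearity of the amplitude with concentration, can be illustrated with a short numpy sketch; the SNR definition and the example data below are assumptions for illustration only.

```python
import numpy as np

def snr_db(signal: np.ndarray, dark: np.ndarray) -> float:
    """Signal-to-noise ratio in dB: signal peak-to-peak amplitude over the
    standard deviation of a dark (excitation off) recording."""
    return 20 * np.log10(np.ptp(signal) / dark.std())

def linearity(concentrations: np.ndarray, amplitudes: np.ndarray):
    """Least-squares line through amplitude vs. concentration and its R^2."""
    slope, intercept = np.polyfit(concentrations, amplitudes, 1)
    fit = slope * concentrations + intercept
    ss_res = np.sum((amplitudes - fit) ** 2)
    ss_tot = np.sum((amplitudes - amplitudes.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# illustrative data only
rng = np.random.default_rng(0)
dark = rng.normal(scale=1e-4, size=2000)                 # dark noise recording
sig = np.sin(np.linspace(0, 2 * np.pi, 2000))            # signal of interest
print(f"SNR: {snr_db(sig, dark):.1f} dB")

c = np.array([0.0, 2.5, 5.0, 7.5, 10.0])                 # analyte concentration (a.u.)
a = np.array([0.1, 2.6, 5.2, 7.4, 10.3])                 # measured amplitude (a.u.)
print(linearity(c, a))
```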
Abstract:
Atrial fibrosis is a condition in which collagen is deposited in the atrial tissue; it can sustain electrical rotors and cause atrial fibrillation (AFib). AFib is a possible cause of stroke, heart failure and sudden cardiac death. The number of patients with AFib is expected to double in the coming decades, and to date only 50 % of patients with AFib can be treated effectively. The goal of this thesis is to investigate whether the degree of fibrosis in the atria can be detected by analyzing certain features of the electrocardiogram (ECG). The ECG is used as it is a non-invasive and cheap examination procedure, and the 12-lead ECG used in this thesis is available in every hospital. Based on a shape model [1], various shapes are created from a mean shape to determine whether changes of the features extracted from the ECG can be linked to the change of the fibrotic volume fraction or whether they are caused by differences between the atrial shapes. Then, parts of the atrial volume of the shape model are replaced with fibrotic tissue in steps of 5 % up to 45 % of the total atrial tissue. Based on an atrial surface mesh and a torso mesh, a transfer matrix is calculated and the 12-lead ECGs are determined. For this thesis, 80 shape variations were used with ten degrees of fibrosis on each shape. The ECGs are then analyzed and certain features, such as the P-wave terminal force in lead V1 (PTFV1), the P-wave duration and the volume of the atria, are extracted. The results of this thesis show a correlation of the extracted features with the degree of fibrosis. Depending on the regression model that is used, the R-squared value varies between 0.49 and 0.61 (with 1 being a perfect match between the predicted values and the true results) with a root mean square error (RMSE) of 5.89 % to 6.73 %. The results imply that the ECG features change due to adding fibrosis to the different shapes. The PTFV1 value and the P-wave duration both increase due to the slower excitation of the fibrotic tissue compared to the surrounding tissue.
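A hedged sketch of the regression step, estimating the fibrotic volume fraction from extracted P-wave features and reporting R-squared and RMSE with scikit-learn, is given below; the features, data and model are placeholders rather than the thesis setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# X: one row per simulated ECG with placeholder P-wave features
#    (e.g. PTFV1, P-wave duration, atrial volume); y: fibrosis in percent.
rng = np.random.default_rng(1)
X = rng.normal(size=(800, 3))                               # placeholder features
y = 20 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=5, size=800)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))            # RMSE in percent fibrosis
print(f"R^2 = {r2_score(y_test, pred):.2f}, RMSE = {rmse:.2f} %")
```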
Abstract:
In this work, four hypotheses are first developed to evaluate the validity of the constancy test with respect to the image quality and dose of X-ray equipment. In a first step, the theoretical foundations of X-ray radiation and its generation as well as of radiation protection are laid out. The methods used are then presented before the results are reported in the fourth chapter. Building on this, the following chapter evaluates the results before finally the implications and outlook are discussed. In summary, the constancy test can only insufficiently assess the device-specific optimum of an X-ray system in terms of image quality and dose. Beyond this work, the development of a concept requires further experiments to confirm the initial findings obtained here.
Abstract:
In this study, the effects of motion sickness on brain activity were analyzed. Motion sickness was triggered with a dynamic virtual reality environment which made the subjects feel like being at sea on a small boat. Before and after the simulation, the subjects had to answer a motion sickness questionnaire. During the experiment, brain activity was acquired with electroencephalography (EEG). A spectral analysis of the EEG was performed and the brain activity during the experiment was compared to a baseline measurement. Based on the answers to the questionnaire, different groups were defined (divided by age, gender, or motion sickness susceptibility) and their behavior was compared. 32 out of 147 participants got motion sick according to an index calculated from the questionnaire answers. Statistical analysis showed that most differences in brain activity between baseline and experiment occurred in the alpha and beta bands. Machine learning did not yield good results for classifying motion sick and not motion sick people; however, classification for gender prediction reached an AUC-ROC of up to 0.69.
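The spectral analysis mentioned above can be illustrated by computing band powers with Welch's method; the sketch below uses an assumed sampling rate and a synthetic single-channel signal and is not the study's actual processing pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, band: tuple) -> float:
    """Power of a single-channel EEG signal within a frequency band,
    estimated by integrating Welch's power spectral density."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

fs = 250.0                                      # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # synthetic channel
alpha = band_power(eeg, fs, (8, 13))            # alpha band power
beta = band_power(eeg, fs, (13, 30))            # beta band power
print(alpha, beta)
```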
Abstract:
Background: In neurosurgery today it is common to use surgical microscopes in order to enable surgical interventions on a millimetre scale. A defocused view in the region of interest can restrict the ability of the surgeon to act. Because the instrument tips were found to be regions of interest, this thesis explores blur detection at instrument tips for neurosurgical images. To account for sub-domain shifts (e.g. different hospitals, surgeries, instruments) in neurosurgery, an emphasis is put on performance evaluations across neurosurgical sub-domains. Methods: As of now, no medical blur data sets are available. Therefore, synthetic and real blur data sets are generated using a simulation environment and a surgical microscope in a laboratory set-up. Existing non-medical blur data sets suffer from perception bias introduced during human annotation. This thesis presents two data generation methods for the acquisition of physically-based blur annotations at instrument tips. The generated image data is split into three sub-domains: one big data set containing synthetic images and two smaller data sets consisting of real images. All three data sets represent different visual properties to enable cross-domain performance evaluations. A context-specific approach, using Convolutional Neural Networks, was developed and the Laplacian Variance was chosen as a context-independent approach for a comparative analysis. Results: Trained on synthetic data, the context-specific approach shows better in-domain and out-of-domain performance than the context-independent approach. Additionally, the context-specific approach, trained on synthetic data only, shows better performance on unseen real data sets, even if the context-independent approach used these real data sets for self-optimization. In this case, the out-of-domain performance of the context-specific approach is higher than the in-domain performance of the context-independent approach. Conclusion: The context-specific approach demonstrated workability and substantial performance across neurosurgical sub-domains. While it was observed that the context-independent approach uses features irrelevant to the task, the context-specific approach can benefit from its ability to build internal representations and context understanding. It is assumed that this results in better generalisation across sub-domains, even if those sub-domains remain unknown.
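The context-independent baseline named above, the Laplacian Variance, reduces to a one-line sharpness score; a minimal OpenCV sketch is shown below (the synthetic demo patch and the commented crop coordinates are placeholders, not the thesis data).

```python
import cv2
import numpy as np

def laplacian_variance(image_gray: np.ndarray) -> float:
    """Blur score of a grayscale patch: the variance of its Laplacian.
    Low values indicate a defocused (blurry) patch."""
    return float(cv2.Laplacian(image_gray, cv2.CV_64F).var())

# demo on a synthetic step edge (sharp) vs. a blurred version of it
patch = np.zeros((64, 64), dtype=np.uint8)
patch[:, 32:] = 255
blurred = cv2.GaussianBlur(patch, (15, 15), 5)
print(laplacian_variance(patch), laplacian_variance(blurred))

# hypothetical usage on a crop around a detected instrument tip:
# frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# tip_patch = frame[200:328, 300:428]
# score = laplacian_variance(tip_patch)
```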
Abstract:
Atrial fibrillation (AF) is the most prevalent arrhythmia in the world, affecting 1-1.5% of the general population. It is commonly associated with fibrosis of cardiac tissue, which involves the formation and deposition of extracellular matrix (ECM) and plays an important role in the onset and maintenance of AF. Many treatments currently exist to try to stop this disease. Among them, electroanatomical mapping consists of measuring the electrogram (EGM) amplitude with an array of electrodes on an intravenous catheter near or on the surface of the heart. Depending on this value, the evaluated cardiac area is considered fibrotic or healthy tissue and is ablated or not, respectively, during the catheter ablation procedure. However, this technique is not very reliable and its success rate is suboptimal. Thus, the main objective of this work is to explore an inverse-problem reconstruction of the signals from EGMs of simulated reentrant activity in fibrotic patches. By doing so, it becomes possible to obtain a map of the extracellular potentials (EPs) of the cardiac tissue in order to localize strategic ablation points that promote AF. To achieve this, the inverse problem is solved in MATLAB by implementing second-order Tikhonov regularization, the L-curve method, the Generalized Singular Value Decomposition (GSVD), depth normalization and the Boundary Element Method (BEM). Additionally, activation times (ATs) and repolarization times (RTs) were calculated and singularity points were localized using phase singularity (PS) analysis. The simulation setup is a patch of 50x50x1 mm with a spatial resolution of 0.2 mm; the electrode meshes were 8x8, 12x12 and 16x16 at a distance of 0.5 mm or 1 mm from the tissue. Four cases were simulated: healthy tissue with a planar wave front and with reentry, and fibrotic tissue with a planar wave front and with reentry. In addition, two more cases were studied in which the electrode-source distance (ESD) was considerably greater (2 mm and 10 mm). The results obtained for each case were the reconstruction of the EPs, the AT and RT maps of the electrodes and of the reconstructed tissue, the RMSE of the ATs and RTs computed between the electrodes and the closest tissue point to each of them, and the phase and singularity point maps. Although the reconstructed EPs were visually similar to the ground truth, the AT and RT maps still need to be improved, as it is difficult to identify the propagation of the characteristic voltage pattern. Regarding the RMSE, values around 59 ms and 64 ms were obtained for the detection of AT and RT, respectively, in the case of fibrotic tissue with reentry, which are quite acceptable. The error is also smaller with the 16x16 electrode grid, which completely covers the simulated tissue. As the ESD increases (2 mm or 10 mm), larger errors and a greater dispersion of values are obtained due to the lower precision of the electrodes. However, the phase maps obtained through PS analysis show various singularity points, allowing an approximate detection on the singularity maps. For this reason, although the implemented algorithm needs to be improved in order to obtain better results, this method makes it possible to approximately localize the reentrant points in simulated fibrotic tissue.
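A minimal sketch of a Tikhonov-regularized inverse solution is given below; it stacks the data and regularization terms into one least-squares problem with a placeholder transfer matrix and an identity regularization operator, whereas the thesis uses second-order Tikhonov (a Laplacian-type operator) together with the GSVD and the L-curve to choose the regularization parameter.

```python
import numpy as np

def tikhonov_solve(A: np.ndarray, b: np.ndarray, L: np.ndarray, lam: float) -> np.ndarray:
    """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 by stacking both terms
    into a single least-squares problem."""
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# toy example: 64 electrodes, 400 source nodes, identity regularization operator
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 400))           # placeholder transfer matrix (BEM-based in the thesis)
x_true = rng.normal(size=400)
b = A @ x_true + 0.01 * rng.normal(size=64)
x_hat = tikhonov_solve(A, b, np.eye(400), lam=0.1)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative error
```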
Abstract:
Quantitative indocyanine green fluorescence angiography (ICG-FA) is an alternative for absolute volume flow determination to the established quantitative methods, which all rely on direct contact with the blood vessel. One of the core elements of quantitative ICG-FA, besides the mean flow velocity, is the measurement of the inner vessel diameter to compute the cross-sectional area of the blood-carrying part. According to the hypothesis of Nakagawa et al., the lumen diameter can be extracted directly from the ICG-FA image. In this work, this hypothesis is tested using rigid, multi-layered polyurethane phantoms of an ICG-filled blood vessel. These vessel phantoms reproduce the optical properties of a blood vessel and have defined inner and outer diameters that serve as ground truth for the measurements. Varying the optical properties of the phantom wall furthermore allows a selective investigation of the influence of the individual parameters on the measurement. The measurement is based on the method of Naber et al., which is validated with a modified ICG-FA vessel phantom and a quantified measurement setup. Finally, the transferability of the findings to reality was checked by measuring an ex vivo rabbit aorta. The results show that the measurement based on the method of Naber et al. and the quantified setup is possible and that the procedure does not induce an additional error. The measurement results of the vessel phantoms show that, independently of the thickness and the scattering properties of the phantom wall, only the outer diameter can be captured in an ICG-FA image. Only matching the refractive index of the surroundings to the phantom makes the measurement of the inner diameter possible, whereby an incomplete matching results in a magnified representation. The results of the ex vivo rabbit aorta suggest that the same phenomenon also occurs in animal tissue. It is postulated that this effect originates from total internal reflection of the fluorescence radiation at the interface between the vessel wall and the surroundings, and that refractive index matching suppresses this phenomenon. The magnification is attributed to the vessel wall acting as a cylindrical lens in the case of incomplete index matching. ...
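The abstract does not spell out the measurement method of Naber et al.; as a generic, hedged illustration, the sketch below estimates a vessel diameter as the full width at half maximum of an intensity profile taken perpendicular to the vessel in the fluorescence image (the pixel size and the synthetic profile are placeholders).

```python
import numpy as np

def fwhm_diameter(profile: np.ndarray, pixel_size_mm: float) -> float:
    """Diameter estimate from an intensity profile taken perpendicular to the
    vessel axis: width of the region above half of the peak intensity."""
    profile = profile - profile.min()               # remove the background offset
    above = np.flatnonzero(profile >= profile.max() / 2)
    return (above[-1] - above[0] + 1) * pixel_size_mm

# synthetic Gaussian-like intensity profile across a vessel
x = np.arange(200)
profile = np.exp(-((x - 100) / 20.0) ** 2)
print(fwhm_diameter(profile, pixel_size_mm=0.02))    # about 0.67 mm
```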
Abstract:
Atrial fibrillation (AF) is the globally most prevalent cardiac arrhythmia and is characterized by the chaotic electrical excitation of atrial tissue which results in abnormal atrial contractions. AF causes atrial contractions to lose synchronization with ventricular contractions, effectively reducing cardiac output and thus perpetuating an increased risk of dizziness, fatigue, and even stroke. The onset, development, and maintenance of AF is linked to the presence of fibrosis in the atria, as atrial fibrotic substrate forms conduction barriers and regions of slow conduction which enhance the development of re-entry circuits. These re-entry circuits embody the characteristic atrial electrical activity associated with AF. Since the presence of fibrotic substrate in the atria is linked to AF, the identification of this fibrotic atrial substrate would identify prospective patients who are at risk of developing AF in the future. Preventative treatment of these patients could therefore begin sooner. The state of the art demonstrates the successful application of deep learning (DL) and neural networks for the classification of healthy and pathological electrocardiograms (ECGs). Additionally, P wave features extracted from ECG signals have been shown to correlate with the presence of atrial fibrosis. This work aimed to compare a P wave feature based neural network with two DL neural networks for the binary classification of healthy and fibrotic P wave signals which stemmed from simulated and clinical ECG datasets. The feature based method served as a benchmark for the two DL methods and required the initial extraction of 15 P wave features: the P wave duration, P wave dispersion, P wave terminal force in lead V1, and the maximum P wave amplitudes of all 12 ECG leads. The first DL method was a long short-term memory neural network (LSTM) and was implemented to investigate its classification performance and its automatic temporal feature extraction capabilities from P wave time series. The second DL method was a convolutional neural network (CNN) in which the pre-trained AlexNet was applied to explore its classification performance as well as its automatic spectral feature extraction capabilities from P wave scalograms. These P wave scalograms were obtained by applying the continuous wavelet transform (CWT) to the input P waves. ...
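A hedged sketch of the scalogram preprocessing for the CNN branch is shown below: the continuous wavelet transform of a P wave is computed with PyWavelets and resized to a typical AlexNet input size; the sampling rate, scale range and wavelet choice are assumptions, not necessarily those of the thesis.

```python
import numpy as np
import pywt
from skimage.transform import resize

def p_wave_scalogram(p_wave: np.ndarray, fs: float, out_size=(224, 224)) -> np.ndarray:
    """Magnitude of the continuous wavelet transform of a P wave, resized to a
    square image that a pretrained CNN such as AlexNet can take as input."""
    scales = np.arange(1, 128)                               # assumed scale range
    coeffs, _ = pywt.cwt(p_wave, scales, "morl", sampling_period=1.0 / fs)
    return resize(np.abs(coeffs), out_size)

fs = 500.0                                                   # assumed sampling rate in Hz
t = np.arange(0, 0.12, 1 / fs)                               # ~120 ms P wave window
p_wave = 0.1 * np.sin(np.pi * t / t[-1])                     # synthetic P wave shape
print(p_wave_scalogram(p_wave, fs).shape)                    # (224, 224)
```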
Abstract:
Atrial fibrillation (AFib) is the most common sustained arrhythmia in western societies. As a result of structural and electrical variations in the left atrium, pulmonary vein isolation (PVI) has been shown to be effective in treating paroxysmal AFib, but ineffective in persistent AFib. In the management of persistent AFib, PVI plus additional targeting of arrhythmogenic substrate has been used. One technique to identify this substrate is to locate areas with bipolar peak-to-peak amplitudes < 0.5 mV. Typically, the voltage mapping for catheter ablation is conducted during sinus rhythm (SR). The purpose of this work is to identify differences and similarities compared to mapping in AFib. 28 patients were prospectively enrolled and mapped using a LASSO™ catheter (15-20 mm diameter) with 20 electrodes (electrode size: 1 mm; spacing: 2-6-2 mm). High-density voltage mapping (above 800 points) was performed with the CARTO® 3 mapping system both in SR and AFib. Voltage values were provided by the CARTO® 3 system by taking the peak-to-peak amplitude of one beat in the 2.5 second signal (hereinafter referred to as the CARTO value). The average percentage of reliable peaks obtained in the electrograms (EGMs) was 64.95% in AFib, slightly higher than in SR (62.84%). When comparing the CARTO value with the calculated median and average values of the reliable peak voltages, the CARTO value matches them on average 66.91% of the time in SR, but only up to 28.87% in AFib. If less than 0.5 mV is assumed to be the ground truth for identifying low voltage areas (LVAs) in SR, less than 0.22 mV was found to be the optimal threshold for LVA identification in AFib. Using this threshold, the average accuracy across all patients was 68.68%, with a maximum accuracy of 90.82% in some patients. Similarly, in AFib, a CFAEMean value of less than 80 ms is a standard for identifying a bipolar EGM as a continuous complex fractionated atrial electrogram (CFAE). Comparing this to the EGM duration in SR, a duration greater than 62 ms was found to be the best threshold for identifying non-discrete EGMs. However, the match is not very good, with the average accuracy across all patients being only 54.93%. When mapping in AFib, the signals can be noisy; therefore, not every detected peak can be treated as reliable, and a reliability calculation can be used to identify the reliable peaks. Although the signal in AFib is noisy, it possibly contains more information, so mapping in AFib should not be ignored. In SR, the CARTO value can represent the whole 2.5 s signal well; in AFib, however, it does not perform as well as in SR. Voltage mapping in AFib for targeting LVAs should be considered. In addition to a prolonged duration, markers such as deflections, amplitude and fractionation occupying ratio should be compared to continuous CFAEs in AFib to identify whether a better marker in SR can be found to detect these areas.
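To make the thresholding concrete, the sketch below labels mapping points as low voltage from the peak-to-peak amplitude of their bipolar EGMs, using the 0.5 mV (SR) and 0.22 mV (AFib) thresholds quoted above, and computes the agreement between both labellings; the amplitude data are synthetic and purely illustrative.

```python
import numpy as np

def peak_to_peak_mv(egm: np.ndarray) -> float:
    """Peak-to-peak amplitude of a bipolar electrogram segment in mV."""
    return float(np.ptp(egm))

def low_voltage_labels(amplitudes_mv: np.ndarray, threshold_mv: float) -> np.ndarray:
    """Boolean label per mapping point: True if it counts as a low voltage area."""
    return amplitudes_mv < threshold_mv

# synthetic amplitudes for the same points mapped in SR and in AFib
rng = np.random.default_rng(0)
amp_sr = rng.uniform(0.05, 2.0, size=800)
amp_afib = 0.4 * amp_sr + rng.normal(scale=0.05, size=800)   # AFib amplitudes are lower

lva_sr = low_voltage_labels(amp_sr, 0.5)       # assumed ground truth in SR
lva_afib = low_voltage_labels(amp_afib, 0.22)  # threshold reported for AFib maps
accuracy = np.mean(lva_sr == lva_afib)
print(f"agreement between SR and AFib labelling: {accuracy:.2%}")
```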
Abstract:
Existing methods for outgoing quality control of manufactured parts are accurate, but slow and expensive. With the ever increasing popularity of direct digital manufacturing, a future need for fast and automated inspection of additively manufactured parts in large volumes is expected. Leveraging the rapidly improving processing power and sensing capability of consumer smartphones and tablets, their use to fill a projected gap in rapid, automatic, low-cost part inspection is envisioned. This requires investigating the feasibility of comparing physical 3D objects with ground truth computer aided design (CAD) models on mobile consumer devices. To avoid computationally expensive direct comparison in 3D, an analysis-by-synthesis approach to detecting real object deviations from CAD models in 2D images is proposed, based on comparison from different views and using mobile device camera pose estimation from AR tracking. A mobile testbed for applying this approach to different use cases was developed, using a Samsung Galaxy S20+ smartphone, the model-based tracking library VisionLib and the game engine Unity. The testbed supports the integration of custom image processing pipelines, using OpenCV, for the detection and comparison of use-case specific features between real objects and ground truth CAD models. The testbed implementation was verified through tests based on the example use case of straight edge length measurement on 3D-printed polymer parts. The tests have proven the viability and feasibility of 2D image-based 3D object comparison on mobile devices using an analysis-by-synthesis approach based on AR tracking. A key limitation of this approach arises from the requirement for robust trackability of the 3D objects of interest using model-based tracking. In the future, the testbed can be used to investigate the application of this comparison approach to a range of different quality control use cases. Transferring the comparison approach to seemingly unrelated use cases, e.g. in medicine, with the same basic need for 3D object comparison is also conceivable.
Abstract:
Atrial fibrillation (AFib) is responsible for complications and excess deaths, and its prevalence is expected to further increase in the future due to the increasing life expectancy of humans and age being a major risk factor [1, 2]. At the same time, the treatment options available for AFib are far from optimal. Radio-frequency ablation has the potential to be highly effective with comparably small side effects. The major limitation for the success of radio-frequency ablation is the difficulty of correctly locating the areas responsible for abnormal conduction. Personalized computational models to test strategies for the ablation procedure could therefore greatly benefit the success rate of this treatment [3]. Bidomain models have the required accuracy to theoretically assist clinicians, but their high computational cost makes them inapplicable in clinical environments [4]. Eikonal models, on the other hand, could potentially be used in clinics due to their low computational costs, but lack the required accuracy because of their inability to capture certain electrophysiological effects. The goal of this project is to refine the eikonal model by implementing the influence of geometrical factors on conduction velocity (CV) into the eikonal model. This was achieved by using previously published data about the influence of geometrical factors on CV, obtained with bidomain simulations, to create a regression model. These geometrical factors were muscle curvature, muscle thickness and bath loading. The regression model was then implemented into the eikonal model's local speed function (LSF), aiming to increase the accuracy of the eikonal model. The implementation of the regression formula into the eikonal model was followed by an evaluation of the improvements made by the regression model. This was achieved by creating two-dimensional simulation setups to compare the eikonal model before and after the implementation of the regression formula to bidomain simulations. By applying the same stimulus on the same mesh, the eikonal and bidomain simulations became comparable. The muscle tissue model used to create the mesh and run the eikonal and bidomain simulations was a two-dimensional rectangle with a variable muscle thickness and the possibility to bend it to different uniform curvatures. The use of this mesh allowed studying the influence of muscle thickness and curvature on the simulations. ...
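A hedged sketch of how geometry-adjusted conduction velocities could enter an eikonal solver is shown below, using the scikit-fmm fast marching solver on a 2D patch; the regression coefficients for thickness and curvature are placeholders for illustration, not the published values used in the project.

```python
import numpy as np
import skfmm

def adjusted_cv(cv0: np.ndarray, thickness: np.ndarray, curvature: np.ndarray) -> np.ndarray:
    """Local speed function: baseline conduction velocity scaled by a placeholder
    regression in wall thickness and curvature (coefficients are illustrative)."""
    return cv0 * (1.0 + 0.05 * thickness - 0.10 * curvature)

# 2D tissue patch on a placeholder grid
nx, ny, dx = 200, 200, 0.02                      # grid size and spacing in cm
cv0 = np.full((nx, ny), 60.0)                    # baseline CV in cm/s
thickness = np.linspace(0.5, 3.0, ny)[None, :].repeat(nx, axis=0)   # wall thickness in mm
curvature = np.zeros((nx, ny))                   # flat patch in this example
speed = adjusted_cv(cv0, thickness, curvature)

phi = np.ones((nx, ny))
phi[0, 0] = -1                                   # stimulus site (zero level set around it)
activation_times = skfmm.travel_time(phi, speed, dx=dx)   # eikonal solution in seconds
print(activation_times.max())
```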