Abstract:
Introduction: Photogrammetric surface scans provide a radiation-free option to assess and classify craniosynostosis. Due to the low prevalence of craniosynostosis and high patient restrictions, clinical data are rare. Synthetic data could support or even replace clinical data for the classification of craniosynostosis, but this has never been studied systematically. Methods: We test the combinations of three different synthetic data sources: a statistical shape model (SSM), a generative adversarial network (GAN), and image-based principal component analysis for a convolutional neural network (CNN)-based classification of craniosynostosis. The CNN is trained only on synthetic data but validated and tested on clinical data. Results: The combination of an SSM and a GAN achieved an accuracy of more than 0.96 and an F1-score of more than 0.95 on the unseen test set. The difference from training on clinical data was smaller than 0.01. Including a second image modality improved classification performance for all data sources. Conclusion: Without a single clinical training sample, a CNN was able to classify head deformities as accurately as if it had been trained on clinical data. Using multiple data sources was key for a good classification based on synthetic data alone. Synthetic data might play an important future role in the assessment of craniosynostosis.
Abstract:
PURPOSE: Synthetic realistic-looking bronchoscopic videos are needed to develop and evaluate depth estimation methods as part of investigating vision-based bronchoscopic navigation systems. To generate these synthetic videos under circumstances where access to real bronchoscopic images/image sequences is limited, we need to create various realistic-looking, large-size image textures of the airway inner surface from a small number of real bronchoscopic image texture patches. METHODS: A generative adversarial network (GAN)-based method is applied to create realistic-looking textures of the airway inner surface by learning from a limited number of small texture patches from real bronchoscopic images. By applying a purely convolutional architecture without any fully connected layers, this method allows the production of textures of arbitrary size. RESULTS: Authentic image textures of the airway inner surface are created. An example of the synthesized textures and two frames of the bronchoscopic video generated from them are shown. The necessity and sufficiency of the generated textures as image features for subsequent depth estimation methods are demonstrated. CONCLUSIONS: The method can generate textures of the airway inner surface that meet the requirements for the texture itself and for the videos generated from it, including "realistic-looking," "long-term temporal consistency," "sufficient image features for depth estimation," and "large size and variety of synthesized textures." It also has the advantage of relying on an easily accessible data source. A further validation of this approach is planned by using the realistic-looking bronchoscopic videos with textures generated by this method as training and test data for depth estimation networks.
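The arbitrary-size property follows directly from using only convolutional layers: the spatial extent of the output scales with the size of the input noise map. A minimal PyTorch sketch of such a fully convolutional generator (layer count, channel widths, and noise-map shapes are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class TextureGenerator(nn.Module):
    """Purely convolutional generator: with no fully connected layers, the
    output texture size scales with the spatial size of the input noise map."""
    def __init__(self, z_ch=64, base=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_ch, base, 4, stride=2, padding=1),
            nn.BatchNorm2d(base), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, base // 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base // 2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base // 2, 3, 4, stride=2, padding=1),
            nn.Tanh(),                      # RGB texture in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

g = TextureGenerator()
patch = g(torch.randn(1, 64, 8, 8))    # -> (1, 3, 64, 64) training-size patch
large = g(torch.randn(1, 64, 64, 64))  # -> (1, 3, 512, 512) large texture
print(patch.shape, large.shape)
```

The same weights thus produce both small training patches and arbitrarily large textures at inference time.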
Abstract:
A minimally invasive manipulator characterized by hyper-redundant kinematics and embedded sensing modules is presented in this work. The bending angles (tilt and pan) of the robot tip are controlled through tendon-driven actuation; the transmission of the actuation forces to the tip is based on a Bowden-cable solution integrating channels for optical fibers. The viability of real-time measurement of the feedback control variables, through optoelectronic acquisition, is evaluated for automated bending of the flexible endoscope and trajectory tracking of the tip angles. Unlike conventional catheters and cannulae adopted in neurosurgery, the proposed robot extends the actuation and control of snake-like kinematic chains with embedded sensing solutions, enabling real-time measurement as well as robust and accurate control of curvature and tip bending of continuum robots for the manipulation of cannulae and microsurgical instruments in neurosurgical procedures. A prototype of the manipulator with a length of 43 mm and a diameter of 5.5 mm has been realized via 3D printing. Moreover, a multiple regression model has been estimated through a novel experimental setup to predict the tip angles from measured outputs of the optoelectronic modules. The sensing and control performance has also been evaluated during tasks involving tip rotations.
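A multiple regression model of this kind maps the optoelectronic readouts to the two tip angles. A minimal numpy sketch on a synthetic calibration set (the number of sensor channels and the linear form are assumptions for illustration, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy calibration data: 4 optical readouts per sample vs. measured
# tilt/pan tip angles in degrees (all values synthetic).
X = rng.random((200, 4))
W_true = np.array([[30.0, -12.0], [5.0, 22.0], [-8.0, 3.0], [14.0, -6.0]])
Y = X @ W_true + 0.5 * rng.standard_normal((200, 2))

# Multiple linear regression with intercept, fitted by least squares
A = np.hstack([X, np.ones((X.shape[0], 1))])
W, *_ = np.linalg.lstsq(A, Y, rcond=None)

def predict_tip_angles(readouts):
    """Map one vector of optoelectronic readouts to predicted [tilt, pan]."""
    return np.append(readouts, 1.0) @ W

print(predict_tip_angles(X[0]), "vs. measured", Y[0])
```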
Abstract:
Digital twins of patients' hearts are a promising tool to assess arrhythmia vulnerability and to personalize therapy. However, the process of building personalized computational models can be challenging and requires a high level of human interaction. We propose a patient-specific Augmented Atria generation pipeline (AugmentA) as a highly automated framework which, starting from clinical geometrical data, provides ready-to-use atrial personalized computational models. AugmentA identifies and labels atrial orifices using only one reference point per atrium. If the user chooses to fit a statistical shape model to the input geometry, it is first rigidly aligned with the given mean shape before a non-rigid fitting procedure is applied. AugmentA automatically generates the fiber orientation and finds local conduction velocities by minimizing the error between the simulated and clinical local activation time (LAT) map. The pipeline was tested on a cohort of 29 patients on both segmented magnetic resonance images (MRI) and electroanatomical maps of the left atrium. Moreover, the pipeline was applied to a bi-atrial volumetric mesh derived from MRI. The pipeline robustly integrated fiber orientation and anatomical region annotations in 38.4 ± 5.7 s. In conclusion, AugmentA offers an automated and comprehensive pipeline delivering atrial digital twins from clinical data in procedural time.
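Before the non-rigid SSM fit, the input geometry is rigidly aligned with the given mean shape. A minimal numpy sketch of such a least-squares rigid alignment via the Kabsch algorithm, assuming point correspondences are already established (AugmentA's actual alignment step may differ in detail):

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (Kabsch): rotation R and translation t
    such that R @ source_i + t best matches target_i."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)                   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # no reflection
    R = (U @ D @ Vt).T
    t = mu_t - R @ mu_s
    return R, t

src = np.random.rand(100, 3)                 # placeholder atrial points
R, t = rigid_align(src, src + np.array([1.0, 2.0, 3.0]))
aligned = src @ R.T + t                      # aligned to the mean shape
```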
Abstract:
Cardiovascular diseases account for 17 million deaths per year worldwide. Of these, 25% are categorized as sudden cardiac death, which can be related to ventricular tachycardia (VT). This type of arrhythmia can be caused by focal activation sources outside the sinus node. Catheter ablation of these foci is a curative treatment to inactivate the abnormal triggering activity. However, the localization procedure is usually time-consuming and requires an invasive procedure in the catheter lab. To facilitate and expedite the treatment, we present two novel localization support techniques based on convolutional neural networks (CNNs) that address these clinical needs. In contrast to existing methods, our approaches were designed to be independent of the patient-specific geometry and directly applicable to surface ECG signals, while also delivering a binary transmural position. Moreover, one of the method's outputs can be interpreted as several ranked solutions. The CNNs were trained on a dataset containing only simulated data and evaluated both on simulated test data and clinical data. On a novel large and open simulated dataset, the median test error was below 3 mm. The median localization error on the unseen clinical data ranged from 32 mm to 41 mm without optimizing the pre-processing and CNN to the clinical data. Interpreting the output of one of the approaches as ranked solutions, the best median error of the top-3 solutions decreased to 20 mm on the clinical data. The transmural position was correctly detected in up to 82% of all clinical cases. These results demonstrate a proof of principle for utilizing CNNs to localize the activation source without intrinsically requiring patient-specific geometrical information. Furthermore, providing multiple solutions can assist physicians in identifying the true activation source amongst more than one possible location. With further optimization to clinical data, these methods have high potential to accelerate clinical interventions, replace certain steps within these procedures and consequently reduce procedural risk and improve VT patient outcomes.
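When one network output is interpreted as several ranked candidate locations, the reported top-3 error is the best error among the three highest-ranked solutions. A small numpy sketch of this metric (coordinates are illustrative placeholders):

```python
import numpy as np

def top_k_error(ranked_candidates_mm, truth_mm, k=3):
    """Best localization error among the k highest-ranked candidate
    positions (candidates ordered best-first, coordinates in mm)."""
    d = np.linalg.norm(ranked_candidates_mm[:k] - truth_mm, axis=1)
    return d.min()

candidates = np.array([[10.0, 5.0, 2.0],     # rank 1
                       [40.0, 8.0, 1.0],     # rank 2
                       [12.0, 6.0, 3.0]])    # rank 3
print(top_k_error(candidates, np.array([13.0, 7.0, 2.0])))
```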
Abstract:
Atrial fibrillation (AF) is one of the most common cardiac diseases. However, a complete understanding of how to treat patients suffering from AF has still not been achieved. As the isolation of the pulmonary veins in the left atrium (LA) is the standard treatment for AF, the role of the right atrium (RA) in AF is rarely considered. We investigated the impact of including the RA on arrhythmia vulnerability in silico. We generated a dataset of five mono-atrial (LA) and five bi-atrial models with three different electrophysiological (EP) setups each, representing different states of AF-induced remodelling. For every model, a pacing protocol was run to induce reentries from a set of stimulation points. The average share of inducing points across all EP setups was 0.0, 0.8 and 6.7 % for the mono-atrial scenario and 0.5, 27.3 and 37.9 % for the bi-atrial scenario. The increase in inducibility of LA stimulation points from the mono- to the bi-atrial scenario was 0.91 ± 2.03 %, 34.55 ± 14.9 % and 44.2 ± 14.9 %, respectively. In this study, the RA had a marked impact on the results of the vulnerability assessment, which needs to be further investigated.
Abstract:
The application of machine learning approaches in medical technology is gaining more and more attention. Due to the high restrictions on collecting intraoperative patient data, synthetic data are increasingly used to support the training of artificial neural networks. We present a pipeline to create a statistical shape model (SSM) using 28 segmented clinical liver CT scans. Our pipeline consists of four steps: data preprocessing, rigid alignment, template morphing, and statistical modeling. We compared two different template morphing approaches, Laplace-Beltrami-regularized projection (LBRP) and nonrigid iterative closest points translational (N-ICP-T), and evaluated both morphing approaches and their corresponding shape model performance using six metrics. LBRP achieved a smaller mean vertex-to-nearest-neighbor distance (2.486 ± 0.897 mm) than N-ICP-T (5.559 ± 2.413 mm). Generalization and specificity errors for LBRP were consistently lower than those of N-ICP-T. The first principal components of the SSM showed realistic anatomical variations. The performance of the SSM was comparable to a state-of-the-art model.
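The vertex-to-nearest-neighbor distance used to compare the two morphing approaches can be computed efficiently with a k-d tree. A minimal SciPy sketch (the vertex arrays are placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

def vertex_to_nn_distance(morphed_vertices, target_vertices):
    """Distance from every morphed-template vertex to its nearest neighbour
    on the target surface; mean +/- std is the reported metric."""
    d, _ = cKDTree(target_vertices).query(morphed_vertices)
    return d.mean(), d.std()

morphed = np.random.rand(5000, 3)   # placeholder morphed template vertices
target = np.random.rand(8000, 3)    # placeholder target surface vertices
print(vertex_to_nn_distance(morphed, target))
```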
Abstract:
Introduction: 3D surface scan-based diagnosis of craniosynostosis is a promising radiation-free alternative to traditional diagnosis using computed tomography. The cranial index (CI) and the cranial vault asymmetry index (CVAI) are well-established and widely used clinical parameters. Moreover, they have the benefit of being easily adaptable for automatic diagnosis without the need for extensive preprocessing. Methods: We propose a multi-height-based classification approach that uses CI and CVAI in different height layers and compare it to the initial approach using only one layer. We use ten-fold cross-validation and test seven different classifiers. The dataset of 504 patients consists of three types of craniosynostosis and a control group consisting of healthy and non-synostotic subjects. Results: The multi-height-based approach improved classification for all classifiers. The k-nearest-neighbors classifier scored best with a mean accuracy of 89 % and a mean F1-score of 0.75. Conclusion: Taking height into account is beneficial for the classification. Based on accepted and widely used clinical parameters, this might be a step towards an easy-to-understand and transparent classification approach for both physicians and patients.
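For one axial head contour, CI relates maximal width to length, and CVAI quantifies the difference between the two diagonals measured roughly 30 degrees off the midline. A sketch under common clinical definitions (the CVAI normalisation, here the longer diagonal, varies between publications, and the contour handling is illustrative):

```python
import numpy as np

def ci_cvai(contour):
    """CI and CVAI for one axial head contour (n x 2 points, head centred,
    +y anterior). Extents are projection widths along given directions."""
    def extent(angle_deg):
        a = np.deg2rad(angle_deg)
        proj = contour @ np.array([np.cos(a), np.sin(a)])
        return proj.max() - proj.min()

    length, width = extent(90.0), extent(0.0)
    d1, d2 = extent(60.0), extent(120.0)     # diagonals ~30 deg off midline
    return 100.0 * width / length, 100.0 * abs(d1 - d2) / max(d1, d2)

# Multi-height variant: evaluate the pair on several axial layers and feed
# the stacked values to a classifier such as k-nearest neighbors.
t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
ellipse = np.stack([80.0 * np.cos(t), 100.0 * np.sin(t)], axis=1)
print(ci_cvai(ellipse))    # CI = 80, CVAI = 0 for a symmetric ellipse
```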
Abstract:
Optical Coherence Tomography (OCT) is a standard imaging procedure in ophthalmology. OCT Angiography is a promising extension, allowing fast and non-invasive imaging of the retinal vasculature by analyzing multiple OCT scans of the same location; local variance is examined and highlighted. Despite its introduction in the clinic, unanswered questions remain when it comes to signal generation. Multi-phase fluids like intralipid, milk-water solutions and human blood cells have been applied in phantom studies, shedding light on some of the mechanisms. The use of hydrogel beads allows for the generation of alternative blood models for OCT and OCT Angiography. Beads were produced in Hannover, their size was measured and their long-term stability was assessed. Then, beads were shipped to Karlsruhe, where OCT imaging yielded first insights. The hydrogel acts as a diffusion barrier, which enables a clear distinction between bead and fluid when scattering particles are added. Furthermore, the scattering medium below the bead showed increased signal intensity. We conclude that the inside of the bead structure shows enhanced transmission compared to the plasma substitute with dissolved TiO2 surrounding it. Beads were found clumped and deformed after shipping, an issue to be addressed in further investigations. Nevertheless, hydrogel beads are promising as a blood model for OCT Angiography investigations, offering tunable optical parameters within the blood substitute solution.
Abstract:
During cerebral revascularization surgeries, blood flow values help surgeons to monitor the quality of the procedure, e.g., to avoid cerebral hyperperfusion syndrome due to excessively enhanced perfusion. The state-of-the-art technique is the ultrasonic flow probe, which has to be placed around the blood vessel. This causes contact between probe and vessel, which, in the worst case, leads to rupture. The recently developed intraoperative indocyanine green (ICG) Quantitative Fluorescence Angiography (QFA) is an alternative technique that overcomes this risk. However, it has been shown by the developer that the calculated flow has deviations. After determining the bolus transit time as the most sensitive parameter in flow calculation, we propose a new two-step uncertainty reduction method for flow calculation. The first step is to generate more data in each measurement, resulting in functions of the parameters. Noise can then be reduced in a second step. Two methods for this step are compared: the first fits the model for each parameter function separately and calculates flow from the resulting models, while the second fits multiple parameter functions together. In silico tests showed the latter method to perform best; as expected, it also reduces the deviation of the calculated flow compared to the original QFA. Our approach can be used generally in all QFA applications based on two-point theory. Further development is possible if the dimensionality of the acquired parameter data is increased, yielding even more data for processing in the second step.
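The core of the second step is that several noisy parameter functions constrain the same underlying flow value, so a joint fit pools their information. A toy SciPy sketch contrasting per-curve fits with one joint fit (the model functions below are invented stand-ins, not the QFA equations):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
F_true, x = 2.0, np.linspace(0.5, 3.0, 30)

# Two noisy "parameter functions" that depend on the same underlying flow F.
models = [lambda F: F * x, lambda F: F / x]
curves = [m(F_true) + rng.normal(0.0, 0.2, x.size) for m in models]

# (a) separate fits: estimate F per curve, then average the estimates
F_sep = np.mean([least_squares(lambda p, i=i: models[i](p[0]) - curves[i],
                               x0=[1.0]).x[0] for i in range(len(models))])

# (b) joint fit: stack all residuals and estimate a single shared F
F_joint = least_squares(
    lambda p: np.concatenate([m(p[0]) - c for m, c in zip(models, curves)]),
    x0=[1.0]).x[0]
print(F_sep, F_joint)   # the joint estimate pools information across curves
```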
Abstract:
PURPOSE: Depth estimation is the basis of 3D reconstruction of the airway structure from 2D bronchoscopic scenes, which can be further used to develop a vision-based bronchoscopic navigation system. This work aims to improve the performance of depth estimation directly from bronchoscopic images by training a depth estimation network on both synthetic and real datasets. METHODS: We propose a cGAN-based network, Bronchoscopic-Depth-GAN (BronchoDep-GAN), to estimate depth from bronchoscopic images by translating bronchoscopic images into depth maps. The network is trained in a supervised way, learning from synthetic textured bronchoscopic image-depth pairs and virtual bronchoscopic image-depth pairs, and simultaneously in an unsupervised way, learning from unpaired real bronchoscopic images and depth maps to adapt the model to real bronchoscopic scenes. RESULTS: Our method is tested on both synthetic and real data. However, the tests on real data are only qualitative, as no ground truth is available. The results show that our network obtains better accuracy in all cases in estimating depth from bronchoscopic images compared to the well-known cGAN pix2pix. CONCLUSIONS: Including virtual and real bronchoscopic images in the training phase of depth estimation networks can improve the performance of depth estimation on both synthetic and real scenes. Further validation of this work is planned on 3D clinical phantoms. Based on the depth estimation results obtained in this work, the accuracy of locating bronchoscopes with corresponding pre-operative CTs will also be evaluated in comparison with the current clinical status.
Abstract:
Mechanistic cardiac electrophysiology models allow for personalized simulations of the electrical activity in the heart and the ensuing electrocardiogram (ECG) on the body surface. As such, synthetic signals possess known ground truth labels of the underlying disease and can be employed for validation of machine learning ECG analysis tools in addition to clinical signals. Recently, synthetic ECGs were used to enrich sparse clinical data or even replace them completely during training, leading to improved performance on real-world clinical test data. We thus generated a novel synthetic database comprising a total of 16,900 12-lead ECGs based on electrophysiological simulations equally distributed into healthy control and 7 pathology classes. The pathological case of myocardial infarction had 6 sub-classes. A comparison of extracted features between the virtual cohort and a publicly available clinical ECG database demonstrated that the synthetic signals represent clinical ECGs for healthy and pathological subpopulations with high fidelity. The ECG database is split into training, validation, and test folds for development and objective assessment of novel machine learning algorithms.
Abstract:
AIMS: Electro-anatomical voltage, conduction velocity (CV) mapping, and late gadolinium enhancement (LGE) magnetic resonance imaging (MRI) have been correlated with atrial cardiomyopathy (ACM). However, the comparability between these modalities remains unclear. This study aims to (i) compare pathological substrate extent and location between current modalities, (ii) establish spatial histograms in a cohort, (iii) develop a new estimated optimized image intensity threshold (EOIIT) for LGE-MRI identifying patients with ACM, (iv) predict rhythm outcome after pulmonary vein isolation (PVI) for persistent atrial fibrillation (AF). METHODS AND RESULTS: Thirty-six ablation-naive persistent AF patients underwent LGE-MRI and high-definition electro-anatomical mapping in sinus rhythm. Late gadolinium enhancement areas were classified using the UTAH, image intensity ratio (IIR >1.20), and new EOIIT method for comparison to low-voltage substrate (LVS) and slow conduction areas <0.2 m/s. Receiver operating characteristic analysis was used to determine LGE thresholds optimally matching LVS. Atrial cardiomyopathy was defined as LVS extent ≥5% of the left atrium (LA) surface at <0.5 mV. The degree and distribution of detected pathological substrate (percentage of the individual LA surface area) varied significantly (P < 0.001) across the mapping modalities: 10% (interquartile range 0-14%) of the LA displayed LVS <0.5 mV vs. 7% (0-12%) slow conduction areas <0.2 m/s vs. 15% (8-23%) LGE with the UTAH method vs. 13% (2-23%) using IIR >1.20, with most discrepancies on the posterior LA. Optimized image intensity thresholds and each patient's mean blood pool intensity correlated linearly (R² = 0.89, P < 0.001). Concordance between LGE-MRI-based and LVS-based ACM diagnosis improved with the novel EOIIT applied at the anterior LA [83% sensitivity, 79% specificity, area under the curve (AUC): 0.89] in comparison to the UTAH method (67% sensitivity, 75% specificity, AUC: 0.81) and IIR >1.20 (75% sensitivity, 62% specificity, AUC: 0.67). CONCLUSION: Discordances in detected pathological substrate exist between LVS, CV, and LGE-MRI in the LA, irrespective of the LGE detection method. The new EOIIT method improves concordance of LGE-MRI-based ACM diagnosis with LVS in ablation-naive AF patients, but discrepancy remains particularly on the posterior wall. All methods may enable the prediction of rhythm outcomes after PVI in patients with persistent AF.
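The image intensity ratio normalizes each LGE voxel by the patient's mean blood-pool intensity, so a fixed cut-off such as IIR > 1.20 becomes comparable across patients. A small numpy sketch (all intensities synthetic; mask handling is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
wall = rng.gamma(4.0, 25.0, size=(64, 64))        # synthetic LGE wall samples
blood_pool = rng.gamma(4.0, 20.0, size=(32, 32))  # synthetic blood-pool voxels

iir = wall / blood_pool.mean()    # image intensity ratio per voxel
lge_mask = iir > 1.20             # published IIR threshold for LGE
print("LGE extent:", 100.0 * lge_mask.mean(), "% of sampled wall")
```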
Abstract:
Purpose: To evaluate the impact of lens opacity on the reliability of optical coherence tomography angiography metrics and to find a vessel caliber threshold that is reproducible in cataract patients. Methods: A prospective cohort study of 31 patients, examining one eye per patient, by applying 3 × 3 mm macular optical coherence tomography angiography before (18.94 ± 12.22 days) and 3 months (111 ± 23.45 days) after uncomplicated cataract surgery. We extracted the superficial (SVC) and deep vascular plexuses (DVC) for further analysis and evaluated changes in image contrast, vessel metrics (perfusion density, flow deficit and vessel-diameter index) and the foveal avascular zone (FAZ). Results: After surgery, the blood flow signal in smaller capillaries was enhanced as image contrast improved. Signal strength correlated with average lens density defined by objective measurement in Scheimpflug images (Pearson's r: -.40, p: .027) and with flow deficit (r = -.70, p < .001). Perfusion density correlated with the signal strength index (r = .70, p < .001). Vessel metrics and FAZ area, except for FAZ area in DVC, were significantly different after cataract surgery, but the mean change was approximately 3-6%. A stepwise approach extracting vessels according to their pixel caliber showed that a threshold of > 6 pixels caliber (approximately 20-30 µm) was comparable before and after lens removal. Conclusion: In patients with cataract, OCTA vessel metrics should be interpreted with caution. In addition to signal strength, contrast and pixel properties can serve as supplementary quality metrics to improve the interpretation of OCTA metrics. Vessels of approximately 20-30 µm in caliber seem to be reproducible.
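Per-vessel caliber in pixels can be estimated from a binary vessel mask with a distance transform evaluated along the skeleton (caliber roughly equals twice the centreline distance to background). A sketch of caliber-based filtering using SciPy and scikit-image (a common proxy, not necessarily the paper's exact algorithm):

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def keep_large_caliber(vessel_mask, min_caliber_px=6):
    """Keep connected vessel segments whose median centreline caliber
    (2 x Euclidean distance to background at the skeleton) meets the
    pixel threshold."""
    edt = ndimage.distance_transform_edt(vessel_mask)
    skel = skeletonize(vessel_mask)
    labels, n = ndimage.label(vessel_mask)
    keep = np.zeros_like(vessel_mask, dtype=bool)
    for lab in range(1, n + 1):
        seg = labels == lab
        cal = 2.0 * edt[skel & seg]          # calibers along this segment
        if cal.size and np.median(cal) >= min_caliber_px:
            keep |= seg
    return keep
```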
Abstract:
INTRODUCTION: Improved sinus rhythm (SR) maintenance rates have been achieved in patients with persistent atrial fibrillation (AF) undergoing pulmonary vein isolation plus additional ablation of low voltage substrate (LVS) during SR. However, voltage mapping during SR may be hindered in persistent and long-persistent AF patients by immediate AF recurrence after electrical cardioversion. We assess correlations between LVS extent and location during SR and AF, aiming to identify regional voltage thresholds for rhythm-independent delineation/detection of LVS areas: (1) identification of voltage dissimilarities between mapping in SR and AF, (2) identification of regional voltage thresholds that improve cross-rhythm substrate detection, and (3) comparison of LVS between SR and native versus induced AF. METHODS: Forty-one ablation-naive persistent AF patients underwent high-definition (1 mm electrodes; >1200 left atrial (LA) mapping sites per rhythm) voltage mapping in SR and AF. Global and regional voltage thresholds in AF were identified which best match LVS < 0.5 mV and <1.0 mV in SR. Additionally, the correlation between SR-LVS and induced versus native AF-LVS was assessed. RESULTS: Substantial voltage differences (median: 0.52, interquartile range: 0.33-0.69, maximum: 1.19 mV) with a predominance of the posterior/inferior LA wall exist between the rhythms. An AF threshold of 0.34 mV for the entire left atrium provides an accuracy, sensitivity and specificity of 69%, 67%, and 69%, respectively, to identify SR-LVS < 0.5 mV. Lower thresholds for the posterior wall (0.27 mV) and inferior wall (0.3 mV) result in higher spatial concordance to SR-LVS (4% and 7% increase). Concordance with SR-LVS was higher for induced AF compared to native AF (area under the curve [AUC]: 0.80 vs. 0.73). AF-LVS < 0.5 mV corresponds to SR-LVS < 0.97 mV (AUC: 0.73). CONCLUSION: Although the proposed region-specific voltage thresholds during AF improve the consistency of LVS identification as determined during SR, the concordance in LVS between SR and AF remains moderate, with larger LVS detection during AF. Voltage-based substrate ablation should preferentially be performed during SR to limit the amount of ablated atrial myocardium.
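Identifying the AF voltage threshold that best reproduces the SR-defined LVS labels amounts to a threshold sweep against a concordance metric. A minimal numpy sketch with synthetic voltages (accuracy as the criterion; sensitivity and specificity work analogously):

```python
import numpy as np

def best_af_threshold(v_af, lvs_sr, grid=None):
    """Sweep AF bipolar-voltage thresholds and return the one whose LVS
    labelling agrees best with the SR-based reference labels."""
    grid = np.arange(0.05, 1.50, 0.01) if grid is None else grid
    accs = [((v_af < t) == lvs_sr).mean() for t in grid]
    return grid[int(np.argmax(accs))], max(accs)

rng = np.random.default_rng(0)
v_af = rng.gamma(2.0, 0.4, 2000)                       # synthetic AF voltages
lvs_sr = v_af * rng.lognormal(0.4, 0.3, 2000) < 0.5    # synthetic SR labels
print(best_af_threshold(v_af, lvs_sr))
```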
Abstract:
Machine learning (ML) methods for the analysis of electrocardiography (ECG) data are gaining importance, substantially supported by the release of large public datasets. However, current datasets miss important derived descriptors such as ECG features, which have been devised over the past hundred years, still form the basis of most automatic ECG analysis algorithms, and are critical for cardiologists' decision processes. ECG features are available from sophisticated commercial software but are not accessible to the general public. To alleviate this issue, we add ECG features from two leading commercial algorithms and an open-source implementation, supplemented by a set of automatic diagnostic statements from a commercial ECG analysis software in preprocessed format. This allows the comparison of ML models trained on clinically versus automatically generated label sets. We provide an extensive technical validation of features and diagnostic statements for ML applications. We believe this release crucially enhances the usability of the PTB-XL dataset as a reference dataset for ML methods in the context of ECG data.
Abstract:
Clonogenic assays are routinely used to evaluate the response of cancer cells to external radiation fields, assess their radioresistance and radiosensitivity, and estimate the performance of radiotherapy. However, classic clonogenic tests focus on the number of colonies forming on a substrate upon exposure to ionizing radiation and disregard other important characteristics of cells, such as their ability to generate structures with a certain shape. The radioresistance and radiosensitivity of cancer cells may depend less on the number of cells in a colony and more on the way cells interact to form complex networks. In this study, we have examined whether the topology of 2D cancer-cell graphs is influenced by ionizing radiation. We subjected different cancer cell lines, i.e., H4 epithelial neuroglioma cells, H460 lung cancer cells, PC3 bone metastasis of grade IV prostate cancer, and T24 urinary bladder cancer cells, cultured on planar surfaces, to increasing photon radiation levels up to 6 Gy. Fluorescence images of samples were then processed to determine the topological parameters of the cell-graphs developing over time. We found that the larger the dose, the less uniform the distribution of cells on the substrate, as evidenced by high values of the small-world coefficient (sw), high values of the clustering coefficient (cc), and small values of the characteristic path length (cpl). For all considered cell lines, sw > 1 for doses higher than or equal to 4 Gy, while the sensitivity to the dose varied between cell lines: T24 cells seem most distinctly affected by the radiation, followed by the H4, H460 and PC3 cells. The results of this work reinforce the view that the characteristics of cancer cells and their response to radiotherapy can be determined by examining their collective behavior, encoded in a few topological parameters, as an alternative to classical clonogenic assays.
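The small-world coefficient compares clustering and characteristic path length against degree-matched random references: sw = (cc/cc_rand) / (cpl/cpl_rand). A NetworkX sketch for a connected cell graph (the random-reference construction follows one common convention; the paper's estimator may differ):

```python
import networkx as nx

def small_world_coefficient(G, n_rand=20, seed=0):
    """sw = (cc / cc_rand) / (cpl / cpl_rand), with reference values averaged
    over random graphs with the same node and edge count. G must be
    connected for the characteristic path length to be defined."""
    cc = nx.average_clustering(G)
    cpl = nx.average_shortest_path_length(G)
    cc_r, cpl_r = [], []
    for i in range(n_rand):
        R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(),
                                seed=seed + i)
        if not nx.is_connected(R):           # keep the largest connected part
            R = R.subgraph(max(nx.connected_components(R), key=len))
        cc_r.append(nx.average_clustering(R))
        cpl_r.append(nx.average_shortest_path_length(R))
    return (cc / (sum(cc_r) / n_rand)) / (cpl / (sum(cpl_r) / n_rand))

# Watts-Strogatz graphs are small-world by construction, so sw > 1 here.
print(small_world_coefficient(nx.connected_watts_strogatz_graph(200, 6, 0.1)))
```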
Abstract:
Purpose: Primary central nervous system lymphoma (PCNSL) is a rare, aggressive form of extranodal non-Hodgkin lymphoma. Predicting the overall survival (OS) in advance is of utmost importance, as it has the potential to aid clinical decision-making. Although radiomics-based machine learning (ML) has demonstrated promising performance in PCNSL, it demands large amounts of manual feature extraction effort from magnetic resonance images beforehand; deep learning (DL) overcomes this limitation. Methods: In this paper, we tailored the 3D ResNet to predict the OS of patients with PCNSL. To overcome the limitation of data sparsity, we introduced data augmentation and transfer learning, and we evaluated the results using stratified k-fold cross-validation. To explain the results of our model, gradient-weighted class activation mapping was applied. Results: We obtained the best performance (standard error in parentheses) on post-contrast T1-weighted (T1Gd) images: area under the curve = 0.81 (0.03), accuracy = 0.87 (0.07), precision = 0.88 (0.07), recall = 0.88 (0.07) and F1-score = 0.87 (0.07), compared with ML-based models on clinical data and radiomics data, respectively, further confirming the stability of our model. We also observed that PCNSL is a whole-brain disease and that, in cases where the OS is less than 1 year, it is more difficult to distinguish the tumor boundary from the normal part of the brain, which is consistent with the clinical outcome. Conclusions: All these findings indicate that T1Gd can improve prognosis predictions of patients with PCNSL. To the best of our knowledge, this is the first study to use DL to explain model patterns in OS classification of patients with PCNSL. Future work would involve collecting more data of patients with PCNSL, or additional retrospective studies on different patient populations with rare diseases, to further promote the clinical role of our model.
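Transfer learning with a 3D ResNet typically means initializing from video-pretrained weights and swapping the classification head. A PyTorch/torchvision sketch of this pattern (input size, class count, channel replication, and the Kinetics weights are illustrative assumptions, not the paper's exact configuration; the pretrained weights are downloaded on first use):

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

# Start from a Kinetics-pretrained 3D ResNet-18 and replace the head for a
# binary OS target (e.g., OS < 1 year vs. OS >= 1 year).
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

# The backbone expects (batch, 3, depth, height, width); a single-channel
# T1Gd volume can be repeated over the three input channels.
volume = torch.randn(2, 1, 16, 112, 112).repeat(1, 3, 1, 1, 1)
logits = model(volume)            # -> (2, 2) class scores
```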
Abstract:
PURPOSE: Atrial fibrillation is one of the most frequent cardiac arrhythmias in the industrialized world, and ablation therapy is the method of choice for many patients. However, ablation scars alter the electrophysiological activation and the mechanical behavior of the affected atria. Different ablation strategies with the aim to terminate atrial fibrillation and prevent its recurrence exist, but their impact on the performance of the heart is often neglected. METHODS: In this work, we present a simulation study analyzing five commonly used ablation scar patterns and their combinations in the left atrium regarding their impact on the pumping function of the heart using an electromechanical whole-heart model. We analyzed how the altered atrial activation and increased stiffness due to the ablation scars affect atrial as well as ventricular contraction and relaxation. RESULTS: We found that the systolic and diastolic function of the left atrium is impaired by ablation scars and that the reduction of atrial stroke volume of up to 11.43% depends linearly on the amount of inactivated tissue. Consequently, the end-diastolic volume of the left ventricle, and thus stroke volume, was reduced by up to 1.4% and 1.8%, respectively. During ventricular systole, left atrial pressure was increased by up to 20% due to changes in the atrial activation sequence and the stiffening of scar tissue. CONCLUSION: This study provides biomechanical evidence that atrial ablation has acute effects not only on atrial contraction but also on ventricular performance. Therefore, the position and extent of ablation scars are not only important for the termination of arrhythmias but also determine long-term pumping efficiency. If confirmed in larger cohorts, these results have the potential to help tailor ablation strategies towards minimal global cardiovascular impairment.
Abstract:
Background and Objective: Planning the optimal ablation strategy for the treatment of complex atrial tachycardia (CAT) is a time-consuming and error-prone task. Recently, directed network mapping, a technology based on graph theory, proved to efficiently identify CAT based solely on data from clinical interventions. Briefly, a directed network is used to model the atrial electrical propagation, and reentrant activities are identified by looking for closed-loop paths in the network. In this study, we propose a recommender system, built as an optimization problem, able to suggest the optimal ablation strategy for the treatment of CAT. Methods: The optimization problem modeled the optimal ablation strategy as the one interrupting all reentrant mechanisms while minimizing the ablated atrial surface. The problem was designed on top of directed network mapping. Considering the exponential complexity of finding the optimal solution to the problem, we introduced a heuristic algorithm with polynomial complexity. The proposed algorithm was applied to the data of (i) six simulated scenarios including both left and right atrial flutter and (ii) ten subjects who underwent a clinical routine procedure. Results: The recommender system suggested the optimal strategy in four out of six simulated scenarios. On clinical data, the recommended ablation lines were found satisfactory in 67% of the cases according to the clinician's opinion, while they were correctly located in 89%. The algorithm made use only of data collected during mapping and was able to process them in nearly real time. Conclusions: The first recommender system for the identification of optimal ablation lines for CAT, based solely on data collected during the intervention, is presented. The study may open up interesting scenarios for the application of graph theory to the treatment of CAT.
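Interrupting every closed-loop path while minimizing ablated surface is a hitting-set-like problem, which is why a polynomial heuristic is attractive. A NetworkX sketch of one simple greedy variant, cutting the cheapest edge of each remaining loop (illustrative only, not the paper's algorithm; edge weights stand in for the ablated surface per blocked connection):

```python
import networkx as nx

def recommend_cuts(G):
    """Greedy heuristic: while the directed propagation network contains a
    closed-loop path (reentry), remove the cheapest edge of one such loop.
    Returns the list of cut edges (candidate ablation targets)."""
    H = G.copy()
    cuts = []
    while True:
        try:
            cycle = nx.find_cycle(H)             # one remaining reentry
        except nx.NetworkXNoCycle:
            break
        u, v = min(cycle, key=lambda e: H.edges[e].get("weight", 1.0))[:2]
        H.remove_edge(u, v)
        cuts.append((u, v))
    return cuts

G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 0.5), (2, 0, 2.0),
                           (2, 3, 1.0), (3, 2, 0.3)])
print(recommend_cuts(G))    # cuts both loops at minimal total weight
```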
Abstract:
Primary Central Nervous System Lymphoma (PCNSL) is an aggressive neoplasm with a poor prognosis. Although therapeutic progress has significantly improved Overall Survival (OS), a number of patients do not respond to HD-MTX-based chemotherapy (15-25%) or experience relapse (25-50%) after an initial response. The reasons underlying this poor response to therapy are unknown. Thus, there is an urgent need to develop improved predictive models for PCNSL. In this study, we investigated whether radiomics features can improve outcome prediction in patients with PCNSL. A total of 80 patients diagnosed with PCNSL were enrolled. A patient sub-group with complete Magnetic Resonance Imaging (MRI) series was selected for the stratification analysis. Following radiomics feature extraction and selection, different Machine Learning (ML) models were tested for OS and Progression-free Survival (PFS) prediction. To assess the stability of the selected features, images from 23 patients scanned at three different time points were used to compute the Intraclass Correlation Coefficient (ICC) and to evaluate the reproducibility of each feature for both original and normalized images. Features extracted from Z-score normalized images were significantly more stable than those extracted from non-normalized images, with an improvement of about 38% on average (p-value < 10^-12). The area under the ROC curve (AUC) showed that radiomics-based prediction outperformed prediction based on current clinical prognostic factors, with an improvement of 23% for OS and 50% for PFS, respectively. These results indicate that radiomics features extracted from normalized MR images can improve prognosis stratification of PCNSL patients and pave the way for further study on their potential role in driving treatment choice.
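Z-score normalization rescales each scan by its own intensity statistics before feature extraction, which is what stabilizes features across time points. A minimal numpy sketch (restricting the statistics to a brain mask is an assumption of typical practice):

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Z-score normalisation: subtract the mean and divide by the standard
    deviation, optionally computed over a region mask only."""
    vox = volume[mask] if mask is not None else volume
    return (volume - vox.mean()) / vox.std()

scan = np.random.gamma(3.0, 40.0, size=(32, 32, 32))  # synthetic MR volume
normalized = zscore_normalize(scan)    # input for radiomics feature extraction
```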
Abstract:
AIMS: The long-term success rate of ablation therapy is still sub-optimal in patients with persistent atrial fibrillation (AF), mostly due to arrhythmia recurrence originating from arrhythmogenic sites outside the pulmonary veins. Computational modelling provides a framework to integrate and augment clinical data, potentially enabling the patient-specific identification of AF mechanisms and of the optimal ablation sites. We developed a technology to tailor ablations in anatomical and functional digital atrial twins of patients with persistent AF, aiming to identify the most successful ablation strategy. METHODS AND RESULTS: Twenty-nine patient-specific computational models integrating clinical information from tomographic imaging and electro-anatomical activation time and voltage maps were generated. Areas sustaining AF were identified by a personalized induction protocol at multiple locations. State-of-the-art anatomical and substrate ablation strategies were compared with our proposed Personalized Ablation Lines (PersonAL) plan, which consists of iteratively targeting emergent high dominant frequency (HDF) regions, to identify the optimal ablation strategy. Localized ablations were connected to the closest non-conductive barrier to prevent recurrence of AF or atrial tachycardia. The first application of the HDF strategy had a success rate of >98% and isolated only 5-6% of the left atrial myocardium. In contrast, conventional ablation strategies targeting anatomical or structural substrate resulted in isolation of up to 20% of the left atrial myocardium. After a second iteration of the HDF strategy, no further arrhythmia episode could be induced in any of the patient-specific models. CONCLUSION: The novel PersonAL in silico technology makes it possible to unveil all AF-perpetuating areas and to personalize ablation by leveraging atrial digital twins.
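The PersonAL plan targets regions of high dominant frequency, i.e., the spectral peak of local electrical activity. A SciPy sketch of a per-signal dominant-frequency estimate (the frequency band and Welch settings are assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(signal, fs, band=(3.0, 15.0)):
    """Peak of the Welch power spectrum within a physiological AF band."""
    f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 4 * int(fs)))
    sel = (f >= band[0]) & (f <= band[1])
    return f[sel][np.argmax(pxx[sel])]

fs = 500.0
t = np.arange(0.0, 10.0, 1.0 / fs)
egm = np.sin(2 * np.pi * 7.2 * t) + 0.3 * np.random.randn(t.size)
print(dominant_frequency(egm, fs))    # ~7.2 Hz
```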
Abstract:
The bidomain model and the finite element method are an established standard to mathematically describe cardiac electrophysiology, but both are suboptimal choices for fast and large-scale simulations due to high computational costs. We investigate to what extent simplified approaches for propagation models (monodomain, reaction-Eikonal and Eikonal) and forward calculation (boundary element and infinite volume conductor) deliver markedly accelerated, yet physiologically accurate simulation results in atrial electrophysiology. Methods: We compared action potential durations, local activation times (LATs), and electrocardiograms (ECGs) for sinus rhythm simulations on healthy and fibrotically infiltrated atrial models. Results: All simplified model solutions yielded LATs and P waves in accurate accordance with the bidomain results. Only for the Eikonal model with pre-computed action potential templates shifted in time to derive transmembrane voltages, repolarization behavior notably deviated from the bidomain results. ECGs calculated with the boundary element method were characterized by correlation coefficients >0.9 compared to the finite element method. The infinite volume conductor method led to lower correlation coefficients caused predominantly by systematic overestimations of P wave amplitudes in the precordial leads. Conclusion: Our results demonstrate that the Eikonal model yields accurate LATs and, combined with the boundary element method, precise ECGs compared to markedly more expensive full bidomain simulations. However, for an accurate representation of atrial repolarization dynamics, diffusion terms must be accounted for in simplified models. Significance: Simulations of atrial LATs and ECGs can be notably accelerated to clinically feasible time frames at high accuracy by resorting to the Eikonal and boundary element methods.
Abstract:
Background: Progressive atrial fibrotic remodeling has been reported to be associated with atrial cardiomyopathy (ACM) and the transition from paroxysmal to persistent atrial fibrillation (AF). We sought to identify the anatomical/structural and electrophysiological factors involved in atrial remodeling that promote AF persistency. Methods: Consecutive patients with paroxysmal (n = 134) or persistent (n = 136) AF who presented for their first AF ablation procedure were included. Patients underwent left atrial (LA) high-definition mapping (1,835 ± 421 sites/map) during sinus rhythm (SR) and were randomized to training and validation sets for model development and evaluation. A total of 62 parameters from both electro-anatomical mapping and non-invasive baseline data were extracted, encompassing four main categories: (1) LA size, (2) extent of low-voltage substrate (LVS), (3) LA voltages, and (4) bi-atrial conduction time as identified by the amplified P-wave duration (APWD) in a digital 12-lead ECG. Least absolute shrinkage and selection operator (LASSO) and logistic regression were performed to identify the factors that are most relevant to AF persistency in each category alone and in all categories combined. The performance of the developed models for diagnosis of AF persistency was validated regarding discrimination, calibration, and clinical usefulness. In addition, the HATCH score and the C2HEST score were also evaluated for their performance in identifying AF persistency. Results: In the training and validation sets, APWD (threshold 151 ms), LA volume (LAV, threshold 94 mL), bipolar LVS area < 1.0 mV (threshold 4.55 cm²) and LA global mean voltage (GMV, threshold 1.66 mV) were identified as the best determinants of AF persistency in their respective categories. Moreover, APWD (AUC 0.851 and 0.801) and LA volume (AUC 0.788 and 0.741) achieved better discrimination between AF types than LVS extent (AUC 0.783 and 0.682) and GMV (AUC 0.751 and 0.707). The integrated model (combining APWD and LAV) yielded the best discrimination performance between AF types (AUC 0.876 in the training set and 0.830 in the validation set). In contrast, the HATCH score and the C2HEST score only achieved AUC < 0.60 in identifying individuals with persistent AF in the current study. Conclusion: Among 62 electro-anatomical parameters, we identified APWD, LA volume, LVS extent, and mean LA voltage as the four determinant electrophysiological and structural factors that are most relevant for AF persistency. Notably, the combination of APWD with LA volume enabled discrimination between paroxysmal and persistent AF with high accuracy, emphasizing their importance as the underlying substrate of persistent AF.
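In this setting, LASSO-style selection can be realized as an L1-penalized logistic model. A scikit-learn sketch on synthetic stand-in data (62 features as in the study, but all values random and for illustration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(270, 62))                      # 62 parameters / patient
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=270) > 0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, stratify=y, random_state=1)

# The L1 penalty drives irrelevant coefficients to zero (LASSO-style
# selection inside the logistic model).
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", solver="liblinear",
                                       C=0.1))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))
print("selected features:", np.flatnonzero(clf[-1].coef_))
```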
Abstract:
The KCNQ1 gene encodes the α-subunit of the cardiac voltage-gated potassium (Kv) channel KCNQ1, also denoted as Kv7.1 or KvLQT1. The channel assembles with the β-subunit KCNE1, also known as minK, to generate the slowly activating cardiac delayed rectifier current IKs, a key regulator of the heart-rate-dependent adaptation of the cardiac action potential duration (APD). Loss-of-function variants in KCNQ1 cause the congenital Long QT1 (LQT1) syndrome, characterized by delayed cardiac repolarization and QT interval prolongation in the surface electrocardiogram (ECG). Autosomal dominant loss-of-function variants in KCNQ1 result in the LQT syndrome called Romano-Ward syndrome (RWS), while autosomal recessive variants affecting function lead to Jervell and Lange-Nielsen syndrome (JLNS), which is associated with deafness. The aim of this study was the characterization of novel KCNQ1 variants identified in patients with RWS, to widen the spectrum of known LQT1 variants and improve the interpretation of the clinical relevance of variants in the KCNQ1 gene. We functionally characterized nine human KCNQ1 variants using the voltage-clamp technique in Xenopus laevis oocytes, seven of which are reported here for the first time. The functional data were taken as input to model surface ECGs and subsequently compare the functional changes with the clinically observed QTc times, allowing further interpretation of the severity of the different LQTS variants. We found that the electrophysiological properties of the variants correlate with the severity of the clinically diagnosed phenotype in most, however not all, cases. Electrophysiological studies combined with in silico modelling approaches are valuable components for the interpretation of the pathogenicity of KCNQ1 variants, but assessing the clinical severity demands the consideration of other factors, such as those included in the Schwartz score.
Abstract:
Life-threatening cardiac arrhythmias require immediate defibrillation. For state-of-the-art shock treatments, a high field strength is required to achieve a sufficient success rate for terminating the complex spiral wave (rotor) dynamics underlying cardiac fibrillation. However, such high-energy shocks have many adverse side effects due to the large electric currents applied. In this study, we show, using 2D simulations based on the Fenton-Karma model, that pulses of relatively low energy may also terminate the chaotic activity if applied at the right moment in time. In our simplified model for defibrillation, complex spiral waves are terminated by local perturbations corresponding to conductance heterogeneities acting as virtual electrodes in the presence of an external electric field. We demonstrate that time series of the success rate for low-energy shocks exhibit pronounced peaks corresponding to short intervals in time during which perturbations aiming at terminating the chaotic fibrillation state are (much) more successful. Thus, the low-energy shock regime, although yielding very low temporal average success rates, exhibits moments in time for which success rates are significantly higher than the average value shown in dose-response curves. This feature might be exploited in future defibrillation protocols for achieving high termination success rates with low or medium pulse energies.
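The timing-dependence can be probed in any 2D excitable-medium simulation by applying a perturbation at a chosen step and checking whether activity dies out. A numpy sketch using the simpler Barkley model instead of the Fenton-Karma model of the study (a plainly labeled substitution); the "shock" crudely lifts the excitation variable inside scattered patches, standing in for virtual electrodes at conductance heterogeneities, and all parameters are common demo values:

```python
import numpy as np

a, b, eps, D = 0.75, 0.02, 0.02, 1.0       # Barkley model parameters
dx, dt, N = 0.5, 0.01, 128
u, v = np.zeros((N, N)), np.zeros((N, N))
u[:, : N // 2] = 1.0                       # cross-field spiral initiation
v[N // 2 :, :] = a / 2.0
patches = np.random.default_rng(0).random((N, N)) < 0.02   # "electrodes"

def lap(f):    # 5-point Laplacian, periodic boundaries for brevity
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

shock_step = 12000      # vary this to probe the timing dependence of success
for step in range(20000):
    if step == shock_step:
        u[patches] = np.maximum(u[patches], 0.9)   # timed low-energy pulse
    u += dt * (u * (1.0 - u) * (u - (v + b) / a) / eps + D * lap(u))
    v += dt * (u - v)
print("fraction of excited tissue:", (u > 0.5).mean())
```

Sweeping `shock_step` over many values and repeating with different patch realizations yields the success-rate time series whose peaks the study describes.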
Abstract:
Imaging distributions of radioactive sources plays a substantial role in nuclear medicine as well as in the monitoring of nuclear waste and its deposit. Coded Aperture Imaging (CAI) has been proposed as an alternative to parallel-hole or pinhole collimators but requires image reconstruction as an extra step. Multiple reconstruction methods with varying run time and computational complexity have been proposed, yet no quantitative comparison between the different reconstruction methods has been carried out so far. This paper presents a comparison based on three sets of hot-rod phantom images captured with an experimental γ-camera consisting of a Tungsten-based MURA mask and a 2 mm thick 256 × 256 pixelated CdTe semiconductor detector coupled to a Timepix© readout circuit. The analytical reconstruction methods MURA Decoding, Wiener Filter and a convolutional Maximum Likelihood Expectation Maximization (MLEM) algorithm were compared to data-driven Convolutional Encoder-Decoder (CED) approaches. The comparison is based on the contrast-to-noise ratio, which has previously been used to assess reconstruction quality. For the given set-up, MURA Decoding, the most commonly used CAI reconstruction method, provides robust reconstructions despite the assumption of a linear model. For single-image reconstruction, however, MLEM performed best of all analytical reconstruction methods but took on average 45 times longer than MURA Decoding. The fastest reconstruction method is the Wiener Filter, with a run time 4.3 times shorter than that of MURA Decoding, albeit with mediocre quality. The CED with a specifically tailored training set was able to surpass the most commonly used MURA Decoding by a factor between 1.37 and 2.60 on average, at an equal run time.
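The comparison metric is the contrast-to-noise ratio of the reconstructed hot-rod phantom. A small numpy sketch under one common CNR definition (ROI masks and the synthetic image are placeholders):

```python
import numpy as np

def cnr(recon, rod_mask, bg_mask):
    """Contrast-to-noise ratio: (mean(rods) - mean(background)) divided by
    the standard deviation of the background."""
    bg = recon[bg_mask]
    return (recon[rod_mask].mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, (128, 128))     # synthetic reconstruction
img[40:60, 40:60] += 8.0                    # one "hot rod"
rods = np.zeros_like(img, dtype=bool)
rods[40:60, 40:60] = True
print("CNR:", cnr(img, rods, ~rods))
```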
Abstract:
Artificial intelligence technology is trending in nearly every medical area. It offers the possibility of improving analytics, therapy outcome, and user experience during therapy. In dialysis, the application of artificial intelligence as a therapy-individualization tool is driven more by start-ups than consolidated players, and innovation in dialysis seems comparably stagnant. Factors such as technical requirements or regulatory processes are important and necessary but can slow down the implementation of artificial intelligence due to missing data infrastructure and undefined approval processes. Current research focuses mainly on analyzing health records or wearable technology to add to existing health data; it barely uses signal data from treatment devices to apply artificial intelligence models. This article therefore discusses requirements for signal processing through artificial intelligence in health care and compares these with the status quo in dialysis therapy. It offers solutions to the identified barriers to speed up innovation with sensor data, opening access to existing and untapped sources, and shows the unique advantage of signal processing in dialysis compared to other health care domains. This research shows that even though the combination of different data is vital for improving patients' therapy, adding signal-based treatment data from dialysis devices to the picture can benefit the understanding of treatment dynamics, improving and individualizing therapy.
Abstract:
Objective: Diagnosis of craniosynostosis using photogrammetric 3D surface scans is a promising radiation-free alternative to traditional computed tomography. We propose a 3D surface scan to 2D distance map conversion enabling the first convolutional neural network (CNN)-based classification of craniosynostosis. Benefits of using 2D images include preserving patient anonymity, enabling data augmentation during training, and a strong under-sampling of the 3D surface with good classification performance. Methods: The proposed distance maps sample 2D images from 3D surface scans using a coordinate transformation, ray casting, and distance extraction. We introduce a CNN-based classification pipeline and compare our classifier to alternative approaches on a dataset of 496 patients. We investigate low-resolution sampling, data augmentation, and attribution mapping. Results: ResNet18 outperformed alternative classifiers on our dataset with an F1-score of 0.964 and an accuracy of 98.4 %. Data augmentation on 2D distance maps increased performance for all classifiers. Under-sampling allowed a 256-fold computation reduction during ray casting while retaining an F1-score of 0.92. Attribution maps showed high amplitudes on the frontal head. Conclusion: We demonstrated a versatile mapping approach to extract a 2D distance map from the 3D head geometry, increasing classification performance, enabling data augmentation during training on 2D distance maps, and enabling the usage of CNNs. We found that low-resolution images were sufficient for a good classification performance. Significance: Photogrammetric surface scans are a suitable craniosynostosis diagnosis tool for clinical practice. Domain transfer to computed tomography seems likely and can further contribute to reducing ionizing radiation exposure for infants.
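The conversion idea is that rays leave a common origin on a spherical (azimuth × elevation) grid and the hit distance to the head surface is recorded per ray, yielding a 2D image. A trimesh sketch of this coordinate-transform / ray-casting / distance-extraction pattern (grid resolution, origin choice, and the spherical stand-in mesh are assumptions, not the paper's exact sampling):

```python
import numpy as np
import trimesh

def distance_map(mesh, center, n_az=128, n_el=64):
    """Cast rays from `center` over an azimuth x elevation grid and record
    the distance to the first surface intersection per ray."""
    az = np.linspace(-np.pi, np.pi, n_az, endpoint=False)
    el = np.linspace(0.05, np.pi / 2, n_el)          # upper hemisphere
    A, E = np.meshgrid(az, el)
    dirs = np.stack([np.cos(E) * np.cos(A),
                     np.cos(E) * np.sin(A),
                     np.sin(E)], axis=-1).reshape(-1, 3)
    origins = np.repeat(center[None, :], dirs.shape[0], axis=0)
    loc, ray_idx, _ = mesh.ray.intersects_location(origins, dirs,
                                                   multiple_hits=False)
    dmap = np.full(n_az * n_el, np.nan)
    dmap[ray_idx] = np.linalg.norm(loc - origins[ray_idx], axis=1)
    return dmap.reshape(n_el, n_az)

mesh = trimesh.creation.icosphere(subdivisions=4, radius=90.0)  # stand-in head
dm = distance_map(mesh, center=np.array([0.0, 0.0, 0.0]))
print(dm.shape, np.nanmean(dm))    # 2D distance map, ~90 mm for the sphere
```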
Abstract:
BACKGROUND: Electrical impedance measurements have become an accepted tool for monitoring intracardiac radio frequency ablation. Recently, the long-established generator impedance was joined by novel local impedance measurement capabilities with all electrical circuit terminals being accommodated within the catheter. OBJECTIVE: This work aims at the in silico quantification of distinct influencing factors that have remained challenging to assess due to the lack of ground truth knowledge and the superposition of effects in clinical settings. METHODS: We introduced a highly detailed in silico model of two local impedance enabled catheters, namely IntellaNav MiFi™ OI and IntellaNav Stablepoint™, embedded in a series of clinically relevant environments. Assigning material- and frequency-specific conductivities and subsequently calculating the spread of the electrical field with the finite element method yielded in silico local impedances. The in silico model was validated by comparison to in vitro measurements of standardized sodium chloride solutions. We then investigated the effect of the withdrawal of the catheter into the transseptal sheath, catheter-tissue interaction, insertion of the catheter into pulmonary veins, and catheter irrigation. RESULTS: All simulated setups were in line with in vitro experiments and in-human measurements and gave detailed insight into the determinants of local impedance changes as well as the relation between values measured with the two different devices. CONCLUSION: The in silico environment proved to be capable of resembling clinical scenarios and quantifying local impedance changes. SIGNIFICANCE: The tool can assist the interpretation of measurements in humans and has the potential to support future catheter development.
Abstract:
The benefits of using data in health care are by now so obvious that it would be negligent not to realize them. This IMPULS paper is intended to provide an impetus for the secure and sovereign use of health data. To this end, it outlines opportunities, obstacles, points of discussion, and fields of action, and relates them to current legislative initiatives in this area. The paper is primarily addressed to political decision-makers and aims to show ways in which this treasure trove of data can be unlocked for the benefit of patients.

Building on an assessment of today's health care system and an analysis of the existing hurdles and obstacles, we identified fields of action in which the responsible stakeholders must become active:

Consent to data sharing is the decisive foundation for data use. In a system as complex as health care, a binary either-or decision is not sufficient; a graduated, differentiated consent procedure is needed to enable every individual to handle his or her health data in a sovereign manner. Options exist today for designing fine-grained consent to data use such that it can be given relatively quickly and in a well-informed manner, for example via mobile phone.

Only data of sufficient quality are usable both in medical care and in research and development. Uniform standards and formats are therefore urgently needed.

All public and private actors that collect health data should participate in shared health data spaces by providing their data. In addition to publishing data as widely as possible, clear rules are needed to protect intellectual property and thus the competitiveness of those involved. Moreover, besides state research institutions such as university medical centers, companies from the pharmaceutical and medical technology industries must also be granted access to the data so that the results of research actually reach the millions of patients.

For the sake of security, data sharing should take place in anonymized and aggregated form wherever possible; at the same time, given the potential medical added value, the use of pseudonymized and personalized data should also be possible under certain circumstances. Institutions and companies that make health data available for general use should in turn receive better access to such data. The publication of data-based research results should be the rule.

Regarding infrastructure and data security, care must be taken that data acquisition, data provision, and data release are strictly separated, i.e., located in different institutions, in order to prevent data misuse as far as possible. A faster, robust, and secure infrastructure for health data is a basic prerequisite for this; all stakeholders must be consistently involved in its development, also with a view to good user interfaces.

Data use should follow the principles of value-based healthcare and additionally focus on preventive services and the expansion of telemedicine. This requires new metrics for the comprehensive assessment of health and for the integration of new services into care.

Digital health literacy must improve through education and training at all levels, from patients through physicians and nursing staff to the press and other media. We urgently need more excellently trained IT experts for the health sector, for example medical data scientists.

Public opinion-forming on the use of health data should consider not only legitimate data protection concerns but also the benefits, and should stimulate a public discourse on data protection options and the added value of data use.

To foster innovation based on data use, uniform framework conditions at the national and European level are needed to create legal certainty. At the same time, data-driven approaches and new diagnostic and therapeutic options, for example through AI, must be considered on an equal footing with classical methods in regulatory approval.

Through automation and personalization, digitalization and data use enable a sustainable and future-proof health care system that puts patient well-being at the center and takes a holistic view of health.
Abstract:
UNLABELLED: Cases of vaccine breakthrough, especially infections with variants of concern (VOCs), are emerging in coronavirus disease (COVID-19). Due to mutations of structural proteins (SPs) (e.g., Spike proteins), increased transmissibility and an increased risk of escape from vaccine-induced immunity have been reported for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Remdesivir was the first drug to be granted emergency use authorization but showed little impact on survival in patients with severe COVID-19. Remdesivir is a prodrug of the nucleoside analogue GS-441524, which is converted into the active nucleotide triphosphate that targets the conserved non-structural proteins (NSPs) and thus blocks viral replication. GS-441524 exerts a number of pharmacological advantages over Remdesivir: (1) it needs fewer conversions for bioactivation to the nucleotide triphosphate; (2) it requires only a nucleoside kinase for bioactivation, while Remdesivir requires several hepato-renal enzymes; (3) it is a smaller molecule with potential for aerosol and oral administration; (4) it is less toxic, allowing higher pulmonary concentrations; (5) it is easier to synthesize. The current article focuses on the interactions between GS-441524 and the NSPs of VOCs and suggests the potential application of GS-441524 in breakthrough SARS-CoV-2 infections. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s44231-022-00021-4.
Abstract:
Atrial fibrillation (AF) is the most common sustained arrhythmia, posing a significant burden to patients and leading to an increased risk of stroke and heart failure. Additional ablation of areas of arrhythmogenic substrate in the atrial body detected by either late gadolinium enhancement magnetic resonance imaging (LGE-MRI) or electroanatomical mapping (EAM) may increase the success rate of restoring and maintaining sinus rhythm compared to the standard treatment procedure of pulmonary vein isolation (PVI). To evaluate if LGE-MRI and EAM identify equivalent substrate as potential ablation targets, we divided the left atrium (LA) into six clinically important regions in ten patients. Then, we computed the correlation between both modalities by analyzing the regional extents of identified pathological tissue. In this regional analysis, we observed no correlation between late gadolinium enhancement (LGE) and low voltage areas (LVA), neither in any region nor with regard to the entire atrial surface ($-0.3 < r < 0.3$). Instead, the regional extents identified as pathological tissue varied significantly between both modalities. An increased extent of LVA compared to LGE was observed in the septal wall of the LA ($\tilde{a}_{\text{sept.,LVA}} = 19.63\,\%$ and $\tilde{a}_{\text{sept.,LGE}} = 3.94\,\%$, with $\tilde{a}$ denoting the median extent of pathological tissue in the corresponding region). In contrast, in the inferior and lateral wall, the extent of LGE was higher than the extent of LVA for most geometries ($\tilde{a}_{\text{inf.,LGE}} = 27.22\,\%$ and $\tilde{a}_{\text{lat.,LGE}} = 32.70\,\%$ compared to $\tilde{a}_{\text{inf.,LVA}} = 9.21\,\%$ and $\tilde{a}_{\text{lat.,LVA}} = 6.69\,\%$). Since both modalities provided discrepant results regarding the detection of arrhythmogenic substrate using clinically established thresholds, further investigations regarding their constraints need to be performed in order to use these modalities for patient stratification and treatment planning.
Abstract:
Introduction: Although the effective refractory period (ERP) is one of the main electrophysiological properties governing atrial tachycardia (AT) maintenance, ERP personalization is rarely performed when creating patient-specific computer models of the atria to inform clinical decision-making. State-of-the-art models usually do not consider physiological ERP gradients but assume a homogeneous ERP distribution. This assumption might have an influence on the ability to induce reentries in the model. Aim: To evaluate the impact of incorporating clinical ERP measurements when creating in silico personalized models to predict vulnerability to atrial fibrillation (AF). Methods: Clinical ERP measurements were obtained from three patients at multiple locations in the atria. The protocol for ERP identification consisted of trains of 7 S1 stimuli with a basic cycle length of 500 ms followed by an S2 stimulus with a coupling interval between 300 and 200 ms in decrements of 10 ms until loss of capture. The atrial geometries from the electroanatomical mapping system were used to generate personalized atrial models. To reproduce patient-specific ERP, the established Courtemanche cellular model was gradually reparameterized from control conditions to a setup representing AF-induced remodeling. Three different approaches were studied: 1) a control scenario with no ERP personalization, 2) a discrete split where each region had a single ERP value, and 3) a continuous ERP distribution by interpolation of measured ERP data (Fig. 1). Arrhythmia vulnerability was assessed by virtual S1S2 pacing from different locations separated by 3 cm. The number and location of inducing points and the type of arrhythmia were determined for the three approaches. The mean conduction velocity was set to 0.7 m/s, and the electrical propagation in the atria was modeled by the monodomain equation and solved with openCARP. Results: Incorporating patient-specific ERP as a continuous distribution did not induce any reentrant activity. A summary of induced ATs is shown in Table 1. For patient A, AF was induced from 3 different locations with the control setup, whereas 9 ATs were induced with the regional method, of which 4 were AF and 5 macro reentries. For patient B, AF was induced from 1 point with the control setup, whereas with the regional approach, AF was induced at 4 points. For patient C, only one macro reentry was induced with the regional method. Conclusion: The incorporation of patient-specific ERP values has an impact on the assessment of AF vulnerability. Furthermore, the type of personalization affects the likelihood of AF inducibility. The incorporation of more detailed ERP distributions may lead to a more accurate prediction of AF trigger points and could in the future inform patient-specific therapy planning. Larger cohorts need to follow to demonstrate the role of incorporating clinical patient-specific ERP values into personalized models for predicting AF vulnerability.
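For readers unfamiliar with S1S2 pacing, the following minimal Python sketch reduces the protocol described above to its control flow: 7 S1 stimuli at a basic cycle length of 500 ms, then an S2 whose coupling interval is decremented from 300 ms to 200 ms in 10 ms steps until loss of capture. The capture test here is a toy stand-in with a hypothetical fixed refractory period, not the monodomain simulation used in the study.

```python
def tissue_captures(coupling_ms: int, true_erp_ms: int = 240) -> bool:
    """Toy capture rule: the S2 captures only if it falls outside a
    (hypothetical) refractory period of the paced tissue."""
    return coupling_ms > true_erp_ms

def measure_erp(bcl_ms=500, n_s1=7, s2_start=300, s2_stop=200, step=10):
    s1_times = [i * bcl_ms for i in range(n_s1)]  # S1 train (unused by the toy rule)
    for coupling in range(s2_start, s2_stop - step, -step):
        if not tissue_captures(coupling):
            return coupling  # longest coupling interval without capture
    return None  # still captured at s2_stop: ERP below the tested range

print(measure_erp())  # -> 240 with the toy rule above
```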
Abstract:
We present in silico experiments investigating the potential relationship between atrial arrhythmias in patients with systemic lupus erythematosus (SLE) and the combined effects of structural and electrical remodeling due to chronic inflammation. The study utilized a computational model to simulate the structural and electrical changes in atrial tissue caused by chronic inflammation, with the ultimate goal of shedding light on the mechanisms underlying the development of atrial arrhythmias in SLE patients. The experimental results indicate that electrical remodeling associated with SLE can alter the depolarization pattern and facilitate the emergence of reentry patterns that could initiate arrhythmias. Mild inflammation was found to be insufficient to trigger arrhythmias, while severe inflammation could induce arrhythmias that were not sustained but exhibited a repetitive pattern. This pattern exhibited a 2:1 block of the left atrium. These findings provide important insights into the mechanisms underlying the development of atrial arrhythmias in SLE patients and suggest that inflammation-induced structural and electrical remodeling may contribute to this condition. The study offers a valuable starting point for further investigating the complex relationship between SLE, chronic inflammation, and atrial arrhythmias. Furthermore, in the future, this could contribute to the development of new therapeutic strategies for this condition.
Abstract:
Regions with pathologically altered substrate have been identified as potentially proarrhythmic for atrial fibrillation. Mapping techniques, such as voltage mapping, are currently used to estimate the location of these fibrotic areas. Recently, local impedance (LI) has gained attention as another modality for atrial substrate assessment, as it does not rely on the dynamically changing electrical activity of the heart. However, its limits for assessing non-transmural and complex fibrosis patterns have not yet been studied in detail. In this work, the ability of EGMs and LI to identify non-transmural fibrosis at different transmural levels is explored using in silico experiments. A pseudo-bidomain model was used to recover the extracellular potential on the surface of the tissue, while the LI reconstruction was calculated by a time-difference imaging approach with a homogeneous tissue background conductivity. Four fibrosis configurations were modeled to compare the two modalities using the Pearson correlation coefficient. Only the transmural structure was detected by voltage mapping, whereas the non-transmural structures, namely the endo-, midmyo-, and epicardial ones, yielded a correlation of zero. The correlation for LI maps ranged from -0.02 to 0.74. We conclude that LI, together with EGMs, can be expected to distinguish between healthy and fibrotic tissue, paving the way towards its use as a surrogate for non-transmural atrial substrate.
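The quantitative comparison above reduces to a Pearson correlation between two surface maps sampled at the same points. A minimal sketch, assuming each map is given as a 1D array of per-point values (array names are illustrative):

```python
import numpy as np

def pearson_r(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Pearson correlation between two maps with identical point ordering.
    Assumes both maps have nonzero variance."""
    a, b = map_a.ravel(), map_b.ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# usage: r = pearson_r(li_map, fibrosis_map)
```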
Abstract:
Conduction velocity (CV) in cardiac tissue is a crucial electrophysiological parameter for arrhythmia vulnerability. Pathologically reduced conduction velocity facilitates arrhythmogenesis because such conduction velocities decrease the wavelength with which re-entry may occur. Computational studies on CV and how it changes regionally exist for models at spatial scales multiple times larger than actual cardiac cells. However, microscopic conduction within and between cells has been studied less in simulations. In this work, we study the relation between microscopic conduction patterns and clinically observable macroscopic conduction using an extracellular-membrane-intracellular model which represents cardiac tissue with these subdomains at subcellular resolution. By considering cell arrangement and non-uniform gap junction distribution, it yields anisotropic excitation propagation. This novel kind of model can, for example, be used to understand how discontinuous conduction on the microscopic level affects fractionation of electrograms in healthy and fibrotic tissue. Along the membrane of a cell, we observed a continuously propagating activation wavefront. When transitioning from one cell to the neighbouring one, jumps in local activation times occurred, which led to lower global conduction velocities than locally within each cell.
Abstract:
The restraining effect of the pericardium and surrounding tissues on the human heart is essential to reproduce physiological valve plane movement in simulations and can be modeled in different ways. In this study, we investigate five different approaches used in recent publications and apply them to the same whole heart geometry. Some approaches use Robin boundary conditions, others use a volumetric representation of the pericardium and solve a contact problem. These two strategies are combined with a smooth spatially varying scaling or a region-wise partitioning of the epicardial surface. In general, all simulations follow the same morphology regarding mitral valve displacement, tricuspid valve displacement and left ventricular twist. We show that, with the parameters used in the original papers, Robin boundary conditions are computationally more expensive and lead to smaller stroke volumes and less ventricular twist. Unrelated to this, simulations with a penalty scaling result in a less pronounced displacement of the tricuspid valve. In one of the investigated scenarios, adipose tissue is modeled using a volumetric mesh and the Robin boundary conditions are applied on its outer surface. We conclude that this approach leads to results similar to a partitioning of the epicardial surface into two regions with different penalty parameters, and therefore a volumetric representation of the adipose tissue is neither necessary nor practical.
Abstract:
Lumped parameter models of the human circulatory system are able to reproduce major features and phases of human circulation. However, they often lack physiological detail regarding pressure and flow across the valves. To alleviate these shortcomings, we implement a model of heart valve dynamics based on Bernoulli's principle to account for the transvalvular pressure drop and extend it by smooth opening and closing of the valves. We evaluate the new model based on a simulation with healthy valves and explore the possibility of simulating heart valve diseases by considering a case of severe aortic stenosis. The model more faithfully reproduces pressure, volume, and flow in all four chambers and in particular across the valves. Most of the changes are related to the consideration of blood inertia. However, only by opening and closing the valves more slowly is it possible to reproduce features connected to backflow. When reducing the maximum area ratio of the aortic valve to 10%, a pressure gradient of 77.2 mmHg during systole and a 20% reduction in stroke volume were observed, in accordance with the AHA guidelines for severe aortic stenosis. To conclude, we were able to improve our existing 0D circulation model in terms of physiological accuracy by replacing the diode-like valves with an easy to implement model of heart valve dynamics that is capable of simulating both healthy and pathological scenarios.
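A valve model consistent with this description can be written as a Bernoulli pressure drop plus an inertance term, with a valve state smoothly interpolating between closed and open. The following LaTeX sketch uses common notation for such models; the symbols (resistance-like coefficient $B$, inertance $L$, rate constants $K_{vo}$, $K_{vc}$) are illustrative and not taken from the paper:

```latex
\Delta p = B\,q\lvert q\rvert + L\,\frac{\mathrm{d}q}{\mathrm{d}t},
\qquad
\frac{\mathrm{d}\xi}{\mathrm{d}t} =
\begin{cases}
(1-\xi)\,K_{vo}\,\Delta p, & \Delta p > 0 \quad \text{(opening)}\\[2pt]
\xi\,K_{vc}\,\Delta p, & \Delta p \le 0 \quad \text{(closing)}
\end{cases}
```

Here $q$ is the transvalvular flow and $\xi \in [0,1]$ the valve state; $B$ and $L$ depend on the effective orifice area, which scales with $\xi$, so that a slower change of $\xi$ reproduces the gradual opening and closing referred to above.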
Abstract:
Persistent atrial fibrillation (AF) patients show a 50% recurrence after pulmonary vein isolation (PVI), and no consensus is established for the subsequent treatment. The aim of our i-STRATIFICATION study is to provide evidence for the optimal stratification of recurrent AF patients to pharmacological versus ablation therapy, through in silico trials in 800 virtual atria. The cohort presents variability in anatomy, electrophysiology, and tissue structure (low voltage areas, LVA), and is developed and validated against experimental and clinical data, from ionic currents to the ECG. AF maintenance is evaluated prior to and post PVI, and atria with sustained arrhythmia after PVI are independently subjected to seven state-of-the-art treatments for AF. The results of the i-STRATIFICATION study show that the right and left atrial volumes dictate the success of ablation therapy in structurally healthy atria. On the other hand, LVA ablation, both in the right and left atrium, is required for atria presenting LVA remodelling and short refractoriness. This atrial refractoriness, mainly modulated by the L-type Ca2+ current, ICaL, and the fast Na+ current, INa, determines the success of pharmacological therapy. Therefore, our study suggests the assessment of optimal treatment selection using the above-mentioned patient characteristics. This provides digital evidence to integrate human in silico trials into clinical practice.
Abstract:
Aims: The purpose of this study is to assess the effects of autonomic modulation and hypocalcemia on the pacemaking rate in a human sinoatrial node (SAN) cell model. The clinical relevance is to bring a better understanding of the increased risk of sudden cardiac death in chronic kidney disease patients who regularly undergo hemodialysis. Methods: The Fabbri et al. (2017) SAN model was used to compute the graded response to isoprenaline concentrations ([ISO]) between 0 and $1.5\ \mu\mathrm{M}$ with extracellular calcium concentrations ($[\text{Ca}^{2+}]_{\mathrm{o}}$) in the range from 1.2 to 2.2 mM. The pacing capacity of the model was evaluated by assessing the pacing rate in beats per minute (BPM). Results: Low $[\text{Ca}^{2+}]_{\mathrm{o}}$ led to a decreased pacing rate: at $[\text{Ca}^{2+}]_{\mathrm{o}} = 1.4$ mM, the rate without extra autonomic stimulation was only 50 BPM compared to the 74 BPM at the default $[\text{Ca}^{2+}]_{\mathrm{o}} = 1.8$ mM. This effect was counteracted by autonomic modulation. The [ISO] necessary to restore the baseline pacing rate was $0.5\ \mu\mathrm{M}$ and $1\ \mu\mathrm{M}$ when $[\text{Ca}^{2+}]_{\mathrm{o}}$ was reduced to 1.6 mM and 1.4 mM, respectively. Conclusions: Isoprenaline stimulation can preserve the pacing capacity during hypocalcemia. However, extremely high [ISO] may lead to saturation and a non-linear response, which the current model does not take into account.
Abstract:
Timely and accurate diagnosis of severe Aortic Stenosis (AS) is crucial to prevent severe clinical implications. The most commonly used parameter for diagnostic purposes is the mean transvalvular pressure gradient, measured by echocardiography (≥ 40 mmHg). However, its use for detecting severe AS has several limitations, including technical, pathophysiological, and clinical reasons. This study aimed to develop a Deep Learning (DL) model for identifying severe AS using Color Doppler echocardiography video data. The DL model used is ViViT (Video Vision Transformer). To limit the overfitting problem, a data augmentation technique was applied during the training phase. The model achieved an accuracy of 87% in classifying patients with severe AS compared to healthy subjects in the test group. Future efforts will focus on enhancing model accuracy, increasing the initial dataset, and refining the classification process by implementing a multi-class classification of AS with varying degrees of severity.
Abstract:
This work investigates the radiation-free classification of craniosynostosis with an additional focus on including data augmentation and using synthetic data as a replacement for clinical data. Motivation: Craniosynostosis is a condition affecting infants and leads to head deformities. Diagnosis using radiation-free 3D surface scans is a promising alternative to traditional computed tomography (CT) imaging. Clinical data are only sparsely available due to the low prevalence and difficulties in anonymization. This work addresses these challenges by proposing new classification algorithms for craniosynostosis, by creating synthetic data for the scientific community, and by demonstrating that it is possible to fully replace clinical data with synthetic data without losing classification performance. Methods: A statistical shape model (SSM) of craniosynostosis patients is created and made publicly available. A 3D-2D conversion from the 3D mesh geometry to a 2D image is proposed which enables the usage of convolutional neural networks (CNNs) and data augmentation in the image domain. Three classification approaches (based on cephalometric measurements, based on an SSM, and based on the 2D images using a CNN) to distinguish between three types of craniosynostosis and a control group are proposed and evaluated. Finally, the clinical training data are fully replaced with synthetic data from an SSM and a generative adversarial network (GAN). Results: The proposed CNN classification outperformed competing approaches on a clinical dataset of 496 subjects and achieved an F1-score of 0.964. Data augmentation increased the F1-score to 0.975. Attribution maps of the classification decision showed high amplitudes on parts of the head associated with craniosynostosis. Replacing the clinical data with synthetic data created by an SSM and a GAN still yielded an F1-score of more than 0.95 without the model having seen a single clinical subject. Conclusion: The proposed conversion of 3D geometry to a 2D encoded image improved performance compared to existing classifiers and enabled data augmentation during training. Using an SSM and a GAN, clinical training data could be replaced with synthetic data. This work improves existing diagnostic approaches on radiation-free recordings and demonstrates the usability of synthetic data, which makes clinical applications more objective, interpretable, and less expensive.
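A hedged sketch of the training regime described above (not the thesis code): a CNN is trained purely on synthetic 2D-encoded images sampled from an SSM and a GAN, while model selection uses clinical validation data only. The dataset objects and the model are assumed to be provided elsewhere.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, ConcatDataset

def train_synthetic_only(model, ssm_ds, gan_ds, clinical_val_ds, epochs=50):
    # Training data: synthetic only (SSM + GAN samples).
    train_loader = DataLoader(ConcatDataset([ssm_ds, gan_ds]),
                              batch_size=32, shuffle=True)
    val_loader = DataLoader(clinical_val_ds, batch_size=32)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    best_acc, best_state = 0.0, None
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:          # x: 2D-encoded head image, y: class
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():              # clinical data used for validation only
            correct = sum((model(x).argmax(1) == y).sum().item()
                          for x, y in val_loader)
        acc = correct / len(clinical_val_ds)
        if acc > best_acc:
            best_acc, best_state = acc, model.state_dict()
    return best_acc, best_state
```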
Abstract:
The human heart is the highly complex core part of the cardiovascular system, permanently, reliably and autonomously keeping up the blood flow. In computational models, the cardiac function is reproduced in silico to conduct simulation studies that deliver deeper insights into underlying phenomena or investigate parameters of interest under perfectly controlled conditions. In light of cardiovascular diseases being the number one cause of death in Western countries, contributing to early stage diagnosis is of high clinical relevance. In this context, computational fluid dynamics simulations can deliver valuable insights into blood flow dynamics and thus offer the opportunity to investigate a central area of physics of this multi-physics organ. As the deformation of the endocardial surface triggers the blood flow, the effects of elastomechanics have to be considered as boundary conditions for such fluid simulations. However, to be relevant in the clinical context, a balance between computational effort and necessary accuracy has to be met, and models need to be both robust and reliable. Thus, in this thesis, the opportunities and challenges of light and therefore less complex coupling strategies are assessed, focusing on three key aspects: First, a fluid dynamics solver based on the immersed boundary approach is implemented, as this method is attractive for its highly robust representation of moving meshes. Basic functionality was verified for various simplified geometries and showed high accordance with analytical solutions. Comparing the 3D simulation of a realistic left heart geometry to a body-fitted mesh approach, basic global quantities were reproduced. However, variations in the boundary conditions revealed a high influence on the simulation results. Second, the application of the solver to simulate the influence of pathologies on the blood flow patterns showed results in good accordance with literature values. In mitral valve regurgitation simulations, the regurgitation jet was visualized by a particle tracking method. In hypertrophic cardiomyopathy, flow patterns in the left ventricle were assessed by a passive scalar transport to visualize the local concentration of the initial blood volume. Third, as the aforementioned studies considered only a unidirectional flow of information from the elastomechanical model to the fluid solver, the retrograde effect of the spatially resolved pressure field resulting from fluid dynamics simulations on the elastomechanics is quantified. A sequential coupling approach is introduced to consider fluid dynamics influences in a cycle-to-cycle coupling structure. The low deviations of 2 mm in the mechanical solver vanished already after one iteration, indicating that the retrograde effect of fluid dynamics in the healthy heart is limited. Summing up, boundary conditions have to be set with caution, especially in fluid dynamics simulations, as their high influence increases the vulnerability of the models. Nevertheless, light coupling strategies showed promising results in reproducing global fluid dynamic quantities while the dependence between the solvers is reduced and computational effort is saved.
Abstract:
An early detection and diagnosis of atrial fibrillation sets the course for timely intervention to prevent potentially occurring comorbidities. Electrocardiogram data resulting from electrophysiological cohort modeling and simulation can be a valuable data resource for improving automated atrial fibrillation risk stratification with machine learning techniques and thus, reduces the risk of stroke in affected patients.
Abstract:
With the development of deep learning techniques, deep neural network (NN)-based methods have become the standard for vision tasks such as tracking human motion and pose estimation, recognizing human activity, and recognizing faces. Deep learning techniques have improved the design, implementation, and deployment of complex and diverse applications, which are now being used in a wide variety of fields, including biomedical engineering. The application of computer vision techniques to medical image and video analysis has resulted in remarkable results in recognizing events. The inbuilt capability of convolutional neural networks (CNNs) in extracting features from complex medical images, coupled with the long short-term memory network's (LSTM's) ability to maintain the temporal information among events, has created many new horizons for medical research. Gait is one of the critical physiological areas that can reflect many disorders associated with aging and neurodegeneration. A comprehensive and accurate gait analysis can provide insights into human physiological conditions. Existing gait analysis techniques require a dedicated environment, complex medical equipment, and trained staff to collect the gait data. In the case of wearable systems, such a system can alter cognitive abilities and cause discomfort for patients. Additionally, it has been reported that patients usually try to perform better during the laboratory gait test, which may not represent their actual gait. Despite technological advances, we continue to encounter limitations when it comes to measuring human walking in clinical and laboratory settings. Using current gait analysis techniques remains expensive and time-consuming and makes it difficult to access specialized equipment and expertise. Therefore, it is imperative to have methods that could give long-term data about the patient's health without any dual cognitive tasks or discomfort from wearable sensors. Hence, this thesis proposes a simple, easy-to-deploy, inexpensive method for gait data collection. This method is based on recording walking videos using a smartphone camera in a home environment under free conditions. A deep NN then processes those videos to extract the gait events after classifying the positions of the feet. The detected events are then further used to quantify various spatiotemporal parameters of the gait, which are important for any gait analysis system. In this thesis, walking videos were used that were captured by a low-resolution smartphone camera outside the laboratory environment. Several deep learning-based NNs were implemented to detect basic gait events like the foot position with respect to the ground from those videos. In the first study, the architecture of AlexNet was used to train the model from scratch using walking videos and publicly available datasets. An overall accuracy of 74% was achieved with this model. In the next step, an LSTM layer was included in the same architecture. The inbuilt capability of LSTM regarding temporal information resulted in improved prediction of the labels for foot position, and an accuracy of 91% was achieved. However, difficulties remained in predicting true labels at the last stage of the swing and the stance phase of each foot. In the next step, transfer learning was used to benefit from already trained deep NNs by using pre-trained weights. Two well-known models, inceptionresnetv2 (IRNV-2) and densenet201 (DN-201), were used with their learned weights for re-training the NN on new data. Transfer-learning-based pre-trained NNs improved the prediction of labels for the different foot positions. In particular, they reduced the variations in the predictions in the last stage of the gait swing and stance phases. An accuracy of 94% was achieved in predicting the class labels of the test data. Since the variation in predicting the true label was primarily one frame, it could be ignored at a frame rate of 30 frames per second. The predicted labels were used to extract various spatiotemporal parameters of the gait, which are critical for any gait analysis system. A total of 12 gait parameters were quantified and compared with the ground truth obtained by observational methods. The NN-based spatiotemporal parameters showed a high correlation with the ground truth, and in some cases, a very high correlation was obtained. The results proved the usefulness of the proposed method. The parameter's value over time resulted in a time series, a long-term gait representation. This time series can be further analyzed using various mathematical methods. As the third contribution of this dissertation, improvements were proposed to existing mathematical methods for the time series analysis of temporal gait data. For this purpose, two refinements are suggested to existing entropy-based methods for stride interval time series analysis. These refinements were validated on stride interval time series data of normal and neurodegenerative disease conditions downloaded from the publicly available databank PhysioNet. The results showed that our proposed method achieved a clear degree of separation between healthy and diseased groups. In the future, advanced medical support systems that utilize artificial intelligence, derived from the methods introduced here, could assist physicians in diagnosing and monitoring patients' gaits on a long-term basis, thus reducing clinical workload and improving patient safety.
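The CNN+LSTM idea described above can be sketched compactly in PyTorch: per-frame features from a pretrained CNN backbone feed an LSTM so that temporal context informs each frame's foot-position label. The thesis used AlexNet and transfer-learned IRNV-2/DN-201; the resnet18 backbone and the class count below are compact stand-ins, not the thesis implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class GaitCNNLSTM(nn.Module):
    def __init__(self, n_classes=4, hidden=256):  # n_classes is illustrative
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Drop the final FC layer; keep the 512-d pooled feature extractor.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                    # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        f = self.features(clips.flatten(0, 1)).flatten(1)   # (B*T, 512)
        out, _ = self.lstm(f.view(b, t, -1))     # (B, T, hidden)
        return self.head(out)                    # per-frame class logits
```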
Abstract:
Chronic diseases constitute a significant global health challenge, accounting for 74% of all deaths annually. Managing these conditions involves continuous medication use, and prolonged medication regimens introduce immunosuppressive risks, particularly among chronic disease patients, heightening susceptibility to infections. Continuous temperature monitoring emerges as a critical solution, as elevated body temperature is a primary indicator of infection. Additionally, measuring core temperature provides better precision compared to other methods. However, the current ingestible devices on the market are designed only for short-term monitoring. This thesis addresses this critical gap by proposing an innovative ingestible temperature monitoring system that integrates the stackable magnetic gastric resident system (GRS) platform developed by MIT. This system efficiently manages battery usage, enabling prolonged temperature measurements. It incorporates a carefully selected temperature sensor, Bluetooth Low Energy (BLE) communication, and a meticulously engineered flexible PCB to ensure functionality within an ingestible container. The accompanying software optimizes energy efficiency and facilitates constant temperature transmission. Finally, the receiver software for the computer provides an interface for data visualization. In the conducted tests, the device's current consumption was measured, and a theoretical operating time of 48 days of constant transmission was calculated. However, in real-world long-term measurements, the device operated for approximately 19 days, still exceeding the performance of comparable devices on the market. Accuracy tests yielded a promising result of ±0.84 °C for our device. Moreover, in-vivo tests conducted at the Massachusetts Institute of Technology's (MIT) animal facility with ethical approval, utilizing a porcine model, confirmed that the device can effectively transmit signals from inside the body to the exterior. The received values demonstrated coherence with the expected temperature values from the swine's stomach. This approach positions our system as a promising candidate for continuous temperature monitoring in immunosuppressed patients, aiding in the early detection of potential infections.
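The theoretical lifetime estimate mentioned above follows the usual arithmetic of battery capacity divided by average current draw. The numbers below are hypothetical placeholders chosen only to illustrate how a figure like 48 days arises; the thesis does not report these specific values.

```python
# lifetime = capacity / average current (placeholder values, not from the thesis)
capacity_mah = 115.0      # hypothetical coin-cell capacity in mAh
avg_current_ma = 0.10     # hypothetical average draw with duty-cycled BLE in mA
lifetime_days = capacity_mah / avg_current_ma / 24.0
print(f"{lifetime_days:.0f} days")  # ~48 days with these placeholder values
```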
Abstract:
The recent advancements in laparoscopic liver surgery, driven by benefits such as reduced pain, shorter hospitalizations, and improved cosmetic results, have heightened the demand for support systems for this challenging type of surgery. In this context, intraoperative monocular depth estimation plays a crucial role, offering valuable insights for surgery support. Current techniques often rely on artificial neural networks trained through unsupervised or semi-supervised methods, primarily due to the absence of clinical data sets with ground truth. This study addresses this gap by utilizing synthetic data sets and employing a supervised approach to investigate the feasibility of exclusive training on synthetic data for subsequent use on clinical data. The methodology employs a U-Net architecture and divides the approach into synthetic, uncertainty, and clinical components. The synthetic methods focus on enhancing model performance and exploring possibilities and limitations through common deep learning practices. Hyperparameter tuning and augmentation result in high accuracy: 77.15% for δ1, 96.22% for δ2, 99.19% for δ3, with an RMSE of 9.98 mm. Generalization over color is achieved, but challenges persist in generalizing over depth ranges. The uncertainty methods illustrate the advantages and disadvantages of uncertainty estimation, particularly with the implementation of Monte Carlo dropout, which, while reducing the metrics, enables valuable analysis of clinical images. The clinical method highlights differences between scenes close to the synthetic data set and unseen scenes, showcasing increased uncertainty over the latter. Investigation into the impact of different colors in the clinical data set reveals effects primarily on the uncertainty parts of the image. Consistency analysis indicates that while images remain consistent in form, there are slight variations in depth. Time consistency analysis emphasizes that changes are most notable in the uncertainty part, while the rest remains stable. In summary, this study highlights the potential of training an artificial neural network on synthetic data for accurate depth estimation in laparoscopic liver surgery. The selected architecture exhibits strong generalization capabilities but faces challenges in handling diverse depths. The integration of uncertainty with Monte Carlo dropout proves valuable, though occasional deviations in the clinical evaluation highlight the importance of quantitative assessments. Despite the challenges, training on synthetic data holds promise for clinical applications, with identified avenues for improvement.
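The δ1, δ2, δ3 accuracies and the RMSE reported above are standard monocular-depth metrics: δk is the fraction of pixels whose depth ratio max(d/d*, d*/d) stays below 1.25^k. A minimal sketch, assuming valid (nonzero) depths in a common unit such as mm:

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray):
    """Threshold accuracies delta_1..3 and RMSE; pred/gt: positive depth maps."""
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [float(np.mean(ratio < 1.25 ** k)) for k in (1, 2, 3)]
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return deltas, rmse
```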
Abstract:
This study investigates the impact of Federated Learning (FL) on the explainability of convolutional neural networks, using the CheXpert dataset and the DenseNet121 model architecture. It compares the performance of FL models using two aggregation algorithms, Federated Averaging (FedAvg) and Federated Proximal (FedProx), across multiple client configurations and learning rates, against centralized models as a benchmark. The focus is on how these different setups affect Local Interpretable Model-agnostic Explanations. The results indicate that centralized models generally outperform FL models in accuracy and Area Under the Curve metrics due to their access to comprehensive datasets. However, they show lower interpretability, as indicated by their F1 scores and local fidelity scores. In contrast, FL models demonstrate superior interpretability, with FedProx showing a consistent advantage over FedAvg in local fidelity, cosine similarity, and IoU scores, suggesting that FedProx is better at providing locally accurate explanations. This study highlights the trade-off between model accuracy and interpretability, suggesting that FL, particularly using FedProx, offers a viable approach for applications where interpretability is crucial, such as in the healthcare sector. The findings emphasize the need for optimizing learning rates to balance accuracy with explanation quality, and suggest that FL models can effectively handle data diversity, making them particularly valuable in privacy-sensitive and decentralized data environments. The research contributes to the understanding of FL’s potential in enhancing the interpretability of machine learning (ML) systems, paving the way for more transparent and trustworthy ML applications.
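The two aggregation schemes compared above follow standard definitions: FedAvg averages client weights (weighted by sample counts), while FedProx additionally penalizes each client's drift from the global model during local training. The sketch below is illustrative; the proximal coefficient mu is a placeholder hyperparameter, not a value from the study.

```python
import torch

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts (floating-point tensors assumed)."""
    total = sum(client_sizes)
    return {k: sum(s[k] * (n / total) for s, n in zip(client_states, client_sizes))
            for k in client_states[0].keys()}

def fedprox_loss(task_loss, local_model, global_params, mu=0.01):
    """Local FedProx objective: task loss plus (mu/2) * ||w - w_global||^2."""
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(local_model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox
```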
Abstract:
Non-invasive fetal electrocardiography offers the potential to measure heart rate and heart rate variability, and to analyse the ECG waveform morphology of the fetal heart. It is a promising technique for monitoring fetal cardiac activity using abdominal recordings [1]. Despite efforts to implement this technology in clinical practice, there remains no standardised electrode placement. The reasons for this shortcoming are that studies have faced the problem of limited data sets and a lack of knowledge about the fetal electrocardiogram, as it cannot be sufficiently separated from the overlaying parental cardiac noise [2]. In this work, an approach was developed to identify an optimal lead for a healthy fetus in a computational model of a full pregnant female torso. This model offered the advantage of knowing the true fetal electrocardiogram without any noise. The signal strength was taken to be an indicator of an appropriate electrode location. Therefore, several fitting features were extracted for each node on the torso to create a feature map representing the signal strength distribution. The features were based on the amplitudes and energy of the different segments of an electrocardiogram. To identify an optimal bipolar lead, an approach was developed to detect a lead on the parental torso that represented a lead on the fetal body, in this case Lead II, and was associated with a strong signal. Furthermore, a fetus in distress was modelled by adding myocardial ischaemia, as well as a fetus in different positions, to investigate the impact on the fetal signal. We observed that for the main mesh, the different features produced similar distributions. The energy of the full cycle served as a basis for identifying an optimal bipolar lead based on the fetal Lead II since it correlated the most with all other feature maps. The Lead II representations originated from combinations of electrodes placed around the abdomen and the back of the torso. The bipolar lead with the highest energy value had an interelectrode distance of 183.2 mm and belonged to the leads that were close to the maximum of the feature map. An even higher energy value was associated with the lead whose electrodes were placed at the two local maxima of the feature map, respectively. Since this lead was also similar to the Lead II on the fetal body, with a correlation coefficient of 0.9237, it was used to investigate the simulation results when the fetal heart suffered from myocardial ischaemia. Here, the waveform changed considerably with respect to the healthy case. Further exploration of the feature maps showed that the signal strength distribution changed for fetuses in different positions. In conclusion, we proposed a bipolar lead that represented the fetal Lead II and accounted for signal strength using a computational model. However, we also demonstrated that more studies are needed to identify an optimal set of bipolar leads to cover the variety of common clinical scenarios.
Abstract:
In laparoscopic liver surgery, overlaying preoperative information from a 3D computed tomography model with intraoperative reconstructed surfaces from laparoscopic images could support the surgeons by enhancing surgical navigation. However, achieving robust registration is difficult due to the lack of distinct features on the liver surface, the partial visibility of the liver in laparoscopic images, and the occurrence of disturbances such as deformations of the liver during surgery. Especially when attempting to register synthetic data with disturbances using handcrafted feature descriptors, identifying distinct features is particularly challenging. A promising solution is the utilization of learned feature descriptors. This work compares the registration accuracy of two learned feature descriptors, PREDATOR and LiverMatch, with currently used handcrafted feature descriptors. Additionally, the focus is on uncovering the reasons behind the superior performance of learned feature descriptors and assessing the degrees of disturbances for which the registration is still successful. To achieve these goals, data sets were generated for training, validation, and testing of the feature descriptors. While the 3D-IRCADb-01 data set was used for the preoperative data, the intraoperative data were synthetically generated. For the resulting features of the feature descriptors, the rigid transformation matrices were calculated with the random sample consensus algorithm, and the registration accuracy was evaluated with the root mean square error (RMSE). Additionally, the features were analyzed with regard to the possibility of calculating correspondences. To further understand the feature calculation of the learned feature descriptors, the features of the preoperative data were analyzed. The examined learned feature descriptors significantly improved the registration accuracy. One investigated reason was the ability of learned feature descriptors to identify more correspondences that aligned with the actual correspondences. Similar to the handcrafted feature descriptors, the learned ones used geometric properties for the feature calculation. Additionally, they benefit from the exchange of information between preoperative and intraoperative data, which enabled them to compute more distinctive features. When assessing the impact of disturbances individually, LiverMatch achieved RMSE values of 5.38 mm for large variations. Regarding combinations of all disturbances, LiverMatch reached its limits more quickly, with the patch size notably affecting the registration accuracy. In summary, this study shows the large potential of the examined learned feature descriptors for use in laparoscopic liver surgery because they enabled a more precise initial rigid alignment compared to their handcrafted counterparts. Furthermore, by optimizing both the training data set and the hyperparameters, there is potential for further improving the results.
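The evaluation step described above, measuring registration accuracy as an RMSE after applying the estimated rigid transform, can be sketched as follows. This is a minimal sketch assuming known ground-truth correspondences; the transform (R, t) would come from the RANSAC stage over descriptor matches.

```python
import numpy as np

def registration_rmse(src_pts: np.ndarray, gt_pts: np.ndarray,
                      R: np.ndarray, t: np.ndarray) -> float:
    """src_pts, gt_pts: (N, 3) corresponding points; R: (3, 3); t: (3,).
    Applies the rigid transform to src_pts and measures RMSE against gt_pts."""
    aligned = src_pts @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt_pts) ** 2, axis=1))))
```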
Abstract:
Neurotransmitter measurements are a tool to improve comprehension of the neural circuitry and to combat neurodegenerative illnesses. Fast-scan cyclic voltammetry offers exquisite temporal resolution and is commonly conducted on carbon fiber microelectrodes, offering a low limit of detection and high spatial resolution. However, these electrodes are etched away during measurements, making them disadvantageous for long-term measurements and chronic implants. Boron-doped diamond as an electrode material promises to overcome this problem and offers further advantages for the fabrication of electrode arrays. This thesis presents a fabrication method for free-standing boron-doped diamond microelectrode arrays using Parylene C as insulation and laser cutting for exposing the electrodes. With this process, one 16-channel and two 4-channel arrays were manufactured. A 16-channel fast-scan cyclic voltammetry system was developed and implemented in an experimental setup. The setup was used to demonstrate that the 4-channel array is capable of detecting physiological neurotransmitter concentrations. Furthermore, a data set containing fast-scan cyclic voltammetry measurements of 21 dopamine and serotonin mixtures was gathered using eight single-channel boron-doped diamond microelectrodes. Several machine learning models were trained to predict concentrations autonomously from the data. Among the used methods, support vector regression combined with a Gaussian feature extraction method performed best for single-electrode models. For general models covering all electrodes, random forest regression combined with principal component analysis resulted in the best performance.
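The best-performing general model named above, principal component analysis followed by random forest regression, maps directly onto a standard scikit-learn pipeline. The sketch below is illustrative: the hyperparameters are placeholders, and the Gaussian feature extraction used in the SVR variant is assumed to be a custom transformer not shown here.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

model = make_pipeline(
    PCA(n_components=20),                     # compress each voltammogram
    RandomForestRegressor(n_estimators=300),  # predict concentration(s)
)
# usage: model.fit(voltammograms, concentrations); model.predict(new_scans)
```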
Abstract:
Atrial fibrillation is a common heart condition that affects millions of people worldwide. Catheter ablation is one of the treatments to prevent atrial fibrillation; however, locating the abnormal tissue during the therapy remains inconsistent. Measurements of cardiac conduction velocity (CV) offer important anatomical and functional insight into the development and maintenance of cardiac arrhythmias, and slow conduction areas indicate regions of arrhythmia substrate. Objective: This study's primary goal is to use the eikonal model to assess the conduction velocity of cardiomyocytes in AF patients. Methods: The eikonal model is used to simulate the electrical wave propagation and CV with the Fast Iterative Method (FIM) solver. To make the method robust, additional approaches are added that reduce the uncertainty in local activation times (LATs), calculate the CV from the denoised LATs, and verify the accuracy under anisotropy. Regularization, 2D and 3D Gaussian smoothing, radial basis functions (RBF), and Bayesian regularization neural networks are used as additional approaches. Results: The results are divided into two sections: a 2D square mesh and an atrial geometry. In both cases, different noise scenarios are added to the LATs and denoised, and the error with respect to the ground truth is calculated. In both cases, the average noise in the CV can be reduced by 75–90% in all scenarios. Conclusion: The cross-validation results demonstrate that the techniques used for wavefront approximation, denoising, and CV estimation are correct for a particular mesh or geometry (on a trained mesh or geometry), although additional efforts are needed when implementing them on different geometries in order to obtain trustworthy results.
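The CV estimation step described above rests on the standard relation that conduction velocity is the inverse gradient magnitude of the local activation time field, CV = 1/|∇LAT|. A minimal sketch on a regular 2D grid (the study works on meshes and atrial geometries; the grid here is a simplifying assumption), with LAT in ms and spacing h in mm, so CV comes out in mm/ms, i.e., m/s:

```python
import numpy as np

def conduction_velocity(lat: np.ndarray, h: float) -> np.ndarray:
    """CV = 1 / |grad(LAT)| on a regular grid; lat: (ny, nx) in ms, h in mm."""
    gy, gx = np.gradient(lat, h)                 # partial derivatives in ms/mm
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 / np.maximum(grad_mag, 1e-9)      # avoid division by zero
```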
Abstract:
The integration of 4D-OCT (real-time OCT) into surgical settings, complemented by effective segmentation of the anterior segment of the eye, would open up a wide range of potential applications, including enhanced visualization for surgeons and intraoperative assistance functions. This thesis introduces a streamlined process for annotating 3D OCT data from ex-vivo porcine eyes, particularly for the intricate scenarios in cataract surgery. This data serves as the training ground for a 3D nnU-Net, helping ascertain the optimal network architecture for surgical scene segmentation during cataract procedures. Subsequently, the trained model undergoes optimization to achieve inference speeds compatible with intraoperative demands, resulting in a model that segments the lens extraction process in just 115.19 ms. Overall, this work demonstrates that real-time segmentation of 3D OCT scans from ex-vivo porcine eyes can effectively enhance scene comprehension and aid in ophthalmic surgery.
Abstract:
Minimally invasive interventions offer numerous advantages over open surgery. However, laparoscopic procedures on complex organs like the liver are challenging. This study proposes an automatic liver surface detection method, which can highlight the liver intraoperatively on the laparoscopic display to aid the surgeon. The utilised annotated clinical data set comprises 525 images from 5 liver surgeries. Additionally, synthetic photo-realistic data are incorporated into the training process. The approach employs supervised training using the U-Net architecture. Three experiments were conducted. Clinical: training and testing on clinical data; hyperparameter tuning was conducted before applying data augmentation to the best performing model. Synthetic: training with synthetic data and testing on both synthetic and clinical data. Combination: pre-training with synthetic data followed by fine-tuning with clinical data and testing on clinical data; data augmentation was applied to the clinical data. In order to obtain a representative test result for the clinical data set, a 5-fold cross-validation was performed for the fine-tuning with clinical data. Each fold comprised one patient video, with one fold used as the test set and the remaining folds as the training set. For the clinical experiment, the best performing model in the hyperparameter tuning was the combination of the sparse categorical cross-entropy loss function and the adaptive moment estimation optimisation algorithm. This model achieved average values of 0.67 for recall, 0.77 for precision and 0.72 for the F1 score. In comparison, the same model trained with the addition of data augmentation resulted in 0.74, 0.78 and 0.76 for recall, precision and F1 score, respectively. In the synthetic experiment, the recall, precision and F1 score yielded averages of 0.96, 0.97 and 0.97, respectively, when tested on synthetic data, and 0.12, 0.51 and 0.17, respectively, when tested on clinical data. The average scores of the combined experiment were as follows: 0.69, 0.95 and 0.80 for recall, precision and F1 score, respectively, for the model pre-trained with synthetic data and fine-tuned with augmented clinical data, and 0.83, 0.91 and 0.85 for recall, precision and F1 score, respectively, for the average test results of the 5-fold cross-validation. These results show that leveraging synthetic data for pre-training as well as employing data augmentation benefit the model, especially with limited clinical data. Retraining with clinical data is necessary to fine-tune and improve liver recognition robustness. Future work involves extending anatomical structure recognition beyond the liver.
Abstract:
Cardiac modeling is an emerging tool in cardiac research that has gained more and more popularity in recent years. Its application purposes range from a better understanding of the processes taking place, to preliminary simulation of operations, to fundamental research on new drugs and treatments. Incorporating accurate representations of physiological and pathological behavior is crucial in order to cater to specific application needs. One such behavior is the variability in heart rate. This thesis deals with the effect of increased heart rates on active tension development at the scale of a single cardiac muscle cell. Prior research has uncovered no frequency-dependent adaptation of active tension duration in modeled tension transients, contradicting in-vivo behavior. The goal of this thesis is to pinpoint the deviations between simulated and experimental data. Therefore, an extensive literature search was conducted, collecting experimental data on the intracellular calcium concentration and active tension development. Afterwards, modifications to the ionic and contraction models were implemented, aiming to model the behavior observed in experimental data more closely. Hereby, the O'Hara-Rudy model was replaced by the more advanced ionic model developed by Tomek et al. In addition, due to their use in experimental data, simulations were conducted using model parameters for subendocardial heart tissue. This change necessitated a parameter optimization of the Land cardiac contraction model, conducted for the Tomek ionic model at 1 Hz. Differences in the behavior of intracellular calcium were uncovered, pointing to a non-conformance of the simulated intracellular calcium concentration. These include an increase in the simulated calcium amplitude at frequencies above 2 Hz, with experimental data showing a decrease at higher frequencies, and a weaker adaptation of the time to peak tension. Further deviations were uncovered between experimental tension and tension simulated with the Land cardiac contraction model, with the main differences being the behavior of peak tension, time to peak tension, and the relaxation time. The changes introduced throughout this thesis reduce multiple discrepancies, especially well represented in the adaptation of time to peak tension, tension relaxation, and the change in peak calcium.
Abstract:
Currently, therapeutic approaches for cardiac arrhythmias like atrial fibrillation are not personalized, leading to more than half of the patients experiencing relapses after ablation treatments. Moreover, the risk of atrial fibrillation increases with age, posing a significant challenge to healthcare systems due to the aging population. Personalization and an improved success rate of arrhythmia treatment could be achieved through the use of mathematical models. By simulating cardiac electrophysiology, these models offer insights into individual aspects of treatment options for each patient. However, existing models are impractical for clinical application either due to their computational demands or limitations in simulating various scenarios. The DREAM presents a mathematical model that aims to circumvent these drawbacks by alternating between available models, yet it remains clinically unusable due to existing inaccuracies. One reason for these inaccuracies lies in the computation of the diffusion current, a parameterized function computed independently of electrical wave propagation patterns. The aim of this work is to study the relationships between the diffusion current and propagation patterns, enabling their incorporation into the DREAM and enhancing its precision in simulating cardiac arrhythmias. In order to establish connections between the diffusion current and emerging electrical wave propagation patterns, two novel variables were introduced: the source area and the sink rate. These newly defined variables reflect the ratio of excited to unexcited nodes for each node within a mesh. The monodomain model was applied to simulate both a curved wavefront and colliding waves. When analyzing the effects of different pacing frequencies, the influence of the diastolic interval on the diffusion current was investigated. This involved introducing six stimuli with a constant pacing cycle length in a monodomain model simulation. For experiments involving various propagation patterns and varying pacing frequencies, the diffusion current was characterized by its maximum and minimum values, providing insights into the upstroke and downstroke. By examining various source-sink dynamics, it was discovered that the source has an impact on the upstroke, while the sink influences the downstroke of the diffusion current. Connections between the source area and the maximum of the diffusion current, as well as between the sink rate and the minimum of the diffusion current, were established. Furthermore, the investigation of pacing frequencies revealed a correlation between the maximum and minimum of the diffusion current and the diastolic interval, manifested as a restitution curve. This work demonstrated, on the one hand, the influence of source-sink dynamics on the shape of the diffusion current and, on the other hand, highlighted the distorted results deriving from the underlying correlation of source-sink dynamics. When focusing solely on a curved wavefront, the distribution of the data complicated the curve fitting to the extent of making it impossible. Lastly, this study delineates how the acquired insights into the existing correlations can be incorporated into the DREAM, offering an approach to optimize its precision.
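For reference, the monodomain model mentioned above can be written so that the diffusion current discussed in the text appears explicitly. This is the standard textbook form, not notation specific to the thesis:

```latex
\beta C_m \frac{\partial V_m}{\partial t}
  \;=\; \underbrace{\nabla \cdot \left( \sigma_m \nabla V_m \right)}_{\text{diffusion current}}
  \;-\; \beta\, I_{\mathrm{ion}}\!\left(V_m, \boldsymbol{\eta}\right)
  \;+\; I_{\mathrm{stim}}
```

Here $\beta$ is the surface-to-volume ratio, $C_m$ the membrane capacitance, $\sigma_m$ the monodomain conductivity tensor, and $I_{\mathrm{ion}}$ the cell-model current with gating states $\boldsymbol{\eta}$; the DREAM's parameterized diffusion current stands in for the underbraced term.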
Abstract:
Currently, accurate characterisation of the atrial substrate for the management of atrial arrhythmias remains a challenging task. Electroanatomical voltage mapping, conduction velocity measurements, and Late Gadolinium Enhancement Magnetic Resonance Imaging (LGE-MRI) are commonly used modalities for atrial substrate characterisation. However, they still exhibit inconsistencies in identifying arrhythmogenic areas. To enhance the precision of diagnosis and treatment, research actively seeks additional information that can provide context and explanations for these discrepancies. This thesis aims to explore the potential correlation between LGE-MRI and Local Impedance (LI) measurements and its implications for enhancing ablation procedures. Previous studies have primarily focused on comparing established modalities, such as LGE-MRI and electroanatomical mapping, revealing disparities, particularly in the posterior region of the Left Atrium (LA). However, no direct comparison between LI and LGE-MRI data has been conducted, and this study aims to bridge that gap. The hypothesis of this thesis posits that LI measurements and LGE-MRI data in regions characterised by Magnetic Resonance Imaging (MRI) are correlated and that such a correlation could enhance ablation procedures. Although the non-systematic nature of the data limits a definitive conclusion, this preliminary study lays the groundwork for a future clinical investigation. The results indicate that while the hypothesis cannot be conclusively accepted or rejected due to insufficient reliable data, the study successfully demonstrates the feasibility and acceptability of the approach. Further research using a well-designed clinical study with sufficient sample sizes could provide more definitive insights into the correlation between LI and LGE-MRI data in characterising the substrate. The preliminary study's findings include the development of methods adjusted for this specific study setup, including the generation of and registration onto a mean shape, and the establishment of systematic LI data acquisition criteria and thresholds for data processing. The potential utility of LI in replacing LGE-MRI, thereby avoiding secondary effects and reducing time and cost, remains uncertain for the foreseeable future. Nevertheless, the clinical study is considered crucial for comprehending the potential and limitations of LI as a complementary diagnostic and ablation navigation tool. In conclusion, this preliminary study provides valuable insights and sets the stage for a more comprehensive clinical investigation that may shed light on the correlation between LI and LGE-MRI, paving the way for improved atrial substrate characterisation and potentially enhancing ablation procedures in the future.
Abstract:
Atrial fibrillation (AFib) is a cardiac arrhythmia that affects millions of people worldwide. This condition is associated with an increased risk of stroke, heart failure, and other serious cardiovascular complications. Due to the aging population and the increasing prevalence of risk factors, the incidence of AFib is expected to continue to rise in the future. This emphasizes the need to improve outcomes for patients affected by this condition. The underlying mechanisms of AFib are complex and not yet fully understood. However, it is widely accepted that abnormalities in the intramural tissue state within the cardiac walls, including fibrosis, play a crucial role in the development of AFib. Fibrosis refers to the formation of scar tissue due to injuries in the cardiac tissue, which disrupts the normal electrical conduction pathways. This leads to the formation of reentrant circuits and other arrhythmogenic mechanisms that promote AFib. Identifying regions that may act as triggers for atrial rhythm disorders has remained a daunting task, particularly for patients with significantly altered substrate. Current approaches to identify these regions include mapping techniques, such as voltage maps computed from electrograms (EGMs), in order to estimate the presence of fibrotic tissue. Voltage maps provide valuable information about tissue pathology, but interpreting the results is a complex process that requires the consideration of various interacting factors. Therefore, voltage mapping may benefit from a complementary method to aid in the interpretation of its results. Recently, local impedance (LI) has gained attention as another modality for atrial substrate assessment, as it does not rely on the dynamically changing cardiac electrical activity. However, its abilities and limits for assessing non-transmural and complex fibrosis patterns have not yet been characterized in detail. In this work, the ability of EGMs and LI to identify non-transmural fibrosis at different transmural locations between the endo- and epicardium is evaluated using in silico experiments. The peak-to-peak (p2p) voltage values obtained from different electrode configurations were utilized as a criterion to determine the detection of the fibrotic area. Through voltage mapping it was possible to identify transmural and endocardial fibrotic regions, whereas LI-based maps made it possible to identify other non-transmural patterns of fibrotic regions, including fibrotic areas located in the midmyocardium and epicardium. These findings may have important implications for the development of fibrosis detection techniques for AFib. The results from the LI reconstruction simulations suggest that LI mapping may be a potential method for detecting non-transmural fibrosis. Its limits for assessing complex fibrosis patterns still need to be further investigated by including various catheter configurations and geometrical setups.
Abstract:
Minimally invasive laparoscopic surgery has become increasingly popular in recent years due to its benefits for patients, but it remains a demanding task for the surgeon. Tele-operative systems ease the physically demanding part of handling multiple laparoscopic instruments in ergonomically uncomfortable positions, but previous attempts to automate simple subtasks have not been sufficient to reduce the mental load. Vision-based Reinforcement Learning (RL) has recently been applied successfully to learn behaviors for surgical instruments, but spatial reasoning is one of the main challenges for both the surgeon and an RL agent in laparoscopic tasks. We train a central policy using Reinforcement Learning to control a camera and varying surgical instruments in three different LapGym environments. Input observations are encoded into a latent feature vector using representation learning methods. We show that visual exploration with predefined camera positions vastly improves learning success relative to a static camera. Among the tested representation learning methods, agents using BYOL are consistently better able to extract the features of the observation and show higher performance. Furthermore, the policy performance in environments with highly variable, fine-grained visual features is limited by the ability of the representation-learning encoders to represent these features in their latent space. Further work in the field of robot-assisted surgery needs to explore approaches with significantly higher sample efficiency and the ability to perform multi-stage tasks.
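The observation pipeline described above can be sketched as follows: a frozen representation-learning encoder (e.g. one pretrained with BYOL) maps image observations to a latent vector that a trainable policy head consumes. The layer sizes and the encoder itself are illustrative assumptions, not the thesis architecture.

import torch
import torch.nn as nn

class LatentPolicy(nn.Module):
    def __init__(self, encoder: nn.Module, latent_dim: int, n_actions: int):
        super().__init__()
        self.encoder = encoder.eval()          # frozen pretrained encoder
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(             # trainable policy head
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            z = self.encoder(obs)              # latent feature vector
        return self.head(z)                    # action logits / values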
Abstract:
The Respiration Rate (RR) is one of the vital signs and therefore an important indicator of a patient's clinical condition. A continuous respiratory signal provides more information than the RR alone and can be used to diagnose and track patient conditions such as pneumonia or sleep-related breathing disorders. Traditional techniques interfere with normal breathing, are intrusive and uncomfortable for patients, and are therefore less suitable for continuous monitoring. For these reasons, non-contact continuous monitoring systems have gained interest, but challenges remain, particularly in terms of accuracy and robustness.

In this work, a non-contact continuous respiration monitoring system based on Fringe Projection Profilometry (FPP) is proposed. The aim is to answer the question of whether the proposed system is suitable for continuous monitoring of respiration in terms of accuracy and robustness. An FPP 3D sensor suitable for measuring respiratory motion of the chest wall and abdomen was designed and implemented. The sensor prototype was evaluated with respect to its temporal and spatial repeatability and resolution in the context of respiratory variations. To extract an RR and a continuous respiration signal, the resulting 3D images from the sensor were analyzed further. The analysis algorithm included preprocessing of 3D coordinates, dynamic and automatic selection of a Region of Interest (ROI), extraction of respiratory movements, detection of additional body movements, and calculation of the average RR and the Breath-to-Breath Respiration Rate (BRR). A sliding-window approach was used to continuously update the extracted respiratory parameters. The system was evaluated by comparing the results with a reference signal and the RR determined by a spirometer. Different scenarios were tested to assess the robustness and limitations of the system.

The FPP sensor showed a high Signal-to-Noise Ratio (SNR) of 37 dB when evaluated with an ideal sinusoidal respiration signal. When evaluating the overall system, a high mean correlation of 0.95 between the measured respiration signal and the reference signal was achieved across all tested scenarios. The mean Root-Mean-Square Error (RMSE) between the calculated RR and the reference was 0.11 breaths per minute (bpm). Evaluation of the BRR resulted in a mean RMSE of 0.52 bpm. The system proved robust to various breathing modes, patient positions, ambient light conditions, and light body movements. The limitations of the system were reached during severe body movements, where the respiratory signal could not be reliably reconstructed, so that no respiratory parameters could be calculated during the movement.
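A minimal sketch of the sliding-window RR estimation outlined above: count breaths (signal peaks) inside a moving window and convert the count to breaths per minute. Window length, step, and minimum peak spacing are assumed values, not the thesis parameters.

import numpy as np
from scipy.signal import find_peaks

def sliding_rr(resp_signal, fs, window_s=30.0, step_s=1.0):
    """Return (times, rr_bpm) from a 1-D respiration signal sampled at fs Hz."""
    win, step = int(window_s * fs), int(step_s * fs)
    times, rr = [], []
    for start in range(0, len(resp_signal) - win + 1, step):
        seg = resp_signal[start:start + win]
        # require ~1.5 s between breaths to suppress spurious double peaks
        peaks, _ = find_peaks(seg, distance=int(1.5 * fs))
        times.append((start + win) / fs)
        rr.append(len(peaks) * 60.0 / window_s)
    return np.array(times), np.array(rr)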
Abstract:
Atrial fibrillation (AF) is one of the most common cardiac diseases, imposing a great burden on health systems worldwide. However, a complete understanding of how to treat patients suffering from AF has still not been achieved. Pulmonary vein isolation (PVI) is a common, standardized rhythm-control treatment and an effective catheter ablation strategy in patients suffering from paroxysmal AF. In persistent AF, however, PVI success rates are decreased, especially in long-term evaluation. As PVI naturally focuses on the left atrium (LA), the role of the right atrium (RA) in AF is rarely considered, although AF triggers and arrhythmogenic substrate outside the LA are present. In-silico arrhythmia vulnerability assessment (AVA) methods enable the quantification of reentry inducibility. As common catheter ablation (CA) focuses on the LA, existing computational tools mostly consider only the LA as well. To provide insights into the impact of the RA on AVA, this work compared LA-only mono-atrial scenarios with bi-atrial scenarios based on five atrial geometries derived from imaging data segmentation. Three distinct electrophysiological (EP) scenarios were modelled: a healthy tissue state (H) and two states of persistent AF electrical remodelling (M and S). For the M state, considered a mildly remodelled state, the ionic remodelling due to AF was applied at 50 %. For each scenario, a pacing protocol was run to induce reentries from a set of stimulation points. Furthermore, this work considered the augmentation of atrial geometries with a missing RA by adding two scenarios with both atria, in which the RA is a fitted instance of a statistical shape model (SSM). The simulation results of the fitted scenarios were compared with those of the bi-atrial scenario to assess the general use of an SSM in AVA. The average share of inducing points among the stimulation points (IP/SP ratio) across all EP setups was 0.0 %, 0.83 %, and 7.17 % for the mono-atrial scenario, and 0.49 %, 27.27 %, and 37.87 % for the bi-atrial scenario. The increase in inducibility of LA stimulation points from the mono- to the bi-atrial scenario was 0.91 ± 2.03 %, 34.55 ± 14.9 %, and 44.4 ± 14.9 % for the EP states H, M, and S, respectively. Thus, vulnerability assessment based only on LA mono-atrial scenarios could differ fundamentally once the RA is included. The comparison of the fitted scenarios with the bi-atrial scenarios resulted in an average IP/SP ratio difference of 14.95 % and 1.3 % in the M and S states. A dependency of the IP/SP ratio on the individual atrial geometry was observed in the bi-atrial scenarios but was not consistently matched by the fitted scenarios. This work showed that the RA has an impact on AVA, and it is suggested that multiple properties contribute to this impact. Further investigation could help to obtain a full picture of their influence. The fitted scenarios could not consistently match the corresponding results of the bi-atrial scenarios but constitute a potential solution, since the difference between mono-atrial and bi-atrial scenarios demands further investigation of the RA influence on AVA.
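For clarity, the inducibility metric used above can be written as a one-liner; the input format (one boolean per stimulation point) is an illustrative assumption.

def ip_sp_ratio(induced_flags):
    """Percentage of stimulation points that induced a reentry (IP/SP ratio)."""
    if not induced_flags:
        return 0.0
    return 100.0 * sum(bool(f) for f in induced_flags) / len(induced_flags)

# e.g. 3 inducing points out of 40 stimulation points -> 7.5 %
# ip_sp_ratio([True] * 3 + [False] * 37)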
Abstract:
In the case of cerebrovascular diseases, patients sometimes need revascularization surgery to overcome hypoperfusion. The goal is to maintain or restore regular blood flow, so intraoperative blood flow measurement is crucial for surgeons to control the quality of the surgical procedure. The state-of-the-art technique uses ultrasound. However, the instrument has to be in contact with the blood vessel, which may cause contamination, vessel compromise, and, in the worst case, rupture. Therefore, Quantitative Fluorescence Angiography (QFA) is under research. In this thesis, factors influencing the deviation and accuracy of the retrospective QFA pipeline developed in previous work are investigated. Using centerline shortening for testing, the transit time proves to be the most sensitive parameter. After applying a line fit to the transit time as a function of centerline shortening, the computed flow turns out to be less deviated. The idea behind centerline shortening and line fitting can be used in any application where it is difficult to collect data for a single variable without changing others, where that variable has high deviation, and where it has a clear relationship to one or more less deviated variables with a known distribution.
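A minimal sketch of the stabilisation idea described above, under the assumption of a linear relationship: fit a line to the noisy transit times obtained for successively shortened centerlines, then read the transit time off the fit instead of a single noisy sample. Variable names are illustrative, not from the thesis pipeline.

import numpy as np

def fitted_transit_time(centerline_lengths, transit_times, eval_length):
    """Linear fit of transit time over centerline length, evaluated at one length."""
    slope, intercept = np.polyfit(centerline_lengths, transit_times, deg=1)
    return slope * eval_length + intercept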
Abstract:
Background: Transcranial Magnetic Stimulation (TMS) of the human brain is the only non-invasive technique that enables stimulation of specific brain areas above the threshold of neuronal action potentials in a safe and painless manner. For this reason, TMS has emerged as a powerful tool for the investigation of local neural network properties. However, accurate placement of the TMS coil has been shown to be crucial for the outcome of TMS-related research and therapies. TMS of the cortical hand muscle representations situated in the motor cortex enables an objective readout of TMS-related effects based on electromyography. To ensure correct placement of the coil on the motor cortex area, so-called 'motor mapping methods' were established. Up to now, these methods have had only limited reliability and do not fully capture the geometric complexity of TMS coil placement.

Objective: The aim of this thesis is the development of a novel motor mapping method based on an advanced heuristic approach. This method incorporates the relation between individual anatomy and optimal coil orientation. The focus is especially on the assessment of the inter-session reliability of the established approach.

Methods: The novel, so-called 'mean-nearest-neighbour' motor mapping method determines the optimal position and orientation of the TMS coil relative to the motor cortex by interpolating pseudo-randomly acquired stimulation sites based on a nearest neighbour search. First, the algorithmic procedure was tested in a virtual model environment and subsequently validated on real TMS-electromyography data from volunteer subjects. In a second step, the novel method was compared with the common standard procedure and with an advanced method from the literature based on electric field simulations, which was also implemented in this work.

Results and Conclusion: The newly developed mean-nearest-neighbour approach resulted in high inter-session accuracy in the approximation of the optimal coil orientation and position. Importantly, the newly developed method showed lower inter-session variability than the advanced approach based on electric field simulations, which was not necessarily expected. In contrast, the comparison with the common manual standard procedure showed strong overlap of the identified coil positions and orientations. Consequently, the mean-nearest-neighbour approach may constitute an accurate, precise, and computationally efficient alternative to established motor mapping methods. In general, the findings of this work on the inter-session reliability of motor mapping methods may also influence future developments and uses of these methods.

A Matlab®-based implementation of the mean-nearest-neighbour approach is available online: https://gitlab.com/chr.woe/mean-nn.
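A loose Python illustration of the interpolation idea named above (the actual method is the Matlab® implementation in the linked repository): average the motor responses of each candidate point's k nearest stimulation sites and pick the point with the strongest interpolated response. The value of k and the candidate grid are assumptions for this sketch.

import numpy as np
from scipy.spatial import cKDTree

def mean_nn_hotspot(site_positions, site_responses, grid_points, k=5):
    """Return the grid point with the highest mean response of its k nearest sites."""
    tree = cKDTree(site_positions)
    _, idx = tree.query(grid_points, k=k)      # k nearest stimulation sites
    interpolated = np.asarray(site_responses)[idx].mean(axis=1)
    return grid_points[np.argmax(interpolated)], interpolated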
Abstract:
The early and correct differentiation between the types of cranial deformities affecting young children is particularly important, as it increases the chances of successful treatment. To assist physicians in their diagnosis and to make examinations more patient-friendly in clinical practice, several successful classification approaches based on neural networks exist. Although these approaches provide promising results, their medical acceptance is rather low due to their limited explainability. To increase medical acceptance, an explainable Decision Tree Classifier is investigated in this study with regard to its applicability in clinical practice. For this, we trained and tested the classifier with essentially three different types of input features. First, we investigated classification based on clinical indices (Cranial Index and Cranial Vault Asymmetry Index) and their underlying feature points, which are widely used in clinical practice. In addition, we evaluated an icosphere approach based on feature points uniformly distributed over the head. While the first approach, based on clinical indices, considers insufficiently representative points for classification, the second approach has limited applicability in clinical practice. Therefore, we developed our own layer approach, which consists of four layers at different heights of the head. Each layer contains the eight feature points that correspond to the Cranial Index and Cranial Vault Asymmetry points of the respective layer. We evaluated the performance of all three approaches using accuracy, balanced accuracy, and F1-score. We found that the decision tree classifying based on features of the layer approach gave the best results. However, the classification using a Linear Discriminant Analysis Classifier could not be outperformed. To reduce the measurement effort of our self-designed approach in clinical practice, we evaluated which feature points of the layer approach were most relevant. With a 50 % reduction of the measurement effort, the optimized layer approach led to only a slight deterioration of the investigated performance parameters. To investigate the effect of measurement errors on classification, we tested the Decision Tree Classifier with features subjected to varying degrees of noise. We found that even in the presence of strong noise, the classifications based on the optimized layer approach were only slightly worse than those for the noise-free features. Since our final approach is explainable, relies on known measurement methods, barely increases the measurement effort, and performs strongly while being robust to measurement errors, we see good chances that it can support physicians in their diagnosis of cranial deformities.
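The two clinical indices named above can be sketched as follows. The formulas reflect commonly used definitions (conventions vary slightly across clinics, e.g. in the choice of the CVAI denominator), not necessarily the exact variants used in this study.

def cranial_index(width_mm, length_mm):
    """Cranial Index (CI): head width as a percentage of head length."""
    return 100.0 * width_mm / length_mm

def cvai(diag_a_mm, diag_b_mm):
    """Cranial Vault Asymmetry Index (CVAI): diagonal difference relative
    to the shorter of the two cranial diagonals, in percent."""
    longer, shorter = max(diag_a_mm, diag_b_mm), min(diag_a_mm, diag_b_mm)
    return 100.0 * (longer - shorter) / shorter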