A. Naber, D. Berwanger, and W. Nahm. In Silico Modelling of Blood Vessel Segmentations for Estimation of Discretization Error in Spatial Measurement and its Impact on Quantitative Fluorescence Angiography. In 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 4787-4790, 2019
Today, vascular function after interventions such as bypass surgery is assessed qualitatively by observing the blood dynamics inside the vessel via Indocyanine Green (ICG) fluorescence angiography. This state of the art should be advanced towards a quantitative measurement of blood flow. Previous approaches show that blood flow measured from fluorescence angiography cannot easily be calibrated to a gold-standard reference. To systematically address the possible sources of error, we investigate as a first step the discretization error in a camera-based measurement of the vessel's geometry. To generate an error-free ground truth, a vessel model was developed based on mathematical functions. This database is then used to determine the error introduced by discretizing the centerline of the structure and to estimate its effect on the accuracy of the flow calculation. The model is implemented according to conditions that ensure transferability to camera-based segmentations of vessels. The relative discretization error in estimating the centerline length of segmented vessels was found to be in the range of 6.3%. This reveals a significant error propagated to the blood flow value derived by camera-based angiography.
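The kind of discretization error studied here can be illustrated with a small sketch (not taken from the paper): a hypothetical sinusoidal centerline serves as analytic ground truth, is rasterized to a unit pixel grid, and the length of the resulting chain of pixel centres is compared with the continuous arc length. The curve, grid size, and sampling density are all illustrative assumptions.

```python
import math

def analytic_length(f, a, b, n=100000):
    # Dense polyline approximation of the continuous arc length (ground truth).
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    pts = [(x, f(x)) for x in xs]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def discretized_length(f, a, b, pixel=1.0, n=100000):
    # Rasterize the centerline to a pixel grid, then measure the length
    # of the chain of pixel centres, as a camera-based segmentation would.
    seen, chain = set(), []
    for i in range(n + 1):
        x = a + (b - a) * i / n
        px = (round(x / pixel), round(f(x) / pixel))
        if px not in seen:
            seen.add(px)
            chain.append((px[0] * pixel, px[1] * pixel))
    return sum(math.dist(p, q) for p, q in zip(chain, chain[1:]))

# Hypothetical vessel centerline: a gentle sine curve over 100 pixels.
f = lambda x: 10.0 * math.sin(x / 20.0)
true_len = analytic_length(f, 0.0, 100.0)
disc_len = discretized_length(f, 0.0, 100.0, pixel=1.0)
rel_error = abs(disc_len - true_len) / true_len
print(f"relative discretization error: {rel_error:.1%}")
```

The error depends on the curve's orientation relative to the grid; this sketch only demonstrates the mechanism, not the 6.3% figure reported in the paper, which follows from the authors' specific vessel model.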
K. Sieler, A. Naber, and W. Nahm. An Evaluation of Image Feature Detectors Based on Spatial Density and Temporal Robustness in Microsurgical Image Processing. In Current Directions in Biomedical Engineering, vol. 5(1), pp. 273-276, 2019
Optical image processing is part of many applications used in brain surgery. Movement of the microscope camera or of the patient, such as brain motion caused by the pulse or a change in cerebrospinal fluid, can cause the image processing to fail. One option to compensate for movement is feature detection and spatial allocation. This allocation is based on image features: the frame-wise matched features are used to calculate the transformation matrix. The goal of this project was to evaluate different feature detectors based on spatial density and temporal robustness in order to identify the most appropriate one. The feature detectors included corner and blob detectors and were applied to nine videos. These videos were recorded during brain surgery with surgical microscopes and include the RGB channels. The evaluation showed that each detector found up to 10 features across nine frames. The KAZE feature detector proved best in both density and robustness.
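The step from matched features to a transformation matrix can be sketched as follows (an illustrative example, not the paper's implementation): given matched 2-D feature points between two frames, a least-squares rigid transform (rotation plus translation) is recovered in closed form. The point coordinates and the restriction to a rigid model are assumptions for the sketch.

```python
import math

def estimate_rigid_transform(src, dst):
    # Least-squares rigid transform from matched 2-D feature points,
    # so that dst ≈ R @ src + t. Returns the rotation angle and t.
    n = len(src)
    scx = sum(x for x, _ in src) / n
    scy = sum(y for _, y in src) / n
    dcx = sum(x for x, _ in dst) / n
    dcy = sum(y for _, y in dst) / n
    a = b = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        px, py = sx - scx, sy - scy   # centred source point
        qx, qy = dx - dcx, dy - dcy   # centred destination point
        a += px * qx + py * qy
        b += px * qy - py * qx
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    tx = dcx - (c * scx - s * scy)
    ty = dcy - (s * scx + c * scy)
    return theta, (tx, ty)

# Demo: recover a known rotation of 0.1 rad and translation (5, -3)
# from four hypothetical matched feature locations.
src = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (3.0, 7.0)]
c0, s0 = math.cos(0.1), math.sin(0.1)
dst = [(c0 * x - s0 * y + 5.0, s0 * x + c0 * y - 3.0) for x, y in src]
theta, t = estimate_rigid_transform(src, dst)
```

In practice a more general homography is often fitted with robust estimation (e.g. RANSAC) to cope with mismatched features; the closed-form rigid fit above just shows the principle of turning correspondences into a motion-compensating transform.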
A. Wachter, A. Mohra, and W. Nahm. Development of a real-time virtual reality environment for visualization of fully digital microscope datasets. In Proceedings of SPIE, vol. 10868, pp. 108681F1-9, 2019
Current surgical microscope systems have excellent optical properties but still involve some limitations. A future fully digital surgical microscope may overcome some major limitations of typical optomechanical systems, such as ergonomic restrictions or a limited number of observers. Furthermore, it can leverage and provide the full potential of digital reality. To achieve this, the frontend, the reconstruction of the digital twin of the surgical scenery, as well as the backend, the 3-D visualization interface for the surgeon, need to work in real-time. To investigate the visualization chain, we developed a virtual reality environment that allows pretesting this new form of 3-D data presentation. In this study, we wanted to answer the following question: What must the visualization pipeline look like to achieve a real-time update of the 3-D digital reality scenery? With our current approach, we were able to obtain visualizations with a frame rate of 120 frames per second and a 3-D data update rate of approximately 90 datasets per second. In a further step, a first prototype of a real-time mixed-reality head-mounted visualization system could be manufactured based on the knowledge gained during the virtual reality pretesting.
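One way such a pipeline can sustain a 120 fps render loop alongside a ~90 Hz data update rate is to decouple the two through a single-slot "latest dataset" buffer, so the renderer never blocks on reconstruction. The class below is a minimal illustrative sketch of that decoupling pattern, not the system described in the paper.

```python
import threading

class LatestDataset:
    """Single-slot mailbox decoupling the reconstruction rate
    (e.g. ~90 datasets/s) from the rendering rate (e.g. 120 fps):
    the render loop always draws the newest complete dataset and
    silently skips any intermediate ones it never got to."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = None

    def publish(self, dataset):
        # Called by the reconstruction thread whenever a new
        # 3-D dataset is complete; overwrites any unread one.
        with self._lock:
            self._data = dataset

    def latest(self):
        # Called by the render loop each frame; never blocks
        # beyond the brief lock acquisition.
        with self._lock:
            return self._data

box = LatestDataset()
box.publish("dataset-1")
box.publish("dataset-2")  # renderer missed dataset-1; that is acceptable
```

With this pattern the render frame rate is bounded only by the GPU and display, while dropped intermediate datasets merely reduce the effective update rate, which matches a renderer running faster than the reconstruction frontend.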