E. Dicke, A. Wachter, and W. Nahm. Estimation of the interpolation error of a three-step rotation algorithm using recorded images with rotated test pattern as ground truth. In Current Directions in Biomedical Engineering, vol. 3(2), pp. 555–558, 2017
Nowadays, the surgical microscope is the gold standard for microsurgical procedures. Additional functionalities such as surgical navigation, data injection, or image overlay provide valuable information to the surgeon. To substitute the conventional optical system with a fully digital multi-camera setup, a three-dimensional (3D) reconstruction of the scene in the field of view is required. In camera-based systems, however, an exact alignment of the cameras is a challenging task, so a final adjustment through a digital image rotation becomes necessary. Even though digital rotation is a commonly used procedure, it leads to unavoidable errors because of the discretized image grid. Previous research has demonstrated that digitally rotating the images combined with Fourier interpolation delivers the best-quality results. Nevertheless, the performance of this algorithm was evaluated by rotating an image in multiple three-step rotations to a total of 90 or 180 degrees and comparing the result to the original image rotated in one step. This is a valid approach because a rotation of 90 or 180 degrees does not produce rotation artifacts. In this research project, we verify the performance of the three-step rotation algorithm using recorded images, for which a physically rotated test pattern serves as ground truth. A series of photographs with rotation angles from 3 to 45 degrees was created. The advantage of this setup is that the result of the digital rotation can be compared directly to the recorded image. In addition, the knowledge obtained about the interpolation error allows us to improve pixel matching in the subsequent triangulation used for 3D reconstruction; the estimation of the interpolation error thus helps to reduce the triangulation error.
C. Marzi, A. Wachter, and W. Nahm. Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery. In Current Directions in Biomedical Engineering, vol. 3(2), pp. 539–542, 2017
Future fully digital surgical visualization systems will enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today's surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow the eyepieces to be replaced by cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene. Therefore, occluded surfaces can result in a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras that record the object from different angles, additional information about the scene is acquired, which allows the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture an enhanced 3D topography of a pseudo-surgical scene. This experimental setup provides images for the reconstruction algorithms and for the generation of multiple observing stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature. Furthermore, the CMO principle allows a more compact design and a lower calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, each of which is recorded by one camera. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, allowing images of six different stereo perspectives to be processed. In order to verify the setup, it is modelled in silico. The model can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and to assess the impact of additionally recorded perspectives on the enhancement of a reconstruction.
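The count of six stereo perspectives follows directly from combinatorics: every unordered pair of the four cameras forms one stereo rig, i.e. C(4, 2) = 6 pairs. A trivial sketch (camera labels are hypothetical, not from the paper):

```python
from itertools import combinations

# Hypothetical camera labels for the four channels behind the CMO.
cameras = ["cam0", "cam1", "cam2", "cam3"]

# Every unordered pair of cameras yields one stereo perspective:
# C(4, 2) = 6 distinct pairs.
stereo_pairs = list(combinations(cameras, 2))
```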
A. Naber, and W. Nahm. Video magnification for intraoperative assessment of vascular function. In Current Directions in Biomedical Engineering, vol. 3(2), pp. 175–178, 2017
In neurovascular surgery, intraoperative fluorescence angiography has proven to be a reliable contact-free optical imaging technique for visualizing vascular blood flow. This angiography is obtained by injecting a fluorescent dye, e.g. indocyanine green, and using an infrared camera system to visualize the fluorescence inside the vessel. This obviously requires a medically approved dye and an additional camera setup, and therefore generates both risks and costs. Hence, the aim of our research is to develop a comparable technique for assessing vascular function that requires neither a dye nor an additional infrared camera setup. It is achieved by first preprocessing the video data of a camera that records only the visible spectrum and then filtering it both spatially and temporally. The prepared data is then processed further to extract and visualize information about the vascular function. This method would make it possible to compute and visualize the vascular function using the data recorded in the visible spectrum by the surgical microscope. Given this contact-free optical imaging system, physiological information can easily be provided to the surgeon without an additional setup. If the results prove comparable to the state of the art, this technique provides a straightforward optical intraoperative angiography. Furthermore, no drug approval is needed since no dye is injected.
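The temporal filtering step described above can be illustrated in the style of Eulerian video magnification: band-pass each pixel's intensity time series around the pulse band, then add the amplified band-passed signal back to the video. The abstract does not specify the filters used, so this is only a sketch under that assumption; the pass band, amplification factor, and function names are illustrative.

```python
import numpy as np

def temporal_bandpass(frames, fps, f_lo=0.8, f_hi=3.0):
    """Band-pass each pixel's time series to a pulse band (~48-180 bpm).
    frames: array of shape (T, H, W) with grayscale intensities."""
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)        # Hz per FFT bin
    keep = (freqs >= f_lo) & (freqs <= f_hi)       # pulse band mask
    spec = np.fft.rfft(frames, axis=0)
    spec[~keep] = 0.0                              # zero out-of-band bins
    return np.fft.irfft(spec, n=T, axis=0)

def amplify_pulse(frames, fps, alpha=20.0):
    """Add the amplified band-passed signal back onto the video,
    making subtle pulse-synchronous intensity changes visible."""
    return frames + alpha * temporal_bandpass(frames, fps)
```

In practice the spatial filtering mentioned in the abstract (e.g. a Gaussian pyramid level) would precede this temporal step to improve the signal-to-noise ratio.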
The invention relates to a system for visualizing characteristic tissue marked with a colorant in a surgical region. The system contains a detection unit which detects light from at least one object point in the surgical region, and a computer unit which is connected to the detection unit and drives a visualization unit displaying an image of an area in the surgical region. For the light from an object point, the computer unit determines its color coordinate in a color space. Depending on the position of the determined color coordinate, the computer unit calculates color coordinate information ("0", "1") for controlling the visualization unit by comparing information concerning the determined color coordinate of the object point with information concerning a characteristic reference color coordinate.
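The comparison step can be pictured as a per-point classification: measure each object point's color coordinate, compare it with the characteristic reference coordinate, and emit a binary value ("1" for characteristic tissue, "0" otherwise). A minimal sketch, assuming a Euclidean distance test in some color space; the reference coordinate, threshold, and function name are hypothetical, not taken from the patent.

```python
import numpy as np

def classify_points(colors, reference, threshold=0.1):
    """Emit 1 where a point's color coordinate lies within `threshold`
    of the characteristic reference coordinate, else 0.
    colors: array of shape (N, 3) of color-space coordinates."""
    d = np.linalg.norm(np.asarray(colors, float) - np.asarray(reference, float),
                       axis=1)
    return (d <= threshold).astype(int)
```

The resulting 0/1 map is exactly the kind of signal a visualization unit could use to highlight the characteristic tissue in the displayed image.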