The assessment of craniofacial deformities requires patient data, which are scarce. Statistical shape models provide realistic, synthetic data, enabling comparisons of existing methods on a common dataset. We build the first publicly available statistical 3D head model of craniosynostosis patients and the first such model focusing on infants younger than 1.5 years. For correspondence establishment, we test and evaluate four template morphing approaches. We further present an original, shape-model-based classification approach for craniosynostosis on photogrammetric surface scans. To the best of our knowledge, our study uses the largest dataset of craniosynostosis patients in a classification study for craniosynostosis and statistical shape modeling to date. We demonstrate that our shape model performs similarly to other statistical shape models of the human head. Craniosynostosis-specific pathologies are represented in the first eigenmodes of the model. Regarding the automatic classification of craniosynostosis, our approach yields an accuracy of 97.3 %, comparable to other state-of-the-art methods using both computed tomography scans and stereophotogrammetry. Our publicly available, craniosynostosis-specific statistical shape model enables the assessment of craniosynostosis on realistic, synthetic data. We further present a state-of-the-art, shape-model-based classification approach for a radiation-free diagnosis of craniosynostosis.
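The core idea of shape-model-based classification can be illustrated with a minimal sketch: build a PCA model from meshes in dense correspondence, project each shape into the low-dimensional parameter space spanned by the eigenmodes, and classify there. The data, the nearest-centroid classifier, and all dimensions below are synthetic stand-ins for illustration only, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_ssm(shapes, n_modes):
    """Toy PCA shape model. shapes: (n_samples, 3*n_vertices) flattened
    meshes assumed to be in dense correspondence."""
    mean = shapes.mean(axis=0)
    # Eigenmodes via SVD of the centered data matrix
    _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]

def shape_parameters(shape, mean, modes):
    # Projection of a shape onto the model's eigenmodes
    return modes @ (shape - mean)

# Two synthetic "classes" of shapes (e.g. control vs. one synostosis type)
n, d = 40, 30
class_a = rng.normal(0.0, 1.0, (n, d))
class_b = rng.normal(3.0, 1.0, (n, d))
shapes = np.vstack([class_a, class_b])
labels = np.array([0] * n + [1] * n)

mean, modes = build_ssm(shapes, n_modes=5)
params = np.array([shape_parameters(s, mean, modes) for s in shapes])

# Nearest-centroid classification in shape-parameter space
centroids = np.array([params[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(
    np.linalg.norm(params[:, None, :] - centroids[None, :, :], axis=2), axis=1
)
accuracy = (pred == labels).mean()
print(accuracy)
```

Classifying in the parameter space rather than on raw vertices is what makes the approach tractable: a few eigenmode coefficients summarize thousands of vertex coordinates.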
Cranio-maxillofacial surgery often alters the aesthetics of the face, which can make the decision whether or not to undergo surgery a heavy burden for patients. Today, physicians can predict the post-operative face using surgery planning tools to support the patient’s decision-making. While these planning tools allow a simulation of the post-operative face, the facial texture must usually be captured by an additional 3D texture scan and subsequently mapped onto the simulated face. This approach often results in face predictions that do not appear realistic or lively and are therefore ill-suited to guide the patient’s decision-making. Instead, we propose a method using a generative adversarial network to modify a facial image according to a 3D soft-tissue estimation of the post-operative face. To circumvent the lack of available data pairs between pre- and post-operative measurements, we propose a semi-supervised training strategy using cycle losses that requires only paired open-source data of images and 3D surfaces of the face’s shape. After training on “in-the-wild” images, we show that our model can realistically manipulate local regions of a face in a 2D image based on a modified 3D shape. We then test our model on four clinical examples in which we predict the post-operative face according to a 3D soft-tissue prediction of the surgery outcome, simulated by a surgery planning tool. As a result, we aim to demonstrate the potential of our approach to predict realistic post-operative images of faces without the need for paired clinical data, physical models, or 3D texture scans.
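The cycle losses mentioned above penalize a model when mapping forward and then back fails to reproduce the input, which removes the need for paired pre-/post-operative examples. The sketch below shows the loss computation in isolation; the generators G and F are placeholder invertible linear maps, not the paper's networks, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
A = rng.normal(size=(d, d))
A_inv = np.linalg.inv(A)

def G(x):
    # "Forward" generator, e.g. image -> edited image given a target shape
    return x @ A.T

def F(y):
    # "Backward" generator approximating G's inverse
    return y @ A_inv.T

def cycle_loss(x, y):
    # L1 reconstruction error after a full forward-backward cycle,
    # in both directions (as in CycleGAN-style training)
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

x = rng.normal(size=(4, d))
y = rng.normal(size=(4, d))
loss = cycle_loss(x, y)
print(loss)  # near zero here, since F is G's exact inverse
```

During real training the generators are neural networks and this term is minimized jointly with adversarial losses; here the exact inverse simply demonstrates that a perfect cycle drives the loss to zero.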
This contribution is part of a project concerned with creating an artificial dataset of 3D head scans of craniosynostosis patients for deep-learning-based classification. To conform to real data, both head and neck are required in the 3D scans. However, during patient recording, the neck is often covered by medical staff. Simply pasting an arbitrary neck leaves large gaps in the 3D mesh. We therefore use a publicly available statistical shape model (SSM) for neck reconstruction. However, most SSMs of the head are constructed from healthy subjects, so a full-head reconstruction loses the craniosynostosis-specific head shape. We propose a method to recover the neck while keeping the pathological head shape intact: a Laplace-Beltrami-based refinement step deforms the posterior mean shape of the full head model towards the pathological head. The artificial neck is created using the publicly available Liverpool-York Model. We apply our method to construct artificial necks for head scans of 50 scaphocephaly patients. Our method reduces the mean vertex correspondence error by approximately 1.3 mm compared to the ordinary posterior mean shape, preserves the pathological head shape, and creates a continuous transition between neck and head. The presented method shows good results for reconstructing a plausible neck for craniosynostosis patients. As it is easily generalized, it may also be applicable to other pathological shapes.
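The reconstruction step above starts from the model's posterior given the observed (head) region. A minimal stand-in for that idea: fit the SSM's mode coefficients to the observed vertices by least squares and evaluate the full model to fill in the hidden (neck) region. The toy 1D "meshes", dimensions, and the least-squares fit are illustrative assumptions; the actual work uses the Liverpool-York head model, a proper Gaussian posterior, and a Laplace-Beltrami refinement not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, d, n_modes = 50, 20, 4

# Train a toy SSM on low-rank synthetic data
train = rng.normal(size=(n_train, n_modes)) @ rng.normal(size=(n_modes, d))
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
modes = vt[:n_modes]                    # (n_modes, d) eigenmode matrix

# A new shape whose last 5 coordinates (the "neck") are unobserved
coeffs_true = rng.normal(size=n_modes)
truth = mean + coeffs_true @ modes
observed = np.arange(d - 5)             # "head" vertices
hidden = np.arange(d - 5, d)            # "neck" vertices

# Least-squares fit of mode coefficients to the observed region only
b, *_ = np.linalg.lstsq(
    modes[:, observed].T, truth[observed] - mean[observed], rcond=None
)
recon = mean + b @ modes                # full reconstruction, neck included

err = np.abs(recon[hidden] - truth[hidden]).max()
print(err)
```

Because the target shape lies exactly in the model's span here, the hidden region is recovered almost exactly; for a real pathological head it is not, which is precisely why the refinement step that deforms the reconstruction back towards the patient's head shape is needed.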