Abstract:
Craniosynostosis is defined as the premature fusion of one or more cranial sutures in an infant skull. Its prevalence is 3.14 to 6 per 10,000 live births. This head deformity results in cranial malformation and can lead to facial asymmetry, as well as functional consequences such as increased intracranial pressure, deafness, visual impairment, and cognitive deficits. Currently, computed tomography (CT) is the primary imaging technique used in craniosynostosis diagnosis. Although CT is an accurate diagnostic tool, it exposes the infant to ionizing radiation. We therefore investigated a novel, automated, radiation-free, and patient-friendly classification method that requires no CT scans, based on a convolutional neural network (CNN). For the classification of craniosynostosis, this project used a pretrained ResNet18 network. The images to be classified were generated from 3D triangulated surface meshes acquired with a 3D photogrammetry scanner at Heidelberg University Hospital; this real data set is imbalanced. In addition to the real data set, a synthetic data set was created using a statistical shape model. In this project, the cranial information of the 3D head surface was encoded in 2D distance maps. To generate them, we defined an origin, extracted distances using a triangle--ray intersection algorithm, and assembled $224\times 224$ pixel images. We conducted a proof-of-concept study on synthetic data and investigated the influence of landmark noise, different map representations, and scaling. The proof-of-concept study was successful, and results on real data show an average accuracy of 83.5\%. The best-performing approach on real data used the second type of non-scaled 2D distance maps. However, the results may be distorted by overfitting and the small data set. Future work includes hyperparameter optimization to mitigate overfitting and the use of a larger, balanced data set.
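The distance-map generation summarized above (define an origin, cast rays against the triangulated mesh, assemble an image of hit distances) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the spherical parameterization of ray directions, and the toy resolution are all assumptions; the triangle--ray test is the standard Möller--Trumbore algorithm.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller--Trumbore triangle--ray intersection.
    Returns the distance t along `direction`, or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:               # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det      # distance along the ray
    return t if t > eps else None

def distance_map(origin, triangles, n_theta, n_phi):
    """Cast one ray per (azimuth, elevation) bin from `origin`, record the
    first-hit distance, and assemble an (n_phi, n_theta) map; misses stay 0.
    A real pipeline would use 224x224 bins and an accelerated ray caster."""
    dmap = np.zeros((n_phi, n_theta))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    phis = np.linspace(-np.pi / 2, np.pi / 2, n_phi)
    for i, phi in enumerate(phis):
        for j, theta in enumerate(thetas):
            d = np.array([np.cos(phi) * np.cos(theta),
                          np.cos(phi) * np.sin(theta),
                          np.sin(phi)])
            hits = [t for tri in triangles
                    if (t := ray_triangle_intersect(origin, d, *tri)) is not None]
            if hits:
                dmap[i, j] = min(hits)  # nearest surface point in this direction
    return dmap
```

In practice a library ray caster (e.g. from a mesh-processing package) would replace the per-triangle Python loop for speed; the sketch only shows the geometric principle behind the map.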