B. Yalvaç Pınarbaşı1, N. Ersu1, A. Kaygısız2, İ. Devrim3, E.M. Canger1, T. Batbat2

1Erciyes University, Department of Oral and Maxillofacial Radiology, Kayseri, Turkey, 2Erciyes University, Department of Biomedical Engineering, Kayseri, Turkey, 3Erciyes University, Department of Periodontology, Kayseri, Turkey

Aim: The purpose of this study was to evaluate the accuracy of deep learning methods in classifying the bone loss observed in periodontitis patients as vertical or horizontal.

Material and Methods: Panoramic radiographs were collected under three main categories: healthy, horizontal bone loss, and vertical bone loss, and each category was split into 80% training data and 20% test data. 225 healthy (training: 180, test: 45), 171 horizontal bone loss (training: 136, test: 35), and 73 vertical bone loss (training: 58, test: 15) panoramic images were used. The artificial intelligence experiments were carried out using the deep learning tools in MATLAB. AlexNet, GoogLeNet, and ResNet-101, each with its own deep convolutional neural network (CNN) architecture, were used as pre-trained deep learning models. Panoramic images were resized to 227×227 pixels for AlexNet and 224×224 pixels for GoogLeNet and ResNet-101. Apart from this, no image processing or annotation was performed.
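The per-class 80/20 split described above can be sketched as follows. The study was carried out in MATLAB, so this Python version is only an illustrative equivalent; the function and variable names are hypothetical, and truncating (flooring) the training count reproduces the reported subset sizes.

```python
import random

def class_split(items, train_frac=0.8, seed=42):
    """Split one class's image list into training and test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train_frac)  # floor, e.g. 171 * 0.8 = 136.8 -> 136
    return items[:n_train], items[n_train:]

# Class sizes reported in the abstract: healthy, horizontal, vertical
class_sizes = {"healthy": 225, "horizontal": 171, "vertical": 73}
for name, n in class_sizes.items():
    train, test = class_split(range(n))
    print(name, len(train), len(test))
```

Run on the reported class sizes, this yields 180/45, 136/35, and 58/15 images, matching the counts in the abstract.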

Results: The classification success rates on panoramic radiographs in detecting the bone loss pattern were 74.7%, 73.7%, and 74.7% for the AlexNet, GoogLeNet, and ResNet-101 models, respectively.
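As an arithmetic sanity check (not part of the abstract), the reported 74.7% corresponds to roughly 71 of the 95 test images being classified correctly; the per-class breakdown is not given, so the count of 71 is an inference:

```python
# Test-set sizes from the Methods section: 45 + 35 + 15 = 95 images
n_test = 45 + 35 + 15

# 71 correct predictions is the count consistent with the reported 74.7%
accuracy = 71 / n_test
print(round(accuracy * 100, 1))  # 74.7
```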

Conclusion: In this study, we tested the success rate of deep learning methods in determining the bone loss pattern on panoramic radiographs. We hope that, with more comprehensive artificial intelligence studies in the future, artificial intelligence can be used as a diagnostic tool to evaluate the bone loss pattern.


I.S. Bayrakdar1, N.S. Elfayome2, R.A. Hussien2, I.T. Gulsen3, A. Kuran4, I. Gunes5, A. Al-Badr6, O. Celik7, K. Orhan8

1Eskisehir Osmangazi University, Oral and Maxillofacial Radiology, Eskisehir, Turkey, 2Cairo University, Cairo, Egypt, 3Alanya Alaaddin Keykubat University, Antalya, Turkey, 4Kocaeli University, Kocaeli, Turkey, 5Eskisehir Technical University, Eskisehir, Turkey, 6Riyadh Elm University, Riyadh, Saudi Arabia, 7Eskisehir Osmangazi University, Eskisehir, Turkey, 8Ankara University, Ankara, Turkey

Aim: The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in Cone Beam Computed Tomography (CBCT) volumes and to evaluate the performance of this model.

Material and Methods: In 101 CBCT scans, the MS was annotated using the CranioCatch labeling software (Eskisehir, Turkey). The dataset was divided into three parts: 80 CBCT scans for training the model, 11 CBCT scans for model validation, and 10 CBCT scans for testing the model. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.00001 for 1000 epochs. The performance of the model in automatically segmenting the MS on CBCT scans was assessed using several metrics, including F1-score, accuracy, sensitivity, precision, Area Under Curve (AUC), Dice Coefficient (DC), 95% Hausdorff Distance (95% HD), and Intersection over Union (IoU).
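The overlap metrics used to assess the segmentation model can be sketched as follows; this is a minimal NumPy illustration on toy binary masks, not the study's actual evaluation code:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over Union (Jaccard) = |A intersect B| / |A union B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy 3x3 masks: 4 foreground pixels each, 3 of them overlapping
pred = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 1, 0], [1, 0, 0], [1, 0, 0]])
print(dice_coefficient(pred, gt))  # 0.75  (2*3 / (4+4))
print(iou(pred, gt))               # 0.6   (3 / 5)
```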

Results: F1-score, accuracy, sensitivity, and precision values were found to be 0.96, 0.99, 0.96, and 0.96, respectively, for the segmentation of the maxillary sinus in CBCT images. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively.
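For the same pair of binary masks, Dice and IoU are related by DC = 2·IoU / (1 + IoU). As a sanity check not stated in the abstract, the reported values are mutually consistent under this identity:

```python
def iou_to_dice(iou_value):
    """Convert IoU (Jaccard index) to the Dice coefficient for the same masks."""
    return 2 * iou_value / (1 + iou_value)

# The reported IoU of 0.93 implies a Dice of ~0.96, matching the abstract
print(round(iou_to_dice(0.93), 2))  # 0.96
```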

Conclusion: Models based on nnU-Net v2 demonstrate the ability to segment the MS autonomously and accurately in CBCT images.


C. Evli1, İ.Ş. Bayrakdar2, E. Bilgir2, Ö. Çelik3, K. Orhan1

1Ankara University Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Ankara, Turkey, 2Eskişehir Osmangazi University Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Eskişehir, Turkey, 3Eskişehir Osmangazi University, Faculty of Science, Department of Mathematics-Computer, Eskişehir, Turkey

Aim: Bitewing radiographs allow visualization of the entire crowns of the maxillary and mandibular teeth, together with the coronal periodontal structures adjacent to the crowns, using a single receptor. The use of artificial intelligence and related automation in the evaluation of these radiographs is beneficial in dental education and in assisting dentists in diagnosis and decision-making. YOLO is a deep learning algorithm designed for real-time object detection. YOLOv8 is the newest version in this series and offers improved speed, accuracy, and learning capacity. The aim of our study was to evaluate the success of artificial intelligence in automating the evaluation of bitewing radiographs using the YOLOv8 deep learning algorithm.

Material and Methods: A total of 11768 bitewing radiographs were used in the study. 80% of the data was used for training, 10% for validation, and the remaining 10% as test data. Training was performed using the YOLOv8 architecture in the PyTorch library. A confusion matrix was used to calculate the performance metrics.

Results: In total, 43585 labelings were performed. The best F1 score was found for dental pulp detection (0.98). The best sensitivity value was found for pontic detection (1), and the worst sensitivity value was found for pulp stone detection (0.81).
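Precision, sensitivity (recall), and F1 are derived from confusion-matrix counts as sketched below. The TP/FP/FN numbers here are hypothetical, chosen only to illustrate how an F1 of 0.98 arises; the abstract does not report the raw counts.

```python
def metrics_from_counts(tp, fp, fn):
    """Compute precision, recall (sensitivity), and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one class (e.g. dental pulp detection)
precision, recall, f1 = metrics_from_counts(tp=49, fp=1, fn=1)
print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.98 0.98 0.98
```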

Conclusion: Although there are other studies on similar segmentation tasks, this study is the only one that evaluates all of these parameters together. In addition, to our knowledge, there is no other study in the literature using the YOLOv8 architecture for this purpose.