Tomasz Kulczyk


S.N. Özdemir1, K. Altuntaş1, E. Kılıç1, O. Baydar1, E. Yeşilova1, E. Bilgir1, Ö. Çelik2, İ.Ş. Bayrakdar1

1Eskisehir Osmangazi University, Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskişehir, Turkey, 2Eskisehir Osmangazi University, Department of Mathematics-Computer, Faculty of Science, Eskişehir, Turkey

Aim: Although the pterygopalatine fossa is a small anatomical region, it is one of the crucial intersections between the midfacial and cranial parts of the skull. It is a natural pathway for pathologies because of the maxillary artery, the maxillary branch of the trigeminal nerve, and the pterygopalatine ganglion that occupy this fossa. As an anatomical landmark for direct extension and/or perineural spread of tumours, transmission of infections, and surgical treatment planning, it is important for radiologists. The pterygopalatine canal connects the pterygopalatine fossa to the infratemporal fossa. This study aimed to present automatic segmentation of the pterygopalatine canal using nnU-Net.

Material and Methods: Forty-five DICOM-format CBCT images with 0.200 mm and 0.400 mm slice thicknesses were retrospectively segmented by three trained dentists using the open-access 3D Slicer program. The data were divided into two groups: 41 training and 4 test images. Training was performed for 1000 epochs with the low-resolution configuration of the nnU-Net v2 architecture. Model success was evaluated with accuracy, Dice score, precision, recall, and Jaccard index.

Results: The accuracy, Dice score, precision, recall, and Jaccard index were 0.99, 0.603, 0.702, 0.602, and 0.45, respectively.
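These overlap metrics can be reproduced from paired binary masks; the sketch below, in plain Python with toy masks (not the study data), shows how each value is derived from voxel-wise true/false positives and negatives:

```python
def overlap_metrics(pred, truth):
    """Voxel-wise accuracy, Dice, precision, recall and Jaccard for binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))        # predicted canal, truly canal
    fp = sum(p and not t for p, t in zip(pred, truth))    # predicted canal, background
    fn = sum(t and not p for p, t in zip(pred, truth))    # missed canal voxels
    tn = len(pred) - tp - fp - fn                         # correctly ignored background
    accuracy = (tp + tn) / len(pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    return accuracy, dice, precision, recall, jaccard

# Toy example: two flattened binary masks
pred  = [1, 1, 0, 0, 1, 0, 0, 0]
truth = [1, 0, 0, 0, 1, 1, 0, 0]
print(overlap_metrics(pred, truth))
```

Note that accuracy is dominated by the large background class (hence 0.99 here despite a Dice of 0.603), which is why overlap measures such as Dice and Jaccard are the more informative figures for a thin structure like this canal.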

Conclusion: The small sample size and the differing slice thicknesses of the images were limitations of this study. Nevertheless, the results encourage the use of this technique for segmentation of this deep space of the viscerocranium. The present results may guide the learning of anatomical structures in dental training and the planning of orthognathic surgery in dental practice.


S. Chopra1, M. Vranckx2,3, A. Ockerman2,3, P. Östgren4, C. Krüger-Weiner5,6, D. Benchimol1, S. Shujaat2,3, R. Jacobs2,3,1

1Karolinska Institutet, Department of Oral and Maxillofacial Radiology, Stockholm, Sweden, 2KU-Leuven, OMFS-IMPATH Research Group, Leuven, Belgium, 3University Hospitals Leuven, Department of Oral and Maxillofacial Surgery, Leuven, Belgium, 4Eastmaninstitutet, Department of Oral and Maxillofacial Radiology, Stockholm, Sweden, 5Eastmaninstitutet, Department of Oral and Maxillofacial Surgery, Stockholm, Sweden, 6Karolinska Institutet, Department of Oral and Maxillofacial Surgery, Stockholm, Sweden

Aim: The primary aim of this study was to investigate the prediction of lower third molar eruption and uprighting with the assistance of an artificial intelligence (AI) tool. The secondary aim was to identify the incidence of fully erupted lower third molars with hygienic cleansability.

Material and methods: 771 patients with two panoramic radiographs were included; the first radiograph was acquired at 8-15 years of age (T1) and the second between 16 and 23 years (T2). Angulation, level of development, available retromolar space, and level of eruption were recorded in all panoramic radiographs.

Results: A predictive model for third molar eruption could not be obtained, as few teeth reached full eruption. However, the uprighting model at T2 showed that, in cases with sufficient retromolar space, an initial angulation of < 32° predicted uprighting. Full eruption was observed for 13.9% of the teeth, and only 1.7% showed hygienic cleansability.
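The reported decision rule (sufficient retromolar space combined with an initial angulation below 32°) can be expressed as a simple predicate; the function name and inputs below are hypothetical, for illustration only:

```python
UPRIGHTING_ANGLE_CUTOFF = 32.0  # degrees; threshold reported in this study

def predicts_uprighting(initial_angulation_deg, sufficient_retromolar_space):
    """True if the T1 findings predict uprighting of the lower third molar at T2,
    per the rule reported in this study (hypothetical helper, not the AI tool itself)."""
    return sufficient_retromolar_space and initial_angulation_deg < UPRIGHTING_ANGLE_CUTOFF

print(predicts_uprighting(20.0, True))   # angulation below cutoff, space available
print(predicts_uprighting(45.0, True))   # angulation above cutoff
```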

Conclusion: The prediction model of third molar uprighting could act as a valuable aid in the decision-making process of extracting third molars that fail to erupt in an upright fashion. In addition, the low incidence of fully erupted molars with hygienic cleansability suggests that a clinician might opt for prophylactic extraction.


P. Khruasarn1, K. Teeyapan2, S. Prapayasatok3, S. Nalampang3

1Graduate Program Faculty of Dentistry, Chiang Mai University, Oral Radiology, Chiang Mai, Thailand, 2Chiang Mai University, Computer Engineering, Faculty of Engineering, Chiang Mai, Thailand, 3Chiang Mai University, Oral Radiology, Faculty of Dentistry, Chiang Mai, Thailand

Aim: To develop a novel deep-learning model for age estimation from periapical radiographs of upper incisors in a Thai population.

Material and Methods: The dataset consisted of 2,000 periapical radiographic images of upper central incisors, totaling 3,351 teeth. The images were randomly collected from 2014 to 2023 and comprised 917 males and 1,083 females aged between 10 and 83 years. Labelbox was used to create root and root canal segmentation masks. The data were randomly distributed into training, validation, and testing datasets in a 2,675:341:335 ratio. The deep learning model was developed as follows: (1) DeepLabv3+ was trained on the training set to segment the area of interest (root and root canal). (2) ResNet50 was trained for age estimation on the training dataset and segmented images. (3) The validation dataset was used for model selection. (4) The testing dataset, comprising 335 teeth, was used for dental age estimation.

Results: The estimation error of the developed deep learning model was acceptable, with a mean absolute error (MAE) of 6.72 years for the testing dataset.
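The MAE reported here is the average absolute difference between estimated and chronological age; a minimal sketch with made-up numbers (not the study data):

```python
def mean_absolute_error(predicted_ages, true_ages):
    """MAE in years between model estimates and chronological ages."""
    assert len(predicted_ages) == len(true_ages)
    return sum(abs(p - t) for p, t in zip(predicted_ages, true_ages)) / len(true_ages)

# Toy example: errors of 5, 2 and 6 years give an MAE of 13/3 years
print(mean_absolute_error([25.0, 40.0, 61.0], [30.0, 38.0, 55.0]))
```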

Conclusion: The deep learning model introduced in the present study is effective. It is an economical and less time-consuming method for dental age estimation, especially in large sample groups.


W. Upalananda1,2, S. Prapayasatok3, S. Na Lampang3, S. Chaichulee1

1Faculty of Medicine, Prince of Songkla University, Department of Biomedical Sciences and Biomedical Engineering, Hatyai, Thailand, 2Faculty of Dentistry, Prince of Songkla University, Department of Oral Diagnostic Sciences, Hatyai, Thailand, 3Faculty of Dentistry, Chiang Mai University, Department of Oral Biology and Diagnostic Sciences, Chiang Mai, Thailand

Aim: This study aimed to assess the performance of YOLOv8 Oriented Bounding Box (YOLOv8-OBB) models in automated tooth detection tasks, utilizing oriented bounding boxes aligned with the axis of each tooth, in orthopantomography (OPG) images.

Material and Methods: A dataset consisting of 560 OPG images was utilized for training, validation, and testing purposes. The dataset was split into training (80%), validation (10%), and test (10%) sets. Two YOLOv8-OBB models (nano and small) were employed for automated tooth detection and for assigning tooth numbers following the FDI system.

Results: The results demonstrated that both YOLOv8-OBB-nano and YOLOv8-OBB-small models exhibited high precision and recall for tooth detection. YOLOv8-OBB-nano achieved a precision of 0.968, recall of 0.951, F1 score of 0.959, and mean average precision (mAP50) of 0.964. Meanwhile, YOLOv8-OBB-small achieved a precision of 0.971, recall of 0.963, F1 score of 0.967, and mAP50 of 0.974. In the tooth detection and numbering task, YOLOv8-OBB-nano achieved a precision of 0.669, recall of 0.684, F1 score of 0.676, and mAP50 of 0.715. Notably, YOLOv8-OBB-small outperformed its nano counterpart, with a precision of 0.826, recall of 0.758, F1 score of 0.791, and mAP50 of 0.815.
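The F1 scores quoted above are the harmonic mean of the corresponding precision and recall, which can be verified directly:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproducing the tooth-detection figures reported above
print(round(f1_score(0.968, 0.951), 3))  # YOLOv8-OBB-nano detection
print(round(f1_score(0.971, 0.963), 3))  # YOLOv8-OBB-small detection
print(round(f1_score(0.826, 0.758), 3))  # YOLOv8-OBB-small detection + numbering
```

The harmonic mean penalizes an imbalance between precision and recall, which is why the numbering task, with its lower and less balanced values, scores well below the detection task.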

Conclusion: The findings underscore the effectiveness of YOLOv8-OBB models for oriented tooth detection tasks in OPG images. YOLOv8-OBB-small, in particular, demonstrated superior performance compared to YOLOv8-OBB-nano, offering promising results for dental imaging applications. These results highlight the potential of YOLOv8-OBB in enhancing automated dental radiographic image analysis workflows.


I. Rozylo-Kalinowska1, K. Futyma-Gabka1, M. Piskorz1, K. Smala2, W. Miazek2, M. Moskwa2

1Medical University of Lublin, Department of Dental and Maxillofacial Radiodiagnostics, Lublin, Poland, 2Medical University of Lublin, Lublin, Poland

Aim: The aim of the study is to analyse the performance of artificial intelligence (AI) software in tooth numbering in mixed dentition in panoramic radiographs.

Material and methods: Seventy-eight panoramic radiographs of patients with mixed dentition were assessed for tooth numbering with the Viohl system using software by Diagnocat Inc. (LLC Diagnocat, Miami, USA). In the next step, the results were analysed by a dentist and then compared.

Results: The applied software correctly identified deciduous and permanent teeth in 58 cases (74.35%). In 20 cases (25.64%), errors occurred, such as an incorrect order of the teeth, doubled tooth numbers, or complete failure to identify some teeth. The errors concerned mostly deciduous teeth with advanced root resorption. There were markedly more errors in the assessment of anterior teeth than in the posterior region. AI was fully effective in identifying the presence of third molars, even at very early stages of mineralisation.

Conclusions: The software demonstrates significant potential in identifying teeth in patients with mixed dentition, but it still requires the supervision of a dentist.


G. Torgersen1

1University of Oslo/Institute of Clinical Dentistry, Maxillofacial Radiology, Oslo, Norway

Aim: To identify the most effective convolutional neural network (CNN) model for fine-tuning on a dataset of intraoral radiographs and to evaluate the accuracy of the fine-tuned model in classifying the periapical index (PAI), thereby determining the model's capability in accurately reflecting endodontic treatment outcomes.

Material and Methods: The model employed anonymized intraoral radiographs with landmarks and PAI scores from endodontic treatments performed at the University of Oslo. Scoring was done by master's students, residents in endodontics, and PhD candidates. An ImageJ plugin was used to manually mark landmarks and set the PAI as an indicator of treatment outcome. The PyTorch library in the Python programming language was used to fine-tune and test several pretrained open-source CNN models. Several models, including ResNet and VGG19, were fine-tuned on our dataset and tested for accuracy.

Results: A fine-tuned VGG19-based model performed best, achieving an accuracy of 64%.

Conclusion: While applying machine learning to image analysis in standard datasets, such as handwritten digits (MNIST) or flower classification, appears straightforward, the development of a model tailored to endodontic treatment evaluation poses significant challenges. Further results from refinement of training data and experimenting with different CNN models will be presented.


S. Satir1, T. Felek2, S. Ozel3

1Karamanoglu Mehmetbey University, Faculty of Dentistry, Oral and Maxillofacial Radiology, Karaman, Turkey, 2Akdeniz University, Institute of Natural and Applied Sciences, Antalya, Turkey, 3Altinbas University, Faculty of Dentistry, Oral and Maxillofacial Radiology, Istanbul, Turkey

Aim: The aim was to perform depth analysis and 3D modeling of the pulps of single-rooted teeth using periapical radiographs (PAR) without requiring additional radiation. For this application, software (XPAR) that creates a depth map from the gray values in PAR was used.

Material and Methods: Cone beam computed tomography (CBCT) and PAR images of 31 single-rooted teeth were retrospectively included. PARs were examined with XPAR for zones where depth decreased (bifurcation, transition from elliptical to oval form, pulp stone/calcification, thick alveolar bone, imaging artifact) and where depth increased (accessory canal, internal/external root resorption). The same teeth were also analyzed with CBCT. XPAR and CBCT findings were evaluated separately and by consensus.
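XPAR itself is proprietary, but the general idea of deriving a relative depth map from gray values can be illustrated with a simple intensity normalization; this sketch is an assumption for illustration, not the XPAR algorithm:

```python
def relative_depth_map(gray, bit_depth=8):
    """Map pixel gray values of a radiograph to a relative depth in [0, 1].
    Brighter (more attenuating) pixels are treated as thicker structures.
    Illustrative normalization only; NOT the proprietary XPAR algorithm."""
    max_value = (1 << bit_depth) - 1
    return [[value / max_value for value in row] for row in gray]

# Toy 1x3 image: black, mid-gray, white
row = relative_depth_map([[0, 128, 255]])[0]
print([round(v, 2) for v in row])
```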

Results: Totals of 23 and 16 abnormal depth decrease zones were detected with XPAR and CBCT, respectively, and totals of 20 and 18 abnormal depth increase zones were detected with XPAR and CBCT, respectively. Eleven depth decrease zones (4 bifurcation: 80%, 3 transition from elliptical to oval form: 60%, 2 pulp stone/calcification: 40%, 2 thick alveolar bone: 100%) were detected by both XPAR and CBCT by consensus. In addition, 6 depth increase zones (5 accessory canal: 29%, 1 internal/external root resorption: 100%) were determined by consensus.

Conclusion: XPAR can be useful for detecting bifid canal formation in single-rooted teeth and buccolingual form changes in the canal. The XPAR software needs further development to increase diagnostic accuracy in detecting accessory canals at the root apex.


A. Kuran1, E. Bilgir2, Ö. Çelik2, İ.Ş. Bayrakdar2

1Kocaeli University, Kocaeli, Turkey, 2Eskişehir Osmangazi University, Eskişehir, Turkey

Aim: The objective of this study was to develop a deep learning (DL) algorithm based on nnU-Net v2 that can diagnose various dental conditions that may be encountered when examining cone beam computed tomography (CBCT) volumes.

Material and Methods: Three-dimensional segmentations of caries, fillings, residual roots, fixed prostheses (crown/pontic), root canal fillings, periapical lesions, impacted/unerupted teeth, dental implants, endodontic-periodontal lesions, and implant-supported crowns were made on the CBCT volumes using the CranioCatch software (Eskişehir, Turkey). For each condition, the labelled data were divided into 90% training and 10% test groups according to the number of labels. A DL model based on nnU-Net v2 was developed separately for each condition, and model success was evaluated using Dice score, precision, recall, and Jaccard index.
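The per-condition 90%/10% division described above can be sketched as a seeded shuffle-and-split; the function name and inputs below are illustrative, not part of the CranioCatch or nnU-Net v2 workflow:

```python
import random

def split_labels(label_ids, train_fraction=0.9, seed=42):
    """Shuffle one condition's labelled cases and split them into
    train/test lists (90%/10% by default). Seeded for reproducibility."""
    ids = list(label_ids)
    random.Random(seed).shuffle(ids)
    cut = round(len(ids) * train_fraction)
    return ids[:cut], ids[cut:]

# Toy example: 100 labelled cases for one condition
train_ids, test_ids = split_labels(range(100))
print(len(train_ids), len(test_ids))
```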

Results: When evaluating the developed models, the Dice score, precision, recall, and Jaccard index for the different dental conditions were as follows: caries, 0.48, 0.54, 0.55, 0.35; filling, 0.73, 0.72, 0.76, 0.60; residual root, 0.62, 0.70, 0.56, 0.53; fixed prosthesis, 0.72, 0.81, 0.67, 0.61; root canal filling, 0.71, 0.81, 0.69, 0.57; periapical lesion, 0.56, 0.65, 0.56, 0.44; impacted/unerupted teeth, 0.61, 0.59, 0.84, 0.53; dental implant, 0.85, 0.85, 0.85, 0.74; endodontic-periodontal lesion, 0.39, 0.92, 0.34, 0.32; and implant-supported crown, 0.85, 0.85, 0.85, 0.74.

Conclusion: The application of the nnU-Net v2-based DL gives promising results in the automated segmentation of various dental conditions observed in CBCT volumes.