Chairs:
Gang Li
Kaan Orhan

224: FROM 2D TO 3D: AI-ENABLED TRANSFORMATION OF PERIAPICAL DENTAL RADIOGRAPHS INTO THREE-DIMENSIONAL MODELS

M. Sehat1, A. Hariri1, P. Soltani1

1Dental Implants Research Center, Isfahan University of Medical Sciences, Isfahan, Iran

Aim: This pilot study aimed to investigate the capability of artificial intelligence to convert 2D periapical dental radiographs into 3D models. Our focus was on determining the feasibility and accuracy of this transformation, which can be beneficial in clinical scenarios where 3D tooth models are required for diagnosis and treatment planning.

Methods: We developed a custom-trained convolutional neural network (CNN) using a database of periapical dental radiographic images. Our dataset comprised 100 images, chosen for their clarity and the presence of the mandibular first molar, to focus learning on a critical dental structure. This collection was divided into two segments: a training set of 70 images to train the model and a test set of 30 images to evaluate its performance. To gauge the accuracy of our 3D reconstructions, we used cone beam computed tomography (CBCT) scans as ground truth. Key performance metrics (accuracy, precision, and recall) were computed from a confusion matrix to assess model efficacy.
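
As an illustration of the evaluation step described above (not the authors' actual code), the following minimal Python sketch shows how accuracy, precision, and recall can be derived from a voxel-wise binary confusion matrix; the function name and the assumption of flattened binary masks are hypothetical.

```python
import numpy as np

def confusion_matrix_metrics(y_true, y_pred):
    """Accuracy, precision and recall from binary labels, e.g. flattened
    voxel masks of the reconstructed 3D model vs. the CBCT ground truth."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)

    tp = np.sum(y_true & y_pred)    # true positives
    fp = np.sum(~y_true & y_pred)   # false positives
    fn = np.sum(y_true & ~y_pred)   # false negatives
    tn = np.sum(~y_true & ~y_pred)  # true negatives

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall
```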

Results: The CNN achieved an accuracy of 80% in rendering 3D dental structures, with precision and recall closely aligned with this metric. The model was particularly adept at delineating the anatomical contours of teeth, suggesting a strong basis for further refinement.

Conclusion: This study's findings underscore the viability of using AI for 3D reconstruction from 2D radiographic images, opening avenues for enhanced clinical applications. While the initial accuracy is promising, ongoing adjustments and larger, more varied datasets are expected to improve model performance.

231: FULLY AUTOMATIC ASSESSMENT OF MANDIBULAR CONDYLE CHANGES

N. van Nistelrooij1,2, M. Boelstoft Holte3,4, E. Marie Pinholt3,4, T. Xi1, S. Bergé1, S. Vinayahalingam1

1Radboud University Medical Center, Oral and Maxillofacial Surgery, Nijmegen, Netherlands, 2Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Oral and Maxillofacial Surgery, Berlin, Germany, 3University Hospital of Southern Denmark, Oral and Maxillofacial Surgery, Esbjerg, Denmark, 4University of Southern Denmark, Regional Health Research, Esbjerg, Denmark

Aim: This study proposes and validates a fully-automated assessment of volume changes in the mandibular condyle following orthognathic surgery.

Material and Methods: Two collections of cone-beam computed tomography (CBCT) scans were included in this study. The first collection included subjects with a class I occlusion and segmentations of the complete mandible. The second collection included pre-operative and post-operative CBCT scans and segmentations of the ramal segments of the mandible. Two convolutional neural networks (CNNs) were developed to predict a segmentation of the mandible and its ramal segments using a coarse-to-fine strategy. A pre-operative ramal segment was registered to the post-operative mandible. Lastly, the pre-operative and post-operative condylar volumes were determined within the same volume of interest based on the pre-operative condyle. For validation, the agreement between the fully-automated assessment and a validated semi-automated method was calculated by mean difference and intra-class correlation coefficients (ICC).
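
To make the volume comparison concrete, here is a minimal, hypothetical Python sketch of how pre- and post-operative condylar volumes and their relative change could be computed from binary segmentation masks within the same volume of interest, together with the mean difference between two measurement methods; mask names, voxel sizes, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def condylar_volume_mm3(mask, voxel_size_mm):
    """Volume of a binary condyle segmentation within the VOI, in mm^3."""
    voxel_volume = float(np.prod(voxel_size_mm))  # e.g. (0.3, 0.3, 0.3) mm
    return int(mask.sum()) * voxel_volume

def volume_change_percent(pre_mask, post_mask, voxel_size_mm):
    """Relative condylar volume change, both masks evaluated inside the
    same pre-operative volume of interest."""
    v_pre = condylar_volume_mm3(pre_mask, voxel_size_mm)
    v_post = condylar_volume_mm3(post_mask, voxel_size_mm)
    return 100.0 * (v_post - v_pre) / v_pre

def mean_difference(fully_auto, semi_auto):
    """Mean and standard deviation of the paired differences between the
    fully-automated and semi-automated volume change measurements."""
    diffs = np.asarray(fully_auto) - np.asarray(semi_auto)
    return diffs.mean(), diffs.std(ddof=1)
```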

Results: Forty mandibular condyles in twenty held-out subjects (sixteen female; mean age 27.6 years) with maxillomandibular retrognathia, who underwent bimaxillary surgery, were assessed. The fully-automated method was considerably faster than the semi-automated method (3 min vs. 30 min). A small difference between the condylar volume change measurements produced by the two methods was observed (mean = 2.6%, standard deviation = 1.8%), and the agreement was excellent (ICC = 0.99). A systematic underestimation of the condylar volumes and the condylar volume change was noticed.

Conclusion: The fully-automated assessment demonstrated excellent reliability for quantifying condylar volume changes. The short processing time allows for integration into routine clinical practice.

261: 3D TOOTH SEGMENTATION WITH NN-UNET: AN ARTIFICIAL INTELLIGENCE STUDY

M. Orhan1, E. Bilgir2, İ.Ş. Bayrakdar3, Ö. Çelik4

1Beykent University/Faculty of Dentistry, Department of Dentomaxillofacial Radiology, İstanbul, Turkey, 2Eskişehir Osmangazi University/Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Eskişehir, Turkey, 3Eskişehir Osmangazi University/Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Eskişehir, Turkey, 4Eskişehir Osmangazi University, Faculty of Science, Department of Mathematics and Computer Science, Eskişehir, Turkey

Aim: Three-dimensional radiographs are difficult to interpret and require special expertise, and human misinterpretation may also occur. The development of systems in which anatomical and pathological structures are automatically analyzed has therefore become a necessity. In this study, we aimed to evaluate the success of artificial intelligence models developed for the segmentation and numbering of teeth on CBCT images.

Material and Methods: In this study, CBCT images from our radiology archive were examined retrospectively. A total of 50 CBCT image stacks were included for manual segmentation of the teeth. The teeth were labeled polygonally along the axial sections using CranioCatch software by three experienced radiologists. DICOM files were converted to NIfTI format, and low-resolution (lowres) training was performed for 1000 epochs using the nnU-Net architecture. The model's performance was evaluated using precision and recall values, as well as Dice and Jaccard scores.
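
As a generic illustration of the reported metrics (not the study's actual evaluation code), the sketch below computes per-tooth Dice and Jaccard scores from two integer label maps in which each voxel carries an FDI tooth number; the function name and label-map convention are assumptions.

```python
import numpy as np

def dice_and_jaccard(pred, gt, label):
    """Per-tooth Dice and Jaccard scores for one FDI tooth number,
    given predicted and ground-truth 3D label maps of equal shape."""
    p = (pred == label)
    g = (gt == label)
    intersection = np.sum(p & g)
    union = np.sum(p | g)
    denom = p.sum() + g.sum()
    dice = 2.0 * intersection / denom if denom else 1.0
    jaccard = intersection / union if union else 1.0
    return dice, jaccard
```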

Results: A total of 255,324 labels were created. The highest Dice scores obtained after segmentation were 0.92 for teeth 36 and 23, and the lowest was 0.52 for tooth 25.

Conclusion: Recent research highlights the effectiveness of AI in analyzing 2D dental radiographs with high accuracy. However, studies on 3D analyses are few, and most automate frame-based analyses instead of 3D segmentation analysis. In this study, successful results were obtained in the 3D segmentation, localization, and numbering of teeth using nnU-Net. The 'no new U-Net' (nnU-Net) methodology in dental AI research signifies a shift towards optimizing existing U-Net architectures with innovative enhancements.

173: TOP CITED ARTICLES IN ORAL RADIOLOGY: A BIBLIOMETRIC NETWORK ANALYSIS

A. Delantoni1, A. Fardi1, T. Lillis1

1Aristotle University of Thessaloniki, Dentoalveolar Surgery, Implant Surgery and Radiology, Thessaloniki, Greece

Aim: The purpose of the present study was to identify and analyze the 100 top-cited articles published in oral radiology journals, describe basic bibliometric indicators and analyze current research trends.

Materials and Methods: Web of Science was used to conduct a comprehensive search in dental radiology from inception until 22 November 2023. Basic information on the 100 top-cited articles was recorded. Biblioshiny, the web interface for Bibliometrix, and VOSviewer were employed for thematic map analysis as well as author keyword, title, and abstract term analyses in order to elucidate research trends. The Elsevier Scopus database was also used for citation comparisons.
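
For readers unfamiliar with keyword co-occurrence analysis, the following hypothetical Python sketch shows the basic counting step that tools such as VOSviewer and Bibliometrix perform internally when building a co-occurrence network; the data layout and function name are assumptions for illustration only.

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(article_keywords):
    """Count how often pairs of author keywords appear in the same article.

    article_keywords: one keyword list per article, e.g.
    [["cone beam computed tomography", "artificial intelligence"], ...]
    """
    pair_counts = Counter()
    for keywords in article_keywords:
        # normalize and deduplicate keywords within each article
        normalized = sorted({k.lower().strip() for k in keywords})
        for a, b in combinations(normalized, 2):
            pair_counts[(a, b)] += 1
    return pair_counts  # edge weights of the co-occurrence network
```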

Results: The total citation count for the 101 most-cited articles ranged from 105 to 587. The majority were original research studies with observational designs on diagnosis, dose, geometric measurement and image analysis topics. Cone beam computed tomography was the most studied radiologic technique, as the author keyword co-occurrence analysis revealed, and appeared as a basic theme for the development of transdisciplinary research fields. Although still in its early stages, artificial intelligence was well represented in the top-cited list, receiving increasing citation numbers within only a few years and concentrating the highest citation densities.

Conclusions: Bibliometric analysis of the most influential publications in oral radiology depicts the field's evolution, enhances the understanding of scientific research progress and provides dental clinicians with an auxiliary guide for educational and training purposes.

175: ARTIFICIAL INTELLIGENCE-ASSISTED SEGMENTATION OF LYMPH NODES IN THE HEAD AND NECK REGION: AN ULTRASOUND STUDY

B.T. Çiftçi1, F. Aşantoğrol1

1Gaziantep University, Faculty of Dentistry, Gaziantep, Turkey

Aim: Lymph nodes are a crucial component of the immune system. The aim of this study is to segment inflammatory lymph nodes in the head and neck region on ultrasonography images using the YOLOv8 deep learning algorithm.

Material and Methods: This study included 218 lymphadenopathy images from 185 patients whose ultrasonography images had been obtained for various reasons at the Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Gaziantep University. The images were labeled polygonally using Roboflow software (Roboflow, Inc., Des Moines, Iowa, USA). The labeled images were randomly divided into training, validation and test datasets.
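
As an illustrative sketch (not the authors' exact training setup), a YOLOv8 segmentation model can be trained and applied with the Ultralytics Python API roughly as follows; the model variant, epoch count, image size, and dataset file name are assumptions.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 segmentation model (nano variant as an example).
model = YOLO("yolov8n-seg.pt")

# Train on the labeled ultrasound dataset; "lymph_nodes.yaml" is a hypothetical
# dataset config pointing to the train/validation/test image folders
# (Roboflow can export annotations in this YOLOv8 format).
model.train(data="lymph_nodes.yaml", epochs=100, imgsz=640)

# Run inference on a held-out test image and keep the predicted masks.
results = model("test_ultrasound_image.png")
masks = results[0].masks  # segmentation masks of the detected lymph nodes
```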

Results: For the segmentation of lymph nodes on the test data, the true positive, false positive and false negative values for the 39 lymph nodes in the 36 test images were 38, 1, and 3, respectively. The accuracy of the YOLOv8 algorithm was found to be 90%. The F1 score, precision and recall values were 0.95, 0.92 and 0.97, respectively.

Conclusion: The present study demonstrated that lymph nodes in the head and neck region can be successfully segmented using the YOLOv8 deep learning algorithm. These results suggest that deep learning algorithms are highly efficient in detecting various pathologies and can be used as an appropriate tool in clinical applications.