Chairs:
Anastasia Mitsea
Kostas Tsiklakis

138: A NEW ERA IN ROOT CANAL SEGMENTATION: AN INNOVATIVE AI-DRIVEN TOOL FOR AUTOMATED BI-ROOTED PREMOLAR ROOT CANAL SEGMENTATION ON CBCT

R.C. Fontenele1, A.O. Santos-Junior1,2, F.S. Neves1,3, S. Saleh Ali1,4, R. Jacobs1,5

1Katholieke Universiteit Leuven, Imaging and Pathology, Leuven, Belgium, 2São Paulo State University, Department of Restorative Dentistry, Araraquara, Brazil, 3Federal University of Bahia, Department of Propedeutics and Integrated Clinic, Salvador, Brazil, 4King Hussein Medical Center, Department of Restorative Dentistry, Amman, Jordan, 5Karolinska Institutet, Department of Dental Medicine, Stockholm, Sweden

Aim: To develop and assess the efficacy of an artificial intelligence (AI)-driven tool for the automated segmentation of root canals in bi-rooted premolar teeth using cone-beam computed tomography (CBCT), and to compare its performance with that of the manual human approach.

Material and Methods: In total, 111 CBCT scans were randomly divided into training (n = 55, 96 teeth), validation (n = 14, 16 teeth), and testing (n = 42, 72 teeth) sets. A convolutional neural network (CNN) model was trained using manual root canal segmentations by three operators as ground truth. The CNN automatically segmented the CBCT scans in the testing set, and identified errors were corrected, resulting in a refined 3D (R-AI) model. Segmentation performance was assessed by comparing the AI and R-AI models. Thirty percent of the test sample was also manually segmented to compare AI and human performance, with segmentation time recorded in seconds (s).

Results: The AI tool demonstrated outstanding performance, achieving a Dice similarity coefficient (DSC) of 88% ± 7% to 93% ± 3% and a 95% Hausdorff distance (HD) of 0.13 ± 0.06 to 0.16 ± 0.06 mm. Notably, automated segmentation performance was slightly better for upper second premolars than for first premolars (p < 0.05). Compared with the manual approach, the AI-driven method exhibited higher recall and accuracy and a lower 95% HD (p < 0.05). Furthermore, AI segmentation (42.8 ± 8.4 s) was 75 times faster than manual segmentation (3218.7 ± 692.2 s) (p < 0.05).
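
For reference, the overlap and surface-distance metrics reported above (DSC and 95% HD) can be computed from binary segmentation masks as in the following minimal sketch (Python with NumPy and SciPy; illustrative only, not the authors' evaluation pipeline):

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def dice(pred, gt):
        # Dice similarity coefficient of two boolean masks
        intersection = np.logical_and(pred, gt).sum()
        return 2.0 * intersection / (pred.sum() + gt.sum())

    def hd95(pred, gt, spacing):
        # 95th-percentile symmetric Hausdorff distance (mm) between mask surfaces
        def surface(mask):
            return np.logical_and(mask, ~binary_erosion(mask))
        pred_to_gt = distance_transform_edt(~gt, sampling=spacing)[surface(pred)]
        gt_to_pred = distance_transform_edt(~pred, sampling=spacing)[surface(gt)]
        return np.percentile(np.concatenate([pred_to_gt, gt_to_pred]), 95)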

Conclusion: The novel AI-driven tool developed and validated in this study provides highly accurate and efficient automatic root canal segmentation for bi-rooted teeth, surpassing human performance in accuracy and time efficiency.

139: ARTIFICIAL INTELLIGENCE BASED DENTAL ROOT CANAL SEGMENTATION FROM CBCT IMAGES

C. Dobo-Nagy1, B.T. Szabo1, B. Szabo1, B. Benyo2

1Semmelweis University, Oral Diagnostics, Budapest, Hungary, 2Budapest University of Technology and Economics, Control Engineering and Information Technology, Budapest, Hungary

Aim: This study aimed to develop and evaluate an artificial intelligence-based automatic root canal segmentation method capable of supporting root canal treatment.

Material and Methods: The parameters of a U-Net-based fully convolutional neural network were adapted to the given problem, in which cone-beam computed tomography slices (501 × 501 pixel matrix, 100 µm voxel size) were processed. The data set consists of human molars and premolars. Ten images were used to assess the method's accuracy by calculating the mean Intersection over Union (mean IoU, or Jaccard index) and the axial distance metric. An initial, empirical optimization of the network's metaparameters was applied, mainly to avoid over-fitting the network. Two observers compared the visualized results of a semi-automatic segmentation with the results of the AI-based automatic segmentation method.
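
As a point of reference, the overlap metric named above (mean IoU, or Jaccard index) can be computed from binary masks as in this minimal Python sketch (illustrative only; names and structure are not taken from the study):

    import numpy as np

    def jaccard(pred, gt):
        # Intersection over Union (Jaccard index) of two boolean masks
        intersection = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return intersection / union

    def mean_iou(preds, gts):
        # mean IoU over a set of segmented images or volumes
        return float(np.mean([jaccard(p, g) for p, g in zip(preds, gts)]))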

Results: Validation using the data set of ten teeth showed no sign of over-fitting during training. The proposed model reached high segmentation accuracy, with a high level of overlap between the segmented volumes and the ground truth masks: the Jaccard index was as high as 0.825 on the validation data set, and the axial distance was on the scale of the voxel size of the CBCT image. In the visual evaluation by the experts, a logistic regression model showed no significant difference between the observers (p > 0.05). The results showed that the method can produce accurate root canal segmentation, with accuracy similar to the semi-automatic method.

Conclusion: These promising results motivate extending the data set and translating the method towards clinical practice.

141: PERFORMANCE OF A DEEP LEARNING APPLICATION IN THE DETECTION OF PERIAPICAL LESIONS ON PANORAMIC RADIOGRAPHS

B.T. Szabó1, V. Szabó1, D.S. Veres2, D. Manulis3, M. Ezhov3, A. Sanders3, K. Orhan4,1

1Semmelweis University, Faculty of Dentistry, Department of Oral Diagnostics, Budapest, Hungary, 2Semmelweis University, Department of Biophysics and Radiation Biology, Budapest, Hungary, 3Diagnocat Inc, San Francisco, United States, 4Ankara University, Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Ankara, Turkey

Aim: The aim of our study was to determine the accuracy of an artificial intelligence-based system in detecting periapical lesions on panoramic radiographs.

Material and methods: A total of 357 panoramic radiographs (PRs) were selected, comprising 308 teeth with clearly visible periapical radiolucency and 308 teeth without any radiographic signs of a periapical lesion (PL). Three groups were generated: teeth with radiographic signs of caries, teeth with coronal restorations, and teeth with root canal fillings. The PRs were uploaded to the Diagnocat system (DC) for evaluation. The performance of the convolutional neural network in detecting PLs was assessed by calculating its sensitivity, specificity, and correctly classified proportion values.

Results: DC detected periapical lesions in 240 (77.9%) of the 308 teeth with a PL, while no PL was found in the remaining 68 (22.1%) teeth with a PL. DC detected no PL in any of the groups without a PL. The overall sensitivity, specificity, and correctly classified proportion values of DC were 0.78, 1.00 and 0.89, respectively. Considering these parameters for each group, Group 2 showed the highest values at 0.84, 1.00 and 0.95, respectively. The AI-based system showed lower sensitivity and diagnostic accuracy for detecting PLs on canines, with values of 0.27 and 0.64, respectively.
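
The reported overall values follow directly from the counts above; a minimal worked sketch (Python), assuming the 240 detected lesions count as true positives, the 68 missed lesions as false negatives, and all 308 lesion-free teeth as true negatives:

    tp, fn, tn, fp = 240, 68, 308, 0   # counts as reported above

    sensitivity = tp / (tp + fn)                   # 240/308 ≈ 0.78
    specificity = tn / (tn + fp)                   # 308/308 = 1.00
    accuracy = (tp + tn) / (tp + tn + fp + fn)     # 548/616 ≈ 0.89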

Conclusions: The deep CNN-based Diagnocat system can support the diagnosis of PLs on PRs and serve as a decision support tool during radiographic assessment.

142: THE ACCURACY ASSESSMENT OF A CNN-BASED ARCHITECTURE IN THE ANATOMICAL IDENTIFICATION OF THE MAXILLARY SINUS AND INFERIOR ALVEOLAR CANAL ON CBCT

P.P. Jaju1, S. Jaju2, A. Friedel3, M. Feinberg3, M. Suri3, V. J3, R. Jagtap4

1Rishiraj College of Dental Sciences & Research Centre, Oral Medicine and Radiology, Bhopal, India, 2Rishiraj College of Dental Sciences & Research Centre, Conservative Dentistry & Endodontics, Bhopal, India, 3VELMENI, Florida, United States, 4University of Mississippi Medical Center School of Dentistry, Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, Mississippi, United States

Aim: The objective of this study was to validate the diagnostic performance of a convolutional neural network (CNN)-based architecture for detecting the maxillary sinus and inferior alveolar canal (IAC) on CBCT.

Materials and Methods: In this study, 412 CBCT scans were utilized. Patient radiographs were assessed using the PACS at Rishiraj Dental College from January 2020 to January 2024. Manual segmentation was performed with the assistance of the artificial intelligence (AI) system. The detection of the maxillary sinus and IAC was conducted through a CNN-based architecture.

Two independent oral and maxillofacial radiologists evaluated the maxillary sinus and inferior alveolar canal using the Anatomage InVivo 3D software. A comparison was drawn between the results obtained from the human observers and the artificial intelligence model. Interobserver agreement and statistical analysis were carried out using the Pearson product-moment correlation.
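
A minimal sketch of a Pearson product-moment correlation between AI-derived and observer-derived measurements (Python with NumPy; the function and inputs are illustrative and not the study's analysis code):

    import numpy as np

    def pearson_r(x, y):
        # Pearson product-moment correlation coefficient of two paired samples
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        xc, yc = x - x.mean(), y - y.mean()
        return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))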

Results: The CNN architecture successfully detected the maxillary sinus and inferior alveolar canal on CBCT. The accuracy of correctly identifying the maxillary sinus and inferior alveolar canal was 91.2%. There was no significant difference between the two measurement methods (p > 0.05). The Pearson product-moment analysis revealed a strong correlation (r > 0.5) between the AI model and the human observers: the correlation coefficients ranged from 0.906 to 0.918 for the maxillary sinus and from 0.912 to 0.924 for the inferior alveolar canal.

Conclusion: The disparity in the detection of the maxillary sinus and inferior alveolar canal between specialists and the AI algorithm did not yield statistically significant differences.