Chairs:
Rubens Spin-Neto

144: THE PERFORMANCE OF A CONVOLUTIONAL NEURAL NETWORK (CNN) BASED DEEP-LEARNING SYSTEM FOR CARIES DETECTION IN PANORAMIC RADIOGRAPHY

S. Khanna1, M. Thariyal1

1Nair Hospital Dental College, Oral Medicine & Radiology, Mumbai, India

Aim: The aim of this study was to validate the performance of a Convolutional Neural Network (CNN) based deep-learning system for caries detection in panoramic radiography.

Methodology: In this study, 1,200 anonymized dental panoramic radiographs were analyzed by two independent dental surgeons.

A pre-trained model (a convolutional neural network on the cloud-based platform of Velmeni Inc.) was used to assess the panoramic radiographs at Nair Hospital Dental College. The results obtained from the human observers were compared with those of the artificial intelligence model.
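
The abstract does not state which agreement statistics were used; the sketch below shows one plausible way such a human-versus-AI comparison could be scored, using Cohen's kappa plus sensitivity and specificity from scikit-learn (hypothetical labels, illustrative only):

```python
# Hypothetical scoring of human-vs-AI agreement (illustrative only; the
# abstract does not state which statistics were used).
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical per-tooth labels: 1 = carious, 0 = sound
human_consensus = [1, 0, 0, 1, 1, 0, 1, 0]
ai_predictions  = [1, 0, 1, 1, 1, 0, 0, 0]

kappa = cohen_kappa_score(human_consensus, ai_predictions)
tn, fp, fn, tp = confusion_matrix(human_consensus, ai_predictions).ravel()
sensitivity = tp / (tp + fn)   # true caries the AI detected
specificity = tn / (tn + fp)   # sound surfaces correctly left unflagged
print(f"kappa={kappa:.2f}, sens={sensitivity:.2f}, spec={specificity:.2f}")
```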

Background: Dental caries affects a large proportion of adults, and accurate early diagnosis remains a challenging problem for dentists. Caries on tooth surfaces is conventionally inspected by observing texture and discoloration with the visual-tactile method, which is highly subjective and depends on the dentist's expertise. An efficient, fully automatic, and accurate dental caries detection algorithm is therefore needed. Computer-assisted artificial intelligence systems have been developed to support diagnosis and treatment procedures in dental imaging. In this study, we evaluated the performance of a deep-learning artificial intelligence model (a pre-trained convolutional neural network on the cloud-based platform of Velmeni Inc.) for caries detection on panoramic radiography.

Conclusion: The suggested AI model showed encouraging results in the detection of carious lesions on panoramic radiographs. As AI models continue to advance in dental radiology, they will support clinicians in digitally based patient education as well as in treatment planning.

164: VISNOW-MEDICAL: BRIDGING THE GAP BETWEEN AI AND VISUAL INTERPRETATION IN DENTAL RADIOLOGY

P. Regulski1, K. Szopinski2

1Medical University of Warsaw, Digital Imaging and Virtual Reality Laboratory, Department of Dental and Maxillofacial Radiology, Warsaw, Poland, 2Medical University of Warsaw, Department of Dental and Maxillofacial Radiology, Warsaw, Poland

The objective was to present the capabilities of VisNow-Medical, an extension of the VisNow platform specifically tailored for the visual analysis of radiological data. VisNow-Medical integrates open-source algorithms designed to enhance the analysis of radiological images with support for AI models and simulations, making it a versatile tool for dental radiology applications.

VisNow-Medical is designed to facilitate the development of case-specific, multi-platform medical and radiological applications. It supports the processing of 1D, 2D, and 3D data across a variety of tasks, including denoising, filtering, segmentation, classification, mapping, and quantitative visual analysis. The platform's functionalities are modularized, allowing users to construct complex data-processing networks by linking modules together to form comprehensive applications. Various types of data are supported, differing in geometry, structure, and values. The platform offers easy integration with commonly used Python frameworks for AI modeling.

Utilizing VisNow-Medical, a novel AI-driven method was applied in quantitative MRI to assess temporomandibular-disc displacement, analyzing 50 images and revealing significant differences in retrodiscal tissue, medial pterygoid muscle, and bone marrow between groups with and without disc displacement. In CBCT, the platform aided in post-surgical analysis for lip and palate clefts in 35 patients, showing a trend towards increased alveolar volume in early secondary bone grafting cases. Moreover, VisNow-Medical facilitated the creation of 230 digital anatomical models for virtual reality.

In conclusion, VisNow-Medical significantly enhances dentomaxillofacial radiology research and development by providing a powerful, integrated platform for visual analysis with AI integration, handling diverse data and tasks.

228: EXPLORING PHARYNGEAL AIRWAY CHARACTERISTICS WITH NNUNET: A DEEP LEARNING APPROACH

K. Altuntaş1, S.N. Özdemir1, E. Kiliç1, O. Baydar1, E. Yeşilova1, E. Bilgir1, Ö. Çelik2, İ.Ş. Bayrakdar1

1Eskisehir Osmangazi University, Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskişehir, Turkey, 2Eskisehir Osmangazi University, Department of Mathematics-Computer, Faculty of Science, Eskişehir, Turkey

Aim: The pharyngeal airway is crucial in medical and dental practice, particularly for pathology identification, diagnosing obstructive sleep apnea, orthodontic evaluations, and pre-anesthetic assessments. Modern 3D techniques allow precise segmentation and volumetric analysis. Our study aims to assess a model using Cone Beam Computed Tomography (CBCT) image labels for pharyngeal airway detection.

Material and Methods: We retrospectively analyzed 188 CBCT images from our radiology archive, ensuring they showed the entire nasopharyngeal complex without motion-induced artifacts. A total of 32,022 pharyngeal airway labels were delineated by three dentists and verified by four radiologists. Consensus-approved labels were converted to NIfTI format, and the model was trained using the nnU-Net architecture implemented in the PyTorch library. Evaluation metrics included the Dice coefficient, precision, recall, and Jaccard index.
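
As a worked illustration of the four reported metrics, the sketch below computes them for a pair of binary segmentation masks with NumPy (this is not the authors' evaluation code):

```python
# Worked illustration of the reported overlap metrics for binary masks
# (not the authors' evaluation code).
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # voxels both call airway
    fp = np.logical_and(pred, ~truth).sum()   # predicted airway, truth says no
    fn = np.logical_and(~pred, truth).sum()   # missed airway voxels
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
        "jaccard": tp / (tp + fp + fn + eps),
    }
```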

Results: The nnU-Net method, powered by artificial intelligence and deep learning, provided compelling results. A high Dice score of 0.94 was achieved at both full and reduced resolutions, confirming the efficacy of deep learning. Precision and recall reached 0.94 and 0.89, respectively.

Conclusion: nnU-Net, utilizing artificial intelligence and deep learning, proves effective for pharyngeal airway analysis. High Dice, precision, and recall scores underscore the effectiveness of deep learning in medical imaging. This study contributes insights into pharyngeal airway assessment, emphasizing the role of AI in advancing medical imaging and diagnostics.

229: ESTIMATING HUMAN AGE USING MACHINE LEARNING ON PANORAMIC RADIOGRAPHS

D.Q. Freitas1, W. Oliveira2, M. Albuquerque3, C.A.P. Burgardt3, F.M.d.M. Ramos Perez3, A. dos Anjos Pontual3, M.L. dos Anjos Pontual3, C. Zanchettin2

1University of Campinas, Oral Diagnosis, Piracicaba, Brazil, 2Universidade Federal de Pernambuco, Centro de Informática – CIn, Recife, Brazil, 3Universidade Federal de Pernambuco, Centro de Ciências da Saúde – CCS, Recife, Brazil

Aim: Age estimation is a crucial task in forensic sciences, relevant in cases such as mass disasters and legal age confirmation. In this study, AI-based models were developed and evaluated using panoramic radiographs to predict the chronological age of Brazilian patients.

Methods: From 12,818 radiographs of patients of both sexes aged 2.25 to 96.50 years, without bone lesions, fractures, or dental anomalies, 10,035 were selected after a quality-analysis process. Training started with a set of 8,027 randomly selected images, representing 80% of the total dataset. To expand the dataset and improve model performance, a 3-fold data augmentation technique was applied, resulting in a total of 24,081 images.
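
The abstract does not specify which transforms made up the augmentation; the sketch below illustrates one plausible reading of "3-fold" (original plus two perturbed copies, 8,027 × 3 = 24,081), with hypothetical paths and transform choices:

```python
# Hypothetical 3-fold augmentation: keep the original and write two perturbed
# copies, tripling the set (8,027 x 3 = 24,081). Paths and transforms are
# illustrative; the abstract does not specify them.
from pathlib import Path
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=5),                  # small tilts
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # exposure variation
])

src, dst = Path("train_images"), Path("train_images_aug")
dst.mkdir(exist_ok=True)
for img_path in src.glob("*.png"):
    image = Image.open(img_path).convert("L")
    image.save(dst / img_path.name)           # keep the original
    for k in range(2):                        # two perturbed copies -> 3x total
        augment(image).save(dst / f"{img_path.stem}_aug{k}.png")
```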

Results: The adapted InceptionV4 network effectively predicted chronological age, with a mean absolute error of 3.15 and an R-squared value of 95.6%, demonstrating consistent performance on the holdout set. Two oral radiologists further evaluated the predictions of 1,282 images with a heatmap and concluded that the neural network focused on regions already used in forensic dentistry for age determination, such as pulp measurements and the calcification stages of permanent teeth. Model performance decreased for individuals over 60 years of age because of limited data on older individuals in the dataset.
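
For reference, the two reported regression metrics can be computed with scikit-learn as follows (hypothetical arrays; the study's values came from its own holdout set):

```python
# The two reported regression metrics, computed with scikit-learn on
# hypothetical holdout arrays.
from sklearn.metrics import mean_absolute_error, r2_score

true_ages = [21.3, 34.8, 57.1, 8.2]    # hypothetical real ages (years)
predicted = [23.0, 33.5, 60.2, 7.4]    # hypothetical model outputs

mae = mean_absolute_error(true_ages, predicted)  # study reports 3.15
r2 = r2_score(true_ages, predicted)              # study reports 0.956
```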

Conclusion: This study highlights the potential of AI-based models for estimating chronological age from panoramic radiographs, with consistent results comparable to traditional manual methods. The next step is to adapt this network for multitask functionality, allowing simultaneous classification of sex.

Grant: FAPESP #2022/07468-7

245: CHRONOLOGICAL AGE ESTIMATION WITH PANORAMIC RADIOGRAPHS USING PREDICTIVE FORMULA AND NEURAL NETWORK: A COMPARATIVE ANALYSIS

M.L. Pontual1, M. Albuquerque Santos1, W. Oliveira2, C.A. Pereira Burgardt3, R. Villar Beltrão4, F.M. Moraes Ramos-Perez1, A. Anjos Pontual1, C. Zanchettin3

1Universidade Federal de Pernambuco, Clinic and Preventive Dentistry, Recife, Brazil, 2Universidade Federal de Pernambuco, Centro de Informática – CIn, Recife, Brazil, 3Universidade Federal de Pernambuco, Centro de Informática – CIn, Recife, Brazil, 4Universidade Federal da Paraíba, Clinic and Social Dentistry, João Pessoa, Brazil

This study aimed to compare a predictive formula and a deep neural network in estimating chronological age from panoramic radiographs (PRs). A total of 2,533 images from patients aged 2 to 22 years were utilized, sourced from the database of the Dental Radiology Clinic at UFPE, Recife, PE, Brazil. Of these, 2,219 images were employed for training and validating a neural network model with the Inception V4 architecture, following the iterative concepts of the CRISP-DM framework. The remaining 314 PRs were used in the neural network's testing phase to assess its efficacy in age estimation. For the evaluation of the predictive formula, an experienced and calibrated radiologist assessed the same 314 images for third molar development degree according to Demirjian et al.'s (1973) classification, with stage zero added for crypt presence. The data underwent descriptive and Bland-Altman analyses, with Student's t-test used to assess the mean difference between estimated and real ages (p≤0.05). The neural network model exhibited a mean absolute error (MAE) of 1.043 years, a mean squared error (MSE) of 1.769, and a mean difference from the real age of 0.07 (p=0.37921). The predictive formula displayed higher MAE (1.521 years) and MSE (4.105) values and a mean difference between estimated and real ages of 0.97 (p≤0.05). In conclusion, the predictive formula cannot be used as the sole method for age estimation, whereas ages estimated by the Inception V4 neural network model can be considered similar to the real ages.
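
A minimal sketch of the Bland-Altman quantities and the paired Student's t-test described above, assuming two arrays of estimated and real ages (hypothetical values; not the authors' analysis script):

```python
# Bland-Altman quantities and paired Student's t-test on the mean difference
# (hypothetical data; not the authors' analysis script).
import numpy as np
from scipy import stats

estimated = np.array([12.1, 15.4, 9.8, 18.2, 6.7])   # hypothetical estimated ages
real      = np.array([11.6, 16.0, 10.3, 17.5, 7.1])  # hypothetical real ages

diff = estimated - real
bias = diff.mean()                       # mean difference (bias)
half_width = 1.96 * diff.std(ddof=1)     # 95% limits-of-agreement half-width
upper, lower = bias + half_width, bias - half_width

# Paired t-test of H0: mean difference = 0 (p <= 0.05 deemed significant)
t_stat, p_value = stats.ttest_rel(estimated, real)
```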

260: ENHANCING OSTEOPOROSIS SCREENING WITH EFFICIENTNET-BASED MODELS FOR PANORAMIC DENTAL IMAGES

A. Leite1, B. Scholles Soares Dias2, R. Querrer1, P. Figueiredo1, N. Santos de Melo1, L. Rodrigues Costa3, M. Fagundes Caetano3, M. Farias4

1University of Brasília, Department of Dentistry, Brasília, Brazil, 2University of Brasília, Department of Electrical Engineering, Brasília, Brazil, 3University of Brasília, Department of Computer Science – Gigacandanga, Brasília, Brazil, 4Texas State University, Department of Computer Science, San Marcos, United States

Aim: To develop a method for screening osteoporosis on panoramic radiographs (PRs) using convolutional neural network (CNN) models built on the EfficientNet architecture.

Material and methods: Two approaches were explored: using complete PR images and using cropped PR images focused specifically on the mandibular cortical region. The image database consisted of 750 PRs, of which 579 were classified as C1 and 171 as C3 by an experienced examiner. An in-house platform named Gigaview was developed to gather, process, and view data. The study used the Computer Vision Annotation Tool (CVAT) for image annotation and dataset preparation, integrating data preprocessing techniques such as the rolling-ball method for noise reduction and background correction, as well as data augmentation techniques. The EfficientNet B5, B6, and B7 models were fine-tuned for classification using transfer learning. The evaluation metrics were accuracy, precision, AUC, recall, F1-score, and specificity.
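
A hedged sketch of this kind of transfer-learning setup, using the torchvision EfficientNet-B7 backbone with a new two-class (C1 vs. C3) head (assumes torchvision ≥ 0.13; layer names follow torchvision, not the authors' code):

```python
# Hedged transfer-learning sketch: torchvision EfficientNet-B7 backbone with
# a new two-class (C1 vs. C3) head. Assumes torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b7(weights=models.EfficientNet_B7_Weights.IMAGENET1K_V1)

# Swap the 1000-class ImageNet head for a 2-class classifier
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, 2)

# Freeze the pretrained backbone and fine-tune only the new head first
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```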

Results: The cropped-image approach achieved an accuracy of approximately 98% and an AUC of 0.987 (EfficientNet B7), while the full-image approach yielded a slightly lower accuracy of approximately 95% and an AUC of 0.961 (EfficientNet B6). Grad-CAM enabled a more detailed visual analysis and highlighted the importance of the mandibular cortical edge in the model predictions, even when the full-image method was used, despite its slightly lower performance relative to the cropped-image approach.
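
Grad-CAM itself can be implemented with two hooks on a chosen convolutional block; the sketch below is a generic PyTorch version (illustrative, not the study's implementation), usable with the EfficientNet sketch above via conv_layer=model.features[-1]:

```python
# Generic Grad-CAM via forward/backward hooks on a chosen conv block
# (illustrative; not the study's implementation).
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, conv_layer):
    acts, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.update(value=o))
    h2 = conv_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(value=go[0]))

    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)  # ensure grads reach the hook
    score = model(x)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    # Weight feature maps by their average gradient, ReLU, upsample, normalize
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()

# e.g., heatmap = grad_cam(model, img_tensor, target_class=1,
#                          conv_layer=model.features[-1])
```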

Conclusions: This study contributed to computer-aided diagnosis in dentistry by making the code used for training and testing the networks publicly available, thereby offering a promising tool for osteoporosis screening on routine PRs.

91: DIAGNOSTIC IMAGE QUALITY IN BITEWING EXAMINATIONS

D. Olsson1, E. Levring Jäghagen1, M. Garoff1

1Umeå University, Odontology, Umeå, Sweden

Aim: To evaluate diagnostic image quality in bitewing examinations (BWE) and deficiencies in applied imaging technique.

Material and Methods: In total, 1,000 randomly selected BWEs performed with the CMOS technique in 2019, as part of routine check-ups at general dental clinics in Västerbotten county council, Sweden, were retrospectively analysed. The assessment of image quality focused on the possibility of diagnosing caries and evaluating marginal bone level. Deficiencies related to the applied imaging technique were analysed. Where present, radiographic images of categories other than bitewings that compensated for deficiencies in the BWE were considered. Differences between age groups, gender, sensor size, and number of exposed images were analysed.

Preliminary results: Twenty-four of 304 (8%) BWEs were of optimal diagnostic quality considering both caries and marginal bone level; 80 (26%) met the requirements for caries diagnostics and 67 (22%) for marginal bone level evaluation. Considering clinical factors in a holistic evaluation, 134 (44%) BWEs were of acceptable diagnostic quality. Gender did not impact image quality (p=0.678). However, image quality was better with the larger sensor size (p=0.001) and among patients aged ≥13 years (p<0.001), in whom the larger sensors were more prevalent; image quality also correlated with extended examinations, i.e., a higher number of exposures (p=0.002). Common deficiencies were related to incorrect sensor placement and beam angulation.

Conclusion: Fewer than 1 in 12 BWEs performed with the CMOS technique fulfilled the diagnostic requirements for both caries and marginal bone level assessment. Opting for the larger sensor size and utilizing complementary exposures to improve the BWEs would improve the outcome.