Objective: Artificial intelligence applications are becoming increasingly prevalent in dentistry, but the level of knowledge of artificial intelligence-powered chatbots in this field remains uncertain. The aim of this study was to evaluate the level of knowledge of two different chatbots on prosthodontics and to investigate whether these levels differ. Material and Methods: The prosthodontics questions asked in the Dentistry Specialty Education Entrance Examination (DUS) between 2012 and 2021 were categorized by topic. A total of 128 multiple-choice questions were posed simultaneously to the chatbots ChatGPT-3.5 (Chat Generative Pre-trained Transformer; OpenAI, San Francisco, California, USA) and Gemini (Google, Mountain View, California, USA). The answers given by the chatbots were scored on a 3-point Likert scale, and the scores were calculated separately for each exam year and topic. The IBM SPSS 23 software package (IBM SPSS, Chicago, IL, USA) was used for statistical analysis, and the scores obtained by ChatGPT-3.5 and Gemini were compared using the paired-sample t-test. Results: There was no significant difference between the scores received by ChatGPT-3.5 and Gemini for their answers to the included questions (p=0.251), nor in the chatbots' levels of knowledge by topic (p=0.965). Conclusion: ChatGPT-3.5 and Gemini demonstrate similar levels of knowledge in prosthodontics. However, the percentage of questions they answered correctly indicates that their ability to answer prosthodontics questions accurately is currently limited.
Keywords: Prosthodontics; artificial intelligence
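The statistical comparison described in the abstract can be illustrated with a minimal sketch, assuming the per-question Likert scores were tabulated together with each question's exam year and topic. This is not the authors' actual analysis code: the file name and column names (dus_prosthodontics_scores.csv, chatgpt_score, gemini_score, year, topic) are hypothetical, and only the aggregation by year/topic and the paired-sample t-test mirror the method reported above.

```python
# Sketch of the reported analysis: score each chatbot's answers on a
# 3-point Likert scale, aggregate per exam year and per topic, and compare
# the two chatbots with a paired-sample t-test. All names are hypothetical.
import pandas as pd
from scipy.stats import ttest_rel

# Hypothetical input: one row per question, with the Likert score assigned
# to each chatbot's answer plus the question's exam year and topic.
df = pd.read_csv("dus_prosthodontics_scores.csv")

# Paired observations per exam year (abstract reports p = 0.251).
by_year = df.groupby("year")[["chatgpt_score", "gemini_score"]].sum()
t_stat, p_value = ttest_rel(by_year["chatgpt_score"], by_year["gemini_score"])
print(f"by year: t = {t_stat:.3f}, p = {p_value:.3f}")

# Paired observations per topic (abstract reports p = 0.965).
by_topic = df.groupby("topic")[["chatgpt_score", "gemini_score"]].sum()
t_stat, p_value = ttest_rel(by_topic["chatgpt_score"], by_topic["gemini_score"])
print(f"by topic: t = {t_stat:.3f}, p = {p_value:.3f}")
```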