Research Repository

ChatGPT sits the DFPH exam: large language model performance and potential to support public health learning

Davies, Nathan P; Wilson, Robert; Winder, Madeleine S; Tunster, Simon J; McVicar, Kathryn; Thakrar, Shivan; Williams, Joe; Reid, Allan


Authors

Nathan P Davies Nathan.Davies@nottingham.ac.uk
Clinical Research Fellow

Robert Wilson

Madeleine S Winder

Simon J Tunster

Kathryn McVicar

Shivan Thakrar

Joe Williams

Allan Reid



Abstract

Background: Artificial intelligence-based large language models, like ChatGPT, have been rapidly assessed for both risks and potential in health-related assessment and learning. However, their applications in public health professional exams have not yet been studied. We evaluated the performance of ChatGPT in part of the Faculty of Public Health's Diplomate exam (DFPH).

Methods: ChatGPT was provided with a bank of 119 publicly available DFPH question parts from past papers. Its performance was assessed by two active DFPH examiners. The degree of insight and level of understanding apparently displayed by ChatGPT were also assessed.

Results: ChatGPT passed 3 of 4 papers, surpassing the current pass rate. It performed best on questions relating to research methods. Its answers had a high floor. Examiners identified ChatGPT answers with 73.6% accuracy and human answers with 28.6% accuracy. ChatGPT provided a mean of 3.6 unique insights per question and appeared to demonstrate a required level of learning on 71.4% of occasions.

Conclusions: Large language models have rapidly increasing potential as a learning tool in public health education. However, their factual fallibility and the difficulty of distinguishing their responses from those of humans pose potential threats to teaching and learning.

Journal Article Type Article
Acceptance Date Jan 6, 2024
Online Publication Date Jan 11, 2024
Publication Date 2024
Deposit Date Jan 12, 2024
Publicly Available Date Jan 12, 2024
Journal BMC Medical Education
Electronic ISSN 1472-6920
Publisher BioMed Central (Springer Nature)
Peer Reviewed Peer Reviewed
Volume 24
Issue 1
Article Number 57
DOI https://doi.org/10.1186/s12909-024-05042-9
Keywords Public health, Theory, Artificial intelligence, Examination
Public URL https://nottingham-repository.worktribe.com/output/29553618
Publisher URL https://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-024-05042-9
