
Large language models and medical education: a paradigm shift in educator roles

Abstract

This article meticulously examines the transformation of educator roles in medical education against the backdrop of emerging large language models (LLMs). Traditionally, educators have played a crucial role in transmitting knowledge, training skills, and evaluating educational outcomes. However, the advent of LLMs such as Chat Generative Pre-trained Transformer-4 has expanded and enriched these traditional roles by leveraging opportunities to enhance teaching efficiency, foster personalised learning, and optimise resource allocation. This has imbued traditional medical educator roles with new connotations. Concurrently, LLMs present challenges to medical education, such as ensuring the accuracy of information, reducing bias, minimizing student over-reliance, preventing patient privacy exposure and safeguarding data security, enhancing the cultivation of empathy, and maintaining academic integrity. In response, educators are called to adopt new roles including experts of information management, navigators of learning, guardians of academic integrity, and defenders of clinical practice. The article emphasises the enriched connotations and attributes of the medical teacher's role, underscoring their irreplaceable value in the AI-driven evolution of medical education. Educators are portrayed not just as users of advanced technology, but also as custodians of the essence of medical education.

Introduction

Artificial intelligence (AI) is a field that focuses on the creation of technology capable of imitating, expanding, or surpassing human intellectual behaviour. Natural Language Processing is a core subset of AI, primarily utilising large language models (LLMs) to analyse, understand, and generate human language. In the field of education, the real-time interactive learning experiences provided by LLMs are fundamentally changing the traditional modes of learning and teaching. This shift is particularly notable in medical education, where LLMs empower students with convenient access to the latest medical information. However, the core of medical education is not just about acquiring advanced knowledge but more about cultivating the ability to translate knowledge into practical application skills. The current challenge lies not only in ensuring that students can access this information but also in ensuring they understand, evaluate, and apply it critically. Therefore, as teachers face the opportunities and challenges brought by LLMs to medical education, they are also redefining the new connotations and attributes of the teacher's role.

We have delved deeply into the changes in the connotations and attributes of the medical educator's role in the era of opportunities and challenges brought by LLMs such as Chat Generative Pre-trained Transformer (ChatGPT) for medical education. Discussing these changes is not only about teachers adapting to advanced technology but also about rethinking and planning the future methods of training medical talent, to ensure that the quality of medical education keeps pace with the times. This article aims to show medical educators new developmental paths, guiding them in training students to adapt to a future medical environment that is more precise and collaborative.

Traditional medical education and educator roles

Traditional medical education relies on a series of established teaching modes, encompassing everything from fundamental medical theory, to simulated clinical scenarios, to actual clinical practice. The most fundamental teaching method is the lecture, delivered at a set place and time, allowing the teacher to efficiently transmit standardized knowledge from an authoritative position. However, this one-way mode of communication often overlooks student involvement and initiative, leading to a passive learning attitude among students. The introduction of problem-based learning (PBL) and case-based learning (CBL) signifies a gradual shift in medical education philosophy. PBL emphasises students learning through autonomous exploration of problems, fostering self-motivation and problem-solving skills, with the teacher becoming more of a facilitator. CBL, via specific case studies, encourages students to apply theoretical knowledge to actual clinical situations, enhancing clinical reasoning and decision-making abilities. Both methods have jointly promoted interactive teaching, sparked student interest and increased participation, and enhanced the development of critical thinking and teamwork skills. However, they also pose challenges to the teaching skills of educators and the allocation of educational resources (Zhao et al., 2020).

Both simulated procedures and real clinical practice are indispensable components of medical education. Simulation enables students to hone their skills in a safe and risk-free setting, improving their proficiency through repetitive practice (Trehan et al., 2014), but lacks the realism and complexity of an authentic clinical setting. The learning curve associated with simulation technology and the substantial cost of facilities also pose significant challenges (Sadava & Novitsky, 2021). In contrast, real clinical practice sharpens students in an unpredictable clinical setting, treating actual patients and dealing with actual diseases. This not only improves their clinical adaptability and communicative skills but also helps them appreciate the significance of each role within the medical team. However, real clinical practice may pose significant medical risks to patients and learners. Additionally, the challenges of real clinical practice include a limited range of cases available for learning and the involvement of numerous trainees at different levels, potentially leading to unequal educational opportunities and inconsistent learning outcomes.

In terms of assessment techniques, written examinations and Objective Structured Clinical Examinations (OSCEs) are two main ways to evaluate the outcomes of medical education. Written examinations focus on evaluating the retention of knowledge, and while this method is straightforward to implement and standardise, it may encourage rote learning and lacks a comprehensive evaluation of students' overall competencies. OSCEs, on the other hand, aim to evaluate students' practical clinical competencies and communicative skills with patients. Despite endeavouring to create a clinical setting close to reality, they are constrained by high costs and intensive resource organization (Chang et al., 2023). Furthermore, limitations in their standardisation, realism and the pressure of exam timing may influence students' capacity to demonstrate their true abilities. Subjective variations among examiners and the lack of feedback to students are also potential challenges (Baid, 2011). Assessment techniques that focus on dissertations and research projects foster the development of critical and inventive thinking, but the subjectivity and non-standardised scoring in the assessment process are also limiting factors when evaluating students' comprehensive abilities (Amgad et al., 2015).

In the traditional medical classroom, educators fulfil multiple roles (Fig. 1). They act not only as authoritative conveyors of knowledge, ensuring the precision and authority of medical information, but also as providers and organisers of educational resources. They select and integrate various learning materials, design course materials, and fairly distribute educational resources to ensure that each student gains the required knowledge and skills. Educators also continually update resources to adapt to educational advancements. Additionally, as instructors of skills training, they oversee students' clinical practice in a safe setting and provide prompt feedback. In their role as educational outcomes assessor, they monitor students' educational progress and outcomes through examinations and evaluations (Hatem et al., 2011). However, this teacher-centred approach, despite its efficiency in knowledge transmission, may lead to passive learning amongst students and neglect the importance of recognising individual differences and fostering a spirit of active enquiry. Therefore, with the evolution of educational concepts and methods, such as the introduction of PBL and CBL teaching methods, traditional roles are gradually evolving towards more dynamic facilitators, champions, organisers, and supporters, emphasising student-centred personalised instruction and independent learning. This shift reflects a growing emphasis on the adaptability, interactivity, and comprehensive skills development of students. Traditional medical education is undergoing profound changes with the advancement of science and technology. Teaching content, methods, and the roles of educators are constantly adapting to the opportunities and challenges presented by new technologies. As AI technology is gradually integrated into medical education, it also imparts new attributes and connotations on traditional teaching roles.

Fig. 1 Traditional medical teacher roles (left) and new attributes endowed to medical teachers (right) by large language models (LLMs)

Opportunities from LLMs impart new connotations to medical educators

Opportunities LLMs bring to medical education

In the age of AI, LLMs such as ChatGPT are redefining the paradigms of medical education, enhancing teaching efficiency and enriching both content and methods. Educators can now use LLMs to construct syllabi and lecture notes efficiently, and even create textbooks, significantly increasing teaching productivity (Han et al., 2023). ChatGPT can not only elucidate complex medical concepts with clarity but also visualise them, making the educational material richer and more intuitive (Alkaissi & McFarlane, 2023). For example, in subjects like anatomy that require strong spatial reasoning, LLMs can cater to individual student needs by not only generating textual explanations of anatomical structures but also creating images or videos that dynamically illustrate these structures and their functions. Moreover, AI's personalised learning paths cater to the unique needs of each student, enhancing engagement and significantly boosting collaborative skills and motivation through PBL-based models (Hamid et al., 2023). If a student has difficulty diagnosing respiratory diseases, for instance, LLMs can generate a series of patient cases focused on various respiratory conditions, complete with medical histories, lab results, and imaging studies, allowing the student to practise and sharpen diagnostic skills in a targeted manner.
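
For illustration only, the following minimal sketch shows how an educator might request such a targeted practice case programmatically. It assumes the OpenAI Python client and a configured API key; the model name, system role, and prompt wording are placeholders rather than a recommended workflow.

```python
# Illustrative sketch: asking an LLM to generate a targeted teaching case.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name and prompt are examples, not a prescribed workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Create a teaching case for a medical student practising the diagnosis "
    "of respiratory disease. Include: presenting complaint, relevant history, "
    "vital signs, focused examination findings, basic laboratory results, and "
    "a chest imaging report. Do not state the diagnosis; end with three "
    "questions that guide the student toward a differential diagnosis."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder choice of a capable chat model
    messages=[
        {"role": "system", "content": "You are a medical educator writing realistic but fictional patient cases."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```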

In clinical skills training, ChatGPT offers flexible virtual clinical scenarios, particularly when combined with forthcoming virtual reality technologies, to facilitate students' risk-free skills training and assist them in identifying their weaknesses and strengths through immediate feedback (Heng et al., 2023). This innovative approach to teaching transcends the traditional constraints of time and space and addresses the problem of unequal distribution of clinical resources (Lee, 2023). For instance, during simulated clinical rounds, LLMs can simulate patients with various diseases. Students can interact with these virtual patients, ultimately making diagnoses and determining treatment plans. LLMs provide feedback and specific personalized guidance based on the students' responses, helping them develop clinical skills with a wealth of case resources. Regarding continuing education, AI technology enables medical professionals to stay informed of the latest medical research and clinical practice developments promptly, especially in handling rare cases and emergent epidemics. By aggregating global medical information for knowledge sharing, it augments doctors' clinical experiences (Seetharaman, 2023). AI's cost-effectiveness ensures that medical professionals can easily update their knowledge and skills amidst busy work schedules (Mesko, 2023).
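
The simulated encounter described above can be approximated with a simple conversational loop. The sketch below is illustrative only: the system prompt, case details, model name, and the "DEBRIEF" convention are hypothetical choices, and it again assumes the OpenAI Python client.

```python
# Illustrative "virtual patient" interview loop for simulated rounds.
# The case, prompts, and model name are hypothetical examples; assumes the
# OpenAI Python client with an API key configured.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "Role-play a fictional 58-year-old patient presenting with chest pain. "
            "Answer only what the student asks, stay in character, and never reveal "
            "the underlying diagnosis. When the student says 'DEBRIEF', step out of "
            "character and give structured feedback on their history-taking."
        ),
    }
]

while True:
    question = input("Student: ")
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Patient: {answer}")
    if question.strip().upper() == "DEBRIEF":
        break  # feedback has been delivered; end the encounter
```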

As LLMs influence educational methods and content, assessment approaches are likewise evolving. ChatGPT can now efficiently produce multiple-choice questions and simulate OSCE stations that reflect the depth of learning, providing real-time feedback on responses, substantially enhancing students' exam preparation efficiency (Li et al., 2023; Tsang, 2023). This mechanism of prompt feedback allows students not only to understand their performance but, more critically, to pinpoint their strengths and areas for improvement (Seetharaman, 2023), surmounting the constraints of traditional written examinations with their extensive preparation and absence of feedback. The employment of LLMs streamlines the exam preparation process, conserves valuable time, and, by analysing students' learning data and examination results, aids educators in discerning students' learning patterns and challenges to refine teaching strategies (Meyer et al., 2023). In assessments based on research papers and projects, LLMs can aid in precise literature searches, summarisation, and data interpretation, recommend research methodologies and experimental designs, and bolster the writing of papers across languages, thus substantially enhancing the efficiency and quality of academic writing (Biswas, 2023; Gao et al., 2023; Kitamura, 2023; Shen et al., 2023; van Dis et al., 2023). These advancements underscore the immense potential of AI in medical education and signal a pivotal shift in teaching models.
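
As a further illustration of LLM-assisted exam preparation, the sketch below asks a model to return multiple-choice questions as JSON so that a student's answers can be checked locally and feedback shown immediately. The model name, JSON schema, and prompt are assumptions for demonstration, not a validated assessment workflow.

```python
# Illustrative sketch of LLM-generated MCQs with immediate feedback.
# Requests questions as JSON so answers can be checked locally.
# Assumes the OpenAI Python client; model, schema, and prompt are examples.
import json
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You write board-style medical MCQs."},
        {"role": "user", "content": (
            "Write 3 single-best-answer questions on community-acquired pneumonia. "
            'Return JSON: {"questions": [{"stem": str, "options": [str, str, str, str], '
            '"answer_index": int, "explanation": str}]}'
        )},
    ],
)

quiz = json.loads(completion.choices[0].message.content)
for i, q in enumerate(quiz["questions"], start=1):
    print(f"Q{i}. {q['stem']}")
    for letter, option in zip("ABCD", q["options"]):
        print(f"  {letter}. {option}")
    chosen = input("Your answer (A-D): ").strip().upper()
    correct = "ABCD"[q["answer_index"]]
    print("Correct!" if chosen == correct else f"Incorrect; the answer is {correct}.")
    print(f"Explanation: {q['explanation']}\n")
```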

New connotations of medical educators

The role of educators is undergoing a profound metamorphosis, and in the near future they are anticipated to transition from conventional knowledge transmitters to multi-dimensional roles. LLMs present opportunities to medical education by infusing traditional teacher roles with new connotations. As knowledge conveyors, educators are no longer mere unilateral transmitters of information but utilise LLMs as potent auxiliary tools to bolster teaching efficiency and to make educational content richer and more engaging, thereby magnifying their teaching impact. As clinical skill instructors, educators can offer a broader spectrum of clinical simulations through LLMs, providing more tailored guidance and feedback and thereby augmenting students' practical capabilities. In appraising students' learning achievements, teachers persist as assessors but can now, with the support of LLMs' efficient examination preparation and data analysis, diagnose students' progression and pinpoint areas requiring enhancement more accurately. As educational resource providers and organisers, teachers are empowered by LLMs to access, manage, and equitably distribute teaching resources more effectively and conveniently, presenting an expanded variety of learning materials and environments.

Challenges from LLMs add new attributes to medical educators

Challenges LLMs bring to medical education

When utilising LLMs in medical education, it is vital to be aware of information accuracy, potential biases, the risks of over-reliance, academic misconduct and copyright infringement, their limitations in nurturing empathy, and concerns related to patient privacy and data security. Research indicates that ChatGPT may yield incorrect responses when handling complex medical queries, and its precision varies across different medical sub-specialties (Alkaissi & McFarlane, 2023; Johnson et al., 2023). Despite improvements in accuracy with ongoing developments in deep learning, the medical profession must stay alert to potential misinformation, as any inaccuracies in medical education could imperil patient safety. These issues of precision partly arise from shortcomings and outdatedness in training data, and from the lack of direct access to authoritative databases such as PubMed, the Cochrane Library, or UpToDate (Arif et al., 2023; Haman & Skolnik, 2023). Biases in training data and algorithms can skew the information generated, potentially leading to an incomplete understanding of knowledge or to unfair treatment of and discrimination against specific genders or ethnicities, which in turn can cause disparities in diagnosis and patient care (Khera et al., 2023; Zack et al., 2024).

Furthermore, there is a risk that students may become excessively reliant on LLMs, submitting generated content as coursework or research findings, which not only clouds the assessment of originality but may also erode their scholarly abilities and critical thought (Abd-Alrazaq et al., 2023; Dergaa et al., 2023). Such over-reliance can promote academic misconduct and infringe on intellectual property rights. The professional-level text generated by LLMs can be difficult even for industry experts to distinguish from human writing, leaving room for plagiarism (Else, 2023; Shen et al., 2023). Current browser-integrated LLMs, such as Google Gemini and Microsoft Copilot, can link directly to source data; if students copy directly from these sources, the result is plagiarism of the source material and inevitable damage to intellectual property rights.

Additionally, medical education places a premium on fostering empathy, a skill that LLMs cannot cultivate. Although LLMs can be trained to express empathy in a human-like manner, and sometimes even more convincingly, they cannot provide genuine emotional experiences (Ayers et al., 2023; Guidi & Traversa, 2021). They cannot perceive or convey non-verbal emotional cues such as facial expressions, tone of voice, and body language. Consequently, students cannot acquire through interactions with LLMs the profound interpersonal experiences essential for developing empathy (Safranek et al., 2023).

Lastly, privacy risks and data security issues associated with LLMs are significant ethical challenges in medical education. Inputting sensitive patient information, such as names, gender, medical history, or non-anonymised images, to generate teaching materials or obtain interpretations from LLMs can lead to data leakage because of the memory effect of LLMs. Even anonymised information can be combined with other data on the internet to achieve re-identification, posing significant challenges to protecting patient privacy and data security (Jegorova et al., 2023; Rocher et al., 2019). Safeguarding the privacy of sensitive medical information and ensuring data security are therefore paramount when employing LLMs.

The challenges LLMs bring to medical education also put current medical ethics to the test. After an in-depth analysis of the advantages and disadvantages of LLMs in education, Tlili and colleagues concluded that, despite LLMs' enormous potential, greater caution is needed in their use, and relevant guidelines must be developed to ensure their safe application (Tlili et al., 2023).

New attributes of medical educators

The challenges posed by LLMs in medical education require educators to possess new role attributes. To avoid the widespread dissemination of misinformation and to protect private data, educators need to shift from traditional teaching roles to becoming "information management experts". To guide students in the wise use of LLMs, prevent over-reliance, eliminate biases, and cultivate critical thinking, teachers should act as "learning navigators". Additionally, to foster students' sense of responsibility and research integrity, educators must also take on the role of "academic integrity guardians". Finally, emphasising the indispensable nature of practical experience, educators need to continue as "clinical practice champions", highlighting the importance of empathy and clinical skills developed through practical training.

As "information management experts", educators should adopt a series of specific measures to ensure the accuracy of information generated by LLMs and protect privacy data. First, educators need to deeply study and master the latest AI technologies, understanding the working principles, strengths, and limitations of LLMs (Wensheng Gan et al., 2023). Teachers should formulate questions and directives critically to ensure the authenticity of the obtained information while guiding students in proficiently filtering and evaluating the vast amount of available information. This includes querying original research, cross-referencing authoritative databases, and seeking expert opinions to verify sources, helping students acquire or discern reliable data to prevent the spread of misinformation. Additionally, establishing a feedback mechanism to collect student feedback on using LLMs, identify issues encountered, and recognize deficiencies in teaching methods is crucial for making improvements and optimizations. In terms of privacy data protection, educators should strictly adhere to relevant data privacy and security regulations and stay informed about the latest laws. They should avoid inputting personal sensitive information or patient data, use anonymized or virtualized data for teaching, and employ encryption technologies and secure protocols to protect data transmission and storage. Professionals should be assisted in regularly conducting security audits and risk assessments on LLMs to identify and fix potential security vulnerabilities. Furthermore, educators should teach students the importance of data privacy and security, formulate clear data usage and protection guidelines, and supervise their implementation to ensure that students comply with the regulations.

As "learning navigators", educators should take specific measures to guide students in the proper use of LLMs, recognize and overcome biases, avoid over-reliance, and cultivate critical thinking. Firstly, in the teaching process, they should adopt diverse and inclusive knowledge backgrounds to avoid reinforcing stereotypes of specific populations or medical conditions, ensuring that students understand the fairness and comprehensiveness of knowledge. Secondly, educators should set open-ended questions and exploratory tasks to cultivate students' critical thinking. They should encourage structured learning activities, such as group discussions, role-playing, or themed debates, focusing on the process of analyzing problems and divergent thinking rather than reaching predetermined conclusions (King & chatGpt, 2023). This helps students develop independent thinking and problem-solving skills, which also aids in avoiding excessive reliance on LLMs.

Upholding academic honesty remains a crucial component of medical education, and LLMs have added a sense of urgency to maintaining academic integrity (Dergaa et al., 2023). As "academic integrity guardians", educators should acknowledge that outright banning of LLM use in students' assignments or research is impracticable; the crux lies in guiding students in the appropriate use of these tools. The educational focus should be on the significance of proper attribution, on ensuring transparency in data reporting and the verifiability of sources, and on delineating explicitly the repercussions of scholastic dishonesty. Reducing students' reliance on LLMs is also an effective strategy to uphold academic integrity. Pedagogical methods should be designed to emphasise critical thinking and creativity, such as thematic presentations, field internships, group dialogues, and the production of medical videos, endeavours that language models currently cannot replicate (Graham, 2022). Additionally, deploying plagiarism detection technologies such as Originality.ai, GPTZero, and similar software to deter and detect plagiarism reinforces the principle of scholarly honesty (Lee, 2023).

Despite the remarkable potential LLMs showcase in simulating clinical settings, educators must continue to stress the irreplaceable value of actual clinical experience. As the bedrock of medical education, genuine clinical practice affords students invaluable diagnostic and therapeutic experience, as well as the chance to refine their decision-making skills when faced with intricate cases (Burgess et al., 2020). Direct interaction with patients not only polishes their communicative abilities but also provides essential grounding for nurturing empathy and compassion, while collaborative work enhances their teamwork competencies in a multifaceted medical context. The expert mentorship students receive during clinical rounds cannot be matched by the theoretical or simulated tutelage that current LLMs offer (Bosmean et al., 2022). Moreover, actual clinical practice enables students to personally assimilate the gravity of medical ethics and legal responsibilities, establishing a robust foundation for their forthcoming professional journey and growth (Chen et al., 2022). Therefore, even amidst the swift progression of AI technology, educators must steadfastly serve as "clinical practice champions", ensuring that students' education is firmly rooted in empirical practice, marrying theoretical knowledge with practical experience, and readying them to tackle future professional challenges.

The integration of LLM technology into medical education broadens the traditional implications of the educator's role and introduces new attributes and dimensions. Enhancing the medical teacher's role in terms of connotations and attributes is essential not only for their function as knowledge transmitters, technology users, learning facilitators, and ethical guides in modern education but also for the effective management and appropriate utilization of LLMs in medical education. Medical education faces unprecedented opportunities and challenges. Educators must master the latest AI technologies, seize opportunities, and fully utilize their advantages to enrich teaching content and enhance teaching effectiveness. At the same time, they must implement diverse strategies to address the challenges these technologies bring. Teachers are not just knowledge transmitters; they are also crucial mentors and guardians guiding and protecting students' growth in an ever-changing data era (Fig. 1).

Conclusion

This article has conducted an in-depth examination of the conventional roles of educators within the sphere of medical education, the prospects and hurdles introduced by LLMs, and the fresh attributes and implications these have instilled into the educator's function. Historically, medical educators have been instrumental in disseminating knowledge, honing skills, evaluating educational outcomes, and amalgamating and orchestrating resources. With the advent of LLMs in medical education, the role of teachers has been considerably broadened and enhanced. The prospects afforded by LLMs, such as augmented teaching efficacy and quality, the evolution of personalised learning, and the improved amalgamation and allocation of educational resources, have conferred new dimensions upon the traditional roles of educators. Nevertheless, educators confront an array of challenges posed by LLMs, encompassing the assurance of information precision, averting student dependency, and upholding scholarly integrity, thus compelling the adoption of novel role characteristics. The infusion of new meanings and attributes into the role of medical educators underscores the indelible significance of teachers in the AI-led metamorphosis of medical education. In this epoch of ever-advancing AI, the role of educators becomes ever more critical—they are not merely utilizers of technology but also custodians ensuring the calibre and virtue of medical education.

Availability of data and materials

Not applicable.

Abbreviations

AI: Artificial intelligence
ChatGPT: Chat Generative Pre-trained Transformer
CBL: Case-based learning
LLMs: Large language models
OSCEs: Objective structured clinical examinations
PBL: Problem-based learning


Acknowledgements

The first author extends sincere gratitude to Ding Ding for her selfless dedication in the writing of the paper.

Funding

This study was supported by the Hospital-Level Teaching Reform Project of the First Affiliated Hospital of Chongqing Medical University (Grant No.: CMER201911) and the Program for Youth Innovation in Future Medicine at Chongqing Medical University (No. 0191).

Author information

Contributions

LZ and RW contributed to the conceptual design, data collection, drafting, final revision, and final submission. LFH and LH contributed to data analysis, drafting the article, language correction, and revision. WXH and FQN contributed to data collection, drafting, and revision. ZY contributed to data collection, drafting, language correction, and final revision. All contributors were involved in revising the final version and approving the article.

Corresponding author

Correspondence to Wei Ren.

Ethics declarations

Competing interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Li, Z., Li, F., Fu, Q. et al. Large language models and medical education: a paradigm shift in educator roles. Smart Learn. Environ. 11, 26 (2024). https://doi.org/10.1186/s40561-024-00313-w

