Workshop chair Carole Tucker, the associate dean of research at the University of Texas Medical Branch, began the final session by saying that when considering how artificial intelligence (AI) will be integrated into health care, health educators must grapple with three main issues: training future health professionals in the use of AI, using AI as a tool in health professions education, and identifying an appropriate role for health professionals in the development and deployment of AI systems. Tucker recapped the previous workshop sessions and highlighted discussions from speakers and participants that were related to these three issues, including the following:
In this closing session, speakers and participants explored what health professions educators can do to incorporate AI into health professions
education. The session began with Kimberly Lomis, the vice president of undergraduate medical education innovations for the American Medical Association, presenting a list of eight proposed steps drawn from her own published work (Lomis et al., 2021):
This list was used to frame a discussion on potential next steps for incorporating AI into health professions education. Taking one point at a time, Lomis asked panelists to comment on how they were engaging in each step and to discuss their perspectives on and experiences with challenges and lessons learned. Lomis reviewed all eight items but asked panelists to share their thoughts on only the first six, which she said have more immediate applicability to educators currently grappling with how to incorporate AI into their courses or curricula.
Katie Link, a medical student at the Icahn School of Medicine at Mount Sinai, told participants about a student-led effort to initiate an elective course on AI and medicine, designed to build basic AI literacy among medical students and others. The students began by reaching out to faculty for their input and perspectives and by looking for faculty mentors who could champion the course. They brought in examples from the existing curriculum as well as articles showing the importance of AI literacy for health professionals, and they documented significant interest from the student body. This effort took place before the release of ChatGPT (Chat Generative Pre-Trained Transformer) and the associated excitement around AI, Link said. Some content had already been delivered by faculty members, and students were asking for more in this area, believing it to be essential for their future careers. Lomis observed that the first big step is often getting people "educated enough to know" the importance of AI; that is, people must first gain some basic knowledge of AI and its place in the health professions in order to see the value of integrating AI into health professions education.
Bonnie Miller, a former senior associate dean for health sciences education at the Vanderbilt University School of Medicine and the executive vice president for educational affairs at the Vanderbilt University Medical Center, added that “we are at a teachable moment,” given all the publicity around ChatGPT. Currently, there are many ad hoc efforts happening around the country. In the future, these efforts could be more coordinated, structured, and systematic, but right now it is a good time to capitalize on the excitement and make things happen.
Cornelius James, a clinical assistant professor at the University of Michigan Medical School, spoke about his experiences at the University of Michigan. The conversations about AI at the university involved many people in computer science, research, and engineering but few frontline clinicians or medical educators, he said. To address this gap, he and his colleagues developed a series of webinars that culminated in a symposium; the webinars were open to individuals across the country and drew a diverse turnout of people from various backgrounds. It was important to get these people in the same room and speaking the same language about this important topic, James said. Other educational efforts discussed included the following:
What are the relationships to put in place, Lomis asked, so the right experts are in the room when integrating AI into health professions education? Who are the stakeholders to be included—educators, clinicians, computer science professionals, ethicists? Can early adopters from any field be a resource for new efforts? Miller answered that her research (see Chapter 4) found that competent clinicians need to be supported by capable organizations. Capable organizations are those that have the infrastructure (e.g., committees) set up to evaluate and monitor outcomes and make decisions. Educators need to be represented on these committees in order to consider
the impact these tools could have on the learning environment or on the opportunities that students have to develop knowledge and skills.
Link commented that in their effort to start a course on AI, the student group met with experts within their own institution and at other institutions. These experts were critical for making the case for education on AI and for elaborating on the ways AI is being integrated into the clinical workflow. The students brought in diverse speakers and experts from academia as well as industry to open the group's thinking to broader perspectives related to AI. One industry professional gave students a demonstration of an automated documentation tool, which gave them a real-world perspective on how these tools work in the clinical setting. In addition, the students invited peers from across the country and across disciplines to participate in their online courses, which exposed all of the participants to different perspectives.
Judy Gichoya, an assistant professor at the Emory University School of Medicine, spoke about her experiences as the lead for a machine learning (ML) elective at her university. One lesson she learned was the necessity of reaching out to others and getting out of "your comfort zone." In an academic institution, there are most likely many people working on AI in other departments and schools; connecting with these individuals will save time and effort. No one can do this work alone, she said, and working with people in different areas will open pathways to new perspectives and encourage learning from one another. There is no benefit in reinventing the wheel; the benefit lies in what educators can teach students and in how they can inspire learners.
The Data Augmented Technology Assisted Medical Decision-Making team (DATA-MD) at the University of Michigan consists of individuals engaged in medical education, clinical care, research, data science, pharmacy, nursing, precision health, informatics, and other fields, James said. Initially, he thought that all of these "wonderful, brilliant people" were doing him a favor by helping him develop AI curricula. Soon, however, he realized they are also all learning from one another. For example, he heard a computer scientist ask clinicians for their opinion on a model and an engineer ask about the needs of health professions students. There is mutual benefit in these types of collaborations, James said. He added that while the workshop was focusing on teaching AI to health professions students, there is also a need to teach computer scientists and engineers about health care so they can develop models and tools that are relevant and important.
The third step on the list of action items, Lomis said, is to establish a local advisory group. Every educational program has a curricular oversight committee, but that group likely does not have a high level of AI-related expertise.
An AI-specific advisory group can coordinate with the existing committee and consider how different pieces fit together. Related to this, Lomis asked panelists how the set of interprofessional competencies fits together with AI competencies—are they completely separate competencies, or do they overlap in some ways? Miller responded that in her research on AI competencies, she and her colleagues began with an established interprofessional competency framework. However, they found that the framework did not fit neatly, so they developed a separate framework that could better communicate their ideas. Lomis remarked that seemingly compatible competency sets often do not mesh well and said it can be helpful to explicitly acknowledge how the concrete competencies in one domain fit into a broader view of competencies.
Link joined the discussion, saying that the Mount Sinai medical school is in the process of redesigning its curriculum and that a group of students who were involved in the AI course have been invited to share what they have learned. The students are working on identifying elements that could be incorporated into the required curriculum. They are also looking into areas for elective coursework and the possibility of creating extended advanced coursework for students who are very interested in pursuing a deeper understanding of AI.
Jeffries commented on her own AI efforts at Vanderbilt, where she created a small advisory group to look at the role of AI in communications, admissions, and the classroom. While this group is creating general guidelines, Jeffries acknowledged that because technology evolves so rapidly, there needs to be flexibility and a willingness to iterate and redesign. Tucker further remarked that changing a curriculum can be a slow process because it is largely driven by the competencies identified by the accreditation board. However, there is space to include "chewable chunks" of information about AI and ML for students who need basic information but are not going to pursue an elective or advanced course on the topic. Making progress in students' understanding of AI does not necessarily require a monumental effort, she said; it may be enough to move everyone "forward a little bit . . . in their breadth of understanding." Miller then expanded the discussion by suggesting that patients could be involved in conversations about integrating AI into health professions education and practice and that community members and patient advocates could help educators and clinicians better understand how AI affects patients, diversity, and equity.
There are many areas in which AI can fit into existing competency frameworks and curricula, Lomis said. For example, coursework about clinical reasoning and clinical decision making can include discussions
about the appropriate role of AI. James offered his perspective as the director of the evidence-based medicine curriculum at his medical school. The University of Michigan decided to begin introducing AI-related content into the curriculum through evidence-based medicine, acknowledging that not everything can be done at once. Next, the school's internal medicine residency program received funding to integrate AI-related content into its curriculum. Lomis interjected that it can be very challenging to introduce a new course, so weaving AI into areas where there is already a connection is a great way to start.
Miller spoke briefly about how AI can be used in the educational process itself, in particular with precision education and identifying strengths and gaps for individual students. The American Medical Association is a leader in this area, Miller said. AI could be used to mine information from new sources (e.g., progress notes) to find conceptual gaps or experience gaps in a systematic way; students would then be directed to the next stage of learning they need. Such advanced technology allows educators to focus on individual students' needs, Miller said, and offers a great opportunity to use AI to improve education. Technologies such as ChatGPT could be used by educators for tasks such as generating first drafts of clinical scenarios (a sketch of this kind of drafting task appears below) and could make teaching more efficient. There is a need, however, for transparency in the use of these types of tools. Lomis said that one of the hopes for AI in the clinical space is that it could reduce some of the administrative burdens of clinical care; similarly, there could be opportunities to use AI to reduce some of the administrative burdens in academia so educators can focus more on students.
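As a concrete illustration of the drafting task Miller mentioned, the sketch below asks a chat model for a first draft of a clinical scenario. It assumes the OpenAI Python client; the model name and prompt wording are illustrative, and any output would be a starting point for faculty review rather than finished teaching material.

```python
# A minimal sketch of using a chat model to draft a clinical scenario.
# The model name and prompt are illustrative assumptions; any draft
# would still need review and revision by a faculty member.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Draft a one-paragraph clinical scenario for a second-year medical "
    "student practicing differential diagnosis: a 58-year-old patient "
    "presenting with acute dyspnea. Include vital signs and two "
    "plausible distractor findings."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is available
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # first draft for educator review
```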
Mollie Hobensack, a Ph.D. candidate at the Columbia School of Nursing, shared her experience working with researchers on a clinical decision support system called CONCERN, which uses AI to analyze nursing data to produce an early warning score identifying patients at risk for deterioration. An evaluation of the system's implementation found that one benefit is that nurses learn how their documentation can be used and analyzed by AI technology. The tool can also support new nurses in building critical thinking skills by prompting reflection on the drivers of deterioration and how those drivers are captured in the electronic health record. Young health professionals in particular may be interested in and motivated by this type of interaction with AI, she said. Tucker added that documentation is an area in which there is a big opportunity for AI to make a difference and one that "everybody could get a handle on very easily."
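Hobensack did not detail CONCERN's underlying models, but a toy sketch can convey the general shape of such a system: features derived from nursing documentation patterns feed a classifier that outputs a deterioration risk score. The features, data, and model choice below are invented for illustration and are not the published CONCERN approach.

```python
# A toy, hypothetical sketch in the spirit of an early warning system
# driven by nursing documentation patterns; not the actual CONCERN model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: rows are patient-shifts, columns are
# documentation-pattern features, e.g., [vitals charted per hour,
# free-text comments per shift, off-schedule assessments per shift].
X = np.array([
    [0.5, 1, 0],
    [1.0, 2, 1],
    [2.0, 5, 3],
    [2.5, 6, 4],
])
y = np.array([0, 0, 1, 1])  # 1 = patient later deteriorated

model = LogisticRegression().fit(X, y)

# Score a new patient-shift; a higher probability flags earlier concern.
risk = model.predict_proba([[1.8, 4, 2]])[0, 1]
print(f"Early warning score: {risk:.2f}")
```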
Given the new competencies and skills health professionals may be expected to have, Lomis asked, how can learners be assessed to determine whether they have developed these competencies and skills? Lomis acknowledged that because AI is a relatively new area, assessment may not be a current top priority. Link responded that the course at her medical school is pass/fail and graded on attendance; the only assessment so far has been student feedback on the course (e.g., self-reported changes in their understanding of AI). There are plans to develop case-based assessments and possibly to set up simulated environments in which learners can apply the information in practice. Gichoya said that her course is also pass/fail, but there are three milestones the students can fulfill at the end of the course. One milestone is a dataset exercise, in which students choose any dataset and examine it (a minimal example of such an exercise appears below). Another is a focused literature review, and the third is to propose a technical project. Gichoya said she has realized that students, even those with a computer science background, struggle with carrying out the technical side. ChatGPT has been out for only 5 months and has already changed the curriculum; educators can strive to create structures that adapt quickly to new technologies and situations.
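As a rough illustration of the dataset milestone, the sketch below uses scikit-learn's bundled diabetes dataset as a stand-in for whatever dataset a student might choose; the specific checks (size, missingness, cohort balance) are assumptions about what such an exercise might cover.

```python
# A rough sketch of a dataset-examination exercise, using scikit-learn's
# bundled diabetes data as a stand-in for a student-chosen dataset.
from sklearn.datasets import load_diabetes

df = load_diabetes(as_frame=True).frame

print(df.shape)                   # how many patients and features?
print(df.isna().sum())            # any missing values?
print(df["sex"].value_counts())   # how balanced is the cohort?
print(df.describe())              # ranges and distributions of each feature
```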
Lomis concluded the discussion by saying that assessment is itself an area in which there is an opportunity to use AI. She proposed a scenario in which AI could be used to analyze the feedback given to learners and identify how the quality of that feedback could be improved. Then, based on the AI's assessment of the feedback, supervisors could be coached to use language that is actionable and aligned with the targeted competencies.
Shifting to the use of AI in admissions and the selection process, Lomis called out two issues. First, does the integration of AI in health care mean that health professions education programs should be recruiting or accepting different types of students with different backgrounds or interests? Second, how could AI be used to make the admissions process more efficient? Link responded that she is part of her medical school's admissions committee and, as such, has learned the value of having a diverse group of people evaluating applicants. For example, an application from a student with a computer science background is best reviewed by someone who is familiar with that educational pathway and can evaluate how the student's experience may be useful in the program.
Miller told workshop participants about the Medical Innovators Development Program (MIDP) at Vanderbilt, which was designed to recruit applicants who already have Ph.D.’s in technology-related fields. Students in the MIDP, as well as other medical students, have the opportunity to identify clinical challenges that could be solved with a technological innovation. Students can work on these innovations as a different type of internship experience in the third or fourth year. This program
has been helpful, she said, in diversifying the group of students who come to medical school as well as in influencing other medical students about potential career paths.
Jeffries described conversations she had with faculty at the Vanderbilt University School of Nursing who have been discussing the issue of students using ChatGPT for admissions essays. Lomis offered her opinion that banning the use of ChatGPT is unlikely to work, so there is a need for more nuanced and realistic solutions. Jeffries agreed, adding that schools need to empower students to use tools like ChatGPT in an appropriate way—for example, using it to create an outline and citing ChatGPT for its contribution.
Lomis asked whether AI will or should influence the type of learner being recruited for health professions education programs. James responded that he believes there is a need for a diverse group of students in the health professions and that not every learner will need to be an expert in AI. He drew an analogy to randomized clinical trials: not every health professional needs the skills to conduct a randomized clinical trial, but every health professional needs to be able to appraise and apply the results of a trial. Similarly, not every health professional will be a programmer. As AI is integrated into practice, health professionals will need team skills for building bridges between the patient, the care team, and AI tools. Through this process, clinicians' time may be freed up to pay more attention to empathy and communication. Lomis built on James's comment, saying that such a scenario would require health professionals to balance the assessment and appraisal of information with serving as an interface between patients, families, and AI tools.
To illustrate a use of AI, Carl Sheperis, dean at Texas A&M University, spent the closing session pasting comments and questions raised during the workshop into ChatGPT and asking it to perform a thematic analysis of the collected text.
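Sheperis worked in the ChatGPT web interface, but the same workflow could be scripted. The sketch below is a hypothetical reconstruction assuming the OpenAI Python client; the file name, model, and prompt wording are placeholders, not a record of what Sheperis actually entered.

```python
# A hypothetical sketch of scripting a thematic analysis of workshop
# comments with a chat model. The file name, model, and prompt are
# assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

with open("workshop_comments.txt") as f:  # placeholder transcript file
    comments = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is available
    messages=[
        {"role": "system", "content": "You are a qualitative researcher."},
        {"role": "user", "content": (
            "Perform a thematic analysis of the following workshop comments "
            "and list the primary themes with brief supporting quotes:\n\n"
            + comments
        )},
    ],
)

print(response.choices[0].message.content)  # themes identified by the model
```

From this analysis, ChatGPT identified five primary themes: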
Lomis asked the panelists to comment on the second theme, addressing public mistrust of AI, noting that in the current environment of misinformation and disinformation, health professionals will likely need to be able to explain to patients how technologies like AI work. Furthermore, the field may need to be more deliberate about communicating with the public more generally. Gichoya pointed to the value of ChatGPT in bringing AI to the forefront of the public's attention. When AI was used to create a song imitating the musical artist Drake, the public largely responded by saying this crossed a line. There is a tremendous opportunity, Gichoya said, to take advantage of this public attention to educate the public about AI and how it could potentially be used in fields such as health care. One area of concern, she said, is using AI for ambient listening—that is, documenting everything that is said in the clinical setting, whether it needs to be documented or not. Lomis concluded that it will be important to examine how the clinical world will be changed and how to train health professions students to work in this world. "We need to be honest about the upcoming disruption," she said, and have the hard conversations about how clinicians and educators will address and manage those disruptions.
On the question of trust, Miller said that while mistrust of AI is an issue, there is also the issue of misplaced trust in AI. Trust in expertise is diminishing in general, she said, and people who do not trust experts may be more likely to place their trust in what a computer tells them rather than what their health care professionals tell them. The most effective clinicians, she said, may be the ones who are able to acknowledge the information and resources that are out there and use their expertise to help patients make sense of the information and make decisions about their care.
Miller also remarked on her "mixed thoughts" about whether the use of AI in the clinical setting needs to be disclosed to a patient. The current uses of AI are fairly narrow, she said, and it does not seem necessary to explain these to patients. However, as AI progresses, institutions will have to consider the need to make statements about their use of AI and to think about institutional commitments and policies regarding responsible use. James commented on the more than 300,000 health apps available to consumers, noting that many of them employ AI. Some patients may already be fairly savvy about the use of AI and the tools that exist. Patients also vary in their comfort with and trust in AI; some may trust AI over a doctor, and others trust the doctor over AI. Being aware of these dynamics, Miller concluded, can better position health professions educators and practitioners moving forward and help ensure that health professions students are prepared to address these issues of trust in their workplaces.
With that, Tucker thanked the planning committee, panelists, moderators, and attendees for their participation and adjourned the workshop.