Workshop chair Carole Tucker, the associate dean of research at the University of Texas Medical Branch, briefly recapped the previous sessions of the workshop and their discussions of the role of artificial intelligence (AI) in health care education and practice, including social, cultural, economic, and policy considerations. The second session, she said, would examine the practical side of integrating AI into health professions education, the relevant competencies, and how health professionals can develop those competencies.
Bonnie Miller, a former senior associate dean for health sciences education at the Vanderbilt University School of Medicine and the executive vice president for educational affairs at the Vanderbilt University Medical Center, spoke about her experience working on a multidisciplinary team that identified and described the competencies necessary for health care professionals to use AI tools (Russell et al., 2023). She began with a definition of AI: “computer science techniques that mimic human intelligence, including algorithms that leverage machine learning, deep learning, natural language processing, and neural networks.” The most frequent uses of AI in health care, she said, include risk stratification and scores, image interpretation, and health record summarization.
The team’s process of identifying competencies began with a scoping review of papers that described the implementation of AI-based tools in clinical settings. While many papers described the implementation process, far fewer discussed the training that clinicians received before using a tool. Miller and her colleagues reviewed the literature, summarized the evidence, and conducted interviews and questionnaires. An initial list of competencies was developed and then refined based on feedback from subject matter experts. These experts, Miller said, came from a wide variety of professions, including informatics, medical education, public health, nursing, machine learning, surgery, ethics, pharmacy, social sciences, and computer science. Several themes related to the integration of AI into clinical practice emerged from this process.
Miller shared a few details about these themes. All of the experts said there was a need for foundational knowledge about data, statistics, and the appropriate use of different types of tools. Many also mentioned the need for consideration of ethical issues in the use of AI, with two subthemes emerging. First, clinicians cannot abdicate their professional responsibility to their patients by saying “the AI made me do it.” Second, there were strong concerns about the potential for amplifying pre-existing inequities because of underlying bias in the datasets as well as lack of representation in the groups making decisions about what tools should be used and how. Experts also pointed to the potential for AI to free up clinician time to focus on the patient relationship, but Miller emphasized that this will not automatically happen; it is important to think about how to make it happen. Furthermore, it is critical to consider how AI tools will be accommodated in the workflow and to be proactive and deliberate about this process.
Based on these themes, Miller and her colleagues developed six competency domains with 25 sub-competencies. She offered a brief description of each domain, as published in the article by Russell et al. (2023).
Although all of these competencies are important, Miller said, a competent clinician also needs to function within a capable system. The capability of an organization depends on the competencies of individuals as well as on the organization’s resources, infrastructure, and the routines that are in place (Figure 3-1). Clinician competencies are critical, she said, but clinicians must be supported by capable organizations and by social, regulatory, and legal systems.
In concluding, Miller offered her thoughts on how health professions education and practice can move forward with integrating AI in a thoughtful and efficient way. First, she said, health care teams will want to include experts in AI. While not every team member will have to be an AI expert, it would be desirable to bring mathematicians, data scientists, and others into the team. Second, this is a rapidly changing field, and not all changes are visible. Miller added that there is an obligation for transparency, communication about uncertainty, and regular monitoring to ensure that clinicians and patients understand the tools that are being developed and their appropriate use. Third, there will be changes in other sectors that will influence public opinion and expectations. For example, there are many opinions and fears about technologies such as self-driving cars and ChatGPT (Chat Generative Pre-Trained Transformer), but as these technologies improve and become more integrated into daily life, people may expect similar changes in health care. Finally, Miller shared that AI poses a risk of widening existing disparities, due to biased data, biased algorithms, and the varying ability of different institutions to access and use new technologies. It is important to think carefully about how these resources can be equitably distributed, she said. Miller quoted a New York Times article, in which the writer said that people working on AI tools “are creating a power they do not understand at a pace they often cannot believe” (Klein, 2023). An entire system may be totally disrupted
by these technologies, and educators and clinicians alike need to expect the unpredictable, Miller concluded.
Kimberly Lomis, the vice president of undergraduate medical education innovations for the American Medical Association, began by asking Miller how health educators can “possibly incorporate AI into already over-packed programs.” Miller responded that ideas about what should be covered by health professions education have evolved over the years, and curricula have evolved as well. For example, issues of health equity and the social determinants of health were not widely covered 10 years ago. There may be a need to rethink some of the prerequisites for programs (e.g., emphasizing statistics over calculus) and a need to integrate foundational knowledge about AI into introductory courses. Miller said that faculty competency to teach new content domains “always lags behind the need,” so experts from other fields (e.g., informatics, computer science) may need to be brought in for course work and clinical experiences.
AI is moving quickly, a workshop participant emphasized, and it might be beneficial to get ahead of the social, legal, regulatory, and other issues before AI is fully integrated into health care. Miller agreed and said the U.S. Food and Drug Administration (FDA) is beginning to regulate some AI-based devices, but some systems may not be subject to FDA regulation; some AI might even be embedded in tools such that clinicians may “not even know that they’re there.” Some have called for “model facts” labels (Sendak et al., 2020), which would be similar to a nutritional label. Such a label might include information about the dataset the model was trained on, the question the model was engineered to answer, and the model’s specificity and sensitivity, that is, the likelihood of accurate predictions. An additional approach would be a statement analogous to an environmental impact statement: before deploying a technology in a specific environment, stakeholders would be encouraged to anticipate its impacts and unintended consequences and, after the technology was implemented, to monitor for these effects. For example, stakeholders would consider who would be displaced, how workflows would be disrupted, and which populations might be benefited or harmed. These are questions, Miller said, that need to be studied before the technologies are implemented.
Melissa E. Trego, an associate professor at the Pennsylvania College of Optometry, described herself as an educator of young learners and, as such, said she is not concerned about whether they will embrace these new technologies or be comfortable using them. She is more concerned that students might not be comfortable enough communicating and connecting with patients face-to-face. Miller acknowledged this as a valid concern, not just for young people but for people of all ages. There may be a need to be more deliberate about teaching interpersonal skills, she said, and about implementing practices such as device-free breaks.
Another competency that will be essential, said Kim Dunleavy, an associate clinical professor at the University of Florida, is critical thinking. Some students are used to “push-button” technology that simply gives them the answer, so how can health professions education help students develop the ability to think critically about how to use information from different sources? Miller responded that students need opportunities to use and interact with these technologies and that such opportunities are teachable moments. For example, students could generate text with ChatGPT and then be asked to critique the text and identify evidence that supports or contradicts what the AI tool created. Tucker added that when ChatGPT initially came out, she thought it would be bad for students and their ability to think critically. However, she now sees that it could help students learn to analyze the value and veracity of information, although, ideally, critical thinking skills would be taught long before health professions education.
Framing AI as a clinical tool is a useful way to look at it, said Alison Whelan, the chief academic officer at the Association of American Medical Colleges. She drew an analogy between AI and genetics technology. Years ago, when Whelan taught genetics to first-year medical students, she would introduce whole-genome sequencing and discuss how it could affect care. The students learned the content, but the information soon became irrelevant to them because the technology did not yet exist in the clinical space. She had similar experiences when teaching practicing physicians about the promise of genetics. However, once a new genetic test came out, such as a hereditary risk test or genetic profiling of tumors, everyone wanted to know how it worked and how they could use it. This need-to-know moment is the “teachable moment,” she said. Whelan encouraged stakeholders to capitalize on opportunities to discuss AI in the context of new tools and interesting case studies. While starting with a discussion of a specific AI tool rather than the foundational knowledge around AI may seem “backwards,” she said, it is more likely to be relevant and understandable to students. Whelan concluded that if educators move too far ahead in teaching the concepts of AI without a concrete example, students will likely give a “blank stare” because the material does not resonate with them.