During the third session of the workshop series, participants broke into groups to explore the competency domains and sub-competencies that Bonnie Miller, a former senior associate dean for health sciences education at the Vanderbilt University School of Medicine and the executive vice president for educational affairs at the Vanderbilt University Medical Center, had previously presented. Participants in each group discussed whether the competencies resonated with their own professional or educational practice and considered how the competencies might be integrated into health professions educational programs or curricula. Breakout groups also identified challenges and opportunities in incorporating these competencies. One virtual breakout group featured a panel of students sharing their experiences and perspectives, co-moderated by Mollie Hobensack, a nursing doctoral student at the Columbia University School of Nursing, and Cornelius James, a clinical assistant professor at the University of Michigan Medical School.
Hobensack began the conversation by asking a general question: “When you hear the word AI [artificial intelligence], what comes to mind? Are you excited about it? Are you nervous?” Erkin Ötleş, an M.D./Ph.D. student at the University of Michigan, replied, “All of the above.” He elaborated and said that technologies such as AI have the potential to greatly improve
patient care and the work lives of clinicians. However, simply building a tool does not guarantee the desired outcome, and there is a great deal of risk involved; for example, using AI with biased data has the potential to scale harm. He said that people are needed to scrutinize these tools, consider their impact on clinical care, and identify unexpected consequences. “The benefits are real, but the risks are big,” he said. Noahlana Monzon, a nutrition student at the University of Oklahoma, agreed and said, “AI is exciting because there is so much that can be done with it, but it is also scary because the technology moves so fast and it is difficult to keep up and to understand it.”
James asked panelists how comfortable they are being part of the conversation about the risks and benefits of AI and how it could be integrated into health care. Alonzo Turner, a Ph.D. student in counseling and counselor education at Syracuse University and a 2022 National Board for Certified Counselors doctoral minority fellow, replied that today’s health professions students have the opportunity to serve as a conduit for information about AI, whether with fellow students, clients, patients, or the public. As AI becomes more integrated into health care, sharing transparent, easy-to-understand information will build trust and will help people understand the role that AI can play in health care. Monzon added that patients often want to know the why behind a diagnosis, a treatment, or a medication. As an earlier speaker mentioned, AI can provide a recommendation but has more difficulty answering the question of why. AI will never take the place of people, Monzon said, because an important role of the clinician is to act as an interpreter and mediator between the patient and the information on which decisions are based. Monzon believes the next generation of health professions practitioners will need to act as a bridge between patients and AI.
Carl Sheperis, dean at Texas A&M University, asked the student panelists to comment on the use or misuse of AI in the educational setting itself. For example, many faculty members have concerns about students using AI to cheat. Monzon said that one of her professors is very much in favor of AI and allows students to use any sort of AI to answer questions. While this could be considered “cheating,” Monzon said that it gives students the opportunity to incorporate AI into their education, to evaluate the validity of the information that the AI provides, and to think about when and where AI is an appropriate resource. She said that because AI-based technologies like ChatGPT (Chat Generative Pre-trained Transformer) are so new, it is important that educators be very clear about what is and is not allowed and communicate their expectations, in order to reduce frustration when the rules change.
Ötleş responded by asking, “What is the point of writing an essay?” Often, the point is to develop a skill, he stated; as an engineering major, Ötleş wrote essays to explain why one design was better than another or why a certain method was optimal. The point of those essays was to construct a persuasive story. If a student can use ChatGPT to construct this story without introducing errors, he believed this could be a good thing. Ötleş explained that using AI to generate persuasive arguments could be more efficient and less time-consuming and could result in a better, more persuasive document; it could also serve as a starting point to help students overcome writer’s block. When considering whether and how AI should be part of an educational project, Ötleş said it is critical for students to take a nuanced approach, thinking about the intention of the project and how AI could advance or undermine that intention, rather than simply allowing AI to play a role. James expanded on Ötleş’s point by saying that if a student uses ChatGPT to write a persuasive essay, it will ultimately be up to the student to decide whether ChatGPT has accomplished that goal. Winston Guo, an M.D. candidate at Weill Cornell Medical College, added that ChatGPT could serve as an excellent substrate with which to start a paper. For example, ChatGPT could present arguments for and against a certain ethical principle; students could then examine these arguments in light of their own understanding and use them to formulate their own line of reasoning. In addition, ChatGPT can offer a quick survey of a new field of information to build a scaffold of knowledge; while such an overview may contain errors, it can be helpful for getting students started. Guo downplayed concerns that AI would prevent students from developing clinical reasoning skills, noting that the structure of medical school exams and clinical experiences forces students to grapple with clinical reasoning. Monzon added that if students rely too heavily on ChatGPT or other tools, they will be unlikely to pass their licensure exams.
Given the potential for AI to reshape the roles of clinicians, James asked panelists for their thoughts on deskilling, reskilling, and learning new skills. Will health professions education need to reassess the skills and competencies that are being taught? How will faculty members and practitioners get the skills they need? Monzon replied that one of the interesting things about computer science and AI is that they are subjects people can teach themselves; students could potentially come to class with far more knowledge about AI than the professor, so faculty will themselves need education to prepare them to teach such learners. Ötleş told workshop participants about the Henry Ford Museum of American Innovation in Michigan. The museum contains various old technologies
and innovations, such as a tinsmith workshop and steam engines, that are no longer actively used; society as a whole has decided that the associated skills are not worthwhile to maintain. When technology evolves, there is a natural process of reskilling, Ötleş said. In medicine, there has been relatively little reskilling in the past 70 years: new technologies have been adopted, but the cognitive processes have remained basically the same. Ötleş said that AI presents an opportunity to rethink the way clinicians take in, interpret, and act on information. He cautioned the audience against treating AI as the gold standard and against losing abilities that are central to medical practice. He drew an analogy to piloting a plane: computer systems are largely responsible for flying the plane, but if something malfunctions, human pilots need the skills to take over and land it. Turner agreed with this assessment and noted that during the pandemic, many health professionals were thrust into practicing telemedicine; the foundational core competencies of health care remained the same but had to be exercised through new technologies and capabilities.
James asked panelists to think about what interprofessional teams could look like in the future. For example, will the integration of AI shift the stakeholders on the teams or how teams work together? Monzon said that the players who are often missing from a team are computer scientists, programmers, data scientists, and informatics experts; bringing health care knowledge together with expertise in AI would be very beneficial, she said. Ötleş said that his own work has not been very interprofessional, and it has suffered because of that. For example, he has seen software developed and implemented without the input of its intended users. Having interprofessional relationships from the outset makes it much easier and more comfortable to reach out with questions or for insight; for example, a team working on AI for the health care setting could ask health care workers about their workflows and the needs of their patients. Tools are often created from a specific task perspective (e.g., a tool for doctors to use to assess sepsis); Ötleş compared this to finding a “nail for your hammer.” With a broader interprofessional team, tools could be created with the broader aim of improving the patient experience and the patient’s health. Hobensack agreed that it is critical to include users early in the process of tool development. Clinicians are heavily burdened with responsibilities, and if a tool is created for them but without them, they are likely to see it as just another thing they have to do. James added that developers can have brilliant ideas but may not be asking the right questions or addressing the right issues. There is a need to bring a diverse group of users and stakeholders to the table to
ensure that the tools being developed address the actual needs of clinicians and patients.
As a final question, James asked the panelists what changes they would make in their educational program tomorrow if they could. Ötleş responded that he is “staring down the barrel of residency.” While he is excited for this process and for becoming clinically adept, he said he wishes there were more time to think: time to consider how AI or machine learning could be incorporated into the workflow, and time to experiment with AI tools. If residencies were structured slightly differently, there could be a half day every other week for residents to think, experiment, and “mess around a little bit” in the AI technology sandbox. This could also give residents time to master certain skills or to collaborate interprofessionally with colleagues in other fields.