The Gilbert W. Beebe Symposium was established by the Board on Radiation Effects Research (a predecessor of the Nuclear and Radiation Studies Board) in 2002 to honor the scientific achievements of Dr. Gilbert W. Beebe, a distinguished National Cancer Institute radiation epidemiologist who was one of the designers and key implementers of the epidemiology studies of Japanese atomic bomb survivors and a co-founder of the Medical Follow-up Agency. Beebe passed away in 2003. The symposium series promotes discussions among scientists, federal staff, and other interested parties concerned with radiation health effects.
On March 13–14, 2025, the Nuclear and Radiation Studies Board of the National Academies of Sciences, Engineering, and Medicine (National Academies) hosted the most recent Gilbert W. Beebe symposium, with the goal of discussing the applications of artificial intelligence (AI) and machine learning (ML) in the fields of radiation therapy, medical diagnostics, and occupational health and safety. Among other topics, symposium participants discussed the importance of data for AI readiness, multimodal modeling, digital twins, uncertainty quantification and trustworthiness, and bias and ethics as they apply to each of these fields.
Given the rapid advances and the regular appearance of powerful new applications in the fields of AI and ML, the symposium was particularly timely. Most likely, no area of science will be untouched by developments in AI and ML over the next few years, and this is certainly true for radiation therapy, medical diagnostics, and occupational health and safety. While these three fields are distinct and have unique ways in which AI and ML may be applied, the common theme of radiation sciences and health effects means that considering the fields jointly can be quite valuable, and the symposium was designed both to discuss specific applications for each field and to identify commonalities and themes among the fields. Each field is at a different stage in applying AI and ML models, and one goal of the symposium was to act as a platform to share ideas and methods between the fields.
The sponsors of the Nuclear and Radiation Studies Board and this symposium include the Office of Environment, Health, Safety, and Security and the Office of Environmental Management at the U.S. Department of Energy; the U.S. Nuclear Regulatory Commission; the Gordon and Betty Moore Foundation; the Richard M. Lounsbery Foundation; the American College of Radiology; and the National Academy of Sciences’ Thomas Lincoln Casey Fund. The statement of task for the symposium is given in Box 1-1.
An ad hoc planning committee of the National Academies of Sciences, Engineering, and Medicine will organize a symposium to discuss current and future applications of artificial intelligence (AI) and machine learning (ML) in radiation therapy, medical diagnostics, and radiation occupational safety. Specifically, the symposium will include:
Conversations led by thought leaders from a few select fields outside of medical and occupational safety (e.g., computation, transportation, legal, military, etc.) to foster community connections.
Symposium participants will then discuss:
The critical role of human decision making in the context of AI/ML (e.g., when do algorithms advise, extend, or supplant professional medical decisions).
Future directions and opportunities in AI/ML methods and technology to advance the fields of radiation therapy, medical diagnostics, and radiation occupational health and safety.
The key challenges in data quality control with respect to reproducibility, generalization of datasets, data/model drift, and trustworthiness of results with respect to each subfield of radiation therapy, medical diagnostics, and radiation occupational safety. This discussion will also include intentionality of data collection, detector development, and dataset management for future AI/ML algorithm applications and their trustworthiness.
Current methodologies used when implementing AI/ML techniques for each subfield and discussions on ways to learn from other fields and identify key challenges and gaps in each subfield.
Possible ethical implications of AI/ML applications in each subfield and a community discussion on possible future directions to maximize benefits and minimize negative implications.
Tailored breakout sessions to cover specific applications of AI/ML including but not limited to: creating predictive models for estimating dose in occupational environments exposed to radiation, dose distribution models for treatment planning, predictive AI/ML techniques in cancer diagnosis and treatment, integration of AI/ML into accurate and ethical human decision making, AI/ML uses to address educational and workforce shortages of radiation medical professionals, and thresholds of uncertainty acceptable for use of AI/ML in radiation applications.
The symposium presentations and discussions will be summarized in a National Academies proceedings.
This Proceedings of a Symposium summarizes the presentations and discussions that occurred at the Gilbert W. Beebe symposium and is not intended to provide a comprehensive and detailed account of the information shared during the symposium. The information summarized here reflects the knowledge and opinions of individual symposium participants and should not be seen as a consensus of the symposium participants, the planning committee, or the National Academies of Sciences, Engineering, and Medicine. The structure approximately mirrors the agenda of the symposium (see Appendix A), although the symposium's second day featured multiple simultaneous sessions, which have been arranged here to best follow the narrative of the proceedings. Additionally, for narrative flow, summaries of presentations may appear in a different order than that in which the presentations were given. The remainder of this proceedings comprises eight additional chapters.
Chapter 2 offers broader information on AI and how it is being applied in various areas. This chapter primarily addresses the line in the statement of task on "Conversations led by thought leaders from a few select fields outside of medical and occupational safety to foster community connections." It sets the stage for numerous topics discussed throughout the symposium while also touching on points in the statement of task such as principles for responsible AI development, human decision making in AI, ethical considerations, workforce development, and acceptable uncertainty levels for AI model outputs.
Chapter 3 offers a discussion of how AI and ML are being used in the three fields that were the focus of the symposium: radiation therapy, medical diagnostics, and occupational health and safety. This session primarily covers two lines in the statement of task: "Current methodologies used when implementing AI/ML techniques for each subfield and discussions on ways to learn from other fields and identify key challenges and gaps in each subfield" and "Future directions and opportunities in AI/ML methods and technology to advance the fields of radiation therapy, medical diagnostics, and radiation occupational health and safety." Other themes, such as the critical role of human decision making, ethics, and workforce development, are also addressed.
Chapter 4 discusses various data issues relevant to AI and ML. This is a key subject because AI models rely on large datasets for their training, and the chapter addresses the line "intentionality of data collection, detector development, and dataset management for future AI/ML algorithm applications." Additional topics discussed as they relate to the statement of task include workforce development, human decision making, and education/training.
The remaining chapters summarize focused topics identified by the planning committee, addressing issues in the statement of task's final subtask.
Chapter 5 focuses on a particular type of AI model—digital twins—that has potential for medicine and public health applications. This covers predictive modeling, dose distribution models, and predictive AI/ML techniques in cancer diagnosis and treatment from the statement of task.
Chapter 6 focuses on examples of how AI is being applied in various radiation-related areas but with a focus on multimodal applications of AI. This addresses predictive modeling, human decision making, and begins to address uncertainty and trustworthiness of models.
Chapter 7 discusses bias, ethics, and regulatory topics that have relevance to applications of AI to the fields of radiation therapy, medical diagnostics, and occupational health and safety. This session also covers federal regulations and discusses bias in AI models.
Chapter 8 covers topics concerned with trustworthiness and transparency in AI. Its presentations and discussions also address uncertainty quantification and risk mitigation.
Finally, Chapter 9 offers closing remarks. The themes of human decision making, ethics, considerations of model uncertainty, education, and workforce development are interwoven throughout the presentations and discussions.