Disseminating In Silico and Computational Biological Research: Navigating Benefits and Risks: Proceedings of a Workshop (2025)

Suggested Citation: "2 Challenges Related to the Dissemination of Biological Study Results, Models, and Tools." National Academies of Sciences, Engineering, and Medicine. 2025. Disseminating In Silico and Computational Biological Research: Navigating Benefits and Risks: Proceedings of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/29174.

2
Challenges Related to the Dissemination of Biological Study Results, Models, and Tools

To lay the groundwork for deeper consideration of potential strategies to promote the benefits of sharing in silico biological research while minimizing DURC and PEPP risks, participants examined challenges related to the dissemination of studies, models, and tools at the intersection of computation, AI, and biology. In a panel discussion highlighting real-world examples and in subsequent breakout sessions, participants considered a variety of approaches for assessing the benefits, risks, and mitigation strategies associated with in silico biological research and specifically its dissemination. In these discussions, panelists considered the types of outputs and outlets, explored stakeholders, and described opportunities and challenges for three domains of research that involve the use of in silico approaches: molecules and proteins, systems biology and genetic/cellular engineering, and epidemiological modeling.

APPROACHES TO RESPONSIBLE DISSEMINATION

Four speakers, James Diggans, Twist Bioscience; Anthony Gitter, University of Wisconsin–Madison; Nick Sofroniew, EvolutionaryScale; and Jamie Yassif, Nuclear Threat Initiative (NTI), shared examples that highlight the complexities of applying computational approaches to biological systems and strategies for responsible dissemination of in silico biological information.

DNA Synthesis Screening

DNA synthesis lies at the inflection point where in silico biological designs are brought into physical reality. Diggans discussed approaches to navigating the ethical and practical considerations around this potential dual-use technology, as he called it. DNA occupies a unique position as the bridge between digital design and physical biological material. DNA synthesis companies generally have used two main approaches to minimize their products’ potential biosecurity risks: (1) screening customers to understand who is receiving the DNA and (2) screening sequences for risks based on their similarity to known sequences and biological products that occur in nature. Both are based on voluntary guidance from the U.S. government1 and the International Gene Synthesis Consortium (IGSC).2 As AI-powered biodesign tools accelerate and simplify the design of new DNA

___________________

1 See https://aspr.hhs.gov/S3/Documents/syndna-guidance.pdf (accessed July 18, 2025).

2 See https://genesynthesisconsortium.org/wp-content/uploads/IGSCHarmonizedProtocol11-21-17.pdf (accessed July 18, 2025).


sequences, new biosecurity challenges are emerging. In particular, the scale of synthesis has increased substantially and turnaround time has decreased, and designs generated with AI tools are making it more difficult to detect similarities to known naturally occurring proteins, complicating homology-based screening and blurring the boundaries used to distinguish sequences by species or known function.
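
To make the homology-based screening described above concrete, the sketch below illustrates the basic idea of comparing an ordered sequence against a database of sequences of concern. It is a minimal illustration, not any provider's actual pipeline: the k-mer overlap measure, the similarity threshold, and the toy database are assumptions standing in for the curated databases and alignment-based searches (e.g., BLAST-style tools) that real screening systems use.

```python
# Minimal illustration of homology-based sequence screening (not a production system).
# Assumes a hypothetical local database of "sequences of concern"; real screening
# pipelines use curated databases and alignment tools rather than raw k-mer overlap.
import random

def kmers(seq: str, k: int = 20) -> set[str]:
    """Return the set of k-mers in a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query: str, reference: str, k: int = 20) -> float:
    """Fraction of the query's k-mers that also occur in the reference."""
    q, r = kmers(query, k), kmers(reference, k)
    return len(q & r) / len(q) if q else 0.0

def screen_order(query: str, concern_db: dict[str, str], threshold: float = 0.8) -> list[str]:
    """Flag any sequence of concern whose similarity to the order exceeds the threshold."""
    return [name for name, ref in concern_db.items()
            if similarity(query, ref) >= threshold]

# Hypothetical usage: the database entry, order, and threshold are illustrative only.
random.seed(0)
toxin_like = "".join(random.choices("ACGT", k=300))        # stand-in sequence of concern
concern_db = {"toxin_gene_X": toxin_like}
order = toxin_like[:280] + "".join(random.choices("ACGT", k=40))   # mostly matching order
hits = screen_order(order, concern_db)
print("Escalate for human biosecurity review:" if hits else "No hits:", hits)
```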

The more than 40 companies that make up IGSC3 have sought to work together to establish best practices to minimize the risk of misuse of gene synthesis in light of these fast-evolving capabilities. Diggans described IGSC’s work with Microsoft on a red-teaming pilot project to identify the vulnerabilities of existing screening systems. The pilot project confirmed that AI tools could indeed alter protein sequences enough to bypass some sequence screening systems. Twist Bioscience collaborated with Microsoft to adapt cybersecurity tools to the biodesign of single molecules, scaling up a process to assess and mitigate risks that would have fallen through the cracks under existing structures (Wittmann et al., 2024).

Twist Bioscience and Microsoft drafted a report on the results of this pilot project, which was published only after extensive discussion and review regarding the risks of dissemination. Diggans said that the review process itself highlighted some useful insights regarding perceived risks and processes for mitigating them: (1) reviewers’ opinions range widely, with each reviewer’s opinion correlating with their own risk perception and risk tolerance, and (2) reviewers with the lowest risk tolerance tend to have outsized influence, with publications removing information until all reviewers are satisfied. Moving forward, Diggans highlighted a need for a shared vocabulary and categorization around information hazards and a structured framework to discuss and defend those categorizations in a constructive manner so that people “don’t talk past each other.” “What this really pointed out to me was that we need more formal norms around how we draw lines of what is and isn’t a significant information hazard,” Diggans said.

Open Science and Dissemination of AI Models

Gitter discussed approaches to the dissemination of AI models and the implications of that dissemination. He drew a contrast between the traditional scientific publishing pathway and a more open ecosystem for the dissemination of scientific data, methods, and results (Sever, 2023). In the publishing pathway, journal publishers screen manuscript submissions, which then proceed through a peer-review process before the manuscript is published. In an open publishing ecosystem, manuscript submissions are screened and then immediately released publicly as preprints before proceeding through additional reviews and publishing pathways. In the former approach, largely driven by publishers, AI models are typically released only when the final paper is published. In the latter approach, which authors and developers drive, AI models may be released at any stage: before submission, at the preprint stage, or during subsequent dissemination steps. Reviews and editorial oversight in the open ecosystem still exist, but the AI model weights, code, datasets, and other elements can be released at any time on virtually any platform and accessed by anyone. Gitter added that some AI developers choose not to disseminate their work through either pathway but simply post models and results online on their own blog or website. “There’s a lot of people developing these tools that don’t care about publications and have completely sidestepped this entire process,” he noted.

Open science has many benefits, and the shift to a more open ecosystem has contributed to the rapid advancement of biological AI tools (Jumper et al., 2021; Lin et al., 2023). Although existing controlled-access repositories serve as a useful model for implementing controls when open dissemination poses significant risks, Gitter posited that this approach could be limited and that risk mitigation could occur earlier in the research and development pipeline. “I think, as a community, we should think about open science by default, unless there’s very, very specific needs to have controlled access, and then we should think about what the best options for

___________________

3 See https://genesynthesisconsortium.org/ (accessed June 24, 2025).


controlled access are,” Gitter said. “In some cases, if the harms so greatly outweigh the risks, we may not want to do the research, but we need researchers [and] developers to think about that far upstream [of dissemination].”

Guardrails for Sharing Protein Language Models

Sofroniew discussed the capabilities enabled by LLMs and different approaches to their dissemination. The AI startup EvolutionaryScale develops protein language models that are used widely for protein structure modeling and prediction, antibody engineering, and drug target discovery. Sofroniew underscored the benefits of an open science approach while recognizing the need for researchers and tool developers to consider the risks and act responsibly. “Openness has been a central principle of the scientific community throughout history,” he said. “And we do think that there is a path for sort of transparent and open sharing of scientific advances, if really done responsibly.”

Sofroniew described the company’s Responsible Development Framework, which requires communication of a product’s benefits and risks, proactive and rigorous evaluations of risks internally and with the broader scientific community, and mitigation strategies and precautionary guardrails where necessary. During the evaluation process, the team first conducts internal assessments using a defined set of risk-relevant tasks, such as sequence understanding, fitness prediction, structure prediction, and design for areas of concern (e.g., viral proteins or select agents). These assessments help determine whether a model could increase the likelihood of harm or lower barriers to misuse. They also evaluate how mitigation strategies, such as excluding specific training data, may reduce these risks. External expert reviewers then assess the models using structured criteria, including whether the model increases the likelihood of harm or lowers barriers to misuse, and whether the public health or scientific benefits of releasing the model outweigh the risks. Public health benefits include the potential for widespread access to accelerate research and support preparedness. When mitigation strategies or guardrails are needed, they choose from defined measures, such as gated application programming interface (API) access, filtering of certain sequences, restricted access for vetted partners, or staged releases. For example, EvolutionaryScale recently published a model, ESM [evolutionary scale modeling]-3, that excluded millions of viral sequences, including some from select agents, and filtered out pathogen-related keywords (Hayes et al., 2025). These measures largely preserved the model’s usefulness, but Sofroniew noted that the modifications did reduce performance for some tasks, such as predicting viral fitness. “This decision came with tradeoffs,” he noted. “Many researchers use models like ESM to study viruses, develop therapeutics like vaccines, and limit those capabilities […] reduces the scientific value of the tool for those people.” He characterized the company’s approach in the case of ESM-3 as cautious and emphasized the importance of engaging with experts in the broader community to assess risks and inform decision-making. “After structured engagement with the community, we’re definitely going to continue to consider the risks and benefits of including sequences like this in future models,” he said.
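
The training-data mitigation Sofroniew described, excluding certain viral sequences and filtering pathogen-related keywords before training, can be sketched in outline. The code below is purely illustrative and is not EvolutionaryScale’s procedure: the record format, keyword list, and exclusion rule are assumptions, and a production pipeline would rely on vetted taxonomy and select-agent lists rather than keywords alone.

```python
# Illustrative pre-training data filter (a sketch, not EvolutionaryScale's pipeline).
# Records are assumed to carry a sequence plus free-text annotations.
from dataclasses import dataclass

@dataclass
class ProteinRecord:
    accession: str
    sequence: str
    annotation: str   # free-text description from the source database

# Hypothetical keyword list; a real deployment would use vetted, versioned lists.
EXCLUDE_KEYWORDS = ("virus", "viral", "phage", "toxin", "select agent")

def keep_record(rec: ProteinRecord) -> bool:
    """Exclude records whose annotations match any flagged keyword."""
    text = rec.annotation.lower()
    return not any(kw in text for kw in EXCLUDE_KEYWORDS)

def filter_training_set(records: list[ProteinRecord]) -> list[ProteinRecord]:
    kept = [r for r in records if keep_record(r)]
    print(f"kept {len(kept)} of {len(records)} records")
    return kept

# Example usage with toy records.
records = [
    ProteinRecord("P001", "MKT...", "putative membrane transporter"),
    ProteinRecord("P002", "MAL...", "envelope glycoprotein, influenza A virus"),
]
training_set = filter_training_set(records)   # keeps only P001
```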

Bio Funders Compact and Managed Access Strategies

Yassif discussed additional approaches to facilitating the responsible dissemination of AI models in biology. As a recent development, NTI is leading the Bio Funders Compact,4 which launched in 2024 and provides a framework to encourage funders—including governments, venture capitalists, private foundations, and others—to incorporate pre-funding biosecurity and biosafety reviews into their decision-making. Its theory of change, as she described, is to prevent deliberate or accidental misuse, avoid funding high-risk research, and

___________________

4 See https://www.nti.org/about/programs-projects/project/bio-funders-compact/ (accessed June 24, 2025).


provide incentives for researchers to follow biosecurity and biosafety best practices. Although the Compact’s initial focus was on traditional biosecurity issues, its founding organizations—NTI, along with founding signatories (the Coalition for Epidemic Preparedness Innovations, the Global Health Security Fund, and Sentinel Bio)—have increasingly focused on in silico models and AI-bio enabled capabilities, along with their dissemination.

The Compact is part of a broader backdrop of considerations regarding the balance between access and security in sharing biological AI tools. Strategies to manage access can help safeguard tools against misuse and improve public trust in science. Yassif posited that the most effective solutions are based on managed access frameworks that prioritize equitable access, responsible dissemination, and biosecurity risk assessment with tiering to enable open dissemination of low-risk tools while imposing a range of progressively stricter access requirements for higher-risk tools (Carter et al., 2024). She outlined how different techniques can be leveraged to achieve a spectrum of management strategies ranging from full sharing of the code and weights to more limited access via an API (Figure 6).
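
The spectrum Yassif described (Figure 6), ranging from fully open release of code and weights to tightly managed API access, can be expressed as a simple tiering rule. The sketch below is one schematic reading of the managed-access framework in Carter et al. (2024); the tier names, the single risk score, and the thresholds are illustrative assumptions rather than criteria proposed at the workshop.

```python
# Schematic tiered-access rule for an AI biodesign tool (illustrative only).
# Tiers loosely mirror the spectrum in Figure 6: open release at one end,
# gated API access or vetted-partner access at the other.
from enum import Enum

class AccessTier(Enum):
    OPEN_WEIGHTS = "release code and weights openly"
    GATED_DOWNLOAD = "weights available after user registration/agreement"
    API_ONLY = "hosted API with query filtering and logging"
    VETTED_PARTNERS = "access restricted to vetted partners"

def assign_tier(risk_score: float, mitigations_available: bool) -> AccessTier:
    """Map an assessed biosecurity risk score (assumed 0-1) to an access tier."""
    if risk_score < 0.2:
        return AccessTier.OPEN_WEIGHTS
    if risk_score < 0.5:
        return AccessTier.GATED_DOWNLOAD
    if risk_score < 0.8 and mitigations_available:
        return AccessTier.API_ONLY
    return AccessTier.VETTED_PARTNERS

print(assign_tier(0.1, True))    # AccessTier.OPEN_WEIGHTS
print(assign_tier(0.65, True))   # AccessTier.API_ONLY
```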

To move toward the establishment and implementation of best practices in this space, Yassif suggested the need for an iterative, collaborative process that combines the expertise, perspectives, and resources of multiple experts. “We need to figure out what these tiered frameworks look like, what falls within managed access and what doesn’t, and what the criteria are,” she said. “It’s going to cost a lot of money to make these kinds of frameworks happen in reality, and independent academic researchers or small startup groups don’t have the resources to do this, and so government funders have a really important role in supporting the development and maintenance of these. And I think in order for this to work in practice, this is going to have to be aligned with the dissemination policies of journals and nontraditional funders.”

FIGURE 6 Managed access to biodesign tools: A schematic representation of the spectrum of access and the range of management options relevant to software guardrails.
SOURCE: Carter, S. R., N. E. Wheeler, C. R. Isaac, and J. Yassif. 2024. Developing Guardrails for AI Biodesign Tools. Washington, DC: Nuclear Threat Initiative. https://www.nti.org/wp-content/uploads/2024/11/NTIBio_Paper_DevelopingGuardrails-for-AI-Biodesign-Tools_FINAL.pdf (accessed July 18, 2025). CC BY NC ND 4.0.

Discussion

During a moderated discussion, Jaspreet Pannu, Stanford University, asked speakers to elaborate on approaches to decision-making regarding the dissemination of AI tools. Diggans replied that Twist Bioscience started with a classic cybersecurity vulnerability framework to evaluate safety, which provided a helpful foundation. However, he noted that this framework did not fully capture the unique risks posed by computational biology tools. He highlighted the need for more effective models and frameworks tailored to biosecurity and information hazards. Gitter emphasized the importance of engaging a broad group of external experts before releasing models to develop pre-release questionnaires that inform decisions about minimizing identified risks. Sofroniew added that EvolutionaryScale also engaged with diverse experts to create its Responsible Development Framework, which also incorporates agreed-upon best practices from the scientific community.

Richard Sever, Cold Spring Harbor Laboratory Press, requested greater detail about the divergence of opinion that sometimes occurs during expert consultations about risks. Diggans reiterated that people have widely varying risk tolerances and that expert comments are essentially opinions, which means that risk assessment is not grounded in objective truth and is highly dependent on individual risk perceptions. In contrast to the case shared by Diggans, Sofroniew said that the experts his company enlisted in this process were able to reach an agreement fairly easily, which he attributed in part to explicitly asking reviewers to focus on concrete, immediate risks over hypothetical, imagined ones.

Yassif commented that keeping pace with the rapid advances in AI is challenging, particularly when working to establish best practices or frameworks. She stressed the importance of a proactive approach to decision-making and of preparing for various outcomes even when no imminent risk is apparent. “We want to get out ahead of it and start to tackle some of the really hard problems and address the gaps that we’re seeing where we don’t have solutions to some of these problems,” she said. Roger Brent, Fred Hutchinson Cancer Center, asked Yassif about the differences between the Bio Funders Compact’s pre-funding review approach and existing DURC and PEPP best practices. She replied that pre-funding reviews, created through collaboration and iteration, can catch risks before research occurs, especially outside the United States. She also acknowledged that although risk assessments have improved, significant challenges and gaps, especially in in silico research, remain.

A participant asked whether different frameworks are needed to assess computational models and tools that are not AI-based. Diggans replied that other computational approaches likely raise many of the same challenges as AI-based approaches. Yassif and Gitter agreed, suggesting that the outcomes and uplift capabilities of non-AI approaches should be subject to the same careful review process. Gitter noted that AI- and machine learning (ML)-powered tools have been the focus because they are driving many advances, but posited that these discussions should be widely applicable across the computational biology landscape. In addition, he suggested that non-AI tools, such as evolutionary sequence conservation tools, can still perform comparably on certain tasks, such as predicting viral phenotypes. He also suggested that even those tools that do not raise biosecurity or dual-use concerns could be valuable as controls or benchmarks when evaluating AI models.

Another participant asked panelists to comment on strategies for balancing the need to guard against potential misuse with the need to validate research methods and results. Diggans replied that this tension is a challenging aspect of pre-publication review, particularly for work that aims to address existing vulnerabilities, such as in the pilot project he described. “Part of the problem is that you really can’t ground-truth a lot of these [concerns]—you can’t take the model proteins that came out and make them in a lab and see if they’re excellent weapons,” he said, “so it’s very difficult to figure out where and when something is really


creating a threat or not.” Further, being too vague or overly cautious about what research presents a security concern can impede research that can lead to future benefits, such as the development of countermeasures. Sofroniew agreed with concerns about the ways in which model capabilities might be misused and emphasized that EvolutionaryScale evaluates performance using peaceful proxy tasks to demonstrate these capabilities. He noted that although such tasks help to safely demonstrate functional capability, the risks grow significantly when in silico designs are translated into real-world biological products, underscoring the need for caution during that transition.

A participant suggested that openness should be the default unless research could substantially increase the risk of a pandemic, in which case appropriate mechanisms should be in place to forestall the work. In evaluating biosecurity risk, the participant noted that the scientific community could help to improve communication and determine responsible governance and publication guidelines as the technology continues to advance and become more consequential, even if some risky research is prevented as a result. For example, if the probability of a model having a very dangerous outcome (e.g., “if we’ve created a new strain that is more consequential than the most dangerous existing viruses on Earth”) is deemed to be 50 percent, then that research does not need to be tested in a lab to determine its feasibility; rather, it should be discontinued. “Risk assessment should be a default position for people working on the edge with AI bio,” the participant stated. “It should not be the default position that we can’t judge anything or can’t anticipate any outcomes.” Sofroniew agreed with this perspective and emphasized that, despite openness being a fundamental tenet of science, research with a high probability of misuse should not be pursued. Héctor García Martin, Lawrence Berkeley National Laboratory, added that the ability to predict the function of novel proteins using in silico tools is diminished when wet lab testing is not conducted because of biosecurity concerns. In such cases, he suggested that potentially risky experiments could be conducted with strict controls in government facilities to enhance predictive abilities while preventing misuse.

Vinson also noted the importance of considering equity when establishing access controls, suggesting that less-resourced scientists could be asked to provide input when developing frameworks. In addition, she said that access is not a binary yes/no decision, but a spectrum that depends on the circumstances. Yassif agreed with the need for equitable, tiered access and highlighted the importance of anticipating upstream uses of the models. Because implementation of solutions in this space will involve time and money, she stressed the need to move forward quickly. As most in silico and computational tools do not present high risks, she posited that these actions will not suppress research but can effectively “safeguard the benefits by guarding against the worst downside risks so that the science can flourish.”

CHALLENGES AND EXAMPLES OF NEEDS TO EFFECTIVELY SAFEGUARD BENEFITS AND PROMOTE ADVANCES

To establish a foundation for considering strategies and solutions to effectively safeguard the benefits of in silico biological research and promote advances while mitigating risks, workshop participants considered the current challenges to and gaps in evaluating both the benefits of in silico research in biology, drawing on examples from physical research, and the potential for DURC and research involving PEPP associated with its dissemination.

For this discussion, workshop participants were divided into breakout groups focused on three themes: Molecules and Proteins, Systems Biology and Genetic/Cellular Engineering, and Epidemiological Modeling and Biosecurity.


Each group was asked to answer the same set of questions from the perspective of their theme. First, the groups were asked to consider the outputs arising from in silico biological research, the dissemination outlets for these outputs, and the stakeholders associated with each output and outlet. Then, they were asked to discuss three questions:

  • What criteria should be used to assess and evaluate the research benefits and biosecurity risks?
  • Does dissemination in different outlets present different risks, and what additional criteria, if any, should be used to assess the biosecurity risk of in silico research for biological systems for each dissemination outlet?
  • What are the current challenges and gaps in assessing and evaluating the possible benefits and the DURC and PEPP potential of disseminating in silico research of biological systems?

For each question, a combined summary of the breakout discussions is provided below.

Question 1:
What Criteria Should Be Used to Assess and Evaluate the Research Benefits and Biosecurity Risks?

Many participants emphasized the importance of proactively assessing the biosecurity risks of in silico biological research. Given the computational biology field’s rapid pace of development, several participants noted the benefits of addressing current capabilities and anticipating the implications of future capabilities that may be enabled as computational technologies advance. Drawing on policy frameworks such as the now rescinded United States Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential, some participants highlighted the variety of technical, ethical, societal, and governance considerations. They considered possible criteria for assessing the benefits and risks of disseminating research involving in silico modeling and computational analyses. These criteria fall into seven broad areas described in more detail below: societal benefit; predictive power and scope of models; technical capability and design potential; accessibility and ease of use; oversight, provenance, and access control of in silico research outputs; potential for misuse and malicious intent; and disproportionate impacts.

Societal Benefit

In silico biological research is being pursued to advance fundamental knowledge and applications that have the potential to offer transformative benefits, including those in global health, agriculture, bioengineering, pathogen surveillance, and early-stage drug development.

Example criteria include the following:

  • Global public health value (e.g., vaccine design, diagnostics),
  • Potential to help underserved populations, and
  • Contributions to the scalability of benefits or efficiency improvements.

Although biosecurity risks cannot be dismissed, several participants emphasized the importance of capturing and evaluating potential benefits, especially in low-resource settings or for health applications.

Predictive Power and Scope of Models

An important factor in considering both benefits and risks is the accuracy with which models can simulate or predict biological outcomes, as well as the extent to which their applications are general or narrow. Example criteria include the following:

  • Predictability of properties such as toxicity, transmissibility, or skin penetration;
  • Scope and generalizability of the model (i.e., species, systems, ecological dynamics); and
  • Error tolerance in predictions (e.g., how much confidence is enough to make a decision).

Models with high predictive power can accelerate beneficial discoveries, but they may also lower barriers to the development of harmful as well as beneficial outputs. As one participant noted, “One hundred percent predictive power is not required. If there is a 50 percent chance that within a pool of candidates for a toxin/pathogen, something highly concerning will be created, that is enough to take certain risk mitigation actions.” Employment of an if-then framework can help to address this issue.
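
One way to read the if-then framing is as a set of explicit rules that pair a trigger (a capability or confidence level) with a predefined mitigation. The sketch below is a hypothetical illustration; the metric names, thresholds, and actions are placeholders rather than criteria endorsed by participants.

```python
# Hypothetical if-then commitments pairing a model capability with a response.
# Thresholds are placeholders; the point is that the trigger and the action
# are written down before the capability is observed.
from typing import NamedTuple, Callable

class IfThenRule(NamedTuple):
    description: str
    condition: Callable[[dict], bool]
    action: str

RULES = [
    IfThenRule(
        "high-confidence prediction of a concerning candidate within a pool",
        lambda m: m.get("p_concerning_candidate", 0.0) >= 0.5,
        "pause open release; escalate to biosecurity review",
    ),
    IfThenRule(
        "model generalizes to agents outside its training distribution",
        lambda m: m.get("out_of_distribution_gain", 0.0) >= 0.3,
        "restrict access tier; add output filtering",
    ),
]

def evaluate(metrics: dict) -> list[str]:
    """Return the mitigation actions triggered by the supplied evaluation metrics."""
    return [r.action for r in RULES if r.condition(metrics)]

print(evaluate({"p_concerning_candidate": 0.5}))
# ['pause open release; escalate to biosecurity review']
```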

Technical Capability and Design Potential

From a technical perspective, in silico models have the potential to enable capabilities that meet or surpass capabilities addressed by existing oversight structures for DURC or research involving PEPP. Participants suggested that historical lessons from dual-use physical research can help to inform how biosafety and biosecurity risks of AI models, particularly those capable of high-consequence harms, are evaluated and mitigated prior to their deployment (Wein and Liu, 2005; Pannu et al., 2025). Example criteria include the following:

  • Designing or optimizing virulence, transmissibility, or immune evasion in pathogens;
  • Facilitating de novo synthesis or reconstitution of extinct or high-risk pathogens;
  • Generating traits that enhance environmental stability, latency, or aerosolization; and
  • Predicting or modeling the spread of diseases based on genomic data.

Many of these criteria are also discussed in the 2018 National Academies report titled Biodefense in the Age of Synthetic Biology. In particular, in silico tools capable of increasing a pathogen’s virulence or transmissibility—capabilities that mirror traditional DURC triggers—were identified as a top priority for risk assessment in this 2018 report.

Accessibility and Ease of Use

Breakout groups considered how easy or difficult available tools are to use (or misuse), particularly for actors with limited technical resources. Some participants noted that tools that require minimal expertise to operate or that can be used with widely available reagents for physical research lower the bar for malicious use. They also discussed how the ease of synthesis might amplify risk, especially if sequences generated through in silico design can be readily built using standard wet lab techniques. Example criteria include the following:

  • Ease of model use and biological synthesis (e.g., user-friendly interfaces, automated tools);
  • Low requirement for technical expertise for using computational methods;
  • Ability to deploy models and share code via open-access platforms; and
  • Compatibility with lab automation or home synthesis kits.

Several participants mentioned considerations about the different types of actors who might misuse in silico research tools. One participant suggested that non-state actors are generally less resourced and perhaps more likely to be deterred by oversight and access controls. Given the scale and access to proprietary and shared data, state actors pose a more complex challenge, particularly if they operate outside global transparency norms.

Many participants noted that biology has a much lower infrastructure threshold than, for example, nuclear technologies, meaning that computational tools may pose a potential risk even if their implementation requires relatively minimal technical knowledge. Although many in silico tools still require technical expertise to use, some participants suggested that this barrier is gradually decreasing. The convergence of LLMs, cloud notebooks (e.g., Google Colab), and AI-driven biodesign platforms increases these tools’ accessibility, including to individuals with limited experience. This situation raises the risk that lone actors might eventually bypass traditional lab work entirely and synthesize harmful toxins through online services. Current mechanisms to guard against this risk, such as DNA screening by synthesis providers, are reactive and limited in scope.

Oversight, Provenance, and Access Control

Recognizing that the decision to openly share data or tools is effectively irreversible, most participants emphasized the importance of durable policies about dissemination. All participants discussed a range of considerations related to governance, dissemination, tracking, and institutional review of computational tools, data, and other outputs. Example criteria include the following:

  • Provenance of metadata and traceability of model inputs and outputs;
  • Attribution (i.e., documenting who built and used the model, and for what purpose);
  • Open versus managed access to models and datasets; and
  • Institutional Review Board (IRB) review or a similar process when a model is presented as a biological organism.

Several participants discussed the ways in which existing oversight frameworks could be adapted to address the emerging challenges posed by in silico research. Although frameworks such as the former United States Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential offer a starting point, most participants posited that in silico research presents distinct characteristics that may involve both extensions of current policy and entirely new approaches. For example, some participants suggested that existing oversight frameworks provide a strong foundation for identifying and managing high-consequence biological risks that would trigger enhanced review and governance. However, many participants also noted that, unlike traditional wet lab research, computational models often do not present standalone risks. Instead, they may become concerning only when integrated with other tools or data sources.

Several participants highlighted how in silico research differs from wet lab work in ways that challenge oversight. The growing ability to engineer biology through computational design, especially when combined with lab automation, could enable rapid testing of many candidate molecules at scale. As AI improves and generalizes beyond its training data, curation of large data sets, especially concerning traits, becomes a critical point of control. The norm of open science further complicates the problem. In silico outputs are typically published or released openly, and an individual data set or model might not, on its own, rise to DURC/PEPP levels. As one participant highlighted, “The typical default is that work should be shared openly, open science. But the assessment of individual items [for biosecurity risks] is problematic.” Most participants highlighted aggregation, whereby many seemingly harmless components combine to form a powerful and potentially harmful capability, as a significant risk area. Incremental in silico innovations, like “a frog in a slowly boiling kettle,” could therefore cross a risk threshold without activating existing oversight mechanisms.


Regarding this issue, many participants expressed support for practical oversight approaches modeled on existing systems, such as the IRBs used for human subjects’ research or the requirement by many scientific journals that researchers deposit sequences in a public database such as GenBank® as a condition of publication. Many researchers are already familiar with these processes, which could be extended to encompass certain types of in silico work. One idea, expressed in several plenary sessions, was the creation of a tiered review system in which most research could pass quickly through low-friction channels, but a “yes” to key questions—such as “Does this model enable de novo synthesis?” or “Could this increase virulence?”—would trigger deeper review. A few participants also stressed the role of industry in establishing responsible norms, sharing red teaming practices, and setting internal standards for evaluating and disseminating model outputs. Although many participants recognized that oversight cannot and need not be one-size-fits-all, they emphasized the importance of supporting responsible conduct across the full range of environments in which this research is conducted, including academia, startups, government laboratories, and large corporations.
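
A tiered triage process of the kind described here might look something like the sketch below, in which a “yes” to any trigger question moves a submission from a low-friction track to deeper review. The trigger questions are drawn from the discussion above; the routing logic itself is an assumed illustration, not an existing institutional process.

```python
# Sketch of a tiered pre-dissemination triage form (illustrative, not an
# existing institutional process). A "yes" to any trigger question escalates
# the submission from the low-friction track to deeper review.
TRIGGER_QUESTIONS = [
    "Does this model enable de novo synthesis of a high-risk pathogen?",
    "Could this work increase virulence, transmissibility, or immune evasion?",
    "Does this output circumvent existing screening or biodefense measures?",
]

def triage(answers: dict[str, bool]) -> str:
    """Route a submission based on yes/no answers to the trigger questions."""
    flagged = [q for q in TRIGGER_QUESTIONS if answers.get(q, False)]
    if not flagged:
        return "low-friction track: proceed with standard checks"
    return "escalate to in-depth biosecurity review: " + "; ".join(flagged)

# Example usage with one flagged answer.
answers = {q: False for q in TRIGGER_QUESTIONS}
answers["Could this work increase virulence, transmissibility, or immune evasion?"] = True
print(triage(answers))
```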

Finally, several participants stressed the importance of ensuring that oversight mechanisms are risk-based and scalable. Not every project needs intensive scrutiny, but “casting the net broadly” may be useful to ensure that a wide range of organizations, both private and public, engage in basic due diligence, even if the process is lightweight for low-risk research. Several participants shared that in silico research may not be high risk on its own, but it becomes so when models are combined or deployed in ways that accelerate biological development. Some participants highlighted the value of clearly distinguishing between the risks that could be handled during the in silico stage (i.e., digital) versus those that could be better handled during lab implementation (i.e., physical). A participant noted the importance of designing oversight to cover the entire lifecycle of engineered microbes, including how and when they are expected to become inactive or even go extinct. Some participants suggested that oversight of in silico research, although still evolving, may include a mix of durable policies, institutional responsibility, and adaptive, friction-based governance to keep pace with continually evolving computational tools and capabilities.

Potential for Misuse and Malicious Intent

Participants discussed the potential malicious applications of computational tools and the types of actors most likely to misuse them. These examples were discussed in the context of the criteria originally explored for physical laboratory research. Example criteria include the following:

  • Alignment with known pathways for mass harm (e.g., terrorism, genocide);
  • Economic efficiency of malicious applications (i.e., low-cost, high-damage); and
  • Relevance to non-state actors or lone actors.

Several participants noted that not all bad actors seek maximum destruction; some aim for cost-effective harm or disruption. Thus, they suggested that computational tools for predicting, designing, and analyzing biological data that enable scalable or highly efficient misuse may warrant particular scrutiny.

Disproportionate Impacts

Both the risks and benefits of in silico biological research may be distributed unevenly across populations. Example criteria include the following:

  • Disproportionate risk to specific groups (e.g., by ethnicity, age, geography);
  • Potential targeting of certain populations using genetic, protein, or epidemiological data; and
  • Community-level ecological interactions.

Many participants suggested that risk assessments are particularly critical for research that could disproportionately impact vulnerable populations, especially if models are trained on biased data or used to design targeted bioweapons.

Question 2:
Does Dissemination in Different Outlets Present Different Risks, and What Additional Criteria, If Any, Should Be Used to Assess the Biosecurity Risk of In Silico Research for Biological Systems for Each Dissemination Outlet?

Several participants noted that different dissemination outlets, such as code repositories, journals, preprint servers, databases, and social media, present distinct types of risks when sharing in silico research outputs. Some participants perceived repositories such as GitHub and GitLab as especially “viral” because of their ease of forking (i.e., copying someone else’s code to build one’s own version), derivatization (i.e., using, aggregating, and/or manipulating the data to provide new information), and wide distribution. Another participant noted that, unlike static journal articles, which spread “more linearly,” repositories enable rapid transformation and reuse of code. Others countered that much of the code on these platforms is noisy and difficult to run, although they acknowledged that a small number of “well-maintained items” may pose greater concern. For example, another participant flagged Docker containers (i.e., a self-contained, easy way to bundle computational tools and run a software application across many types of platforms) and precompiled binaries (i.e., directly executable files) as riskier than GitHub. Some participants also noted that making code broadly usable often requires significant effort and cost, such as creating better documentation or packaging tools for ease of dissemination, which can act as a temporary barrier to misuse. However, well-resourced actors or communities may still overcome these barriers. Still, one participant argued that source code might be more dangerous in some cases because it “can be derivatized and take on new life” more quickly—that is, adapted, modified, or repurposed in unexpected or unintended ways—and can therefore evolve far beyond its original form or intent. One participant suggested that a cross-disciplinary group with expertise in both science and security may need to consider different technology readiness levels when assessing risk across dissemination types.

Beyond the technical distinctions of different dissemination outlets, several participants explored how the context and infrastructure surrounding these outlets influence their risk profiles. Peer-reviewed journals were perceived as generally lower risk because of their editorial safeguards, expert review, and mechanisms for withdrawal. Yet, as one participant noted, these processes rely heavily on the institutional capacity of the journals and can still fail. Citing an example not related to in silico research, the participant pointed to the publication in Cell of research on how a type of bat coronavirus, HKU5, could infect human cells as a case where oversight, in their opinion, was insufficient (Chen et al., 2025). A key thread in the discussion was the importance of including dissenting voices in the development and ongoing refinement of dissemination policies, with a few participants stressing that broad buy-in, especially from nontraditional publishers, may be achieved only if those voices are meaningfully engaged, heard, and integrated into the oversight process through sustained communication and collaboration.

Several participants emphasized the importance of integrated policies developed collaboratively by government, academia, industry, and civil society, with special attention to certain platforms that may lack formal oversight. Some participants noted that a key risk in dissemination arises when information, even if originally considered low risk due to being obscure or difficult to access, becomes easily discoverable and integrable, particularly by AI systems that can aggregate dispersed content. In such cases, the transition from obscurity to


discoverability can significantly elevate the potential for misuse. In addition, some participants highlighted the concept of managed risk, whereby access to sensitive outputs is selectively controlled. However, they acknowledged operational hurdles to this approach, including deciding who defines legitimate users and who is responsible for maintaining the infrastructure. A few participants stressed that risk is dynamic and often additive—for example, the effects of adding new data on previously published results—and suggested that dissemination policies could consider both individual outputs and the ability of new publications to enable dangerous linkages when layered onto existing knowledge. As an example, one participant suggested that, “if a previous study publishes mechanism A to B, and a new study intends to publish B to C,” then determining whether the capability to go from A to C is dangerous would be important.
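
The “A to B, B to C” example can be read as a reachability question over published capabilities: the risk of a new publication may lie less in the individual step it adds than in the path it completes. The toy sketch below formalizes that intuition as a capability graph; the node names and the designation of a “dangerous endpoint” are illustrative assumptions.

```python
# Toy model of additive dissemination risk as graph reachability.
# Nodes are capabilities; edges are published methods linking them. The question
# is whether a proposed publication completes a path to a dangerous endpoint.
from collections import defaultdict, deque

def reachable(edges: list[tuple[str, str]], start: str) -> set[str]:
    """Breadth-first search over the capability graph from a starting capability."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

published = [("A", "B")]                 # an earlier study: mechanism A -> B
proposed = ("B", "C")                    # the new study under review: B -> C
dangerous_endpoints = {"C"}              # illustrative designation only

before = reachable(published, "A") & dangerous_endpoints
after = reachable(published + [proposed], "A") & dangerous_endpoints
if after - before:
    print("New publication completes a path to:", after - before)
```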

Returning to the criteria that might be used to assess risks in the context of different dissemination outlets, several participants suggested that a cross-cutting criterion is information that could be considered “game-changing” in terms of reducing barriers to using in silico information to produce dangerous physical products. For example, a participant suggested that special consideration is warranted for AI-enabled advances that can bridge from digital to physical products, such as autonomous research and self-driving labs, or create iterative feedback loops between LLMs, biological design tools, and AI-enabled wet labs.

Many participants expanded on the idea of developing clear if-then commitments, as proposed in a 2025 National Academies report and elsewhere;5,6 tripwires for evaluating in silico models based on filters or other approaches to flag opportunities for dangerous outcomes; and standardized benchmarks to quantify risk more effectively. For example, one area warranting exploration is the set of capabilities that can be used to circumvent existing biosecurity or biodefense measures. Other key considerations discussed included advances that fill gaps in existing data or capabilities or that significantly alter the expected “return on investment” for doing harm. One participant suggested monitoring whether in silico models could be used to identify ways to evade or dysregulate the innate immune response. However, another participant noted that implementing this use would be challenging because making predictions across the range of human responses is complex. Another participant noted that some capabilities may emerge after dissemination that the original designers of a tool could not have anticipated, alluding to the design of protein binders through backpropagation using structure prediction models such as AlphaFold. This approach repurposes computational tools originally designed to predict a protein’s structure from an amino acid sequence, applying them in reverse to identify which amino acid sequence will produce a protein with a desired structure or function, thereby accelerating protein design.
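
The backpropagation-based design alluded to above inverts a predictive model: rather than predicting structure or function from a fixed sequence, the sequence itself becomes the optimizable variable, and gradients are followed through the predictor toward a desired output. The sketch below uses a small untrained network as a stand-in for a trained structure or function predictor; it illustrates only the shape of the optimization loop, not any real design tool.

```python
# Conceptual sketch of design-by-backpropagation (toy stand-in predictor).
# A real workflow would differentiate through a trained structure/function
# model; here an untrained network merely illustrates the optimization loop.
import torch
import torch.nn as nn

SEQ_LEN, N_AA = 50, 20                       # sequence length, amino acid alphabet

surrogate = nn.Sequential(                   # stand-in for a trained predictor
    nn.Flatten(), nn.Linear(SEQ_LEN * N_AA, 64), nn.ReLU(), nn.Linear(64, 1)
)
for p in surrogate.parameters():             # the predictor itself stays fixed
    p.requires_grad_(False)

logits = torch.zeros(1, SEQ_LEN, N_AA, requires_grad=True)   # the design variable
optimizer = torch.optim.Adam([logits], lr=0.05)

for step in range(200):
    probs = torch.softmax(logits, dim=-1)    # soft (relaxed) sequence
    score = surrogate(probs)                 # predicted property to maximize
    loss = -score.mean()
    optimizer.zero_grad()
    loss.backward()                          # gradients flow back to the sequence
    optimizer.step()

designed = probs.argmax(dim=-1)              # discretize to amino acid indices
print(designed.shape)                        # torch.Size([1, 50])
```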

The irreversibility of dissemination was highlighted as a key issue, with one participant noting: “There is a fundamental asymmetry. A decision not to disseminate something can continually be revisited, whereas a decision to disseminate openly is something that can never be reversed.” The research lifecycle presents many opportunities to assess risk, with different actors being well positioned to influence decision-making at different stages. Provenance and metadata tagging were deemed to be critical control points for oversight and accountability. A participant suggested that National Security Decision Directive 189 (NSDD-189),7 which establishes a national policy that governs the dissemination of scientific, technical, and engineering information produced through federally funded fundamental research at colleges, universities, and laboratories, may also be relevant to decisions around the dissemination of in silico research. Conceptually, a participant suggested that criteria grounded in existing DURC and biosecurity principles can help guide whether in silico research output should be shared. The participant underscored the importance of considering risks to agricultural crops and other plants, animals, materials, and the environment in addition to those to human health and national security, which, in this participant’s view, have traditionally attracted the greatest focus. Some participants also drew a distinction between decisions about the conduct of research and those about its dissemination. In some contexts, it may

___________________

5 See https://nap.nationalacademies.org/read/28868/chapter/2#6 (accessed July 18, 2025).

6 See https://carnegieendowment.org/research/2024/09/if-then-commitments-for-ai-risk-reduction (accessed July 18, 2025).

7 See https://irp.fas.org/offdocs/nsdd/nsdd-189.htm (accessed July 18, 2025).


be determined that the work should be done but not shared. Further, “whether to develop the model is perhaps different from whether to disseminate the model,” which has been a common discussion topic with DURC.

Moving forward, participants suggested that publishers and other disseminators may benefit from expert-developed guidance to draw upon in informing their decisions, which can be grounded in existing frameworks. In addition, some suggested a global collaboration to evaluate flagged research, monitor and regulate the transition from in silico to wet lab work, and engage with funders to establish appropriate output filters (e.g., based on sequence). Built-in security protocols that facilitate data provenance—for example, metadata associated with how the model was used, attribution, and other elements—can be helpful, and some participants suggested prompts to authors and reviewers to consider potential risks and adhere to existing oversight mechanisms and best practices from other domains, such as cybersecurity or nuclear research.
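
A minimal sketch of the kind of provenance metadata mentioned here—which model was run, by whom, on what inputs, and under which access tier—is shown below. The field names and record format are assumptions for illustration; an actual provenance standard would require community agreement.

```python
# Illustrative provenance record attached to a disseminated model output.
# Field names are assumptions; an actual standard would be community-defined.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ProvenanceRecord:
    model_name: str
    model_version: str
    user_id: str                  # attribution: who ran the model
    access_tier: str              # e.g., "open", "gated API", "vetted partner"
    input_hash: str               # traceability without storing raw inputs
    stated_purpose: str
    timestamp: str

def make_record(model_name, version, user_id, tier, raw_input, purpose):
    return ProvenanceRecord(
        model_name=model_name,
        model_version=version,
        user_id=user_id,
        access_tier=tier,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        stated_purpose=purpose,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = make_record("protein-designer", "1.2.0", "lab-042", "gated API",
                  "MKTAYIAKQR...", "antibody engineering")
print(json.dumps(asdict(rec), indent=2))
```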

Question 3:
What Are the Current Challenges and Gaps in Assessing and Evaluating the Possible Benefits and the DURC and PEPP Potential of Disseminating In Silico Research in Biological Systems?

Participants explored many challenges and gaps in evaluating concerns about in silico research. In comparison to wet lab research, in silico research often advances more rapidly than review processes designed for physical research, and its lack of a physical location challenges governance. One significant gap related to in silico research is the lack of established risk assessment frameworks to determine the ease with which a design can be transformed into a physical product. Other gaps relate to the securing of models that, by default, operate outside physical spaces that can be easily controlled; the full enforcement of oversight policies across the broad spectrum of contexts in which these tools are used (e.g., different sectors, regulatory environments, countries); and the vetting of users for legitimacy. Many participants discussed the need for better ways to assess additive risks, research chokepoints (e.g., funding and peer review), and restrictions on ML-assisted designs. However, a participant noted that a simple modification may turn low-risk research into DURC, a possibility captured in the definition of DURC in the now rescinded United States Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential.

Many participants posited that current review and publication workflows may be inadequate for the risk assessment that is desired. For example, a participant suggested that applying the review process used for wet lab work to in silico work environments may be overly cumbersome, because in silico research often progresses more rapidly than wet lab experimentation and would often outpace conventional review processes. In addition, as noted by a participant, oversight approaches developed for journal publication may be insufficient or not well suited for the evolving dissemination channels. Preprint servers, in particular, were noted as operating quickly and lacking clear risk assessment guidelines. Participants commented that managed access platforms could help to balance the tension between openness and risk, and described how tiered access and secure infrastructure can provide dissemination outlets with tools to evaluate and customize what information is shared and how it is shared, based on the specific risks associated with each project. They suggested that shared principles are important factors in the creation of workflows to address risk assessment and other issues, such as reproducibility. Such shared-principle-based workflows help to identify when specific mitigation approaches, such as information redaction, may be beneficial and help to balance the importance of restriction in some instances with the overall ethos of open science. Several participants posited that greater alignment among the norms and practices used by various dissemination outlets, governments, and research ecosystems would create more robust and secure workflows that facilitate risk assessment and provide clear criteria for responsible dissemination. They suggested that concrete, real-world examples of in silico research that can and cannot be safely released would support harmonization of different institutional and governmental policies. The capabilities and limitations related to institutional resources are also important considerations.

BOX 1
Key Points Highlighted by Individual Participants*

All participants were invited to share one important takeaway from the workshop’s Day 1 discussions and their ideas for overcoming the barriers to implementing effective voluntary guidelines noted during discussions. Participants highlighted issues related to understanding and responding to risks, deciding what information to disseminate, and identifying the actors and mechanisms that could be involved in facilitating risk assessment, managed access, and responsible dissemination.

  • Qualitative rather than quantitative consideration of risks may improve risk assessments of model precision and technology readiness levels.
  • Measures aimed at slowing progress toward, if not fully preventing, bad outcomes could provide countermeasure research a head start.
  • A list of foundation model capabilities that could trigger the need for further risk assessments could be useful.
  • As in silico work moves into wet labs and its capabilities grow, chokepoints that minimize risk could be identified on a case-by-case basis.
  • AI is enabling advances that bridge in silico and wet lab work, exponentially multiplying the risks.
  • Tools with general capabilities and broad applications could be the most dangerous.
  • Biological research risks no longer scale linearly.
  • In silico capabilities and development may be underestimated.
  • A systematic, risk-informed approach is important for identifying trade-offs between open and restricted access.
  • Non-research entities, such as hobbyists or foreign states, could misuse in silico models.
  • Any frameworks could consider how to release previously restricted research.
  • Even artificially constructed, engineered microbes may benefit from an extinction assessment.
  • The communities from which data are derived could be afforded input into their uses and share in their potential benefits.
  • In silico research has more dissemination options than wet lab research.
  • Distinguishing the different risks posed by in silico and wet lab work is important.
  • The acknowledged need to assess outputs prompted the question, “How will journals and preprint servers find the guidance they need for responsible dissemination?”
  • The community could better organize, catalog, and annotate code and workflows.
  • The assumption that dual-use in silico research should be published can be challenged.
  • Openness could be the default sharing mode unless there are strong arguments against it.
  • Different stakeholders and actors have unique roles, and clarification is needed about who is granted the decision-making power to determine access, who enforces the necessary policies, and from whom potentially dangerous information should be withheld.

* This list is the rapporteurs’ summary of the main points made by individual participants, and the statements have not been endorsed or verified by the National Academies of Sciences, Engineering, and Medicine.


Finally, some participants noted areas in which cultural incentives may be misaligned with responsible research, posing career risks to scientists who pursue or evaluate DURC. For example, scientists who decide not to publish their research, or who raise concerns about the dual-use potential of others’ work, are not rewarded, and many institutions do not support scientists whose research may never be published. Participants also observed that biologists and computational researchers exhibit some cultural differences (e.g., in their preferred publishing and dissemination platforms and in how they evaluate the risks of sharing research results and tools) that may lead to differing decisions about dissemination.

PARTICIPANT REFLECTIONS

In closing reflections, Alex John London noted that the workshop’s first day focused on the various tools and outputs associated with in silico biological research, the many outlets through which they are disseminated, and the challenges of identifying, quantifying, and categorizing the resulting biosecurity concerns. Although some in silico biological research may qualify as DURC under the criteria discussed during the workshop, some participants considered the risks to be generally lower than those of physical research, and many participants stated that existing biosecurity frameworks may not always be fully applicable to the in silico space. As a result, stakeholders may benefit from different strategies to effectively minimize or manage biosecurity risks; assure the public that they have done so; and preserve scientific integrity, quality, openness, and research benefits.

Next Chapter: 3 Potential Solutions and Considerations for Implementation