The Science and Practice of Team Science (2025)

Suggested Citation: "5 Evaluating Team Science." National Academies of Sciences, Engineering, and Medicine. 2025. The Science and Practice of Team Science. Washington, DC: The National Academies Press. doi: 10.17226/29043.

5

Evaluating Team Science

This chapter identifies key outcomes, methods, and measures for evaluating team science training and performance. The chapter uses the term team science evaluation broadly to refer to the systematic assessment of team-based scientific research efforts (Stokols et al., 2008). This process involves measuring and analyzing various aspects of how science teams function and collaborate, the outcomes they produce, and/or the broader impacts they achieve. Team science evaluation can help reveal the dynamics within a science team; assess the effectiveness of the team’s collaboration processes; and determine how the team’s work contributes to advancing scientific knowledge, achieving institutional objectives, and generating societal benefits.

Team science evaluation can refer both to traditional evaluation efforts dictated by funders and to the measurement of team processes and outcomes as part of research investigating scientific collaboration. Drawing on the committee’s literature searches, this chapter guides research evaluators in deciding which phenomena to assess when determining whether grant recipients are effectively meeting their funding objectives. It also provides scholars studying scientific collaboration with a broad understanding of the constructs involved in team functioning, from inputs to processes to outcomes.

IMPORTANCE OF TEAM SCIENCE EVALUATION

The effectiveness of a science team can have far-reaching implications for a wide array of individuals and groups, all of whom have a vested interest in the team’s success (Falk-Krzesinski et al., 2011). For example,
the individual members of a science team—such as graduate students, postdoctoral researchers, faculty members, professional scientists, and administrators—are impacted directly by the team’s functioning and effectiveness. An early career researcher could face setbacks if a dysfunctional team limits opportunities for publishing or networking. On the other hand, involvement in a productive science team can enhance members’ professional growth, recognition, professional networks, and intellectual contributions. Additionally, the team as a whole can benefit from each member’s contributions and collaborative processes. Successful team science collaboration can lead to groundbreaking research outputs and shared achievements (Thayer et al., 2018; Xu et al., 2022). Beyond the team, scientific institutions, such as universities and research centers, rely on the success of science teams to build reputations, secure funding, and fulfill their missions of advancing knowledge (e.g., Jones et al., 2008; National Science Foundation, n.d.). The impact of team science extends to an institution’s ability to attract top-tier talent, secure multimillion-dollar grants (i.e., center grants), and maintain a leadership role in the advancement of knowledge. Entire scientific fields also stand to benefit from the efforts of science teams, which drive innovation, expand knowledge, and set new standards within the discipline (Stokols et al., 2008). Society at large is another critical party interested in science teamwork, as the discoveries made by science teams could lead to technological advancements, informed public policies, and solutions to pressing global issues, ultimately improving quality of life and addressing societal needs (Stokols et al., 2008). The stakes can be high: without effective team science, lifesaving discoveries or technological breakthroughs may remain out of reach.

Evaluating how science teams collaborate is vital for improving research partnerships across all science fields (Klein, 2008). Evaluation can help leaders and teams understand team dynamics, improve processes, adapt to challenges, manage risks, and foster inclusion and continuous improvement. Evaluating research outputs, such as publications, patents, and outreach efforts, can be vital for determining overall team effectiveness and highlighting the innovation and creativity fostered by team science. Evaluation can also help clarify the broader impacts of team science research, including societal, educational, and policy contributions, ensuring that wider goals are being met. This, in turn, can inform future resource allocation and funding decisions, ensuring the effective use of resources and justifying future investments. Moreover, evaluating teamwork processes, psychological states, and practices can help identify and refine best practices and successful strategies that could be shared and replicated across other team science initiatives (Klein, 2008; Stokols et al., 2008). Insights gained from team science evaluation may help lead to improvements in research
methodologies, processes, and administrative support, thereby optimizing the overall effectiveness and efficiency of collaborative work (Belcher et al., 2015; Love et al., 2022; Roelofs et al., 2019). In short, systematic evaluation of team science can enable researchers, institutions, and funders to better support and enhance the contributions of collaborative research to scientific advancement and societal benefit.

CHALLENGES OF TEAM SCIENCE EVALUATION

Evaluating team science effectively is complex because of the need to address impacts on many types of groups, team dynamics, research types, research and organizational contexts, project time frames, and more.

The effectiveness of team science can impact different groups at multiple levels (e.g., individual, team, institutional, scientific, societal), so evaluation may need to consider the impact of team science across these different levels. For example, a thorough understanding of team functioning goes beyond team-level factors (e.g., team output and collaboration) to consider individual (e.g., attributes, contributions, individual outcomes) and contextual factors (e.g., institutional characteristics and outcomes) that both shape and are shaped by team dynamics. Thus, a key challenge for evaluators is identifying the most relevant factors at each level—individual, team, and context—to be included in the evaluation process.

The dynamics of science teams are complex, as teams may be composed of individuals with different goals, cultural contexts, and disciplinary areas. For example, team members may be dispersed geographically or have status differences within the team (e.g., junior faculty vs. senior faculty, medical researcher vs. community member). Some teams have fluid membership, in that members may join or leave the research over time. Thus, tailored evaluation methods are needed for capturing team interactions and outputs effectively.

Evaluations need to adapt to the type of work teams are pursuing. For instance, a team focused on biomedical research may have different collaborative dynamics, communication styles, and success metrics than a team working in environmental science. Teams conducting biomedical research may involve clinical (e.g., doctors, nurses) and laboratory researchers with different working schedules. Environmental science teams may conduct field studies and plan for long-term data collection efforts.

The cultural context, including the norms and values that guide behavior within the team, can also vary widely depending on the team’s geographic location, institutional setting, and the professional or personal backgrounds of its members. Similarly, the disciplinary focus of a team influences the methodologies it uses; the nature of its research questions; and the types of outputs it generates, such as publications, patents, or policy
recommendations. These complexities often necessitate flexible, context-specific approaches to team science evaluation that can accommodate the variable and evolving nature of team science endeavors (Hall et al., 2018; Stokols et al., 2003).

Team science evaluation can consider the timing of measurement as well as the evolution of team goals and dynamics over time (Mâsse et al., 2008; Stokols et al., 2008). Longitudinal data collection can be resource intensive, and teams often operate in different contexts, making it difficult to apply a standardized evaluation approach (Cummings et al., 2013; Hall et al., 2018; O’Connor et al., 2003). Identifying and consistently measuring relevant metrics that accurately reflect performance is another hurdle, as performance encompasses both outcomes and processes that can shift over time. Additionally, attributing changes in performance to specific factors is complicated by the influence of various internal and external variables (Ilgen et al., 2005). Temporal patterns and lag effects further complicate the assessment, as the impact of actions may not be immediately apparent (Ilgen et al., 2005). Furthermore, as team goals and success criteria often evolve throughout a project, evaluations may need to adapt to these changes.

The organizational context represents another key consideration when evaluating team science (Lee & Jabloner, 2017). For example, organizations outside of academia may have established training programs, best practices, and accountability and evaluation procedures (e.g., Barry et al., 2024; Savannah River Site, 2020) that can support team science. These practices often derive from hierarchical, command-and-control structures, yet productivity, cohesion, and retention within teams ultimately depend on the skills and behaviors of the direct leader and team members. Consequently, many industry organizations invest heavily in training employees in nontechnical areas, such as project management, conflict resolution, positive leadership, and effective communication (Carucci, 2018; Day et al., 2021), all of which enhance collaboration and team science effectiveness (Delise et al., 2010).

Industry organizations frequently use anonymous employee surveys to gather feedback on what is working well and where improvements are needed, subsequently creating action plans to address identified gaps. Additionally, mechanisms for confidential peer feedback can contribute to continuous improvement by informing development plans. Regular performance discussions between employees and leaders provide ongoing real-time feedback that reinforces positive behaviors and allows leaders to address issues before they escalate. When more complex issues arise that team members cannot resolve themselves, experts from human resources and/or compliance and ethics departments can assist in investigating and mediating solutions. This emphasis on accountability can cultivate a culture
in industry that supports team science. Although these systems are not flawless, they can establish a solid foundation for collaboration (Schneider et al., 1996). This feedback loop helps maintain productivity and ensures a positive return on investment.

However, since companies rarely share data on the outcomes linked to these practices, it is challenging to scientifically evaluate the strengths and weaknesses of various industry approaches to improving team science. In one exception, Google LLC has publicly shared the results of its internal research on what makes some teams more successful than others, with a summary reported by Duhigg (2016) in The New York Times. Under Project Aristotle, researchers reviewed external literature and data from over a hundred teams throughout Google and found that psychological safety was the most important indicator of how a team would perform. High team psychological safety tended to track with equality in conversational turn-taking and with high average social sensitivity among members (Duhigg, 2016). Although it is rare for companies to release such information externally, the results from Project Aristotle reinforce and are consistent with best practices for effective team science captured in the academic literature (e.g., Edmondson & Roloff, 2008).

In summary, the multifaceted nature of scientific ecosystems, the varied goals and strategies pursued by different science teams, the complexities of team dynamics over time, and the need to account for organizational practices and industry norms make it challenging to establish universal metrics or approaches for team science evaluation. The variable and evolving nature of science teams renders a one-size-fits-all approach to team evaluation inadequate. Instead, effective evaluation requires tailored methods for capturing the specific interactions, processes, and outcomes relevant to each team. This may involve adapting existing evaluation frameworks to align with a team’s unique goals or developing entirely new metrics to accurately assess performance in context. As teams evolve—adapting to new challenges, incorporating new members, or shifting their research focus—the evaluation approach must also be flexible and adaptive to remain relevant and insightful, and to provide valuable feedback that guides the team’s ongoing development and success.

A FRAMEWORK FOR GUIDING TEAM SCIENCE EVALUATION

Three overarching types of criteria could be considered when assessing the effectiveness of a work team (Hackman, 1987; Kozlowski & Ilgen, 2006). The first criterion, referred to in this chapter as team dynamics, assesses whether the team’s social processes and emergent psychological states are effective and efficient in achieving the desired performance standards, while also fostering the team members’ ability to learn and collaborate on
future tasks. The second criterion assesses whether the team’s performance output meets or exceeds the performance standards expected by those who receive and/or review it. The third criterion considers whether, on balance, the group experience satisfies, rather than frustrates, the personal needs of individual members. Building on this framework and considering the multilayered nature of scientific ecosystems, the following sections identify methods and measures for evaluating (a) team dynamics (e.g., team composition, teamwork processes, emergent states); (b) team performance outputs (e.g., intellectual merit, broader impact); and (c) the team’s impact on individual members (e.g., career success, opportunities for learning and development).

From a national laboratory perspective, the committee views the absence of formal evaluation of team dynamics by laboratory leaders as a notable limitation. Traditionally, teams in national labs have been assessed on project outcomes, not on the quality of their collaboration. Adopting a more rigorous approach to evaluating teamwork from a team science perspective would be valuable, starting with an essential question: How effectively did the team function?

Evaluating Science Team Dynamics

Some teams “burn themselves out” in the process of completing the team task, ultimately compromising their effectiveness. Other teams can sustain their performance, learn from one another, and improve over time. Likewise, team performance may be high, but members may be so dissatisfied with the team’s dynamics that they do not want to continue working with the same members (this phenomenon is measured by what is known as team viability [Tekleab et al., 2009]).

The scientific literature on team functioning has identified attitudes, behaviors, and cognitive states (ABCs) that are “signatures” of highly effective teams (Salas et al., 2008).

Attitudes include psychological states such as cohesion, where members feel a strong bond and commitment to the team, and trust, which enables members to rely on one another; these attitudes play a pivotal role in sustaining high performance (Beal et al., 2003; De Jong et al., 2016). Additionally, psychological safety, an environment where team members feel safe to speak up, ask questions, and challenge ideas without fear of negative consequences, also enhances team performance (see also Chapter 4; Frazier et al., 2017).

Behaviors include core team processes that highlight interaction among members (Marks et al., 2001). For example, effective communication can ensure that information flows freely—reducing misunderstandings, aligning efforts, and improving performance (Marlow et al., 2018).
Coordination enables members to align their actions so that tasks are integrated seamlessly (Rico et al., 2008). Dysfunctional types of conflict, such as relational conflict (tension between team members) and process-related conflict (disagreements about logistics), reduce productivity, whereas task conflict (disagreements about the content of work) may increase performance under certain circumstances (de Wit et al., 2012).

Emergent cognitive states, such as shared mental models and transactive memory systems, allow teams to operate smoothly and efficiently by creating a common understanding of tasks, roles, and distributed knowledge (Mohammed et al., 2021). Information-sharing allows teams to capitalize on each member’s unique expertise, whereas team-learning enables members to collectively acquire and apply knowledge to achieve common goals, both of which enhance team performance (Mesmer-Magnus & DeChurch, 2009; Wiese et al., 2021).

The broader team literature demonstrates that teams that develop and nurture the ABCs of teamwork are well positioned to improve team well-being and performance. But research on teamwork specific to the science team context rarely assesses team emergent states and processes, focusing instead on performance outcomes captured via archival measures such as publications, patents, or grants. However, teams that excel do not focus on task completion alone—they also invest in sustaining these critical processes and psychological states over time (Salas et al., 2008).

Researchers can assess the quality of a team’s ABCs by collecting self-reported data via surveys. Table D-1 in Appendix D provides a sampling of survey-based assessments used in the team literature, including a construct/scale description, measurement specifics (number of items, dimensions, rating scale), and references indicating scale development and validation evidence. Following the popular input-mediator-output-input team framework (Ilgen et al., 2005), scales in Table D-1 are divided into seven main categories:

  1. compositional/individual difference surveys (e.g., team roles, collaboration readiness);
  2. emergent states (affective states such as trust and cohesion, cognitive states such as team-learning and transactive memory systems, and behavioral states such as workload-sharing and task interdependence);
  3. team behavioral processes (e.g., conflict, coordination);
  4. team climate and context (e.g., work group inclusion, team perceived virtuality);
  5. team leadership;
  6. team outcomes (e.g., affective outcomes such as team viability and collaboration, productivity outcomes such as performance); and
  7. composite team surveys, which measure a variety of dimensions, including team readiness, functioning, climate, and outcomes (e.g., TeamSTEPPS [Agency for Healthcare Research and Quality, n.d.]).

To facilitate an understanding of when these measures may be helpful, these categories are represented in the form of questions (e.g., Are team members positioned for effective teamwork? How happy are team members with how they worked together?).
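
The survey categories above can be tied to a concrete analysis step. Because these scales are completed by individual members but describe team-level constructs, a common workflow is to check within-team agreement before averaging responses up to the team level. The sketch below is a minimal illustration of that step, assuming hypothetical responses on a 5-point scale; the rwg(j) agreement index and the 0.70 cutoff shown are conventional but not universal choices.

```python
# Minimal sketch: aggregating individual survey responses to the team level.
# The rwg(j) index compares observed item variance against a uniform null
# distribution; the data and the 0.70 threshold here are illustrative.
import numpy as np

def rwg_j(item_scores: np.ndarray, n_anchors: int = 5) -> float:
    """Within-group agreement for a multi-item scale
    (rows = team members, columns = scale items)."""
    j = item_scores.shape[1]
    s2 = item_scores.var(axis=0, ddof=1).mean()   # mean observed item variance
    sigma_eu2 = (n_anchors**2 - 1) / 12.0         # variance of a uniform null
    ratio = s2 / sigma_eu2
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

# Hypothetical responses: 4 members x 3 psychological-safety items (1-5 scale)
team = np.array([[4, 5, 4],
                 [4, 4, 5],
                 [5, 4, 4],
                 [3, 4, 4]])

agreement = rwg_j(team)
if agreement >= 0.70:  # illustrative cutoff, not a fixed standard
    print(f"team mean = {team.mean():.2f}, rwg(j) = {agreement:.2f}")
else:
    print(f"low agreement (rwg(j) = {agreement:.2f}); a team mean may mislead")
```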

Although most scales in Appendix D are self-reported, some are designed to be completed by peers (e.g., the comprehensive assessment of team member effectiveness by Ohland et al. [2012] for self- and peer evaluation). For some constructs—such as conflict (e.g., Jehn & Mannix, 2001), transactive memory systems (e.g., Lewis, 2003), and psychological safety (e.g., Edmondson, 1999)—there is consensus on measurement, with most research utilizing the most popular scale. In contrast, constructs such as cohesion, team climate, team viability, and team performance currently have no agreed-upon measure in the team literature (see Appendix D).

Considerable variability exists for these measures regarding validation evidence, with more robust methods showing consistent factor structures over multiple samples and demonstrating favorable psychometric properties and multiple forms of validity (e.g., content, discriminant, convergent, predictive). For example, seeking to validate 50-, 30-, and 10-item versions of their team process scale, Mathieu et al. (2020) utilized data from 700 teams across laboratory and field contexts to demonstrate a consistent confirmatory factor structure over 10 samples, along with high content and discriminant validity.

Context (e.g., level of virtuality, nature and difficulty of team tasks, resources and support, team culture) needs to be considered carefully when determining what scales to adopt. Because most of the scales in Appendix D were developed in the broader team literature, adaptations in wording may be needed to fit science teams.1 Although scale adaptations (e.g., changing the item context or referent, shortening scales, adding new items) are widespread in the team literature because of varying types of teams and tasks, adaptations may introduce threats to validity and psychometric properties (e.g., Heggestad et al., 2019). Justifying scale adaptations and providing evidence to support the validity of scales altered to fit a science team context would be possible but would require funding.

___________________

1 The items for many of the scales provided in Appendix D are freely available at https://ctsi.psu.edu/research-support/team-science-toolbox/assessment/
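
As a complement to the caution above about scale adaptation, one basic first check on an adapted scale is internal consistency. The following sketch computes Cronbach’s alpha for hypothetical responses to a four-item adapted scale; it illustrates the kind of evidence evaluators might gather, not a complete validation program.

```python
# Hedged sketch: re-checking internal consistency after adapting a scale,
# in the spirit of the cautions raised by Heggestad et al. (2019).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix for one scale."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 6 scientists answering a 4-item adapted coordination scale
responses = np.array([[4, 4, 5, 4],
                      [3, 3, 4, 3],
                      [5, 5, 5, 4],
                      [2, 3, 2, 3],
                      [4, 5, 4, 4],
                      [3, 4, 3, 3]])

# Values around 0.70 or higher are often treated as acceptable, though
# conventions vary by field and purpose.
print(f"alpha = {cronbach_alpha(responses):.2f}")
```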

Evaluating Science Team Performance Outputs

The outputs that science teams and multiteam systems produce—such as publications, invention reports, patents, funding, and developmental opportunities for junior scholars—can be valuable indicators of their effectiveness. These metrics can reflect a team’s ability to generate novel ideas and solutions and advance scientific knowledge. A consistent flow of quality scientific output suggests that the team is actively engaged in research with practical applications (Hall et al., 2018; Trochim et al., 2008) and may indicate that the team is successfully integrating different expertise and perspectives and has strong collaboration and synergy. Thus, performance outputs can serve as key measures of team productivity and the tangible impact of research efforts (Xu et al., 2022; Yang et al., 2022).

Publications

Publishing scientific articles is a fundamental outcome of science teamwork, as publications can disseminate research findings to the broader community, contribute to the body of knowledge, and establish the credibility of the team’s work. The collaborative process of writing and submitting manuscripts involves synthesizing different perspectives, adhering to rigorous methodologies, and presenting coherent and impactful narratives. The quality and quantity of publications can be evaluated by examining a variety of metrics, such as the number of articles published, the impact factors of the journals in which they appear, and the citation rates they receive. Furthermore, evaluating the collaborative process leading to publication—such as the division of labor, the integration of different disciplinary insights, and the ability to meet deadlines—provides valuable information on the team’s operational efficiency and cohesion (Hall et al., 2018; Nancarrow et al., 2013; Shuffler & Carter, 2018). Bibliometric analysis is a research evaluation methodology rooted in information science (Lyu et al., 2023). It uses statistical methods to analyze patterns in publications and citations, and it rests on the principle that the impact and influence of research can be quantified by examining how it is cited and used by other researchers.

There are different approaches to using publication output as a performance metric for science teams. Beyond simply counting the number of publications produced by a science team, researchers studying scientific collaboration have also assessed the quality of publications by considering their citation counts (Uzzi et al., 2013). Some scholars have used impact factors of the journals in which the publications appear (Llewellyn et al., 2020, 2024). Using impact factors and/or citation counts to evaluate science team performance has both advantages and disadvantages. On the positive side, these metrics can provide quantifiable measures of the potential
influence and reach of a team’s research. High citation counts suggest that the team’s work is being recognized and used by other researchers, indicating its relevance and impact within the scientific community. Similarly, publishing in high-impact journals can signal that the team’s research meets high standards of quality and significance, as these journals often have rigorous peer-review processes and are widely read. Demonstrating team performance through these metrics may be beneficial for securing funding, establishing collaborations, and enhancing the reputation of both the team and its home institution(s) (Carpenter et al., 2014). However, there are also multiple drawbacks to relying solely on citation counts and impact factors to evaluate science team performance. These metrics can be misleading, as they do not necessarily reflect the quality or innovation of the research (Donaldson & Cooke, 2014; Michalska-Smith & Allesina, 2017). For instance, citation counts for a team could be inflated by a few highly cited papers, which may not represent the overall contribution of the team. Additionally, a journal’s impact factor reflects the average citation rate of all articles within that journal, not the specific impact of any single article, which can make it a poor indicator of individual team performance in many cases (Waltman & Traag, 2021).
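
To make the bibliometric approach concrete, the sketch below computes a few of the publication metrics discussed above (publication count, citations per paper, and an h-index) from hypothetical records. In practice, such records would be pulled from a bibliographic database (e.g., Web of Science, Scopus, or OpenAlex), and the caveats about journal-level impact factors still apply.

```python
# Minimal sketch of team-level publication metrics from hypothetical records.
from dataclasses import dataclass

@dataclass
class Publication:
    title: str
    citations: int
    journal_impact_factor: float  # journal-level, not article-level (see caveats above)

def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

pubs = [Publication("Paper A", 42, 8.1),   # note: one highly cited paper
        Publication("Paper B", 7, 3.2),    # can dominate the team's mean
        Publication("Paper C", 0, 3.2),
        Publication("Paper D", 15, 5.6)]

cites = [p.citations for p in pubs]
print(f"{len(pubs)} papers, {sum(cites)} citations, "
      f"{sum(cites)/len(cites):.1f} cites/paper, h-index = {h_index(cites)}")
```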

For teams with interdisciplinary and/or transdisciplinary research goals, evaluating the degree to which publications integrate knowledge, methods, and perspectives from multiple disciplines to address complex research questions can be relevant (Laursen et al., 2022; Tremblay et al., 2011). Evaluation of interdisciplinarity can be approached through several key indicators and methods. For example, a qualitative evaluation of interdisciplinarity could involve peer review and expert evaluation, where feedback is solicited from experts in different disciplines to assess the perceived interdisciplinarity of the team’s work (Mansilla, 2006). These peer reviews and expert panels can provide valuable qualitative insights into how effectively the research integrates and advances knowledge across multiple fields. Additionally, self-reported interdisciplinarity can be included, where the research team conducts a self-assessment, reflecting on their interdisciplinary practices, the integration of knowledge from various disciplines, and the challenges and benefits they encountered while working across these different fields (e.g., Palmer et al., 2016).

Quantitative analysis of disciplinary differences can involve examining the disciplinary backgrounds of team members listed as authors on scientific publications, categorized by their departmental or institutional affiliations and areas of expertise. Additionally, analyzing the variety of citations and references in the team’s publications can reveal the extent to which the team draws on and contributes to multiple disciplines. Assessing the variety of publication venues, particularly the range of journals in which the team’s work is published, also provides insight into the research’s relevance across
different academic fields (Carr et al., 2018). This can be measured by categorizing journals by disciplinary focus and calculating the proportion of publications in each category. Considering impact factors and target audiences of journals, as well as citation counts, is important in this context, as inclusion in high-impact, interdisciplinary journals indicates that the research is accessible and valuable to a broad academic audience beyond a single discipline (e.g., Carr et al., 2017; Leahey et al., 2017; Okamura, 2019).
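
One way to operationalize the “variety of citations and references” described above is to summarize the distribution of disciplines among cited references with a diversity index. The sketch below uses Shannon entropy over hypothetical discipline labels; assigning references to disciplines (e.g., via journal subject categories) is itself a consequential modeling choice.

```python
# Sketch of one quantitative interdisciplinarity indicator: the disciplinary
# diversity of a team's cited references, summarized with Shannon entropy.
import math
from collections import Counter

def shannon_diversity(labels: list[str]) -> float:
    """Entropy of the discipline distribution; higher = more evenly spread."""
    counts = Counter(labels)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Disciplines of references cited across one team's publications (hypothetical)
refs = (["ecology"] * 30) + (["economics"] * 20) + (["computer science"] * 10)
print(f"diversity = {shannon_diversity(refs):.2f} nats "
      f"over {len(set(refs))} disciplines")
# Refinements such as Rao-Stirling diversity additionally weight how
# cognitively distant the disciplines are from one another.
```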

Invention Reports and Patents

Invention reports and patents can be valuable metrics for evaluating the success and impact of team science, particularly in fields where research has the potential to lead to practical applications and technological innovations. These indicators generally offer tangible evidence of a team’s ability to translate scientific discoveries into real-world solutions, reflecting the practical significance and economic value of their research (Fortunato et al., 2018; Tigges et al., 2019; Wuchty et al., 2007).

Invention reports serve as preliminary documentation of novel ideas or technologies developed by a team, and they provide a gate check before applying for a patent (e.g., Harvard University, n.d.; National Institutes of Health SEED, n.d.). These reports may be useful for evaluating the creativity and innovative capacity of a science team, as they capture the early stages of ideation and problem-solving. The number and quality of invention reports may indicate a team’s effectiveness in generating new ideas and its potential for future technological breakthroughs (University of Michigan Innovation Partnerships, 2024; Tigges et al., 2019).

Patents, on the other hand, represent a more advanced stage of innovation, providing legal protection for inventions and enabling commercialization. The number of patents filed and granted may serve as a key indicator of a team’s success, not only in developing novel technologies but also in advancing these ideas to a stage where they have clear utility and market potential (Allen et al., 2016; Vestal & Mesmer-Magnus, 2020). Patents also contribute to the broader impact of team science by facilitating the transfer of knowledge and technology from academic settings to industry and society, potentially leading to new products, processes, and services that address societal needs (National Center for Science and Engineering Statistics, 2024).

However, although invention reports and patents are important indicators of innovation, they have potential limitations in terms of evaluating team science across all fields. For example, newer fields of research may have higher invention potential compared with more established fields, by virtue of how much “blank space” exists. As a result, the number of
invention reports and patents could potentially be skewed by the type of scientific research being conducted, irrespective of a science team’s quality. Therefore, although publications, invention reports, and patents can be useful for assessing certain aspects of team science performance, these approaches can be complemented by broader evaluative approaches that include narrative accounts of success. A holistic perspective to team science evaluation can enable a more comprehensive evaluation of team contributions, acknowledging the diverse forms of impact that team science can achieve across different disciplines (e.g., Wooten et al., 2014).

Invention reports and patents are also used as key output metrics in industry to help appraise, as part of a broader evaluation matrix, the return on investment at the project, department, and organizational levels. Organizations can also track the conversion of patented and unpatented technologies into new commercial products that yield monetary gains (Nerkar & Shane, 2007), as well as the percentage of sales that come from newly introduced products. However, these output metrics need to be put into context with the macroeconomic environment, as changes in the economy will likely result in changes to inputs such as funding for research and development and outputs such as revenue (Mezzanotti & Simcoe, 2023). In addition, the quality of innovation management systems varies widely between and within companies and industries, which can strongly influence output metrics and be disconnected from the quality of team science taking place on a particular project. The quality of innovation management systems is important enough that the International Organization for Standardization (2019) has issued a comprehensive set of guidelines: Innovation management—Innovation management system. As a result, from a team science perspective, evaluations can be confounded by external and internal factors that hinder the ability to isolate how an organization’s structural changes directly impact the quality of team science and associated outcomes (Memon et al., 2024). Temporal impacts also need to be considered, as any change made to a procedure or process will usually take months or years to affect scientific and/or innovation output metrics (Miller et al., 2021). Consequently, it can be difficult to parse correlational and causal factors when using invention reports, patents, and other innovation metrics as a proxy for the effectiveness of team science.

Research Awards

Securing research funding is sometimes used as an output metric to evaluate a science team’s success. Effective team science can lead to the development of new fields or lines of research that attract funding and inspire new sponsored programs or institutional strategic initiatives. This
influx of resources enables teams to explore innovative ideas, support interdisciplinary collaborations, and build research capacity that would otherwise be unattainable. For example, the National Academies Keck Futures Initiative (NAKFI) was a 15-year program supported by a $40 million grant from the W. M. Keck Foundation. Its goal was to advance the future of science by supporting innovative interdisciplinary research ideas generated during think tank–style conferences. Each of these conferences focused on different real-world challenges. Attendees were able to compete for seed fund grants, allowing them to pursue what the NAKFI program viewed as bold and new research ideas (for more information, see National Research Council, 2018).

Funding serves as a catalyst for scholarly growth, providing the necessary support for conducting high-impact research, building infrastructure, and training the next generation of scientists. However, the true measure of success can lie in how well these financial resources are leveraged to generate meaningful scientific contributions, address complex societal challenges, and foster sustained academic and community engagement (Shrivastava et al., 2020; Tebes & Thai, 2018). Therefore, although the ability to secure research dollars is a critical component of evaluating team science, it needs to be considered in the context of how these resources are used to achieve broader intellectual and societal impacts.

Other Performance Output Considerations

As mentioned throughout this discussion, those researching and evaluating science teams ought to take a broad perspective when considering scientific outputs. This can include intellectual output beyond publications and patents. For example, science teams can sometimes produce spin-off projects based on new ideas generated, which sometimes lead to new grants. They might create new methodologies from combining approaches developed in separate fields or from addressing a need in their innovation process. Science teams can also be evaluated on outcomes such as development of new scientific software or unique datasets that came out of their project, either out of need or from discovery; datasets are particularly important given that they require intensive interdisciplinary collaboration.

A related form of evaluation pertains to how the team members are changing. Science teams can create an informal learning culture where knowledge and skills are transferred as the research progresses. This could include learning new methods for research or creating a shared vocabulary that transcends disciplinary language barriers (Dietl et al., 2023; Tannenbaum et al., 2009). In this process, science team members are developing collaboration competencies. While these competencies are important, they might not always be tracked consistently (Strimel et al., 2014). This
informal learning, however, is crucial to the scientific ecosystem, as it is the main way new knowledge is acquired after graduate school.

Although some funders already account for educational outcomes (e.g., students graduated, theses supported), evaluation of team science also needs to consider interdisciplinary integration in student research (e.g., Laursen et al., 2023). Team science projects commonly lead to the development of new courses and even new degree programs based on the need for interdisciplinary graduate training. What is more, the enhanced professional networks developed through collaboration cannot be fully captured via coauthorship networks. These social–professional ties are important to track because they can lead to collaborations that are not normally traced with traditional methods.

New subgroups in professional societies can be produced when a research topic has broader implications and may lead to new interdisciplinary societies (e.g., funding for smart cities research led to the development of technical groups and dedicated meetings in computer science societies). In this vein, researcher output can also lead to articles reflecting on the implication of findings for science policy and to informal science articles, such as online magazines or blogs, or popular press articles in science magazines. These kinds of outreach activities become increasingly important to demonstrate the broader impact science teams can have on the community at large and to educate the public about new discoveries.

Impact on Financial and Human Resources for Communities and Institutions

When evaluating team science output, it is also crucial to measure and document the impact these efforts have on the institutions and communities in which the science teams operate. For example, science team output may directly impact training and development opportunities for junior scholars. Indicators such as the creation of new academic programs and classes at universities, the integration of cutting-edge research findings into curricula, and the inclusion of team science outcomes in textbooks and other education resources are key metrics to consider (Steer et al., 2017; Wallen et al., 2019). These indicators could help assess whether the team is producing new knowledge that is being disseminated to relevant communities and whether the next generation of scientists is gaining a comprehensive understanding of interdisciplinary approaches and collaborative research methods.

Furthermore, it is important to track and document the competitive advantages that team science can provide for institutions, particularly in terms of financial profitability and resource acquisition. Many of the metrics typically used to assess the performance of science teams, such
as publications, patents, and grant funding (noted previously), can also be applied at the center, institution, or community levels. For example, metrics such as increased grant funding, which is often a result of successful interdisciplinary collaborations, can serve as valuable indicators of the initiative’s financial impact. This approach provides valuable insights into whether science teams are achieving their desired outcomes and whether institutional-level interventions are effectively driving these results.

Additionally, documenting the recruitment and retention of talented personnel highlights the role of team science in making institutions more attractive to scholars and professionals. These factors are crucial for understanding how well an institution is leveraging its commitment to innovative, collaborative research to enhance its overall standing and resource base, as well as its ability to positively impact local and global communities through workforce benefits and expanded scientific reach (e.g., Parilla & Haskins, 2023; Valero & Van Reenen, 2019).

At the macro level, the impact of team science on connections among groups and disciplines could be measured and documented to evaluate whether large-scale initiatives are having the desired impact on the communities that have invested in them and on the communities in which they operate. Metrics such as the development of more interconnected publication networks of scientists from different disciplines, the strengthening of bonds within and across research groups, and the establishment of connections with relevant agencies and other invested groups are all potential indicators of team science success. As an example, the National Science Foundation (NSF) has funded the Research Coordination Networks program to advance a field or create new directions in research by supporting investigators across international, geographic, and organizational boundaries (National Science Foundation, n.d.). Metrics that assess these connections might provide insight into how effectively team science initiatives are breaking down silos, facilitating knowledge exchange, promoting the translation of scientific findings into practical applications, and improving the communities that they draw from and are embedded in. Therefore, by systematically tracking the development and evolution of connectivity among relevant parties, and their impact on science and on various communities, evaluators can better understand the role of team science in advancing scientific progress.
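
As a rough illustration of such connectivity metrics, the sketch below builds a small hypothetical co-authorship network and reports its density and the share of ties that span disciplines. The author names and disciplines are invented for illustration; real analyses would construct the edge list from publication records and would likely use richer network measures.

```python
# Hedged sketch: simple connectivity indicators on a co-authorship network.
import networkx as nx  # assumes the networkx package is installed

G = nx.Graph()
G.add_nodes_from([("ana", {"discipline": "biology"}),
                  ("ben", {"discipline": "biology"}),
                  ("chen", {"discipline": "statistics"}),
                  ("dee", {"discipline": "engineering"})])
G.add_edges_from([("ana", "ben"), ("ana", "chen"), ("chen", "dee")])

# Density: how connected the collaboration network is overall
print(f"density = {nx.density(G):.2f}")

# Cross-disciplinary ties: one simple indicator of silo-breaking
cross = sum(1 for u, v in G.edges()
            if G.nodes[u]["discipline"] != G.nodes[v]["discipline"])
print(f"{cross} of {G.number_of_edges()} ties span disciplines")
```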

Evaluating Broader Impacts

Defining societal impact in research and academia goes beyond the traditional metrics of success, such as impact factor, publication counts, invited talks, and patents. Societal impact is the tangible influence that research has on society, encompassing the ways in which knowledge creation improves public well-being, shapes policies, and drives societal advancements. In 2020,
the United Nations (UN) published the report Shaping the Trends of Our Time, which lists five interconnected megatrends—the greatest challenges facing humanity—as (a) climate change and environmental degradation, (b) demographic trends and population aging, (c) sustainable urbanization, (d) digital technologies, and (e) inequalities. The UN prioritized these megatrends, which will shape the world for the next several decades, because they are human challenges that can be modified by human choices (United Nations, 2020). Effective team science can help address each of these societal challenges by accelerating advances in science and technology and developing practical and scalable solutions. While traditional metrics highlight the academic reach of research, they often fail to capture how that research contributes to solving real-world problems such as those identified by the UN. To comprehensively assess societal impact, one must consider alternative measures, such as community engagement, public policy influence, and the practical application of research findings in nonacademic sectors (D’Este & Robinson-García, 2023).

One of the key challenges in measuring societal impact is the significant time lag between when research is conducted and when its broader societal effects become evident (Siar, 2023). For instance, a study on public health might not show its true societal benefits until years after the findings have been applied in policy reforms or health care interventions. Similarly, environmental research could take decades before its recommendations lead to significant ecological or regulatory changes. As a result, tracking societal impact requires a long-term view and an understanding that traditional, short-term metrics may not capture the full scope of influence.

Another difficulty lies in determining who is responsible for tracking societal impact. Unlike academic citations, which are relatively easy to quantify through established databases, societal impact is diffuse, involving multiple entities such as government agencies, nonprofits, industry partners, and community organizations. Researchers may lack the resources or time to systematically track how their work is being utilized outside academia (e.g., Oliver et al., 2014). As such, institutions may need to play a larger role in this effort, possibly through dedicated offices or staff that monitor the application of research in broader societal contexts. Collaboration with external partners who are involved in the implementation of research could also provide valuable insights.

Supporting such tracking efforts is another significant hurdle, as these efforts require a focused effort (Celeste et al., 2014), particularly to go beyond anecdotal evidence and develop robust methods for documenting influence. Universities and research institutions need to allocate resources to track the diffusion and implementation of research across various societal domains. This could involve establishing partnerships with policymakers, industries, or public organizations that can monitor and report on the
outcomes of research projects. Furthermore, platforms where researchers can share the broader impacts of their work, such as policy briefs or community impact reports, could help document societal contributions more systematically.

Ultimately, the challenge of defining and tracking societal impact invites a broader conversation about what one values in research. Expanding the understanding of impact to include societal contributions requires a shift in both institutional priorities and reward systems. Instead of focusing solely on traditional academic outputs, institutions can recognize and incentivize efforts that engage with communities, influence public policy, and address pressing societal challenges (Ozer et al., 2023). Broadening the criteria for success fosters a research ecosystem that not only advances knowledge but also drives meaningful change in the world.

The Translational Science Benefits Model2 is a framework for characterizing the range of impacts that research can have on society. The model considers a number of domains that research can influence, including policy, economic, health or clinical, and community domains. One valuable aspect of this model is its ability to help researchers think, in the planning phase, about how their research will have impact and how they will measure the benefits of that impact. For example, in health care, researchers can examine whether scientific papers are referenced in policy documents and whether these policies lead to improved health care or societal outcomes.

___________________

2 More information about the Translational Science Benefits Model is available at https://translationalsciencebenefits.wustl.edu/

Societal trust in science is another critical component of understanding science’s broader impact. If people in society have a high degree of trust in scientific findings, research-backed policies and clinical practices may be more likely to be accepted and implemented effectively. Conversely, if public trust in science is low, even the most robust policies and practices may struggle to gain traction (Anderson et al., 2024; Goldenberg, 2023).

Evaluating a Science Team’s Impact on Individual Members

Psychological Impact

Evaluating the impact of being part of a science team on an individual team member can involve several important dimensions (Tay et al., 2023). One key aspect is individual well-being, which encompasses job satisfaction, engagement, and a sense of meaning and professional identity (Gibson et al., 2023). Being part of a collaborative team can enhance these elements by providing a supportive environment where scientists feel valued and
connected to a larger purpose. When team members experience high levels of well-being, they are more likely to be motivated, productive, and committed to their work (Gibson et al., 2023). Another important factor is the individual’s experience of inclusion, or even fusion within the team (Swann et al., 2009). When scientists perceive a fair and inclusive environment, they are more likely to contribute effectively and feel a sense of belonging, which is crucial for maintaining a healthy and positive team dynamic (Gibson et al., 2023; Salas et al., 2015).

Learning and development are also critical outcomes of team participation. Scientists not only advance their own skills and knowledge through collaboration but also contribute to the development of those around them (Bennett & Gadlin, 2012). This reciprocal process of learning fosters a team culture of continuous improvement and innovation, which can have long-lasting benefits for both the individual and the group (Thayer et al., 2018).

Professional Networks

Being part of a science team could significantly impact an individual scientist by broadening and diversifying their network of collaborators. This expanded network can be tracked over time, providing insights into the development of the scientist’s professional relationships (Fortunato et al., 2018; Okamoto & Centers for Population Health and Health Disparities Evaluation Working Group, 2015). As a scientist works with new colleagues across different disciplines, their network might grow not only in size, but also in perspective diversity, which can lead to richer, more innovative collaborations (van Knippenberg et al., 2020). This diversification of connections may enhance the scientist’s ability to tackle complex research problems by bringing in fresh perspectives and expertise from a wide array of fields.

Furthermore, the social capital gained from being part of a diverse science team or group may facilitate the recruitment and retention of even more diverse team members over time (e.g., Harris et al., 2025). As scientists work together with new colleagues from different networks, they not only produce collaborative outputs such as publications, but also can establish lasting professional relationships. This is particularly evident in academia, where the freedom to pursue various research topics and the support provided by universities or agencies—such as through multidisciplinary workshops and collaborative programs—make it easier to connect with new people (Ertas et al., 2003; Hannon et al., 2018). Additionally, the transient nature of student populations in academic settings provides a continuous influx of new talent, further enriching the team’s diversity and collaborative potential.

Career Success

Strengthening a network of collaborators, including through participation in team science, can also have meaningful outcomes for a scientist’s career. As collaborative relationships deepen and become more productive, they can lead to more significant achievements, such as coauthored publications, joint grant applications, and shared research projects, that are reflected on the scientist’s curriculum vitae. The value of these connections is often seen in sustained collaborations that produce regular outputs over many years, which can be particularly beneficial in an academic setting, where long-term partnerships may be more feasible (Bu et al., 2018). In industry, for example, it is generally a baseline expectation that employees develop a proven track record of collaborating effectively; this ability can be considered in recruitment and promotion processes (Klein & Falk-Krzesinski, 2017). A key differentiator in hiring can be an individual’s experience with complex collaborations in team science, such as inter- or transdisciplinary work, especially with teams that are geographically dispersed. This may be especially true for global industrial firms, where team science can stretch across disciplines and geographies (Hung et al., 2021; Jones et al., 2008; Mazzucchelli et al., 2021).

Potential Limitations

It is important to recognize certain limitations when evaluating the impact of team science at the individual level. While many aspects discussed are inherently positive, such as the expansion of professional networks and the promotion of learning and development outcomes, some scholars have highlighted significant trade-offs and potential negative consequences associated with team participation (Benson et al., 2016; Conn et al., 2019). For instance, Forscher et al. (2023) underscored the risk of unaccountable leadership within large-scale team science initiatives, and Mäkinen et al. (2025) identified challenges faced by untenured faculty in gaining recognition for their contributions to interdisciplinary research. Additionally, research has shown that for junior employees, membership in multiple teams is associated with greater role ambiguity, lower job performance, and higher absenteeism (van de Brake et al., 2020). Berkes et al. (2024) demonstrated that researchers who engage in interdisciplinary work early in their careers tend to have less career success and stop publishing sooner than those who initially stay within their discipline. Ensuring that early career researchers receive guidance, recognition, and resources can help mitigate these risks and promote more sustainable career paths in interdisciplinary research. Taken together, these findings suggest that the trade-offs inherent in team science participation warrant consideration.

CONCLUSION AND RECOMMENDATION

Conclusion 5-1: Data collection and evaluation, supported by both institutions and science team leaders, are critical for answering questions about key features of science teams (see the illustrative sketch following this list):

  • How social processes on the team are unfolding (e.g., team members’ perceptions of their experiences participating on the team, team member satisfaction, psychological safety, trust).
  • What the team is producing (e.g., successfully completed team objectives, publications, patents and invention reports, research grant applications and awards, educational outcomes).
  • What impact the team is having on individual members (e.g., well-being, skill and knowledge advancement, growth of professional networks, career success).
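
To make the three dimensions above concrete, the sketch below shows one way an evaluation record covering a single assessment wave might be structured. The field names, scales, and values are hypothetical assumptions for illustration; the committee does not prescribe any particular format or instrument.

```python
# A minimal sketch, with hypothetical field names and 1-5 survey scales,
# of a per-wave evaluation record spanning the three dimensions in
# Conclusion 5-1: social processes, team products, and member impact.
from dataclasses import dataclass, field

@dataclass
class TeamEvaluationRecord:
    wave: str                        # e.g., "2025-Q2 review"
    # Social processes (mean survey scores on a hypothetical 1-5 scale).
    satisfaction: float
    psychological_safety: float
    trust: float
    # Team products (counts for the reporting period).
    publications: int = 0
    patents_and_invention_reports: int = 0
    grant_applications: int = 0
    # Impact on individual members (coded or free-text entries).
    member_development_notes: list[str] = field(default_factory=list)

record = TeamEvaluationRecord(
    wave="2025-Q2 review",
    satisfaction=4.1,
    psychological_safety=4.4,
    trust=4.0,
    publications=3,
    grant_applications=1,
    member_development_notes=["Two trainees led first-author manuscripts"],
)
print(record.wave, record.publications)
```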

Recommendation 5-1: Funding agencies, including the National Science Foundation, the National Institutes of Health, and the many other agencies and foundations that support research, should require that the science teams they support develop an evaluation plan to assess their effectiveness and impact. The plans should incorporate team dynamics (e.g., social processes), team performance (e.g., bibliometric indicators), and impact on members (e.g., learning and development outcomes). Regular review periods should be established with the team to monitor and track progress and team effectiveness.
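
As one hedged illustration of what such regular monitoring might look like in practice, the sketch below tallies a team's logged outputs by review period. The period labels, output categories, and log format are assumptions made for this example, not elements of the recommendation itself.

```python
# Illustrative only: aggregate a team's outputs by review period so that
# progress can be compared across periods. Data and categories are
# hypothetical.
from collections import defaultdict

# (review_period, output_type) pairs from a hypothetical tracking log.
outputs = [
    ("Year 1", "publication"), ("Year 1", "grant_application"),
    ("Year 2", "publication"), ("Year 2", "publication"),
    ("Year 2", "invention_report"),
]

tally = defaultdict(lambda: defaultdict(int))
for period, output_type in outputs:
    tally[period][output_type] += 1

for period in sorted(tally):
    summary = ", ".join(f"{k}: {v}" for k, v in sorted(tally[period].items()))
    print(f"{period} -> {summary}")
```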

REFERENCES

Agency for Healthcare Research and Quality. (n.d.). TeamSTEPPS Teamwork Perceptions Questionnaire.

Allen, T. J., Gloor, P., Fronzetti Colladon, A., Woerner, S. L., & Raz, O. (2016). The power of reciprocal knowledge sharing relationships for startup success. Journal of Small Business and Enterprise Development, 23, 636–651. https://doi.org/10.1108/JSBED-08-2015-0110

Anderson, J. D., Malone, T., & Akridge, J. T. (2024). Strategies for land-grant universities to foster public trust. Journal of Agricultural and Applied Economics, 1–16.

Barry, E. S., Varpio, L., Teunissen, P., Vietor, R., & Kiger, M. (2024). Preparing military interprofessional health care teams for effective collaboration. Military Medicine, 190(3-4), e804–e810. https://doi.org/10.1093/milmed/usae515

Beal, D. J., Cohen, R. R., Burke, M. J., & McLendon, C. L. (2003). Cohesion and performance in groups: A meta-analytic clarification of construct relations. Journal of Applied Psychology, 88, 989–1004.

Belcher, B. M., Rasmussen, K. E., Kemshaw, M. R., & Zornes, D. A. (2015). Defining and assessing research quality in a transdisciplinary context. Research Evaluation, 25(1), 1–17.

Bennett, L. M., & Gadlin, H. (2012). Collaboration and team science: From theory to practice. Journal of Investigative Medicine: The Official Publication of the American Federation for Clinical Research, 60(5), 768–775. https://doi.org/10.2310/JIM.0b013e318250871d

Benson, M. H., Lippitt, C. D., Morrison, R., Cosens, B., Boll, J., Chaffin, B. C., Fremier, A. K., Heinse, R., Kauneckis, D., Link, T. E., Scruggs, C. E., Stone, M., & Valentin, V. (2016). Five ways to support interdisciplinary work before tenure. Journal of Environmental Studies and Sciences, 6, 260–267. https://doi.org/10.1007/s13412-015-0326-9

Berkes, E., Marion, M., Milojević, S., & Weinberg, B. A. (2024). Slow convergence: Career impediments to interdisciplinary biomedical research. Proceedings of the National Academy of Sciences, 121(32), e2402646121. https://doi.org/10.1073/pnas.2402646121

Bu, Y., Ding, Y., Liang, X., & Murray, D. S. (2018). Understanding persistent scientific collaboration. Journal of the Association for Information Science and Technology, 69(3), 438–448.

Carpenter, C. R., Cone, D. C., & Sarli, C. C. (2014). Using publication metrics to highlight academic productivity and research impact. Academic Emergency Medicine, 21(10), 1160–1172. https://doi.org/10.1111/acem.12482

Carr, G., Blanch, A. R., Blaschke, A. P., Brouwer, R., Bucher, C., Farnleitner, A. H., Fürnkranz-Prskawetz, A., Loucks, D. P., Morgenroth, E., Parajka, J., Pfeifer, N., Rechberger, H., Wagner, W., Zessner, M., & Blöschl, G. (2017). Emerging outcomes from a cross-disciplinary doctoral programme on water resource systems. Water Policy, 19(3), 463–478. https://doi.org/10.2166/wp.2017.054

Carr, G., Loucks, D. P., & Blöschl, G. (2018). Gaining insight into interdisciplinary research and education programmes: A framework for evaluation. Research Policy, 47(1), 35–48. https://doi.org/10.1016/j.respol.2017.09.010

Carucci, R. (2018). When companies should invest in training their employees — and when they shouldn’t. Harvard Business Review. https://hbr.org/2018/10/when-companies-should-invest-in-training-their-employees-and-when-they-shouldnt

Celeste, R. F., Griswold, A., & Straf, M. L. (2014). Measuring research impacts and quality. Committee on Assessing the Value of Research in Advancing National Goals. https://www.ncbi.nlm.nih.gov/books/NBK253890/

Conn, V. S., McCarthy, A. M., Cohen, M. Z., Anderson, C. M., Killion, C., DeVon, H. A., Topp, R., Fahrenwald, N. L., Herrick, L. M., Benefield, L. E., Smith, C. E., Jefferson, U. T., & Anderson, E. A. (2019). Pearls and pitfalls of team science. Western Journal of Nursing Research, 41(6), 920–940.

Cummings, J. N., Kiesler, S., Bosagh Zadeh, R., & Balakrishnan, A. D. (2013). Group heterogeneity increases the risks of large group size: A longitudinal study of productivity in research groups. Psychological Science, 24(6), 880–890. https://doi.org/10.1177/0956797612463082

Day, D., Bastardoz, N., Bisbey, T., Reyes, D., & Salas, E. (2021). Unlocking human potential through leadership training & development initiatives. Behavioral Science & Policy, 7(1), 41–54. https://doi.org/10.1177/237946152100700105

D’Este, P., & Robinson-García, N. (2023). Interdisciplinary research and the societal visibility of science: The advantages of spanning multiple and distant scientific fields. Research Policy, 52(2), 104609.

De Jong, B. A., Dirks, K. T., & Gillespie, N. (2016). Trust and team performance: A meta-analysis of main effects, moderators, and covariates. Journal of Applied Psychology, 101(8), 1134.

de Wit, F. R. C., Greer, L. L., & Jehn, K. A. (2012). The paradox of intragroup conflict: A meta-analysis. Journal of Applied Psychology, 97(2), 360–390.

Delise, L., Gorman, C. A., Brooks, A., Rentsch, J., & Steele-Johnson, D. (2010). The effects of team training on team outcomes: A meta-analysis. Performance Improvement Quarterly, 22(4), 53–80. https://doi.org/10.1002/piq.20068

Dietl, J. E., Derksen, C., Keller, F. M., & Lippke, S. (2023). Interdisciplinary and interprofessional communication intervention: How psychological safety fosters communication and increases patient safety. Frontiers in Psychology, 14, 1164288.

Donaldson, M. R., & Cooke, S. J. (2014). Scientific publications: Moving beyond quality and quantity toward influence. BioScience, 64(1), 12–13. https://doi.org/10.1093/biosci/bit007

Duhigg, C. (2016, February 25). What Google learned from its quest to build the perfect team. The New York Times. https://www.nytimes.com/2016/02/28/magazine/what-google-learned-from-its-quest-to-build-the-perfect-team.html

Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999

Edmondson, A. C., & Roloff, K. S. (2008). Overcoming barriers to collaboration: Psychological safety and learning in diverse teams. In E. Salas, G. F. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations (pp. 217–242). Routledge.

Ertas, A., Maxwell, T., Rainey, V. P., & Tanik, M. M. (2003). Transformation of higher education: The transdisciplinary approach in engineering. IEEE Transactions on Education, 46(2), 289–295.

Falk-Krzesinski, H. J., Contractor, N., Fiore, S. M., Hall, K. L., Kane, C., Keyton, J., Thompson Klein, J., Spring, B., Stokols, D., & Trochim, W. (2011). Mapping a research agenda for the science of team science. Research Evaluation, 20(2), 145–158. https://doi.org/10.3152/095820211X12941371876580

Forscher, P. S., Wagenmakers, E. J., Coles, N. A., Silan, M. A., Dutra, N., Basnight-Brown, D., & IJzerman, H. (2023). The benefits, barriers, and risks of big-team science. Perspectives on Psychological Science, 18(3), 607–623.

Fortunato, S., Bergstrom, C. T., Börner, K., Evans, J. A., Helbing, D., Milojević, S., Petersen, A. M., Radicchi, F., Sinatra, R., Uzzi, B., Vespignani, A., Waltman, L., Wang, D., & Barabási, A.-L. (2018). Science of science. Science, 359(6379). https://doi.org/10.1126/science.aao0185

Frazier, M. L., Fainshmidt, S., Klinger, R. L., Pezeshkan, A., & Vracheva, V. (2017). Psychological safety: A meta-analytic review and extension. Personnel Psychology, 70, 113–165.

Gibson, C., Thomason, B., Margolis, J., Groves, K., Gibson, S., & Franczak, J. (2023). Dignity inherent and earned: The experience of dignity at work. Academy of Management Annals, 17(1), 218–267.

Goldenberg, M. J. (2023). Public trust in science. Interdisciplinary Science Reviews, 48(2), 366–378.

Hackman, J. R. (1987). The design of work teams. In J. W. Lorsch (Ed.), Handbook of organizational behavior (pp. 315–342). Prentice-Hall.

Hall, K. L., Vogel, A. L., Huang, G. C., Serrano, K. J., Rice, E. L., Tsakraklides, S. P., & Fiore, S. M. (2018). The science of team science: A review of the empirical evidence and research gaps on collaboration in science. American Psychologist, 73(4), 532. https://doi.org/10.1037/amp0000319

Hannon, C. (2018). How to plan a design workshop. Chris Hannon Creative. https://www.chrishannoncreative.com/blog/2018/8/9/how-to-plan-a-design-research-workshop

Harris, H., Tan, I. M. C., Qiu, Y., Brouwer, J., Sosa, J. A., & Yeo, H. (2025). Faculty characteristics and surgery trainee attrition. JAMA Surgery. https://doi.org/10.1001/jamasurg.2025.0274

Harvard University. (n.d.). Inventions and invention reporting. https://osp.finance.harvard.edu/inventions-and-invention-reporting

Heggestad, E. D., Scheaf, D. J., Banks, G. C., Monroe Hausfeld, M., Tonidandel, S., & Williams, E. B. (2019). Scale adaptation in organizational science research: A review and best-practice recommendations. Journal of Management, 45(6), 2596–2627. https://doi.org/10.1177/0149206319850280

Hung, S. W., Cheng, M. J., Hou, C. E., & Chen, N. R. (2021). Inclusion in global virtual teams: Exploring non-spatial proximity and knowledge sharing on innovation. Journal of Business Research, 128, 599–610.

Ilgen, D. R., Hollenbeck, J. R., Johnson, M., & Jundt, D. (2005). Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology, 56, 517–543. https://doi.org/10.1146/annurev.psych.56.091103.070250

International Organization for Standardization. (2019). Innovation management—Innovation management system (ISO No. 56002:2019). https://www.iso.org/standard/68221.html

Jehn, K. A., & Mannix, E. A. (2001). The dynamic nature of conflict: A longitudinal study of intragroup conflict and group performance. Academy of Management Journal, 44(2), 238–251. https://doi.org/10.5465/3069453

Jones, B. F., Wuchty, S., & Uzzi, B. (2008). Multi-university research teams: Shifting impact, geography, and stratification in science. Science, 322(5905), 1259–1262.

Klein, J. T. (2008). Evaluation of interdisciplinary and transdisciplinary research: A literature review. American Journal of Preventive Medicine, 35(Suppl 2), S116–S123.

Klein, J. T., & Falk-Krzesinski, H. J. (2017). Interdisciplinary and collaborative work: Framing promotion and tenure practices and policies. Research Policy, 46(6), 1055–1061.

Kozlowski, S. W., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7(3), 77–124.

Laursen, B. K., Motzer, N., & Anderson, K. J. (2022). Pathways for assessing interdisciplinarity: A systematic review. Research Evaluation, 31(3), 326–343. http://dx.doi.org/10.1093/reseval/rvac013

Laursen, B. K., Motzer, N., & Anderson, K. J. (2023). Pathway profiles: Learning from five main approaches to assessing interdisciplinarity. Research Evaluation, 32(2), 213–227. https://doi.org/10.1093/reseval/rvac036

Leahey, E., Beckman, C. M., & Stanko, T. L. (2017). Prominent but less productive: The impact of interdisciplinarity on scientists’ research. Administrative Science Quarterly, 62(1), 105–139. https://doi.org/10.1177/0001839216665364

Lee, S. S.-J., & Jabloner, A. (2017). Institutional culture is the key to team science. Nature Biotechnology, 35(12), 1212–1214.

Lewis, K. (2003). Measuring transactive memory systems in the field: Scale development and validation. Journal of Applied Psychology, 88, 587–604. https://doi.org/10.1037/0021-9010.88.4.587

Llewellyn, N., Carter, D. R., DiazGranados, D., Pelfrey, C., Rollins, L., & Nehl, E. J. (2020). Scope, influence, and interdisciplinary collaboration: The publication portfolio of the NIH Clinical and Translational Science Awards (CTSA) program from 2006 through 2017. Evaluation & the Health Professions, 43(3), 169–179. https://doi.org/10.1177/0163278719839435

Llewellyn, N., Nehl, E. J., Dave, G., DiazGranados, D., Flynn, D., Fournier, D., Hoyo, V., Pelfrey, C., & Casey, S. (2024). Translation in action: Influence, collaboration, and evolution of COVID-19 research with Clinical and Translational Science Awards consortium support. Clinical and Translational Science, 17(1), e13700. https://doi.org/10.1111/cts.13700

Love, H. B., Fosdick, B. K., Cross, J. E., Suter, M., Egan, D., Tofany, E., & Fisher, E. R. (2022). Towards understanding the characteristics of successful and unsuccessful collaborations: A case-based team science study. Humanities and Social Sciences Communications, 9(1), 371.

Lyu, P., Liu, X., & Yao, T. (2023). A bibliometric analysis of literature on bibliometrics in recent half-century. Journal of Information Science. https://doi.org/10.1177/01655515231191233

Mäkinen, E. I., Evans, E. D., & McFarland, D. A. (2025). Interdisciplinary research, tenure review, and guardians of the disciplinary order. The Journal of Higher Education, 96(1), 54–81.

Mansilla, V. B. (2006). Assessing expert interdisciplinary work at the frontier: An empirical exploration. Research Evaluation, 15(1), 17–29. http://dx.doi.org/10.3152/147154406781776075

Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework and taxonomy of team processes. The Academy of Management Review, 26(3), 356–376.

Marlow, S. L., Lacerenza, C. N., Paoletti, J., Burke, C. S., & Salas, E. (2018). Does team communication represent a one-size-fits-all approach?: A meta-analysis of team communication and performance. Organizational Behavior and Human Decision Processes, 144, 145–170.

Mâsse, L. C., Moser, R. P., Stokols, D., Taylor, B. K., Marcus, S. E., Morgan, G. D., Hall, K. L., Croyle, R. T., & Trochim, W. M. (2008). Measuring collaboration and transdisciplinary integration in team science. American Journal of Preventive Medicine, 35(Suppl 2), S151–S160.

Mathieu, J. E., Luciano, M. M., D’Innocenzo, L., Klock, E. A., & LePine, J. A. (2020). The development and construct validity of a team processes survey measure. Organizational Research Methods, 23(3), 399–431. https://doi.org/10.1177/1094428119840801

Mazzucchelli, A., Chierici, R., Tortora, D., & Fontana, S. (2021). Innovation capability in geographically dispersed R&D teams: The role of social capital and IT support. Journal of Business Research, 128, 742–751.

Memon, M. A., Thurasamy, R., Ting, H., Cheah, J. H., & Chuah, F. (2024). Control variables: a review and proposed guidelines. Journal of Applied Structural Equation Modeling, 8(2), 1–18.

Mesmer-Magnus, J. R., & DeChurch, L. A. (2009). Information sharing and team performance: A meta-analysis. Journal of Applied Psychology, 94(2), 535–546.

Mezzanotti, F., & Simcoe, T. S. (2023, August). Research and/or development? Financial frictions and innovation investment (NBER Working Paper No. w31521). National Bureau of Economic Research. https://ssrn.com/abstract=4533235

Michalska-Smith, M. J., & Allesina, S. (2017). And, not or: Quality, quantity in scientific publishing. PloS One, 12(6). https://doi.org/10.1371/journal.pone.0178074

Miller, B. M., Metz, D., Schmid, J., Rudin, P. M., & Blumenthal, M. S. (2021). Measuring the value of invention: The impact of Lemelson-MIT prize winners’ inventions. RAND. https://www.rand.org/pubs/research_reports/RRA838-1.html

Mohammed, S., Rico, R., & Alipour, K. K. (2021). Team cognition at a crossroad: Toward conceptual integration and network configurations. Academy of Management Annals, 15(2), 455–501. https://doi.org/10.5465/annals.2018.0159

Nancarrow, S. A., Booth, A., Ariss, S., Smith, T., Enderby, P., & Roots, A. (2013). Ten principles of good interdisciplinary team work. Human Resources for Health, 11, 1–11. https://doi.org/10.1186/1478-4491-11-19

National Center for Science and Engineering Statistics. (2024, February 29). Knowledge transfer indicators: Putting information to use. https://ncses.nsf.gov/pubs/nsb20241/knowledge-transfer-indicators-putting-information-to-use

National Institutes of Health SEED. (n.d.). Intellectual property and iEdison invention report requirements. https://seed.nih.gov/small-business-funding/small-business-program-basics/grant-policy/ip#:~:text=Invention%20Reporting%20though%20iEdison,of%20the%20participating%20federal%20agencies

National Research Council. (2018). Collaborations of consequence: NAKFI’s 15 years igniting innovation at the intersections of disciplines. The National Academies Press. https://doi.org/10.17226/25239

National Science Foundation. (n.d.). Research coordination networks program. https://www.nsf.gov/funding/opportunities/research-coordination-networks/11691/nsf23-529

Nerkar, A., & Shane, S. (2007). Determinants of invention commercialization: An empirical examination of academically sourced inventions. Strategic Management Journal, 28(11), 1155–1166.

O’Connor, G. C., Rice, M. P., Peters, L., & Veryzer, R. W. (2003). Managing interdisciplinary, longitudinal research teams: Extending grounded theory-building methodologies. Organization Science, 14(4), 353–373. https://www.jstor.org/stable/4135115

Ohland, M. W., Loughry, M. L., Woehr, D. J., Bullard, L. G., Felder, R. M., Finelli, C. J., Layton, R. A., Pomeranz, H. R., & Schmucker, D. G. (2012). The comprehensive assessment of team member effectiveness: Development of a behaviorally anchored rating scale for self and peer evaluation. Academy of Management Learning & Education, 11, 609–630. https://doi.org/10.5465/amle.2010.0177

Okamoto, J., & Centers for Population Health and Health Disparities Evaluation Working Group. (2015). Scientific collaboration and team science: A social network analysis of the centers for population health and health disparities. Translational Behavioral Medicine, 5(1), 12–23.

Okamura, K. (2019). Interdisciplinarity revisited: Evidence for research impact and dynamism. Palgrave Communications, 5(1), 141. https://doi.org/10.1057/s41599-019-0352-4

Oliver, K., Innvar, S., Lorenc, T., Woodman, J., & Thomas, J. (2014). A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Services Research, 14, 1–12.

Ozer, E. J., Renick, J., Jentleson, B., & Maharramli, B. (2023). Scan of promising efforts to broaden faculty reward systems to support societally-impactful research. The Pew Charitable Trusts.

Palmer, M. A., Kramer, J. G., Boyd, J., & Hawthorne, D. (2016). Practices for facilitating interdisciplinary synthetic research: The National Socio-Environmental Synthesis Center (SESYNC). Current Opinion in Environmental Sustainability, 19, 111–122. https://doi.org/10.1016/j.cosust.2016.01.002

Parilla, J., & Haskins, G. (2023, February 9). How research universities are evolving to strengthen regional economies. Brookings. https://www.brookings.edu/articles/how-research-universities-are-evolving-to-strengthen-regional-economies/

Rico, R., Sánchez-Manzanares, M., Gil, F., & Gibson, C. (2008). Team implicit coordination processes: A team knowledge-based approach. The Academy of Management Review, 33, 163–184.

Roelofs, S., Edwards, N., Viehbeck, S., & Anderson, C. (2019). Formative, embedded evaluation to strengthen interdisciplinary team science: Results of a 4-year, mixed methods, multi-country case study. Research Evaluation, 28(1), 37–50.

Salas, E., Cooke, N. J., & Rosen, M. A. (2008). On teams, teamwork, and team performance: Discoveries and developments. Human Factors, 50(3), 540–547. https://doi.org/10.1518/001872008x288457

Salas, E., Grossman, R., Hughes, A. M., & Coultas, C. W. (2015). Measuring team cohesion: Observations from the science. Human Factors, 57(3), 365–374. https://doi.org/10.1177/0018720815578267

Savannah River Site. (2020, February 10). General employee training. https://www.srs.gov/general/training/TREGGETALPLN.pdf

Schneider, B., Ashworth, S. D., Higgs, A. C., & Carr, L. (1996). Design, validity, and use of strategically focused employee attitude surveys. Personnel Psychology, 49(3), 695–705. https://doi.org/10.1111/j.1744-6570.1996.tb01591.x

Shrivastava, P., Smith, M. S., O’Brien, K., & Zsolnai, L. (2020). Transforming sustainability science to generate positive social and environmental change globally. One Earth, 2(4), 329–340.

Shuffler, M. L., & Carter, D. R. (2018). Teamwork situated in multiteam systems: Key lessons learned and future opportunities. American Psychologist, 73(4), 390. https://psycnet.apa.org/doi/10.1037/amp0000322

Siar, S. (2023). The challenges and approaches of measuring research impact and influence on public policy making. Public Administration and Policy, 26(2), 169–183.

Steer, C. J., Jackson, P. R., Hornbeak, H., McKay, C. K., Sriramarao, P., & Murtaugh, M. P. (2017). Team science and the physician–scientist in the age of grand health challenges. Annals of the New York Academy of Sciences, 1404(1), 3–16.

Stokols, D., Fuqua, J., Gress, J., Harvey, R., Phillips, K., Baezconde-Garbanati, L., Unger, J., Palmer, P., Clark, M. A., Colby, S. M., Morgan, G., & Trochim, W. (2003). Evaluating transdisciplinary science. Nicotine & Tobacco Research: Official Journal of the Society for Research on Nicotine and Tobacco, 5(Suppl 1), S21–S39. https://doi.org/10.1080/14622200310001625555

Stokols, D., Hall, K. L., Taylor, B. K., & Moser, R. P. (2008). The science of team science: Overview of the field and introduction to the supplement. American Journal of Preventive Medicine, 35(Suppl 2), S77–S89. https://doi.org/10.1016/j.amepre.2008.05.002

Strimel, G., Reed, P., Dooley, G., Bolling, J., Phillips, M., & Cantu, D. V. (2014). Integrating and monitoring informal learning in education and training. Techniques: Connecting Education & Careers, 89(3), 48–54.

Swann, W. B., Jr., Gómez, A., Seyle, D. C., Morales, J., & Huici, C. (2009). Identity fusion: The interplay of personal and social identities in extreme group behavior. Journal of Personality and Social Psychology, 96(5), 995.

Tannenbaum, S. I., Beard, R. L., McNall, L. A., & Salas, E. (2009). Informal learning and development in organizations. In Learning, training, and development in organizations (pp. 303–331). Routledge.

Tay, L., Batz-Barbarich, C., Yang, L. Q., & Wiese, C. W. (2023). Well-being: The ultimate criterion for organizational sciences. Journal of Business and Psychology, 38(6), 1141–1157.

Tebes, J. K., & Thai, N. D. (2018). Interdisciplinary team science and the public: Steps toward a participatory team science. American Psychologist, 73(4), 549.

Tekleab, A. G., Quigley, N. R., & Tesluk, P. E. (2009). A longitudinal study of team conflict, conflict management, cohesion, and team effectiveness. Group & Organization Management, 34(2), 170–205.

Thayer, A. L., Petruzzelli, A., & McClurg, C. E. (2018). Addressing the paradox of the team innovation process: A review and practical considerations. American Psychologist, 73(4), 363.

Tigges, B. B., Miller, D., Dudding, K. M., Balls-Berry, J. E., Borawski, E. A., Dave, G., Hafer, N. S., Kimminau, K. S., Kost, R. G., Littlefield, K., Shannon, J., Menon, U., & Measures of Collaboration Workgroup of the Collaboration and Engagement Domain Task Force, National Center for Advancing Translational Sciences, National Institutes of Health (2019). Measuring quality and outcomes of research collaborations: An integrative review. Journal of Clinical and Translational Science, 3(5), 261–289. https://doi.org/10.1017/cts.2019.402

Tremblay, D., Roberge, D., Cazale, L., Touati, N., Maunsell, E., Latreille, J., & Lemaire, J. (2011). Evaluation of the impact of interdisciplinarity in cancer care. BMC Health Services Research, 11, 1–10. https://doi.org/10.1186/1472-6963-11-144

Trochim, W. M., Marcus, S. E., Mâsse, L. C., Moser, R. P., & Weld, P. C. (2008). The evaluation of large research initiatives: A participatory integrative mixed-methods approach. American Journal of Evaluation, 29(1), 8–28. https://doi.org/10.1177/1098214007309280

United Nations. (2020, September). Report of the UN Economist Network for the UN 75th anniversary: Shaping the trends of our time. United Nations. https://www.un.org/en/desa/unen/report

University of Michigan Innovation Partnerships. (2024). Impact report. https://innovationpartnerships.umich.edu/wp-content/uploads/2024/09/91224_V3_Digital_Impact_Report_2024.pdf

Uzzi, B., Mukherjee, S., Stringer, M., & Jones, B. (2013). Atypical combinations and scientific impact. Science, 342(6157), 468–472. https://doi.org/10.1126/science.1240474

Valero, A., & Van Reenen, J. (2019). The economic impact of universities: Evidence from across the globe. Economics of Education Review, 68, 53–67. https://doi.org/10.1016/j.econedurev.2018.09.001

van de Brake, H. J., Walter, F., Rink, F. A., Essens, P. J., & van der Vegt, G. S. (2020). Benefits and disadvantages of individuals’ multiple team membership: The moderating role of organizational tenure. Journal of Management Studies, 57(8), 1502–1530.

van Knippenberg, D., Nishii, L. H., & Dwertmann, D. J. (2020). Synergy from diversity: Managing team diversity to enhance performance. Behavioral Science & Policy, 6(1), 75–92.

Vestal, A., & Mesmer-Magnus, J. (2020). Interdisciplinarity and team innovation: The role of team experiential and relational resources. Small Group Research, 51(6), 738–775. https://doi.org/10.1177/1046496420928405

Wallen, K. E., Filbee-Dexter, K., Pittman, J. B., Posner, S. M., Alexander, S. M., Romulo, C. L., Bennett, D. E., Clark, E. C., Cousins, S. J. M., Dubik, B. A., Garcia, M., Haig, H. A., Koebele, E. A., Qiu, J., Richards, R. C., Symons, C. C., & Zipper, S. C. (2019). Integrating team science into interdisciplinary graduate education: An exploration of the SESYNC Graduate Pursuit. Journal of Environmental Studies and Sciences, 9, 218–233. https://doi.org/10.1007/s13412-019-00543-2

Waltman, L., & Traag, V. A. (2021). Use of the journal impact factor for assessing individual articles: Statistically flawed or not? F1000Research, 9, 366. https://doi.org/10.12688/f1000research.23418.2

Wiese, C. W., Burke, C. S., Tang, Y., Hernandez, C., & Howell, R. (2021). Team learning behaviors and performance: A meta-analysis of direct effects and moderators. Group & Organization Management, 47(3). https://doi.org/10.1177/10596011211016928

Wooten, K. C., Rose, R. M., Ostir, G. V., Calhoun, W. J., Ameredes, B. T., & Brasier, A. R. (2014). Assessing and evaluating multidisciplinary translational teams: A mixed methods approach. Evaluation & the Health Professions, 37(1), 33–49. https://doi.org/10.1177/0163278713504433

Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036–1039. https://doi.org/10.1126/science.1136099

Xu, F., Wu, L., & Evans, J. (2022). Flat teams drive scientific innovation. Proceedings of the National Academy of Sciences, 119(23), e2200927119. https://doi.org/10.1073/pnas.2200927119

Yang, Y., Tian, T. Y., Woodruff, T. K., Jones, B. F., & Uzzi, B. (2022). Gender-diverse teams produce more novel and higher-impact scientific ideas. Proceedings of the National Academy of Sciences, 119(36), e2200841119. https://doi.org/10.1073/pnas.2200841119

Next Chapter: 6 Forward-Looking Research Recommendations and Infrastructure Needs