The scope of an evaluation of the kind described in Chapters 3–6 is ambitious because the mandate of the U.S. Global Change Research Program (USGCRP) is ambitious: climate change affects everyone and all sectors of the economy. Over time, USGCRP has broadened the range and number of experts and interests it engages in developing the National Climate Assessment (NCA), and the assessment itself has grown. As a result, carrying out an evaluation of the NCA and associated products is a challenging task.

One overarching consideration is the need to continuously learn from the results of evaluations, incorporating knowledge gained into future efforts. This is important for a dynamic program such as USGCRP, which serves large-scale, complex audiences. USGCRP's work is nested within expansive networks of activity, and the Program is an important convener within those networks. Part of the value of incorporating a network perspective into evaluation work is that it embraces the dynamic nature of this web of relationships as it evolves over time. Who is included or left out, which events or entities emerge and disappear, and which structural holes open and close are all examples of how the context in which USGCRP operates can change rapidly, especially when USGCRP itself intervenes in these networks. How information disseminates, and the impact of strategies for reaching targeted audiences, may therefore change significantly over time.
The NCA itself also continues to change, producing new aspects to examine with every iteration. The cyclical nature of many USGCRP processes and products offers opportunities to measure whether changes made in response to an evaluation have resulted in improvement toward goals, and to evaluate the success of process innovations as well as product outcomes. Careful attention to the timing and sequencing of evaluation efforts will result in more impactful products, and each phase of an evaluation strategy can be informed by the knowledge that has been gained in previous phases.
Because the work of USGCRP is continuous but the NCA and specific products are released at particular times, attention to timing and sequencing is important in designing an evaluation. Important considerations include the sequences for both development and evaluation of the NCA, the needs of evaluation users, and the time lag between the release of an NCA and the decisions and actions that may be taken based on that assessment.
At any point in time, USGCRP is simultaneously disseminating an existing NCA and developing the next. Evaluators should think about how an evaluation design can both inform and leverage the NCA process. For example, NCA-related workshops and engagement activities
could be avenues for evaluators to conduct observation or focus groups to gather information on the use, utility, and gaps of prior NCAs as well as desires for future improvements.
In order to support continuous improvement of USGCRP products, one goal could be to release evaluation findings before key milestones, for example, before authors begin outlining NCA chapters or before a concept for the next generation of NCA tools is finalized. To achieve this, it would be important for evaluators, USGCRP, and key evaluation users to identify when evaluation outputs could inform decision-making during the NCA process.
Decisions that incorporate the NCA and other USGCRP products may not be apparent or measurable for some time after the release of a product. One approach to addressing this time-lag issue is to plan ongoing or sequenced evaluation activities to capture information about activities soon after they are concluded, as well as longer-term or ripple effects.
Planning and resourcing a single-stage evaluation is impractical, if only because the scope of uses and the range of audiences can only be uncovered in stages. Evaluating the uses of evolving USGCRP products implies the need to continuously learn from previous efforts, and some information can only be collected in steps, based on what has been learned previously.
Several methods may help sequence evaluation processes:
As described in Chapter 2, there has long been an interest in developing an assessment process that “incorporates ongoing evaluation of effectiveness, which facilitates
adaptive management; supports adaptation actions across various timescales; stimulates civic engagement; and enhances the nation’s capacity to effectively respond to the many challenges of accelerating global change” (Buizer et al., 2013, p. 16; see also Moss et al., 2019).
Collecting feedback on a continuous basis can support process improvement (see Chapter 3). For example, USGCRP may wish to collect feedback on its workshops and meetings so it can quickly implement improvements. USGCRP may want to monitor other metrics on an ongoing basis, such as tracking which products or chapters are most frequently used, in order to focus dissemination strategies. Network analysis could be used to examine changes in the networks and how they are tied to the NCA; the potential for such changes could be built into the logic model, and over time the findings might result in changes to the logic model.
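The kind of network change described above can be made concrete with a simple measure of reach: how many audiences can be connected to the NCA through a bounded number of intermediaries, and how that count shifts when a new partner bridges a structural hole. The following is a minimal, purely illustrative sketch; the organizations and links are hypothetical, not actual USGCRP network data, and a real analysis would use richer data and metrics.

```python
# Illustrative sketch: comparing two snapshots of a dissemination network.
# All node names and links below are hypothetical examples.
from collections import deque

def reach(edges, source, max_hops=2):
    """Count nodes reachable from `source` within `max_hops` steps (BFS)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if dist == max_hops:
            continue  # do not expand beyond the hop limit
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return len(seen) - 1  # exclude the source itself

# Snapshot 1: the NCA connected to a few intermediary organizations
t1 = [("NCA", "state_agency"), ("NCA", "university"),
      ("university", "local_planner")]

# Snapshot 2: a new extension network opens paths to previously
# unreached audiences (a structural hole closing)
t2 = t1 + [("NCA", "extension_network"),
           ("extension_network", "local_planner"),
           ("extension_network", "tribal_org")]

print(reach(t1, "NCA"))  # 3 audiences within two steps of the NCA
print(reach(t2, "NCA"))  # 5 audiences after the new intermediary appears
```

Tracking a measure like this across assessment cycles is one way such changes could be built into the logic model and revisited as the network evolves.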
Nonfederal participants have played a major role in the NCA since the beginning, and as contributors they are by definition first-order nodes in the network of networks. An evaluation might focus on how these participants’ use of the NCA affects how they contribute to the dissemination of information, as well as how they contribute to future NCAs. Case studies may be helpful in understanding how the dynamic and complicated collaborations among USGCRP, its federal partners, and nonfederal participants have evolved over time and how they can be improved going forward.
On the other hand, the type of outcome evaluation described in this report is not well suited to being implemented in a continuous manner. Outcome evaluations provide information on results and perceptions at a particular point in time. Yet there are opportunities to use a more formal outcome evaluation (as described in this report) as a starting point for ongoing evaluation. For example, if an evaluation develops an effective methodology for conducting case studies about use, USGCRP could apply that methodology to develop future case studies. USGCRP could also develop less formal case examples through other methods, such as an online forum where audiences could provide brief examples of how they have used the NCA or other products. Similarly, an outcome evaluation might develop and refine survey instruments to elicit feedback about audience perceptions of and actions taken with the NCA. A limited set of the questions that prove most valuable in the initial survey could then be used to solicit ongoing feedback through tools such as online forms.
Communication of evaluation findings is instrumental in ensuring that results will be understood and utilized in ways that inform policy and programmatic changes (Greene, 1988; Neuman et al., 2013). An evaluation effort is best designed in light of its intended use, with communication strategies and products planned to include sufficient time to meet the needs of evaluation users, to receive feedback before findings are finalized, and to disseminate results effectively. As mentioned above, USGCRP is often simultaneously disseminating an existing NCA, developing related products, and planning the next NCA, meaning that ongoing evaluation and timely communication of results are necessary to inform continuous improvement.
The broader decision science and research communication literature points to a number of best practices in communicating evaluation results. One key practice is communicating with those involved in the evaluation about how their input was used, what was learned, and what is being done as a result. Incorporating participants' perspectives can also inform the prioritization of future efforts (Oliver et al., 2004, 2008; Rosenstock et al., 1998). Demonstrating that participants' input has been acknowledged, understood, and used can help to improve the inclusivity of future efforts, creating a more participatory evaluation framework (Colorado Trust, 2002) to support ongoing learning and improvement.
Any evaluation funded by the federal government will need to be cognizant of a range of applicable statutes and best practices, including Section 508 of the Rehabilitation Act1 for inclusion and accessibility, the Freedom
___________________
1 Rehabilitation Act of 1973, Public Law 93-112, 93rd Cong. (September 26, 1973), as amended through Public Law 117-286 (December 27, 2022).
of Information Act (FOIA),2 the Federal Advisory Committee Act (FACA),3 and others of which USGCRP staff and member agencies are well aware.
Note that the Paperwork Reduction Act (PRA)4 places important limitations and restrictions on the demands federal agencies can place on the public to respond to information collections. Agencies' obligations under the PRA exist regardless of the method of collection: paper, virtual/online, or other survey mechanisms. Although the law is expansive, there are some limitations to its application; for example, it does not apply when fewer than 10 people are surveyed, or when data are generated during discussion at a public event (online, hybrid, or in person). Nor is PRA approval required when federal employees are surveyed as part of their federal work duties (see "When Doesn't the PRA Apply," GSA and OMB, n.d.). These exceptions point to the potential advantages of small, in-depth case studies with targeted, case study–specific questions to provide information about the variety of ways in which the NCA is used in decision-making. It will be important for evaluation efforts to consider PRA requirements and, where the PRA applies, to factor in the time required to obtain clearance from the Office of Management and Budget (OMB).
USGCRP may also want to explore the possibility of engaging with evaluators who are not sponsored by the federal government. External evaluations could make important contributions to an ongoing process of evaluation and learning. There is a history of evaluations of the NCA and USGCRP conducted by independent academic researchers (Jacobs et al., 2016; Meyer, 2011; Morgan et al., 2005; Moser, 2005; Parson et al., 2003), but these have often been based only on publicly available information. USGCRP could gain much more from such evaluations by establishing ways to collaborate, within the limits of law and policy, with interested researchers. Collaboration would allow the Program to communicate its priorities for evaluation and learning, and to ensure that the researchers understand enough about the Program's inner workings to develop useful, actionable evaluation results. In addition, external evaluators (with nonfederal funding) have some freedoms that federally funded researchers do not, such as not being subject to the PRA or FACA. External evaluators who are given insight into the Program could conduct an array of analyses that, with guidance on the Program's needs, could support ongoing learning, prioritization, and improvement.
It is likely that USGCRP will benefit by bringing in outside resources such as a contractor to assist with the evaluation. Performing an evaluation of this type can be a large undertaking, requiring a wide range of expertise, a substantial commitment of personnel time, and possibly specialized tools. USGCRP will still need to devote time to work with and monitor the contractor, but this would be much less of a stretch than trying to do the evaluation with Program staff alone. Following are some particular ways in which an outside contractor can be helpful:
___________________
2 FOIA Improvement Act of 2016, Public Law 114-185, 114th Cong., 2nd sess. (January 4, 2016), § 552.
3 Federal Advisory Committee Act, Public Law 92-463, 92nd Cong. (October 6, 1972). Amended by Public Law 117-286, 117th Cong. (December 27, 2022).
4 Paperwork Reduction Act, Public Law 104-13, 104th Cong. (May 22, 1995).
The term contractor here is used somewhat loosely, including, for example, contractual arrangements, grants, or even interagency agreements. It also might include a group of organizations, each providing a particular set of services; for example, there might be multiple contracts or a single contract that includes a prime contractor and a subcontractor. In such cases, it is important to clearly delineate the respective responsibilities and to have clear lines of authority and responsibility.
Specific requirements that might be included in a request for proposals or grant announcement include the following:
The indefinite nature of this work adds complexity to creating contracts of this type; the cost and level of effort may vary depending on which audiences are selected and what approaches are used to examine their use of the NCA. One approach could be designing a contract that has both base requirements (e.g., developing a logic model; creating an evaluation design; performing preliminary research to identify the audiences to be considered, including through the use of network analysis) and optional tasks (e.g., survey research, case studies) that might be awarded depending on the findings in the preliminary phase. Also, the optional tasks might be billed on a time-and-materials basis to allow for variations in size across the potential audiences.
Conclusion 7-1: Careful attention to the timing and sequencing of evaluation activities is essential for realizing the potential for evaluation to inform future improvements to the NCA and its associated products.
Conclusion 7-2: Effective integration of communication strategies within this timing and sequencing of evaluation is critical for ensuring that the results are understood and used, and that they meet the needs of decision-makers.
Recommendation 7-1: In implementing evaluation for the National Climate Assessment and other products, the U.S. Global Change Research Program (USGCRP) should adopt a strategy that enables ongoing learning about how the processes and products are informing decisions, in order to support continuous improvement in USGCRP processes and resulting products.
Recommendation 7-2: The U.S. Global Change Research Program should sequence evaluation into manageable components, allowing for iterative testing and learning about how to best pursue evaluation over time. Sequenced components may include conducting evaluability assessments, piloting focused on certain agencies or chapters of the National Climate Assessment, picking low-hanging fruit first, or developing case studies.
Recommendation 7-3: In communication about evaluation efforts, the U.S. Global Change Research Program (USGCRP) should aim for active two-way communication with users. Communication mechanisms may include ongoing feedback, interim findings, meetings to tailor the communication of evaluation findings to particular situations, and communication about how input was used that helps connect evaluation efforts with USGCRP’s objectives.
Recommendation 7-4: The U.S. Global Change Research Program should consider bringing in outside expertise and research capabilities—such as through contractors, consultants, grantees, or interagency agreements—to assist in designing and implementing the evaluation.