
Additional Considerations

As indicated in many of the method descriptions, the success of any evaluation or data-gathering exercise can be enhanced by several additional factors related to culture, collaboration, planning, and policies.17 The following section provides an introduction to some of these considerations.

Creating a Culture of Feedback

Creating a culture of feedback is a critical aspect of the evaluation process, regardless of the intervention being evaluated (Pappas et al., 2021). When institutions cultivate psychologically safe environments, students, faculty, staff, postdoctoral fellows, and researchers are more comfortable providing feedback on interventions and programs. Clear and transparent messaging when requesting feedback and participation is one aspect of a culture of feedback, and it includes specific language describing what feedback is being requested, why it is needed, and how it may be used to update or improve interventions. Another aspect of clear and transparent messaging is communicating how the interventions were updated or improved based on the responses. Highlighting progress based on feedback may incentivize continued or increased engagement. The timing of feedback requests is also important to consider. For example, many undergraduate students are actively affiliated with the institution for a shorter time frame than other campus groups, so clear and transparent messaging about interventions can help those students understand the current work and contribute feedback while they are still on campus.

Institutional leadership also plays a role in creating a culture of feedback through messaging about the importance of the interventions and the commitment to improving and adapting them through campus community involvement. To demonstrate this commitment, communication about the interventions could clearly connect the interventions with campus, division, or departmental mission statements, values, and goals. In addition, to ensure that all campus groups are represented and have opportunities to provide feedback, institutional leadership and the evaluation team could consider committee and working group representation during each phase of the intervention.

Diverse methods for collecting feedback can increase accessibility for different campus groups and contribute to a culture of feedback. Feedback methods include digital surveys, town halls, departmental outreach, and comment boxes. Opportunities to respond anonymously could be provided when possible. For all feedback, but especially anonymous feedback, it is important to provide clear information on how to submit it, what someone can expect after submitting it, and any limitations associated with submitting it. When possible, divisions or departments could be given opportunities to highlight ongoing interventions or collect feedback; engaging through smaller communities on campus allows participants to interact with teams or leadership with whom they may have a closer relationship.
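For institutions that build their own intake tools, the separation between anonymous and attributed submissions can be enforced at the data level rather than by policy alone. The following is a minimal sketch in Python; the `FeedbackEntry` structure, channel names, and `record_feedback` helper are illustrative assumptions, not part of any system described in this paper.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Channel names are illustrative assumptions, not a canonical taxonomy.
CHANNELS = {"digital_survey", "town_hall", "department_outreach", "comment_box"}

@dataclass
class FeedbackEntry:
    channel: str                         # one of CHANNELS
    text: str                            # the feedback itself
    submitted_at: datetime
    respondent_id: Optional[str] = None  # remains None for anonymous submissions

def record_feedback(channel: str, text: str, anonymous: bool,
                    respondent_id: Optional[str] = None) -> FeedbackEntry:
    """Create a feedback record, discarding the respondent ID when anonymity is requested."""
    if channel not in CHANNELS:
        raise ValueError(f"unknown channel: {channel}")
    return FeedbackEntry(
        channel=channel,
        text=text,
        submitted_at=datetime.now(timezone.utc),
        respondent_id=None if anonymous else respondent_id,
    )
```

Storing `None` rather than a removable identifier means anonymity does not depend on a later scrubbing step, which also makes it easier to state clearly what respondents can expect after submitting.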

Finally, institutions could consider incorporating opportunities for feedback into existing review processes. Universities have often used a combination of methods, such as periodic anonymous surveys, site visits, individual and group interviews, and general data procurement, to evaluate and improve the quality of interventions in academic departments and programs. These feedback streams have identified climate issues and specific incidents among department and program members that affected the quality of the interventions. Institutions can incorporate specific questions targeting the desired data into existing review processes to capture this information and evaluate it regularly. One way to facilitate the incorporation of feedback into ongoing review is to include the institution’s Title IX coordinator as a consultation resource during the creation of the intervention or program development plan.

___________________

17 The Evaluation Working Group of the Action Collaborative on Preventing Sexual Harassment in Higher Education is currently working on a paper that will examine factors related to planning what to do with the data, communicating about the data, and supporting leaders to take action based on the data.

A culture of feedback empowers the community and incorporates its members as partners in the work; they can offer suggestions for improvement and hear how those suggestions are incorporated in real time. This approach can incentivize data collection, improve intervention implementation, and drive progress.

Leveraging “Data Ecosystems”

Within the body of research on sexual harassment, analysts describe the concept of a data ecosystem, which uses the data collected across campus by different methods and groups to understand the climate and the effectiveness of ongoing programs and interventions. Data ecosystems incorporate data beyond the traditional campus-wide climate survey, such as information related to behavioral counts (e.g., incident reports, service utilization), attitude and experience surveys separate from comprehensive climate surveys, mixed methods studies, and program evaluation. The results from these varied sources are synthesized regularly to inform more effective and campus-specific interventions (Driver-Linn and Svensen, 2017).

Beyond the various sources and methodologies for collection, data ecosystems incorporate data gathered at different times. Some researchers recommend that campus-wide climate surveys be conducted every 4 years to allow time for data collection and analysis, communication of results, and implementation and evaluation of recommendations (Driver-Linn and Svensen, 2017). Although larger climate surveys are conducted less frequently, smaller annual surveys, mixed methods studies, and behavioral count data provide more real-time information about campus services and intervention efforts. Smaller data collections implemented more frequently allow leadership to share regular updates that reflect the current state of affairs, informed by the broader context of a climate survey. Communicating the results of these analyses with the campus community strengthens a culture of feedback, builds trust, and increases engagement with ongoing evaluation efforts. Building on that concept, developing a data ecosystem involves the intended users and other involved parties, such as the campus community, researchers, and institutional leadership, from the very beginning of the process; this early involvement, coupled with regular communication of findings, can strengthen services and intervention work (McMahon et al., 2022).
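In practice, a data ecosystem can be approximated as a routine that places differently timed sources on a common timeline before interpretation. The sketch below is a hypothetical Python illustration; the source names, record formats, and all figures are assumptions invented for demonstration, not data from any campus.

```python
from collections import defaultdict

# Hypothetical records from three differently timed sources; all figures are
# invented for illustration and do not come from any real campus.
climate_survey = [   # campus-wide climate survey, roughly every 4 years
    {"year": 2019, "harassment_prevalence": 0.21},
    {"year": 2023, "harassment_prevalence": 0.18},
]
pulse_surveys = [    # smaller annual attitude/experience surveys
    {"year": y, "awareness_score": s}
    for y, s in [(2020, 0.55), (2021, 0.61), (2022, 0.66), (2023, 0.70)]
]
incident_reports = [  # behavioral counts, e.g., from the reporting office
    {"year": y, "reports": n}
    for y, n in [(2019, 34), (2020, 41), (2021, 47), (2022, 52), (2023, 55)]
]

def build_yearly_view(*sources):
    """Merge records from every source into a single per-year summary."""
    by_year = defaultdict(dict)
    for source in sources:
        for record in source:
            by_year[record["year"]].update(
                {k: v for k, v in record.items() if k != "year"}
            )
    return dict(sorted(by_year.items()))

for year, metrics in build_yearly_view(
        climate_survey, pulse_surveys, incident_reports).items():
    print(year, metrics)
```

A merged view of this kind makes patterns visible across sources, for example that report counts rose over a period in which surveyed prevalence fell, a pattern discussed under Data Interpretation below.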

Timing of Data

The timing of data collection affects the evaluation of interventions and the interpretation of results. Broadly, data can be characterized as distal or proximal. Distal data reflects information that is further removed in time from the moment of collection, while proximal data reflects recent experience. For example, in a survey of faculty, data related to adverse early-childhood experiences would be considered distal, while data regarding on-campus sexual harassment experienced over the past year would be proximal. Distal data is often important for understanding the context and interpreting the meaning of more proximal data.
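The distal/proximal distinction can be made concrete as a labeling rule applied at analysis time. The sketch below is a simple Python illustration; the one-year cutoff mirrors the past-year example above but is an assumption, not a standard threshold.

```python
from datetime import date

def classify_timing(event_date: date, reference_date: date,
                    window_days: int = 365) -> str:
    """Label a data point as proximal or distal relative to a reference date.

    The one-year cutoff mirrors the past-year example in the text but is an
    illustrative assumption, not a standard threshold.
    """
    return "proximal" if (reference_date - event_date).days <= window_days else "distal"

# A past-year experience vs. an early-life experience, from the faculty example.
print(classify_timing(date(2023, 3, 1), date(2023, 9, 1)))  # proximal
print(classify_timing(date(1998, 3, 1), date(2023, 9, 1)))  # distal
```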


When selecting a method of evaluation, it is important to consider the timing of the data required to answer the research questions. As described earlier in the paper, text mining is a broad term that refers to the examination of large datasets of (typically) written texts, such as Twitter posts. Text mining and other methods that rely on existing information provide a wide range of data, from accounts of years-old events (distal) to “live-Tweeting” a reaction to a real-time event (proximal). To evaluate change over a shorter period, data captured in a more proximal fashion may be more useful. Experience sampling, including the use of ecological momentary assessments, is one approach to gathering more proximal data (Verhagen et al., 2016). Overall, clearly describing when the data was captured will help the campus community interpret the findings in the context of ongoing intervention efforts.
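As one illustration of experience sampling, ecological momentary assessments are often delivered as prompts drawn at random times within a daily window, so responses describe experiences close to the moment they occur. The Python sketch below shows one such schedule generator; the window hours, prompt counts, and function name are illustrative assumptions rather than a prescribed design.

```python
import random
from datetime import datetime, timedelta

def ema_schedule(start: datetime, days: int, prompts_per_day: int,
                 window_start_hour: int = 9, window_end_hour: int = 21,
                 seed: int = 0) -> list:
    """Draw prompt times at random within a daily window, a common
    signal-contingent experience-sampling design."""
    rng = random.Random(seed)  # seeded so the schedule is reproducible
    window_minutes = (window_end_hour - window_start_hour) * 60
    prompts = []
    for day in range(days):
        day_anchor = (start + timedelta(days=day)).replace(
            hour=window_start_hour, minute=0, second=0, microsecond=0)
        for _ in range(prompts_per_day):
            prompts.append(day_anchor + timedelta(minutes=rng.randrange(window_minutes)))
    return sorted(prompts)

# One week of three daily prompts between 9:00 and 21:00.
for t in ema_schedule(datetime(2023, 9, 4), days=7, prompts_per_day=3)[:3]:
    print(t)
```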

Data Interpretation

A common initial assumption in evaluating the effectiveness of an intervention is that improvement must be demonstrated quantitatively, to the exclusion of qualitative data. Most typically, people may assume that the number of reports of sexual harassment directly reflects real-world incidents of harassment, and that a decline in reported incidents is an indicator of successful prevention. One consideration, however, is whether this metric is the most accurate measure of effectiveness. For instance, some interventions could raise awareness of sexual harassment, and as individuals become better trained in identifying incidents and understanding how to respond to them, reports could increase. That increase may reflect effective implementation of an intervention strategy rather than a worsening climate. Relatedly, a decline in the number of reports may reflect situations in which reporting mechanisms are not accessible or individuals do not feel that they can safely make a report without suffering retaliation or other negative consequences. In this case, employing qualitative analysis in addition to quantitative methods may yield more accurate assessments and help highlight the nuanced goals of programs, such as one that targets underreporting. Applying data triangulation across multiple sources and using data ecosystems can provide context for interpreting both proximal and distal results.
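The reasoning above can be made concrete with a simple triangulation: dividing formal report counts by a survey-based estimate of how many people experienced harassment yields a rough reporting rate, which helps separate “more harassment” from “more reporting.” The Python sketch below uses hypothetical figures and a deliberately simplified model (one report per person, comparable populations) purely for illustration.

```python
def reporting_rate(reports: int, population: int, survey_prevalence: float) -> float:
    """Estimate the share of incidents that result in a formal report.

    Treats survey_prevalence * population as the estimated number of people who
    experienced harassment; a deliberate simplification for illustration.
    """
    estimated_incidents = survey_prevalence * population
    return reports / estimated_incidents if estimated_incidents else float("nan")

# Hypothetical figures: formal reports rose while surveyed prevalence fell.
print(f"{reporting_rate(reports=40, population=10_000, survey_prevalence=0.20):.2%}")  # 2.00%
print(f"{reporting_rate(reports=55, population=10_000, survey_prevalence=0.18):.2%}")  # 3.06%
```

In this hypothetical, reports rose while surveyed prevalence fell, so the estimated reporting rate increased, the pattern consistent with an intervention that improved awareness and reporting rather than one that worsened the climate.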

Implementation, Continuous Improvement, and De-implementation

Gathering high-quality data is only the first step in ensuring that efforts to prevent and respond to sexual harassment are evidence-based. After evaluating the effectiveness of an intervention across different periods of time, populations, and contexts, researchers can determine whether it is worth implementing in other settings, whether it is in need of improvement and further study, or whether it should be de-implemented.

The use of research networks or research centers as described earlier in the paper may be a particularly useful way to identify successful interventions and connect with individuals who can assist other institutions in implementing similar programs. When trying to replicate a successful intervention, institutions should be mindful of the contextual factors and process considerations that have contributed to its success. For example, an intervention that was successful with undergraduates may not be successful with faculty and staff, just as interventions that are implemented by campus security may not be as successful as those implemented by peers. Given the expansion of virtual and hybrid programs in recent years, researchers should also be mindful that a change in format may also affect an intervention’s success.18

___________________

18 For further discussion of the evaluation of complex, training-based interventions in the related field of responsible conduct of research, see Appendix C of Fostering Integrity in Research (NASEM, 2017).


This is not to say that modifications should not be made to successful interventions—in fact, tailoring the design and implementation of existing interventions is a key way to learn about the limitations and opportunities of those interventions in new settings. Similarly, simply because an intervention has been evaluated previously does not mean that evaluations are unnecessary moving forward. Interventions that have been shown to be successful repeatedly may eventually justify recommendations or requirements for their use from policymakers. Performing evaluations on interventions that are expected to be successful is necessary to build the evidence base needed to support those decisions.

Very rarely will interventions be clearly and immediately successful. As noted in the Evaluating the Effectiveness of Interventions Workshop, “evaluation and program efforts can mature as you go” (NASEM, 2021), and there is ample opportunity for maturation in not only the core intervention design, but also contextual factors and process considerations. As discussed earlier in the paper, the University of California, Berkeley (2022) evaluated the first two phases of its #WeCARE campaign, revealing that men “did not respond to the campaign messaging as positively as people of other genders” (para. 2). Rather than ending the intervention, the #WeCARE team tailored the third phase of the campaign to men, modifying the content of its messages to address the gaps identified by the earlier evaluation. Had an evaluation not been performed, the University of California, Berkeley might never have learned that a key population was not being reached, and the team would not have had the opportunity to improve the intervention.

In cases where interventions are determined to be ineffective or even harmful, institutions may consider de-implementation—that is, ceasing to execute the activities in question. Because there are many reasons why interventions have institutional support aside from effectiveness, de-implementation can be a challenging route to pursue, even when the science indicates it is warranted. Some interventions may have taken considerable time and effort to implement, some may have staff or faculty champions supporting them, some may serve public relations functions, and some may simply be within the allotted budget. Moreover, institutions may have legitimate concerns around the effects of de-implementing ineffective interventions, such as fear of litigation or fear of displacing the staff involved in implementing these interventions.

Despite the many challenges associated with de-implementation, there is research about why it is important and how organizations can do it (Norton and Chambers, 2020; Rodriguez Weno et al., 2021; Walsh-Bailey et al., 2021). As emphasized in medical research, “de-implementing inappropriate health interventions is essential for improving population health, maintaining public trust, minimizing patient harm, and reducing unnecessary waste in health care and public health” (Norton and Chambers, 2020). These sentiments apply to sexual harassment interventions as well, because there is ample opportunity to better utilize campus resources, better serve the campus community, and build trust across campus populations.
