Step 8: Evaluate your campaign

Highlights

It's important to know what worked and what didn't. A well-aligned campaign that is part of a larger, comprehensive approach to suicide prevention can have important impacts. Establishing a set of evaluation metrics to examine direct campaign reach and influence is the best approach.


Selecting evaluation methods

Now it's time to focus on how you want to evaluate your campaign.

Timely measurement of the number of suicides prevented is very difficult, and most campaigns are unlikely to show a direct impact on morbidity or mortality. However, a well-aligned campaign that is part of a larger, comprehensive approach to suicide prevention can have important impacts. Establishing a set of evaluation metrics to examine direct campaign reach and influence is the best approach. You can combine your findings with other evaluation metrics being collected through the other comprehensive suicide prevention actions happening in your community or with your key audience.

Focus on establishing a set of evaluation metrics that are directly tied to your actions. These metrics can help you assess whether your campaign is being implemented as planned and achieving its intended purpose. Evaluation metrics can help you:

  • Understand how well the campaign is being implemented
  • Determine progress toward short-, mid-, and longer-term outcomes
  • Identify audience reach
  • Count website visits
  • Understand changes in awareness and actions

Consider establishing a baseline for your metrics prior to the kickoff of your campaign. You can use these baseline values to help you track progress, success, and opportunities for improvement.
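As a simple illustration of tracking progress against a baseline, the sketch below compares entirely hypothetical baseline and follow-up values for a few campaign metrics and reports the percent change for each. The metric names and numbers are invented for the example.

```python
# Hypothetical baseline (pre-launch) and follow-up values for a few metrics.
baseline = {"website_visits": 1200, "social_shares": 85, "crisis_line_calls": 40}
followup = {"website_visits": 1950, "social_shares": 140, "crisis_line_calls": 52}

def percent_change(before, after):
    """Percent change relative to a baseline value; None if the baseline is zero."""
    if before == 0:
        return None
    return round((after - before) / before * 100, 1)

# Compute the change for every metric and print a short progress summary.
changes = {name: percent_change(baseline[name], followup[name]) for name in baseline}
for name, change in changes.items():
    print(f"{name}: {change:+.1f}%")
```

A baseline turns a raw follow-up number into a trend: 1,950 website visits means little on its own, but a 62.5% increase over the pre-launch baseline is a progress signal you can act on.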

Evaluation metrics can include both qualitative and quantitative data. Quantitative data focuses on counts and measurements, such as how many people visit your website or how often a given action occurs. It can offer valuable information about reach and engagement.

Qualitative data captures more nuanced insights like opinions, emotional reactions, or suggestions. It can help you understand why something is happening. Both types of evaluation have pros and cons, and they can be used independently and together.

Involving communities and experts to gather campaign feedback is an important step in evaluating campaigns. Feedback from the campaign audience or subject experts provides different perspectives that could uncover insights or highlight areas for improvement. The type of data you collect needs to align with the objectives of the campaign. This would be a good time to revisit your logic model and check for alignment.

Quantitative indicator data examples:

  • Number of social media pieces disseminated
  • Number of likes and shares
  • Level of engagement such as number of clicks or page navigation collected through website analytics
  • Reported change in knowledge, belief, or behaviors collected through a survey
  • Number of calls to a crisis line collected from local and national crisis lines
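Engagement indicators like the ones above are often easiest to interpret as rates rather than raw counts. The sketch below, using made-up per-post numbers in the shape a social media dashboard export might take, computes a simple engagement rate (clicks plus shares as a share of impressions):

```python
# Hypothetical per-post metrics, as might be exported from a social dashboard.
posts = [
    {"id": "post-1", "impressions": 5000, "clicks": 150, "shares": 20},
    {"id": "post-2", "impressions": 3200, "clicks": 64, "shares": 16},
    {"id": "post-3", "impressions": 8000, "clicks": 400, "shares": 40},
]

def engagement_rate(post):
    """Clicks plus shares as a percentage of impressions."""
    interactions = post["clicks"] + post["shares"]
    return round(interactions / post["impressions"] * 100, 2)

# Rates make posts with very different reach comparable to each other.
for post in posts:
    print(post["id"], engagement_rate(post), "%")
```

A rate lets you compare a post seen by 3,200 people against one seen by 8,000 on equal footing, which raw like-and-share counts cannot do.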

Qualitative indicator data examples:

  • Campaign audience opinions collected through key informant interviews
  • Beneficiary group perceptions of images and language used in campaign materials collected through focus groups
  • Website user feedback through qualitative survey prompts
  • Content analysis of campaign materials and media coverage

Consider a phased approach to gathering data at multiple points throughout your campaign rather than relying on one-time efforts. A phased approach allows for continuous monitoring and the ability to adjust campaign strategies as needed. It's important to collect only data that will be used. Identify the information you need to determine your data collection priorities.

Interpreting and sharing evaluation findings

It's important to know what worked and what didn't. It can be difficult to share that something didn't work as expected, but identifying opportunities for improvement can strengthen activities and ultimately improve outcomes. Try to embrace all evaluation data as information you can use to improve your efforts. Evaluation can be compared to a GPS. A GPS doesn't say "you'll fail to reach your destination" if you miss a turn. Rather, it will help you reroute to your destination.

Use evaluation data to make data-driven adjustments as you move forward. It can help you determine why something did not work as planned and whether the strategy should be adjusted for better impact or discontinued entirely. You may want to consider an independent evaluator to conduct your evaluation. If you have the resources, an independent evaluator can provide an outside, data-driven perspective on your work along with recommendations for improvement. Don't forget to think through how you're going to share your evaluation findings and who needs to see them.