Why We Evaluate Engagement Initiatives

Jacquie Dale, Developmental Evaluation


We evaluate aspects of our life and work every day: we might start a new fitness routine and check our progress in six weeks, or switch to a different software program and then evaluate how well it is working for us. We evaluate the minor and the major to answer a simple question: Did it work? Then we ask more complex questions: If it did, what made it work? If it did not, what prevented us from achieving the desired outcome? In the field of public engagement, evaluations can be a powerful tool for learning and improvement.

Michael Quinn Patton’s “utilization-focused evaluation” approach holds that evaluations should be judged on their usefulness. Part of that usefulness, he says, “concerns how real people in the real world apply evaluation findings and experience and learn from the evaluation process.” This underscores that evaluations have multiple focuses:

Learning. Evaluation should include a learning focus, and success should be understood from multiple perspectives. For instance, a patient panel we have been involved in was established to inform improvements in the health care experience. To understand how well the panel is doing and whether things can be improved, it is important to get the panelists’ view of the key criteria: what success would look like from their perspective, as well as from the perspective of the group running the engagement exercise. They are in this process together, and they need to be able to talk about what they are learning and how they are measuring success.

Quality Improvement. Evaluation is often used to improve public engagement work. In some cases, you may decide an approach is not working; you may jettison one aspect of a program and implement something different. If something worked, why did it, and can we replicate the results in other applications? If something did not work, why not? Might it work under other conditions? Using the data culled from initiatives is critical both in improving the project at hand, where applicable, and in creating more effective programming in the future.

Impact. Increasingly, over and above the moral case for engaging the public on issues that concern them, public engagement needs to demonstrate impact. Did it lead to better outcomes than would have been achieved otherwise? For example, a recent study on patient engagement assessed the association between public involvement and the efficiency of recruitment for health research. Where there was a low level of public involvement, only a marginal increase in efficiency was observed, as measured by the ability to meet timeline targets; in contrast, studies with strong public involvement throughout the course of the research were more likely to hit those targets, demonstrating better implementation and delivery.

Measuring impact can be difficult because any change that occurs may be driven by the engagement work but also influenced by other factors. Take the work we did with a citizens’ panel on municipal policy in Edmonton, for instance: the panel was charged with making recommendations on how to spend tax revenues, but how would we know whether policies enacted by city council were a direct result of those recommendations? Councillors had the panel’s input, but they also had input from city staff and suggestions from other stakeholders. The link between the engagement and the policy outcome was not a one-to-one relationship (few things are), and this creates another challenge in assessing impact.

Longer-term evaluations can help, since we often cannot determine impact until well after a program is completed. For example, say a program ends in January. If we are able to come back and talk to people in June, we can ask: What has happened since the end of the program? What was the impact of this engagement? How did you use the information? Did it influence any decisions? This gives us the opportunity to look at impact in a more in-depth way.

Replicability. Another benefit of a thorough evaluation is an understanding of how effective the program in question was, with a mind to reproducing any positive approaches and outcomes in other situations.

Say we wanted to improve teachers’ ability to deliver effective educational programs on global issues. Assuming we have common indicators, we could look out into the public engagement world and say, “This is what I’m trying to do. What can I learn about what is working and having an impact?” We could look at programs that have achieved positive results and learn from them.

Evaluations can give us a dynamic research and development tool. When we can compare what we are doing and learn about processes that have been proven effective and have made a measurable impact, we can deliver more powerful and effective public engagement programming.
