Public engagement initiatives vary in size, scope, time frame and purpose, from projects with tens of thousands of participants around the world to panels involving 10 citizens from across town. The objectives may be to effect a change, to do things better, to foster involvement, to increase knowledge and/or to build common ground.
Whatever the intent, the ultimate goal is to make a difference. The challenge lies in measuring this difference. What have we accomplished? What difference has the engagement initiative made? Is one kind of engagement better than another to achieve certain goals? In the public engagement field, sometimes a lack of comparable data makes it difficult to answer these questions. For example, a group might be doing a citizens’ panel in Edmonton to advise city council on policy. It has chosen one set of criteria for evaluation. But another group doing a citizens’ panel in Guelph might have a different set and it becomes difficult to compare the success and achievements of the two panels (even allowing for the importance context can play).
I’m not advocating that criteria must be the same all the time — but if, for example, there were a shared set of criteria around process integrity, and people knew them and were using them, then we would start to get some comparability around that aspect of public engagement work.
What is Process Integrity?
Process integrity is achieved when participants clearly know the purpose for their involvement, have good and comprehensive information to support their participation, and are able to freely voice their ideas, concerns and opinions. They also know clearly what will happen with their input, and have clear evidence of the commitment of decision-makers to respecting public input and taking it into account.
This is easier to do where initiatives share a similar scope and structure, which helps foster comparability for collective learning. For example, we worked with Fisheries and Oceans Canada (DFO) in British Columbia, and together built an evaluation framework around one public involvement mechanism, what they call their advisory committees. DFO has several of these committees made up of industry stakeholders, including those doing the fishing. But they had no evaluation system in place that would allow them to share learning across the committees with regard to how well they were functioning and what difference they were making.
The evaluation framework we developed gives them a template they can use to look at comparable questions across the committees. At the same time, they can still tailor the template to the specific needs of individual committees. This provides greater consistency and comparability within just one entity, and it will allow us to collect rich impact data that can be shared.
Another example is the Inter Council Network, a group of provincial councils involved with international development and public engagement whose collective membership base is over 100 organizations across the country. We have worked with them to build a common Theory of Change for their public engagement work, connected to common indicators that will allow them to start measuring collective impact and comparing the success of different initiatives.
It’s all part of making evaluation really useful and its findings “utilization-focused,” to borrow the term from Michael Quinn Patton, a leader in the evaluation field: producing insights that help us assess how well things have worked, where they might be improved, what we have learned from the process, and what the impact has been. By using some comparable indicators across initiatives and sharing the results of evaluations, we can begin to make evaluation utilization-focused for the field of public engagement as a whole, and to provide better evidence of impact and of what works best, when, and with whom.