How to Make Evaluation Practical

Jacquie Dale, Developmental Evaluation

As a professional field of endeavour, public engagement is relatively new. In the past, collecting and disseminating evidence on the impact and efficacy of engagement initiatives has not been a priority, but that is changing. By combining qualitative evidence with quantitative data, we can determine to what extent an initiative was successful, whether it had the impact we wanted and how it could be improved. There is so much that can be measured that a key challenge is making the evaluation as practical as possible.

Often, when we think about evaluation, the number of questions we have multiplies rapidly as we brainstorm what we want to find out. As we go through the process, the list of items we want to measure and examine outgrows the time and resources available.

For example, recently we worked with an organization during the launch phase of a patient/caregiver panel. The client was considering 40 evaluation indicators. That was unmanageable, so we had to narrow the 40 down to the most critical.

But when everything seems important, how do we determine what is most “critical”? Which sets of criteria, indicators and data sources are really key to getting the answers we need while still fitting within the budget? We worked with the client to develop a tool to help cull its list. First, we rated each indicator along two dimensions:

  • The importance of the data.
  • The ease with which it could be collected and assessed.

Considered in this manner, the 40 indicators then fell into four quadrants:

  1. Important and easy to collect.
  2. Not important and difficult to collect.
  3. Important but difficult to collect.
  4. Not that important, but easy to collect.

For indicators falling into the first two quadrants, our response was obvious: collect the data for quadrant one and skip it for quadrant two. But determining what to do about indicators in the third and fourth quadrants required some decisions about priorities. Working through this together, we arrived at a manageable list of about 20 indicators for evaluation.
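
To make the sorting concrete, here is a minimal sketch in Python of how such a two-dimension rating could be applied. Everything in it (the indicator names, the 1-5 rating scale and the cut-off of 3) is an illustrative assumption, not a detail of the actual panel evaluation.

    # A minimal sketch of the importance/ease sorting tool described above.
    # Indicator names, the 1-5 scale and the threshold of 3 are illustrative
    # assumptions, not details from the actual patient/caregiver panel.

    def quadrant(importance: int, ease: int, threshold: int = 3) -> int:
        """Place an indicator, rated 1-5 on each dimension, into a quadrant."""
        important = importance >= threshold
        easy = ease >= threshold
        if important and easy:
            return 1   # important and easy to collect: collect it
        if not important and not easy:
            return 2   # not important and difficult to collect: drop it
        if important:
            return 3   # important but difficult to collect: weigh priorities
        return 4       # not that important, but easy to collect: weigh priorities

    # Hypothetical indicators, each rated (importance, ease of collection).
    indicators = {
        "Panel meeting attendance": (5, 5),
        "Long-term influence on policy": (5, 2),
        "Newsletter open rate": (2, 5),
        "Staff travel mileage": (1, 1),
    }

    for name, (importance, ease) in indicators.items():
        print(f"{name}: quadrant {quadrant(importance, ease)}")

Quadrants one and two resolve themselves automatically; anything landing in quadrants three or four gets flagged for the kind of priority discussion we worked through with the client.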

There is no golden number of indicators: how many you evaluate depends on a variety of factors, including time frame and available resources. The patient panel is a five-year initiative, so its number of indicators will naturally be greater than for a short-term project.

The key to making an evaluation practical is to focus sharply on what you want to learn and then to prioritize those items. It comes down to making important decisions about what you really want a program or initiative to accomplish, what you want to be able to learn about it, and what is doable in terms of data collection, compilation and analysis.

Making these critical choices before the program is launched will allow you to collect the data you need to achieve your evaluation objectives and will give you the information and evidence you (and the field as a whole) need for future public engagement work.

 
