Evaluating impact means many different things to different organisations across the non-profit landscape. It’s an opportunity. A burden. The latest trend. A way to attract funding and investment. Common sense. A way to keep funding and investment.

Whatever your view, it’s becoming increasingly common for for-purpose organisations to evaluate the effectiveness of their programs and demonstrate their impact.

But it’s not always clear how to do it or where to start.

PRF’s Evaluation Lead George Argyrous says, “Evaluation begins with outcomes – whose lives are we making better, and in what ways?”

George, who has spent his career working with governments and organisations to improve their evidence-based decision making, defines an outcome as an end-state.

“When framing outcomes,” he says, “you need to ask yourself ‘What should we see if the desired goals and objectives are reached?’

“Be specific about who will be affected and what the effect will be.”

According to George, it’s not enough to define the beneficiary (for example, ‘children’); you also need to specify their characteristics according to place, context, or time (e.g., ‘children living in remote areas who currently cannot attend preschool’).

The same goes for the effect. Will the impact be new skills, knowledge, or behaviour? Or a different attitude, set of values or identity?

“In some instances,” continues George, “it’s easier to define a concept like 'outcomes' in terms of what it’s not. And there are four mistakes organisations commonly make when they start defining their outcomes.”

George’s four common traps when framing outcomes

1. Don’t include indicators in the outcome statement

Simply put, an outcome is not the same as an indicator. Intelligence is not the same as IQ scores; student learning is not the same as NAPLAN scores. One outcome might have multiple indicators or ways to measure it.

George says, “Indicators are an important part of evaluation planning, but they come after you’ve reached agreement about outcomes.”

Common version: 90% of Year 3 children achieve NAPLAN literacy Band 3 or higher.
Better version: Year 3 children read at a level appropriate to their age.

2. An outcome is an end point, not a process

George says that outcome statements often describe the process of getting there, rather than the end point.

“Outcomes that begin with a verb, such as ‘increasing’, ‘enhancing’ or ‘improving’, are not ideal because they don’t tell us what should happen when the ‘increasing’, ‘enhancing’ or ‘improving’ has finished.”

Common version: Increasing the literacy rates of Year 3 children.
Better version: Year 3 children read at a level appropriate to their age.

3. An activity is not an outcome

Outcome statements should not describe what you are doing to achieve the outcome; those are activities. George gives the example of building a fence to stop a dog escaping from the yard.

“It’s very easy to let an activity or output slip into an outcome. Take the example outcome of keeping your dog in the yard: an activity might be to build a higher fence, but completing that activity in itself is not the desired end-state.”

“Beware the words ‘by’, ‘through’, and ‘to’,” says George.

Common version: Providing quality teaching to improve Year 3 literacy.
Better version: Year 3 children read at a level appropriate to their age.

4. Don’t combine multiple outcomes into one statement

Each statement should describe only a single outcome. Each outcome may have different data collection implications, and you may need to do different things in your program delivery to achieve each one.

Common version: Year 3 children read at a level appropriate to their age and have positive social skills.
Better version:
• Year 3 children read at a level appropriate to their age.
• Year 3 children have positive social skills.