Outcomes, Impacts, and Indicators
September 18, 2015
“Despite all the attention and advice about program evaluation, those responsible for carrying it out still struggle to define their program outcomes, connect those to their program goals (impact), and figure out how to measure them (indicators).
Librarians often have difficulty talking about what we do in terms of concrete benefits; instead, we often default to the loftiest of our many missions: defending democracy, advancing freedom of thought, instilling the love of reading…. While these deeply held values of our profession should guide our ethics and decision-making, we still have a need and an obligation to measure what outcomes we can and demonstrate our impact on the multitudes who benefit from public libraries in real and significant ways.
Theory of change work is another way of organizing indicators and structuring program information. Instead of using the logic model, the program is connected through a series of “so that” statements that show the progression of steps an individual takes through a program and the change each step is intended to encourage along the way. The logic model can also be overlaid on the theory of change. The theory of change approach can be helpful when a program is started because someone had a great idea but no one is quite sure how, or whether, it will work. In that case, it is sometimes easier to use the theory of change backward: start by asking what program participants need to know, have, or do in order to improve their lives or their communities, then work back into your program design.
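The “so that” chain can be made concrete by writing the steps out as a list. A minimal sketch in Python, using hypothetical Maker-space steps invented for illustration (they are not from the article):

```python
# A hypothetical "so that" chain for a Maker-space program (illustrative
# steps, not from the article): each step should lead to the next.
steps = [
    "Kids attend the Maker space",
    "they learn to design and print 3-D game pieces",
    "they gain confidence with technology and design",
    "they apply those skills to school projects",
    "school performance improves",
]

# Forward reading: join the steps with "so that" to state the theory.
theory = " so that ".join(steps)
print(theory)

# Backward design: start from the desired change and work back to the program.
for step in reversed(steps):
    print(step)
```

Reading the list forward states the theory; iterating it in reverse mirrors the backward-design exercise of starting from the change you want and working back to the program.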
An example seen a lot these days: someone wants to create a Maker space. A local foundation is willing to give a grant, but the library has to fill out a logic model and explain how success will be measured. Many librarians will start with what goes into the Maker space and then what kinds of programs will be held there. Yet when it comes to defining outcomes, they are stumped. “I just want the kids to have fun. How do I define fun as an outcome?” is a common refrain at library conferences (and in private mutterings over grant applications).
Theory of change work helps break through these blocks. It asks how and why over and over again until responses are exhausted. If it can’t be defined as a measurable outcome, it hasn’t been sufficiently interrogated.
Having fun is a worthy and measurable indicator of a satisfying event or program—it’s an output in this context—but an outcome needs to be connected to a higher-level goal that resonates with the community and funders, and the indicator needs to be specifically connected to that goal. That doesn’t mean you have to (or can) prove that coming to a library Maker space leads to better school performance, but it means that you can show, theoretically, how your program could contribute to better school performance.
The theory of change also tells you what to measure—you are testing your theory.
- Can the kids make the 3-D game pieces? How many did they make?
- Did the kids play the game? How many played? How many came back to play again?
- How many came to the library for other reasons? How many times? What else did they do?
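The questions above could be tallied from a simple participation log. A minimal sketch, assuming a hypothetical list of (participant, activity) records kept at the desk:

```python
from collections import Counter

# Hypothetical visit log: (participant_id, activity) pairs recorded at the desk.
visits = [
    ("kid01", "make_piece"), ("kid01", "play_game"),
    ("kid02", "make_piece"), ("kid02", "play_game"),
    ("kid02", "play_game"),  # came back to play again
    ("kid03", "storytime"),  # came to the library for another reason
]

pieces_made = sum(1 for _, a in visits if a == "make_piece")
players = {p for p, a in visits if a == "play_game"}
play_counts = Counter(p for p, a in visits if a == "play_game")
repeat_players = [p for p, n in play_counts.items() if n > 1]
other_visitors = {p for p, a in visits if a not in ("make_piece", "play_game")}

print(pieces_made)          # → 2  (game pieces made)
print(len(players))         # → 2  (how many played)
print(len(repeat_players))  # → 1  (how many came back to play again)
print(len(other_visitors))  # → 1  (came to the library for other reasons)
```

The point is not the code but the habit: each indicator question maps to one countable thing in the log, which keeps data collection honest and lightweight.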
When first doing this work, it’s best to write down nearly every indicator you can think of that can answer your evaluation questions and prove or disprove your theory of change. Then, starting with whether the indicator measures something that matters, whittle down the list, discarding whatever doesn’t meet the criteria for a good indicator. Once that’s done, consider methods for data collection, reserving survey questions for indicators that can’t be collected by any other method.
A final word: be creative with your methods. Want to know how many unique program participants you have? Try a loyalty card. Want to know how many participants in a digital literacy class learned how to send email? Have them send a message to the library with a particular subject heading and keep a log.”
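The email exercise can be scored with a short script. A minimal sketch, assuming a hypothetical log of (sender, subject) pairs and an agreed-upon subject heading:

```python
# Hypothetical log of messages received at the library inbox: (sender, subject).
log = [
    ("pat@example.com", "Digital Literacy Class"),
    ("sam@example.com", "Digital Literacy Class"),
    ("pat@example.com", "Digital Literacy Class"),  # sent twice; count once
    ("lee@example.com", "Book renewal"),            # unrelated message
]

MARKER = "Digital Literacy Class"  # the agreed-upon subject heading

# Unique participants who successfully sent an email with the marker subject.
learners = {sender for sender, subject in log if subject == MARKER}
print(len(learners))  # → 2
```

Using a set keeps the count to unique participants, so a learner who sends the message twice is still counted once.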
The Impact Survey was first used in 2009 to help gather data for the Opportunity for All study reports, conducted by the University of Washington’s iSchool with assistance from the Bill & Melinda Gates Foundation.