Hey everyone, I’m Phil Stoeklen! I’m excited to share some thoughts on a tool we use a lot with our stakeholders: logic models. Logic models are useful tools that help program stakeholders and evaluators gain an understanding of the strategic model or vision for their initiative. Logic models can help to map out what resources are needed to sustain program activities, and can also assist with visualizing the outcomes and impacts of program activities…something that can be difficult to describe in mere words (especially when you have a lot of stakeholders to consider). The question I would like to pose, however, is: do they actually help us understand all outcomes and impacts?
If you think about it, the way that we organize logic models primes us to miss unintended (and occasionally negative) program effects. Programs are, after all, treatments for existing problems. Like any treatment, there is always a potential risk of negative side effects. By ignoring these potential adverse impacts when we lay out program logic, we open ourselves up to not catching problems as early as we could, or, in the worst-case scenario, catching them only after it is simply too late.
I think this is a commonly encountered phenomenon in logic model design, and I think it happens for a couple of important reasons. First, when programs are conceptualized, the goals and imagined impacts are often quite lofty (not necessarily a bad thing), and almost always positive (again, not necessarily a bad thing). It would be quite odd, after all, to design a deliberately harmful program, but it is worth pointing out that negative considerations are rarely a focal point of logic modeling sessions. A second reason is really a combination of the first observation and the literal format of logic models. We talk about inputs, outputs, outcomes, and impacts, but the latter two are almost never facilitated from the perspective of how the program design could conceivably harm the populations it serves.
Now, having ambitions for programs to have wonderful long-term impacts is great! It helps program stakeholders set big goals for themselves and fosters evaluation touch-points by identifying potential areas to measure for effect. The bigger problem is really a consequence of not thoroughly exploring what is realistic to expect as an outcome or impact of the program, and of not having conversations about how we will react if and when something goes awry.
With this observation in mind, don’t you think it is time we have a real conversation about how we model program logic, and how we help our clients understand and anticipate as many program effects as we can? It isn’t about focusing on the negative…instead, it’s about having an informed conversation that recognizes that every treatment has the potential for both positive and negative effects. In future posts, we will share some strategies for incorporating this important (but often missed) element into logic models.
Phil is a Senior Managing Consultant at Viable Insights, where he leverages his strong background in evaluation. He has a Master’s degree in Applied Psychology with concentrations in Health Promotion and Disease Prevention, Evaluation Research, and Industrial-Organizational Psychology. Phil has been an evaluator and project manager on multiple projects, including comprehensive needs assessments, community perception projects, formative and summative program evaluations, and impact evaluations. His projects have ranged from short-term to multi-year, and he has collectively worked on more than $23 million in both grant and privately funded programs/initiatives. Clients he has worked with include Margaret A. Cargill Philanthropies, U.S. Department of Labor, University of Wisconsin System, the Wisconsin Technical College System, and the Annie E. Casey Foundation, among others. In addition to Phil’s professional consulting experience, he serves as an instructor in the Evaluation Studies and Institutional Research graduate certificate program at the University of Wisconsin-Stout. In that capacity, he teaches courses covering evaluation theory, data collection techniques and best practices, and evaluation applications. Whether in his role as an evaluator or instructor, his goal remains the same — providing individuals and organizations with the tools, skills, and capacity to collect and use data in their decision-making process. Find him on LinkedIn or Twitter!