RESEARCH VS. EVALUATION

Hi everyone, Phil here from Viable Insights. The June learning session and blog topic came about after our team had a great meeting with the Pima Early Education Program (PEEP) Evaluation Committee, an informal advisory group for Pima County’s early education program. The meeting kicked off with an orientation component, where they reminded everyone what evaluation is compared to research. We had never participated in such an upfront reflective activity before, and it got us thinking about how we tell our story as evaluators. Specifically, how we explain what we do to people who are not part of our community, or who have preconceived notions of what we do. I have to imagine that other people in our field went through (or are still going through) the process of figuring out how to best explain what evaluation is. My own description has been adapted many times over, and will continue to be until my introduction is met with perfect understanding of what evaluation is and is not. The Era of the Zoom Meeting has been a great help in that regard, because every introductory meeting I participate in is a new environment to test my elevator pitch. Evaluation is, after all, complicated and carries many connotations.

Anyway, the reason I started with the elevator pitch example is that evaluation does have a PR problem, or rather, a public awareness problem. The PEEP evaluation group activity was something of a catalyst for this topic, as a good starting point for awareness building is a comparison of evaluation vs. research. Research is just as broad a field of practice, though people often think of it as something done only in a sterile laboratory. Both are systematic methods for understanding things, but they also differ in some pretty significant ways.

Look – we are in no way the first or only practitioners to highlight the distinguishing features of evaluation and conventional research…

Going back to the founding of evaluation as a discipline in the middle of the 20th century, and in the decades since, prominent evaluation theorists like Michael Scriven (Goal-Free Evaluation), Michael Quinn Patton (Utilization-Focused Evaluation), Daniel Stufflebeam (CIPP Evaluation), and numerous others have worked to carve out a place for evaluation among truth-finding techniques. The following are some of the more common distinguishing features that we have encountered both as students and practitioners of evaluation.

How are evaluation and research different? There are five overarching differences between the two.

  1. The first is related to the underlying motivation. In other words, why are these activities initiated? Research is usually undertaken to gain knowledge about the construct, phenomenon, or relationship being studied. Evaluation, on the other hand, is typically done for the purpose of accountability and improving programs, products, or even people.
  2. Outcome use is a second difference. In research, there doesn’t necessarily need to be a use associated with the findings… as mentioned above, the goal from the outset could simply be to learn something. In contrast, almost every established framework for evaluation identifies utility as a necessary outcome. Stakeholders have to end up with something that can be used to improve or make decisions about the evaluand (i.e., the thing that was evaluated).
  3. Generalizability is probably one of the more divergent points. In research, being able to generalize findings is a major goal. Randomized Controlled Trials (RCTs) are viewed as the gold standard for research design and for achieving that goal, and researchers are constantly attempting to replicate findings to either add evidence or introduce doubt about a finding. Evaluators, on the other hand, rarely set out with the assumption that their findings will be relevant to environments outside the context of their evaluation. RCTs are still viewed favorably in most circles of evaluation, but they are also viewed as wildly impractical given common project constraints and ethical considerations that are more prominent in the evaluation world than in the research world.
  4. The conclusive goals are also often different in these two activities. This one is similar to the utility difference, but is not quite the same. It refers to what practitioners (of research or evaluation) do with their findings. Researchers are focused on adding to the body of scientific knowledge, whereas evaluators often use their findings to make appraisals – evaluations, if you will – about the thing they are examining.
  5. Finally, another difference is the amount of collaboration done with those outside of the team conducting the research or evaluation activity. In research, the investigator drives the process, and is usually pretty territorial when it comes to owning the power in the project. This is typically not the case among the many who practice participatory or use-oriented evaluation techniques – the most common types employed, in our opinion. The entire process is extremely collaborative, with the investigators sharing power with other stakeholders to set project priorities. Building on what was mentioned in outcome use, this power sharing builds buy-in among stakeholders, which in turn increases the likelihood that the results are ultimately used.

How are evaluation and research similar?

The similarities between these two activities might include things like their capabilities and methods. By capabilities we’re talking about how both are effective mechanisms to describe constructs, outcomes, and change. Again, the underlying motivation, outcome use, generalizability, and conclusive goals about those descriptions vary, but both are valid vehicles of exploration nonetheless. Another similarity that probably isn’t a surprise is the methods employed in both activities. Both rely on quantitative and qualitative data, and on similar techniques for collecting and analyzing that data.

These are the primary points we chose to highlight, but we don’t believe they are the only comparisons that exist. What about you – can you think of any other similarities or differences? We’re constantly trying to find ways to help raise awareness of evaluation, so please reach out if you are interested in continuing the conversation.
