As evaluators, we are often asked to help our clients build evaluation capacity among staff in their organization. The motivation for these requests varies. Sometimes the primary motivator is professional development; other times it is perceived cost savings (since conducting professional evaluations can require resources that not all organizations have at their disposal). We welcome it when an organization values evaluation enough to inquire about how to integrate it more fully into its staff’s daily work. If an organization has a genuine interest in using evaluation as a tool to learn about the relationship between its work and the public, building its evaluation capacity may be quite successful. On the other hand, if the primary motivator is cost savings, the outcome is often much less successful, mostly because evaluation takes considerable time and there is invariably a trade-off: while the evaluation is being done, something else is being ignored.
Evaluation capacity building can take a variety of forms. It can range from building staff’s capacity to think like an evaluator, perhaps by helping staff learn how to articulate a project’s desired outcomes (I think this is the most valuable evaluation planning skill one can learn), to training staff to conduct an evaluation from beginning to end (identifying outcomes, creating an evaluation plan, designing instruments, collecting data, conducting analyses, and reporting findings). Even among the most interested parties, it is rare to find museum practitioners who are genuinely interested in all aspects of evaluation. As an evaluator, even I find certain aspects of evaluation more enjoyable than others. I’ve noticed that while practitioners may be initially intimidated by the data collection process, they often find talking with visitors rewarding and informative. On the other hand, they have much less enthusiasm for data analysis and reporting; I’ve only encountered a handful of museum practitioners who enjoy poring over pages and pages of interview transcripts. We lovingly refer to these individuals as “data nerds” and proudly count ourselves among them.
There is yet another challenge, and it has to do with the fact that most museum practitioners are required to wear many hats. Conducting evaluations is my one and only job; it is what I am trained to do and have intentionally chosen as my vocation. While a practitioner may be intrigued by what evaluation can offer, it is often not the job he or she was trained or hired to do, which means that evaluation can become a burden: just one more hat for a practitioner to wear. Some organizations have addressed their evaluation needs by creating an in-house evaluator position, usually filled by someone schooled in evaluation and research methodologies, much like all of us here at RK&A. I would caution organizations to be very realistic when considering building their evaluation capacity. Does your staff have the time and skills to conduct a thoughtful study and follow through with analysis and reporting? What responsibilities is your organization willing to put aside to make time for the evaluation? And do you want your staff to think like evaluators or to become evaluators? That is an important distinction, indeed. Otherwise, even those with the best of intentions may find themselves buried in mountains of data. Worse yet, what was once an exciting proposition may come to be perceived as an annoyance in the end.