A few months ago, Geraldine Kendall presented preliminary findings of the British research project Evaluating Evaluation in her article “Critical Thinking Not Used.” The research explored the impact of summative evaluations on the workings of British museums, and its preliminary findings highlight several barriers to maximizing the impact of those evaluations. While we’re all aware of the challenges institutions encounter with summative evaluations, the article spurred me to think more about the broader hurdles institutions might face in using evaluation results. So for this post, I’d like to share my experiences with a museum that has boldly challenged the barriers to maximizing the impact of its evaluation work.
RK&A is currently working on a meta-analysis project with the United States Holocaust Memorial Museum, an institution that exemplifies dedication to the field of evaluation. The Museum’s evaluation work started when the Museum was but an aspiration, and it has since completed well over 50 studies of museum activities, ranging from evaluations of programs and exhibitions in all phases of development to market research. As at other museums, its evaluation work grew organically, dictated by the needs of individual staff members and departments. The reports were useful to those who commissioned the studies, but often the findings did not make their way to a larger audience. As the Museum’s wealth of studies continued to grow, so did its knowledge of visitors. The Museum recognized that one barrier to taking full advantage of the knowledge it was generating was the lack of a cross-organizational system for sharing information or accessing the reports.
Hand-in-hand with the problem of gathering, preserving, and sharing institutional knowledge is the challenge of identifying overarching trends that have emerged from the accumulated knowledge. The Museum’s evaluations vary by study goals, audience, method, phase of program/exhibition development, quality of the evaluation, etc.—as would any group of evaluation studies found at a museum. So what greater meanings can an institution take away from more than 20 years of research and evaluation? To address this question, we worked with the Museum to develop a framework through which to view each study. Combing through reports, we created an abstract for each study using standardized categorical fields so we could group reports and identify trends in findings among different categories (e.g., audiences). This work and the final deliverable (a searchable database that staff can use to explore past studies and see what might be relevant to their work today) will allow the Museum to think about and learn from its evaluation work.
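To make the idea of standardized categorical fields concrete, here is a minimal sketch of how abstracts tagged with shared fields can be grouped to surface trends. The field names and sample studies are purely illustrative assumptions, not the Museum’s actual schema or data:

```python
from collections import defaultdict

# Hypothetical study abstracts with standardized categorical fields.
# Field names (audience, method, phase) are illustrative only.
abstracts = [
    {"title": "Exhibition A front-end study", "audience": "adults",
     "method": "interviews", "phase": "front-end"},
    {"title": "Program B summative", "audience": "school groups",
     "method": "surveys", "phase": "summative"},
    {"title": "Exhibition C summative", "audience": "adults",
     "method": "observations", "phase": "summative"},
]

def group_by(field, records):
    """Group study abstracts by one categorical field so that
    trends within each category can be compared side by side."""
    groups = defaultdict(list)
    for record in records:
        groups[record[field]].append(record["title"])
    return dict(groups)

by_audience = group_by("audience", abstracts)
```

Once every report carries the same fields, the same one-line query works across twenty years of studies, whether the grouping is by audience, method, or development phase.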
Bringing together over twenty years of evaluation studies, creating a standardized abstract for each report, conducting a meta-analysis, and creating a searchable database are big steps that the Museum is taking to give legs to its evaluation work. By tackling accessibility and providing a way to identify connections among evaluations, the Museum and other institutions can shape an organizational culture that maximizes the impact of evaluation on the work of museums. As a person who greatly values the role of evaluation in any organization, I have the utmost respect for institutions like the United States Holocaust Memorial Museum that are dedicated to learning from their practice and committed to the stewardship of institutional knowledge garnered through evaluation.