This spring, RK&A undertook an ambitious project with the Smithsonian National Museum of Natural History (NMNH): a meta-analysis of the evaluation reports completed for the museum over the last 10 years. In this context, “meta-analysis” essentially means reanalyzing past analyses with the goal of identifying larger trends or gaps in research. This project was both challenging and rewarding, so I wanted to share our experience on the blog.
The specific goals for this project were to:
- Understand consistencies or inconsistencies in findings across reports;
- Identify areas of interest for further study;
- Help the museum build on its existing knowledge base; and
- Create a more standardized framework for future evaluations that would help the museum continue building its knowledge base by connecting past studies to present and future evaluations.
The first step of the meta-analysis process was to perform an initial review of the reports and determine criteria for inclusion in the analysis. One of the underlying goals for the project was to demonstrate to the institution at large (not just the Exhibits and Education departments) that evaluation is a useful, scientific, and rigorous tool that can inform future work at the museum. Therefore, we wanted to make sure that the evaluation reports included in the study adhered to these high standards.
For this reason, we omitted a few reports that we considered casual “explorations” of an exhibition or component type, rather than systematic studies using accepted evaluation and research protocols. For example, an “exploration” might consist of a brief, unstructured observation of an exhibition and casual conversations with a small number of visitors about their experiences. While these types of studies can be useful and informative on small-scale projects, they were not rigorous enough to support the larger goals of this project.
We also omitted reports in which the sampling and data collection methods were not clearly stated, because this left us unsure of exactly who was recruited, how they were recruited, and how the data were collected (e.g., Were the observations cued or uncued? What instrument was used?). Although these studies may have been rigorous, there is no way for us to know without a clear statement of the methodology in the report.
Next, we needed to develop a framework to use for analyzing and comparing evaluations. Over the course of several meetings with NMNH, we discussed and clarified the ideas and outcomes that were most important to the museum. Based on these discussions and a review of NMNH’s existing evaluation framework for public education offerings and the institution’s core messages, we developed a new evaluation framework that would serve as our analytic lens. The new framework centered on four main categories, with the most emphasis placed on the Influence category.
Within the Influence category, we looked at a number of specific outcomes that were important to NMNH, such as whether visitors are “awe-inspired” by what they encounter in the museum or whether visitors report becoming more involved in “preserving and sustaining” the natural world. To show some of the challenges we faced in making comparisons across reports, I’ll highlight an example from one outcome—“Visitors are curious about the information and ideas presented in the exhibition.”
Understanding whether visitors are “curious” about the information and ideas presented in an exhibition was difficult because many evaluations did not explore visitors’ curiosity directly. Instead, we had to think about what types of questions, visitor responses, and visitor behaviors might serve as proxy indicators that visitors were curious about what they had seen or experienced. For example, audience research studies conducted between 2010 and 2014 at NMNH asked entering visitors, “Which of these experiences are you especially looking forward to in the National Museum of Natural History today?” and exiting visitors, “Which of these experiences were especially satisfying to you in the National Museum of Natural History today?” We decided that visitors who indicated they were especially looking forward to (entering) or satisfied by (exiting) “enriching my understanding” might be considered “curious” to learn more about the content and ideas presented by the museum. For other evaluations that did not ask about elements associated with curiosity, we looked for behavioral indicators, such as visitors asking questions or seeking clarification from staff and volunteers about something they had seen in an exhibition.
However, we also acknowledged that visitors’ desire to “enrich” their “understanding” or “gain more information” about a topic does not always directly relate to curiosity. For example, one evaluation that asked about both “curiosity” and “gaining information” found that an exhibition exceeded visitors’ expectations about having their “curiosity sparked” but fell short in “enriching understanding” or “gaining information.” We learned from this that if curiosity is an important measure of NMNH’s influence on visitors, future evaluations should be explicit about how they explore curiosity in their instruments and how they discuss it in their findings.
In light of the results of the meta-analysis, we are excited to see how NMNH uses the reporting tool we created from this work. The tool standardizes the categories that evaluators and museum staff use to collect information and measure impact so the museum can build on its knowledge of the visitor experience and apply it to future exhibition and education practices.