Explaining the Seemingly Unexplainable: Impact of Museums

This week, I’d like to begin to home in on the idea of measuring impact that Randi raised in our first blog post. We define impact as the difference museums can make in the quality of people’s lives, and measuring it can be both exciting and intimidating. Exciting because just about every museum professional I’ve ever met believes museums have the potential to affect people in deeply powerful ways. Stories abound from people who have distinct and palpable memories of museum visits from childhood—memories that became etched in their being and identity (for many, it is the giant heart at the Franklin Institute; for others it may be a beautiful Monet water lily; and for the nine-year-old me, it was a historic house, My Old Kentucky Home). It’s these kinds of experiences that draw museum professionals to their field. On the other hand, the idea of measuring impact can be intimidating because some think it is impossible to evaluate, measure, or assess something as intangible as a personal connection, engagement, identity growth, a lasting memory, an aesthetic experience, or an “aha” moment. When these fears emerge, we do our best to allay them and to move museums toward an important first step in measuring impact—describing what impact looks like or sounds like. Evaluators are accustomed to figuring out how to measure something—once the impact is described.

The Giant Heart at The Franklin Institute

I, as a researcher and evaluator, become excited at the thought of being tasked with measuring impacts, such as “engagement” or “creativity.”  I relish the idea of studying something so I can explain the unexplainable, of drawing meaning from and describing unique human experiences.  As long as I can remember, I have been interested in the complexities of the human experience, especially in how it plays out in specific contexts, within the social realm, and in relationship to material culture, such as art, artifacts, and natural history specimens.  These interests led me to the field of anthropology and to work in social science research and museum evaluation, where I have the pleasure of spending my days exploring the ways people make meaning in museums and other similar institutions.

Sometimes when museums cite their impact, they fall back on the common practice of reporting visitation numbers. While not unimportant, numbers indicate only that people came—they do not indicate the quality of visitors’ experiences. Imagine hearing that a museum attracted a million visitors and then hearing about the qualitative difference a museum has made in people’s lives—wouldn’t the latter sound more meaningful? This brings me to discussing an often overlooked methodology in museum evaluation—case study research. Its low rate of use is interesting in light of museums’ desire for evidence of impact, as a case study can provide rich details of a person’s or entity’s (e.g., a school’s) experience. Case study research is “an in-depth description and analysis of a bounded system”—that “system” could be any number of things: an individual museum visitor, a school partner, or a community. It provides a focused, in-depth study of one particular person or entity. Practically speaking, what we do is follow several participants (or “cases”) over time (during and after a program, for example) by interviewing them repeatedly, observing them in the program, and interviewing others within their sphere of influence (such as a parent, spouse, museum professional, student, or community member) who can comment on their experience. The outcome is a concrete, contextualized, nuanced understanding of a particular phenomenon (for example, a person’s growth over time, a relationship between a museum and a school, or a museum’s effect on a community) that can explain not only what happened as a result of the program, but how it happened. Knowing the “what” and the “how” is invaluable to museums; the “what” can offer indicators of impact, and the “how” tells you what the museum might have been doing to create the “case” experience.

An example of case study research comes from an evaluation we did for an art museum that was launching a new multi-visit program in middle schools. We began our work by helping the museum define its intended impact, which was articulated as: “Students are empowered to think and act creatively in their lives, their learning, and their community.” We then worked with staff to operationalize the impact statement by developing a series of concrete, measurable outcome statements. We identified our “cases” as three middle schools. Each case study included a series of interviews with students, classroom teachers, a few parents, and program staff, as well as observations of program activities in the school and in the museum—all over the course of several months. The data were rich, specific descriptions of what happened to participants and how the program functioned in each school—all in relationship to the impact statement. Not surprisingly, each school had a slightly different experience, with one school more closely meeting the impact statement than the others. The case study approach unveiled the complex interplay of variables at each school, helping explain why one school was more successful than the others. Both the successes and the challenges provided great insight as the museum considered its second year of program implementation.

I know what you are thinking. How can we measure impact by focusing on a few individuals or one or two schools? What about generalizability? While these questions are reasonable, they miss the point of case studies, which I believe strongly aligns with what we know about museum experiences—case studies account for differences in people’s unique dispositions, life experiences, and knowledge; they value distinctiveness; and they recognize the complexities of life and situations rather than trying to simplify them. If a museum really has trouble accepting the essential value of what a case study can afford, plenty of museum programs are small enough to warrant conducting a case study without worrying about generalizability. The example above is a relatively small program serving six schools, three of which we examined through our study. In another example, we used case study research to assess the impact of a museum-based summer camp serving 20 or so teens. Conducting case studies to demonstrate the impact of museums’ small programs might be just the right first step toward measuring impact more broadly.

I sense urgency in museums’ need for evidence of their value in the American landscape. I think it is time we stop worrying about what might be immeasurable and instead begin describing what success looks like.

6 Responses
  1. Stephanie,

    Thank you so much for sharing your insightful post. I think this is where literature reviews can come in very handy in helping us construct what success looks like and then measure against it. Coming up with education missions and teaching goals that are all-encompassing and address all dimensions of our audiences can be a useful tool for both evaluators and museums by offering a tangible list to evaluate against. But who constructs this list, and how, is key, as sometimes such goals are imposed from outside, and many times they are adopted selectively or interpreted differently.

    That was the hardest part of my final project I did for you, figuring out whether to measure my museum program against the department’s goals or against a thoroughly researched model of success made up of the latest findings and developments in the field. I think both museums and evaluators need to work together to define what success looks like to begin with. In my own work I found competing notions of success particularly challenging to work around.

    How can we all be on the same page about what success looks like, and whose version of success takes precedence—the president’s, the volunteer’s, the educator’s, the evaluator’s, the patron’s, or the audience’s?

  2. Stephanie

    Sehr, that is such a good question. There is no easy answer. Ideally, the Museum defines success collaboratively, meaning staff across the Museum or across a particular program division meet to define success together. An outside facilitator, like RK&A, can help guide staff through this process, but ultimately, it must come from the staff. Deciding who to include in those meetings and decisions is imperative; leaving out key players could eventually undermine decisions that are made. Thanks for such a thoughtful question.

  3. Love the post. However, throughout my career I have asked what success looks like. Without fail, the answer has been, “We’re open and we still have jobs.”

    How do you get folks to think about intangible visitor experiences when they can’t get past thinking about their continued employment? And I know: one of the answers is that by thinking about the visitor we will have continued employment, but that is never a successful answer.

    Just curious.

    As for visitors: if you want to have an unforgettable impact, let them do this:


    T.H. Gray, Director-Curator
    American Hysterical Society

  4. Stephanie

    Thank you for your comment. When we ask museum staff what success looks like, people usually respond thoughtfully, although it does take some continual probing to reach the essence of it. The video is really funny (thank you for bringing it to my attention).
