In June, the Association of Science and Technology Centers (ASTC) invited professionals to respond to these questions for an upcoming issue of Dimensions magazine: When are evaluation and other visitor feedback strategies most useful for helping advance a science center’s mission? When are such strategies less successful? We pondered this at a staff meeting and decided that a small but important tweak may be needed before addressing the questions. First, let’s clarify that mission describes what a museum does, while impact describes the result of what a museum does—on the audiences it serves. We believe that anything a museum does—collect, exhibit, educate—is meaningless unless it is done in pursuit of impact. So, when is evaluation most useful for advancing a science center’s mission? When it is done to advance impact, not mission. It’s a little like that old adage: If a tree falls in the forest and no one is around to hear it, does it make a sound? With regard to mission and impact, we take a slightly different angle: if a museum does work or evaluation that does not lead to impact, is it really doing the work?
Evaluators are in the same boat as many museum practitioners. Evaluation is a means to an end, just as a museum’s collections are a means to an end. Unless evaluation is placed in a meaningful context, such as helping a museum pursue impact, it doesn’t serve a purpose. As an evaluator, I suppose I should say evaluation is always valuable. But that’s just not true. I’m a self-proclaimed data nerd. I love the minutiae of evaluation—poring over pages and pages of interview transcripts and pulling out those five key visitor trends. I can get lost in data for days and find myself pulled in many seemingly fruitful directions. “Oh, how interesting!” I will say to no one in particular. I often find myself lost in the visitors’ world, chuckling at a quirky response to an exhibit or wondering who someone is and why he or she responded to a museum experience in a particular way. Getting lost in your work can be fun and, lucky me, happens to those of us who are passionate about what we do. But while pursuing tangents in evaluation data is fun for me, there is a flip side to this coin: a lack of focus that can be detrimental to the pursuit of a larger goal. This is why we, as evaluators, push our clients to articulate what they want to achieve—to keep us (and them) on track.
We consistently find museum practitioners to be among those most passionate about their work. Thus, these moments of losing oneself in one’s work, whether researching or examining an object, designing an exhibition, or creating a program, are frequent occurrences. When it comes to pursuing impact, this passion is both a joy and a burden. It is a joy because most practitioners can easily articulate what they do for their audiences. But they often get lost in what they do and may not think about why they do it. A practitioner articulating the “why” is similar to the entire museum articulating its intended impact. Articulating impact provides a laser focus for all the work that museum practitioners do and helps keep them on track toward that larger goal. So, how do we respond to ASTC’s second question: When are evaluation strategies less successful in helping advance a science center’s mission? When a science center and its collective staff have yet to articulate the impact they hope to achieve on the audiences they serve. Otherwise, we can all do evaluation until we are blue in the face, but those reports will continue to collect dust on hundreds of science centers’ shelves. Of this I am certain—just like death and taxes.