A few weeks ago, Randi blogged about the lack of emphasis grantors place on professional learning as a valuable outcome of the projects they fund. When practitioners plan an evaluation, their fear of failure is often palpable; many think of evaluation as a judgment tool and dread the possibility of failing, especially in the eyes of the funder. The innovation-obsessed culture of the non-profit sector exacerbates the situation: be the best; make a discernible difference in people’s lives; be innovative; don’t make a mistake; and if you do err, certainly don’t tell anyone about it. Understandably, the possibility of failure creates a level of stress that can override people’s professional sense of what is really important. Yet I feel refreshed when I hear museum practitioners reflect on their failures during a conference presentation, not because I want to see people fail but because mistakes often lead to learning. And, as an evaluator, it is my job to help museum practitioners wade through evaluation results and reflect on what did not work and why, in the spirit of learning. My job is to help people value and use evaluation as a learning tool.
I recently had the pleasure of working on a project with the Science Museum of Virginia (SMV) in Richmond. The Museum, like many others, received funding from the National Oceanic and Atmospheric Administration (NOAA) to develop programming for Science on a Sphere® (SoS). And the Museum, like many others, had high hopes of creating a compelling program, one that uses inquiry to engage visitors in the science behind the timely issue of climate change. Inquiry can be elegant in its simplicity, but it is also incredibly difficult to master under even the best of circumstances. Staff quickly realized that creating and implementing such a program was a challenging endeavor for a whole host of reasons, some of which were unique to the Museum’s particular installation of SoS. The challenges staff faced are well documented in the evaluation reports they have shared on NOAA’s website (http://www.oesd.noaa.gov/network/sos_evals.html) as well as informalscience.org (http://informalscience.org/evaluation/show/654). Yet the specific challenges are not what matters; what matters is that staff reflected on and grappled with those challenges throughout the project in the spirit of furthering everyone’s professional learning. They discussed what worked well and addressed elements that did not work as well. They invited colleagues from a partner institution to reflect on their struggles with them—something we all might find a bit scary and uncomfortable but that, for them, proved invaluable. In the end, they emerged from the process with a clearer idea of what to do next, and they realized how far they had come.
SMV staff recognized that their program may not be unique and that other museums may have done or may be doing something similar. But each and every time staff members (from any museum) reflect on the lessons learned from a project, their experience is unique because learning always emerges, even if it is subtle and nuanced. The notion that every museum program has to be innovative, groundbreaking, or unique is an inappropriate and, frankly, unrealistic standard. In fact, when museums embrace innovation as a goal, they, too, must embrace and feel comfortable with the idea of failure, especially if they want to affect the audiences they serve. Grantmakers for Effective Organizations shares this sentiment (http://www.geofunders.org/geo-priorities) when defining practices that support non-profit success. The organization states that “[embracing] failure” is one way we will know that “grantmakers have embraced evaluation as a learning and improvement mechanism.” An ideal first step would be for all of us—institutions, evaluators, and funders—to proudly share our failures and lessons learned with others.