This year, I was lucky to receive a full scholarship to attend the 42nd annual Museum Computer Network (MCN) conference in Dallas, TX. For those who don’t know, MCN’s a fantastic organization that focuses on digital engagement in the cultural sector. Here’s a video of some of the highlights from the conference:
I’d wanted to attend MCN for a long time after hearing many friends and colleagues rave about the amazing energy and talents of MCN-ers. Before I left, I set two (very broad) goals for myself for MCN2014:
- Deepen my own understanding of how digital is transforming museums
- Think about new ways to apply this understanding to my work as an evaluator
Luckily, this year’s conference theme—“Think Big, Start Small”—aligned perfectly with these goals. I figured I would “start small” by going to the conference, all the while remembering to “think big” about the relationship between evaluation and digital transformation in the cultural sector. And with that in mind, I dove headfirst into the MCN2014 madness.
I quickly realized that I was one of the only full-time evaluators there. Despite the high energy of Wednesday’s activities (including a workshop on dabbling with microcontrollers and a series of inspiring Ignite talks), I couldn’t shake the feeling that I was going to be out of my element among all of these “tech” people as I headed to the first official sessions on Thursday. Would anyone understand why an evaluator was at a conference that’s all about digital? Worries aside, I was curious to learn how other attendees were thinking about and creating digital experiences, and how, if at all, they were working to evaluate their impacts (even if we might use different vocabulary to talk about the evaluation process).
My worries were quickly alleviated. While there may not have been many full-time evaluators at MCN2014, I was blown away by the amount of evaluative thinking I observed in nearly every session I attended (and, frankly, in all of the side conversations I had throughout the conference). Not only are those who work at the intersection of the cultural and digital sectors a highly energetic and creative group of people, but they are also working hard to determine realistic goals for their projects and are thinking seriously about how to measure them. I was both inspired and amazed by the extent to which evaluation was part of the other attendees’ thinking and planning processes. This evaluative thinking showed throughout the conference tweets (#MCN2014 was actually trending on Twitter during the conference), so I figured I’d use a few of my favorites to talk about some of the many ideas I’ve been thinking about ever since I got back from Dallas:
These two perfectly sum up a few ideas that we are constantly thinking about at RK&A. To echo Simon Tanner, there’s no point in gathering data just for the sake of having information. It’s essential to think about why you want to gather data and outline how you plan to use the data you collect. Without articulating a plan for using the data in the long run, it becomes difficult to ensure that you’re gathering the types of data that will be most useful to you. And having a plan for how you will use the data will ensure that when it’s time for analysis, you can align your analysis with your long-term goals. At RK&A, we want to make sure our clients clearly understand that data are there to be used so that when it comes time to make changes based on the data, they are already prepared to do so.

However, I think that data are primarily used to test assumptions rather than confirm them. The word “confirm” is misleading because it presupposes positive assumptions, i.e., that a project is working well. If that’s the case, then it’s understandable that people want to see those positive assumptions confirmed (which in turn would mean having to make few changes). Learning to accept what the data tell you, even when the results are negative, is no simple task. It’s very easy to become so attached to a project that you ignore the problems and only see what’s working well. But remaining “open to surprise” and letting the data shine new light on a project is the best way to develop a true understanding of what’s happening so you can adapt and make changes to help achieve your goals.
I have mixed feelings about drawing distinctions between testing for user experience and testing for content. In my opinion, the two are separated by a fine line—at what point in any museum interactive, mobile app, game, or other digital experience does the user experience become entirely separate from the content? All content matters in terms of the user experience because the content itself, no matter the particular subject, dictates the experience that visitors/users expect to have. In other words, I think that visitors’/users’ prior expectations of a (digital) experience, and their opinions of the experience after use, inherently depend on the subject matter presented to them. Visitors’/users’ preconceptions about the particular content at hand are very much a part of their experience. While there are always smaller usability issues that can be addressed without giving much regard to content (the size of a button, for example), I ultimately think that the entire user experience can never be truly separated from the content that supports it. If you change the content, you can’t help but change the experience.
Those are just a few of the ideas discussed at MCN2014 that I am still thinking about weeks later. The conference raised so many interesting issues and questions that I couldn’t possibly go into all of them in one post. Suffice it to say that I left MCN2014 feeling silly for ever being nervous about whether others would perceive the overlaps between the worlds of evaluation and digital. MCN turned out to be a fantastic experience that greatly expanded my own thinking on these issues, and I’m excited to put these new ideas to use in my work and to (hopefully) explore them further at MCN2015 in Minneapolis!
Didn’t make it to MCN2014 but want to view the sessions? Check out MCN’s YouTube channel. And don’t forget to check out the amazing (and short—9 minutes!) Ignite talks. You can also find all of the conference tweets using the hashtag #MCN2014.