You can’t escape technology in museums. Visitors use smartphones to take pictures. Exhibits use touch screens and high-tech interactives to share stories and information. Programs use technology to help visitors engage. Everywhere you look there is a screen . . . until you encounter an evaluator armed with a clipboard and a pencil. I don’t think this is because evaluators are luddites. I think this is because, much like exhibit and program designers, we want our use of technology to make sense. Technology has to make data collection easier not only for us, but for the visitor, too.
I have been using technology for data collection since I was in grad school. We had grant money to spend, so iPads were purchased and students were encouraged to try new things. We approached the task with gusto, certain that this would make things easier; after all, if we entered visitor data as we collected it, we'd eliminate the need to enter it later! It sounded like the perfect plan, until we realized we were collecting data outside—in the elements. In Seattle. In November. We learned a few lessons that day: iPads don't do well in the cold, and neither do the cold, bare fingers we needed for the touch screens. Perhaps in this case, using iPads wasn't the best plan, given our data collection environment.
Since I joined RK&A two years ago, we have experimented with technology as well. Each time we elect to use technology, we think about how it will affect the project. In some cases, the decision is simple: if we take interview notes on a laptop or tablet while talking to visitors, we can capture more of what the visitor says and eliminate some work on the back end. This is a low-risk decision that we make frequently. In other cases, we rely much more heavily on technology, using tablets to collect survey data at museums. This is a higher-risk decision because, while non-paper data collection has many positive aspects, it also presents challenges.
Tablet data collection requires me to think about survey presentation in a different way. Because we often administer surveys verbally, the survey is designed mostly for the data collector, so it has to be easy to manipulate. Some question formats, such as the non-traditional scales that RK&A often creates, don't always translate well from paper to digital. I have to think about how the question is presented to the data collector and what information, if any, they must show the visitor (e.g., a visual representation of the scale), then find a way to balance the two. We also ask visitors to enter their own demographic information, which was formerly a single page of questions. When collecting data on a tablet, though, we need to balance how much the visitor has to scroll against the number of pages they have to click through. This can be tricky when skip logic directs visitors to the appropriate questions. Regardless of the demographic information the visitor inputs, the survey has to be easy for them to complete. Each time, I learn something I need to change in future surveys to make them easier for visitors.
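To make the skip-logic idea concrete, here is a minimal sketch of how a branching survey can be modeled. The questions, IDs, and branching rules are entirely hypothetical (not from any RK&A instrument, which would be built in survey software rather than code); the point is simply that each answer determines which question the visitor sees next, so different visitors traverse different paths through the same survey.

```python
# A minimal, hypothetical sketch of survey skip logic.
# Each question carries a "skip" rule: given the respondent's answer,
# it returns the id of the next question (or None to end the survey).

questions = {
    "q1": {"text": "Is this your first visit?",
           "skip": lambda a: "q3" if a == "yes" else "q2"},
    "q2": {"text": "How many times have you visited before?",
           "skip": lambda a: "q3"},
    "q3": {"text": "What is your zip code?",
           "skip": lambda a: None},
}

def run_survey(answers, start="q1"):
    """Walk the questions in order, following each skip rule.

    `answers` maps question id -> this visitor's response; returns the
    sequence of question ids the visitor was actually shown.
    """
    order = []
    current = start
    while current is not None:
        order.append(current)
        current = questions[current]["skip"](answers[current])
    return order

# A first-time visitor is routed past the repeat-visit question:
print(run_survey({"q1": "yes", "q3": "98101"}))
# A returning visitor sees all three questions:
print(run_survey({"q1": "no", "q2": "3", "q3": "98101"}))
```

The design tension the paragraph describes falls out of this structure: each branch can be rendered as a new page (more clicks, less scrolling) or revealed inline on one page (less clicking, more scrolling), and the right trade-off depends on who is holding the tablet.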
There are huge advances taking place in digital data collection as new software and platforms are created, and we researchers and evaluators develop best practices for their use. For me, every project that uses technology to help with data collection teaches me something new and makes me a better practitioner. What experiences have you had and what have you learned? The field is changing and I can’t wait to see what we think of next.