Data analysis can be pretty scary. That moment when you realise that making sense of the stuff you’ve so painstakingly generated comes down to you – just you. Well, relax. It’s not just you that has to leap into the unknown. We all put ourselves on the line when we make decisions about definitions, scales, codes, themes – and when we say what the analysis eventually means. But that realisation can be alarming – maybe it’s even paralysing.
You might think that qualitative researchers suffer most from nervousness about interpretation – facing up to those mountains of transcripts and/or field-notes and/or photographs can be really daunting. But of course surveys often contain open questions too, so mixed methods and quantitative researchers also face lots of words – not to mention the task of actually converting numbers into something convincing and believable.
Doctoral researchers often get the worries in the early stages of data analysis. What if I am just being stupid? What if I am barking up the wrong tree altogether? What kind of criticisms might be levelled at the choices I have made?
So it’s not at all surprising that, in the face of masses of data, doctoral researchers opt for a technique-led approach to analysis. After all, regardless of whether we are doctoral or highly experienced researchers, we all have to demonstrate that we have approached the processes of data analysis systematically and consistently.
But sometimes the need to be technically proficient is more like clutching at a life raft. If I just do the analysis ‘the right way’ then that’s OK. Showing I can follow a set of ‘rules’ is all that matters. I’m not vulnerable because my exemplary technical prowess is all that counts.
We researchers quite rightly try so hard to get rid of anything that might seem dodgy. For example, one of the strategies that people employ to guard against idiosyncratic individual judgements is ‘inter-rater reliability’. This is where two people code the same transcript. An interpretation is reliable if both people agree – if they ‘see’ the same thing.
But isn’t that also a worry? If we are looking at two people with the same sets of understandings because they are in the same disciplines and have had the same kind of research training, then something else might be going on. In finding a common account of and for their data, they might be just acting really predictably and conventionally. They might be simply finding what they expect to find. Processes of choice and interpretation of data are precisely where taken-for-granted ways of thinking can really get in the way. You see this pattern, you opt for this direction because it just ‘seems right’ or ‘obvious’ or ‘expected’.
But what if it’s not so straightforward?
I like to occasionally stand the safety of data analysis technique and the cosy togetherness of being ‘right’ on their orthodox little heads. What if the discomfort with choice and uncertainty is actually helpful?
The grey area of researcher decision-making is where things get most interesting. It’s where you get to be most creative. And it’s where you can build a set of practices that help you step momentarily away from the usual and purely technical ways of doing data analysis.
I reckon it’s good to make time, when getting to grips with your data, to mix it up. Perhaps after you have done an initial analysis. Not right at the start. Get into the material as you usually would. Find the narratives in the interview, the codes in the transcript. Whatever. Then PLAY. Yes play.
But not any old play. Data play. Play that is designed to help you see new ways of connecting, new patterns, new groups, new associations, new commonalities, new aspects of context. In your data. Get out of the rut of the anticipated. Get rid of the grip of the immediately obvious.
And to that end, here are a few suggestions for data play which does just that.
- Random associations
Write each theme you have identified on a post-it note. Put all the post-its face side down, as you would with scrabble tiles. Now pick up three. Stick them to a large sheet of paper so you make a large triangle. Draw a line which connects them all. Now write what that line means – what connects them. Then ask yourself – what do these three things have in common? Write this in the middle of the triangle.
You can keep doing this exercise as long as it’s generative. And you can vary the number of post-its too – four, six, ten, whatever you can manage. You do have to keep replacing the post-its that you’ve stuck to the paper of course, but remember that the object is for the set that you pick up to be random. Not planned.
You can also substitute what goes on the post-its. You might put down ideas that you are currently working with.
A further variation is to work with data and possible theoretical explanations you might employ. Make two piles of different coloured post-its with one for ideas or themes, and the other for theoretical concepts. Now you simply pick up two or three themes or ideas and one theory post-it, stick them on a paper and see what lines and connections you can make.
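If you’d rather shuffle digitally than on paper, the random-draw step can be sketched in a few lines of Python. This is just an illustrative sketch of the exercise above – the theme names here are placeholders, to be replaced with themes from your own coding.

```python
import random

# Placeholder themes -- substitute the themes from your own analysis.
themes = [
    "belonging", "imposter feelings", "time pressure",
    "supervision", "writing rituals", "peer support",
]

def random_association(themes, n=3, seed=None):
    """Draw n themes at random, like picking face-down post-its."""
    rng = random.Random(seed)
    draw = rng.sample(themes, n)  # sample without replacement
    print("Drawn:", ", ".join(draw))
    print("Prompt: what line connects these? What do they have in common?")
    return draw

random_association(themes)
```

Each run prints a fresh random set of themes plus the prompt question – the thinking about what connects them is still yours to do.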
- Scatter gun
This is a variation on random associations. Cut all of your themes up into individual strips. Throw them down onto a large sheet of paper. You now have the categories for a random mind map.
Now draw the associations between the strips – what links to what and why? What larger concepts are you working with as you make these connections? Are there associations that you haven’t seen before?
- Redaction

Take a page from an interview transcript and a marker. Cross out all of the words that don’t immediately strike you as interesting. When you have finished, you will have a combination of phrases and individual words.
If this is an A4 sheet of paper, it may help at this point to enlarge it to A3.
Now pin the sheet on the wall and read aloud the words that are visible. What strikes you as you listen to yourself reading? Next, look for repetitions and patterns among the words and phrases that remain on the page.
You can do this exercise with entire transcripts. You can also compare random redacted sheets from similar interviewees to see what putting their visible words next to each other shows you.
- Side by side
If you are using images in your study, then switching from thinking about them as illustrations to treating them as actual data can be a bit tricky. One way to help make this adjustment is to select three or four images that particularly strike you – they seem to ‘say something’ about your research participants or the place you are researching. Pin the images up on a wall next to each other – or peg them on a clothes line, or lean them against a wall – and rearrange them until you get what intuitively appears to be a pleasing sequence.
Now think about what holds these images together and why they are better displayed in this particular order.
You can also think about what joins individual images together and what sits in between them. Imagine that these images are the only things that you had to understand your research participants or place – what do you look for in the image that might give you information or a feeling about them?
I’m sure you can think of other ways to do serious data play.
As I’ve already suggested, it is worth putting aside a little time to get playful with data, as it can help you to break out of predictable sets of associations and patterns. And data play isn’t a substitute for more conventional and systematic data analysis – but it is a complementary practice, and a sometimes very necessary disruption.
And it’s great to do something playful like this in teams, as these approaches generate rich conversations.
Image – Ambiguous Cylinder Illusion