Every now and then patter offers a close-up of research writing. This near-sighted exercise is intended to illustrate how ‘reading for the writing’ can be helpful.
This particular ‘reading for writing’ post looks at writing qualitative methods in a journal article. It speaks to last week’s post about the need to be specific, not woolly and imprecise. After that post, several people asked me how qualitative researchers actually avoid vagueness. Do they, too, resort to numbers? This example is by way of a partial answer to that question.
The paper I’m examining here is: Lynn McAlpine & Margot McKinnon (2013) Supervision – the most variable of variables: student perspectives, Studies in Continuing Education, 35:3, 265-280.
The abstract begins by establishing the warrant for the paper (it addresses the existing knowledge base and what contribution this study will make), the purpose of the paper (the question it will answer) and some information about the research design.
The supervision literature often conceptualizes the supervisor as the primary person in doctoral students’ progress. Yet, there is growing evidence that the supervisor is but one of many resources that students draw on. Our study takes up this idea in answering the question: What is students’ experience of their supervisory relationships over time? Sixteen social science participants in two UK universities, at different points in their doctoral journeys, completed logs of a week’s activities for a number of months before being interviewed.
The researchers finish off their abstract with the claim that:
This distinct longitudinal approach provides a more nuanced understanding of students’ perceptions of the supervisory relationship, specifically, varied reasons for seeking supervisory help, distinct needs related to where students were in their progress, and diverse ways in which they negotiated and characterized the supervisory relationship.
On the basis of this claim, readers would expect to see details of the research design in the paper. So what was actually said? (For purposes of annotation I have altered the original paragraphing slightly…)
| Participants and location
The study, from 2007 to 2009, involved 6 male and 10 female social science doctoral students, recruited through email listservs from two UK universities. Nine were international students, nine experienced co-supervision; all but one had some form of scholarship funding. Participation varied from 5 to 18 months with the average being 10 months.
Students also varied in where they were in the doctoral journey (details given in accompanying figure in which each DR was given a pseudonym):
(1) five were early in their journeys defining their projects, doing transfer, beginning fieldwork/data collection
(2) five were in the middle principally engaged in fieldwork/data collection, but also some analysis and writing
(3) six were near the end principally analyzing, writing, and submitting.
| When the study was undertaken
How many people were involved
How they were recruited
Their student status – country, income support, nature of supervision, general disciplinary background.
Stages in PhD were given in a figure in which each DR was given a pseudonym, and whole group details given in the text.
Now, none of the researchers’ decisions are ‘wrong’ – all research does some things and not others – as readers we simply have to think about what the research can and cannot do. There is sufficient detail here for readers to consider key elements of the design – what does this size group allow the researchers to see and say? What does having people at different stages of the PhD mean for what can be said and not said? We can consider the implications of the partiality of the design and, because of the details given, think about what the researchers are able to claim on the basis of their choices.
We can also assume from their description that when we get to results, we will see both some kind of numbers – how many of the group thought in a particular way or had common experiences – and also names, where individuals are the focus. And this allows us to see that being specific – using numbers where appropriate and useful – troubles a simplistic binary of quant v qual methods.
We might also still have some questions arising from this description of participants and location.
- Does the fact that we don’t know anything about the discipline and universities mean that they didn’t make any difference, or that the researchers thought the participant group was too small to say anything meaningful about these particularities?
- What kind of invitational notice was put out onto the listservs?
- Why is there a variation in time of participation – was this just the stage of the participants’ PhD or did some simply opt out?
But don’t be too critical of the writers – bear in mind that what can be written in a journal article is inevitably brief. This text is already more detailed than many you will see.
We draw on a general social science view of narrative (Elliott 2005). The underlying premise is that narrative can integrate the permanence of an individual’s perception of him/herself combined with the sense of personal change rather than stability through time.
Participants provided accounts of their experiences in three ways: through biographic questionnaires at the beginning and end of the study, weekly activity logs requested approximately once a month (though sometimes responses were less frequent), and an interview.
The initial biographic questionnaire captured previous educational and work experience, the reasons for embarking on the Ph.D. as well as the intended career.
The structured log comprised questions aimed at capturing the activities, interactions, and perceptions of a particular week. (While providing a more fine-grained perspective than interviews, the logs still only capture at best one in four weeks.) The logs asked three questions specifically about supervision: whether the student had needed help, if so, why help was needed and if they received help or not.
Near the end of the study, a semi-structured interview explored the students’ overall experiences of doctoral work with part of the interview linked to what they had reported in the logs.
The biographic questionnaire at the end asked students to describe retrospectively key feelings or experiences in their journey.
Generally, the logs provided snapshots contemporaneous to the supervisory experience whereas the interviews (and other data) provided retrospective, more extended perspectives.
These different data types were synthesized in researcher-constructed case narratives for each participant – short descriptive texts with minimal interpretation.
These narratives were developed through successive rereading of all data for each participant in order to capture a comprehensive, but reduced, account.
Each narrative (1) made connections between events, (2) represented the passage of time, and (3) showed the intentions of individuals (Coulter and Smith 2009).
The narratives were constructed by different team members with each case verified by at least one other person.
The narratives enabled us to preserve a focus on the individual while still looking for commonalities to examine in more depth (Stake 2006). Through this process, we came to see the value of a closer look at participants’ experiences of supervision which led to this analysis.
We chose four cases at random, and the research team (two of whom are the authors of this paper) read all the logs and interviews of these four cases. Through this process, a number of subquestions were refined (the first drawing on all data, the second and third largely on log data, the fourth and fifth on log and interview data):
(1) How were the individual’s supervisory interactions situated in a particular set of intentions, relationships, experiences, and time in the doctoral journey?
(2) What kinds of supervisory interactions were sought and negotiated?
(3) To what extent did positive and negative affect emerge in these interactions?
(4) To what extent did co-supervision influence the relationship and expectations?
(5) How did students characterize their relationships with their supervisors?
Then, the second author continued analyzing the data from the remaining participants with another member of the team verifying samples of the coding.
Finally, the first author reviewed the analysis in the light of her knowledge of the data and the literature.
From this analysis, new narratives were created focused principally on supervision. These provided a form of data display that enabled the interpretations emerging in this paper.
Thus, we had two ways of examining change over time: the first, change in individual experience over time and the second, change as regards where individuals were in the doctoral journey.
| The researchers specify the tradition they are working in – narrative studies – and the overall position of that family of narrative approaches.
The research tools are named.
Frequency of use is given, with a caveat about some variation in actual responses.
Details of data foci are given for each research tool.
Each tool generates data about specific aspects of the DR supervision experience.
The data generated was complementary and also allowed for some cross checking.
The analytic approach is outlined – the researchers produced synthetic case narratives.
The narrative production process is outlined (re-reading).
A consistent structure was used for the case narratives.
Trustworthiness of interpretation was established through shared analysis.
The case narratives had particular affordances for cross case analysis.
The whole research team conducted a cross case analysis of four of the participants and revisited their original data sets.
This led to subquestions that refined and amplified the initial research question.
The remaining data was analysed using these subquestions, with a ‘rater reliability’ approach to verify the coding.
New cross case narratives of the doctoral ‘journey’ were created.
These were put into conversation with the individual narratives as a form of checking.
This section provides a very clear description of the stages of the research process and the techniques the researchers used to guard against singular idiosyncratic interpretation. I can even imagine, from the details given, the meetings during which the cases were discussed and questions refined.
There is less detail about what was involved in re-reading; it seems like a grounded theory approach. It doesn’t seem to be a structured reading for narrative elements in the data – plot type and construction, for instance. I’m guessing from the subsequent research questions that the reading combined looking for themes and key critical points on a time-line.
And what’s not there? Discussion of pluses and minuses of having researcher-constructed case narratives – these are in the literatures and a knowledgeable reader can bring this prior knowledge to this text. Someone new to this tradition of research might wonder. And I remained curious about the issues the researchers encountered doing longitudinal research. This is another entire paper of course, so the authors have whetted my interest about that. I was also uncertain about the size of the research team – clearly more than the two who produced this paper. That’s probably not at all important, I was just intrigued.
I’m sure that you could see other things when you read to see how much specific information had been provided. That’s good. But my point here is not about our various readings, it’s a more obvious one.
Reading for writing – in this case, looking for what specifics are provided and what are not – can really help you to think about the decisions that you have to make in your own writing. What will you say and not say about your own research design?
Image credit: ClarkMaxwell, flickr commons