This is one of a very occasional set of posts about some of my own academic work that you might find useful.
A colleague and I have just undertaken what is called in the (academic) trade a Rapid Evidence Review. Or, as I have come to think of it, Quick Lit.
An RER is a form of literature review which is popular with policymakers and with organisations seeking to design and/or commission research. It aims to establish what research is available about a defined topic, as well as key results and major gaps. RERs are usually done by academics who already know a field of research well, as was the case with my colleague and me. When researchers bring existing knowledge of the field to an RER it makes for speedier work.
In this post I’m going to describe our RER process. I’ll use the term evidence throughout, even though you and I know that this is a highly contentious notion. It is however the terminology that is used when you are doing this type of literature work.
My goal in describing the RER is simply to make explicit one strategy for reviewing literature. It’s a strategy that you might want to use – and adapt – if you have to do a roughly similar task.
Another message to take from this post is that there is not one way to do literatures work. The literatures strategy you use often depends on the kind of literature review you are doing, what you hope to get from it, and who you are doing it for.
Beginning a rapid review – what’s in and what’s out
The RER begins with one or more focus questions. You may have to define these yourself – in our case, these were already established. Our RER commissioner had six questions that they wanted us to address.
Next, the boundaries of the RER have to be set. In our case, these were negotiated with the commissioner. The decision concerns which literatures will be included and excluded, and what the cut-off dates are. In our case, we were interested in UK literature only, published since 2007.
The search terms – the key words used for searching – and the sources – the databases searched – are then determined. In our case, the terms and sources were also agreed with our RER commissioner.
Sorting out categories
The difference between the usual form of systematic review and a rapid evidence review is that a systematic review often excludes research of particular types. Because we had to ascertain what kinds of evidence there were on our topic, we had to take an inclusive approach – our task included categorising research types.
We devised a numerical coding system for different types of research evidence, using a fairly standard classification of research approaches:
1. a. systematic review; b. meta-analysis
2. randomised control trials
3. a. longitudinal studies; b. panel series; c. cohort studies; d. secondary data analysis; e. other
4. a. case-control studies; b. case series; c. case reports; d. mixed methods; e. survey; f. interview-based study; g. ethnographic study; h. theoretical development; i. other
5. expert opinion
6. any other (e.g. thesis)
While this list could be – and sometimes is – read as a hierarchy of research types, we and our commissioners were clear that different research questions often require different approaches.
With this list in hand we then used the two numerical classifications – research question and type of research – as two columns of a table. Our table also had four other columns, six in total. The first was the bibliographic information to be used in referencing. The second was the country in which the research was conducted (where the data was from); the UK consists of four nations, and it was of interest to see which of them the research addressed. The third and fourth were the two classification codes. The fifth was further detail about the research method – its sample, size, scope and so on. The sixth and largest column was for the key results. Anything we wrote in this column had to be short and pithy so that we could do the required task of identifying key results, debates and gaps.
I must pause here for a caveat. Our review was designed to meet our commissioners’ needs, but the codings can of course be varied. Different types of search might have different columns. At another time and in a different review we might, for example, look at the gender and race of the researchers. A further column could be added to look at key definitional terms. Or theoretical resources. The point is that using tables and coding categories allows you to do some counting and comparing. This may be useful.
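To make the counting-and-comparing point concrete, here is a minimal sketch in Python of how a table like ours might be held as records and tallied. The field names, codes and example entries are entirely illustrative assumptions, not our actual review data.

```python
from collections import Counter

# Hypothetical review table: one record per item found in the search.
# Field names and entries are made up for illustration only.
items = [
    {"reference": "Author A (2010)", "country": "England",
     "questions": [1, 3], "research_type": "4e",   # e.g. survey
     "method_note": "postal survey, n=200", "key_results": "..."},
    {"reference": "Author B (2015)", "country": "Scotland",
     "questions": [2], "research_type": "3a",      # e.g. longitudinal study
     "method_note": "5-year cohort", "key_results": "..."},
    {"reference": "Author C (2018)", "country": "Wales",
     "questions": [1], "research_type": "4e",
     "method_note": "online survey, n=85", "key_results": "..."},
]

# The coding categories allow some quick counting and comparing:
type_counts = Counter(item["research_type"] for item in items)
country_counts = Counter(item["country"] for item in items)

print(type_counts)     # how many of each research type were found
print(country_counts)  # which UK nations the research covers
```

The same tallies could of course be produced in a spreadsheet; the point is simply that consistent coding is what makes them possible.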
Categorising and noting
It took us a few days to complete our first wave of searching. And when we had all of the relevant papers in hand, we went through them systematically, firstly recording their bibliographic information and country. Note, we didn’t search and note, search and note, search and note. We did one big search and then noted the corpus.
We didn’t sort our list of papers, reports and books alphabetically, though we could have; in our case it wouldn’t have made things much easier. I would also advise using bibliographic software for this part of the task, importing papers and reports as PDFs, with their associated bibliographic information, into a project library. Only some books are likely to need manual entry. But the software entries do need to be checked for accuracy.
We read the methods section of each paper in order to categorise the type of research used (as above, types 1-6). We then read as much of each paper as we had to, to ascertain which of the six pre-set research questions an individual item addressed. And then we sorted and summarised the key messages. Sometimes this meant we read the abstract, introduction and discussion/conclusion, often more.
Once we had all this information we were able to fill in the table we had constructed. Item by item.
We used a Word document for this task, but we could alternatively have used an Excel spreadsheet. However, as most of the items we found addressed more than one of our six key research questions, we would still have ended up doing a lot of counting by hand.
Looking for search omissions
We had to make sure that our initial search had located all of the relevant literatures, so we added a few more terms just in case. In our RER we had two more waves of searching: the first to catch any recent literatures we had missed, and the second to pick up some key international literatures of research types 1-3. We looked outside the UK because we suspected, on the basis of our first wave of searching, that the field we were investigating had very little type 1-3 research at all, not just in the UK.
Sorting the master list
We eventually had a master list of publications, every item fully categorised. We were then able to sort this master list into six sublists, each addressing one of our six specific questions. These six lists became the basis for a critical evaluation – much discussion between us – which led to a written summary of the literatures.
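The sorting step above can be sketched in a few lines of Python. Because an item can address more than one question, it may appear in several sublists, which is exactly why the counting couldn't simply be done once per item. The references and question numbers here are invented for illustration.

```python
# Hypothetical master list: each item records which of the six pre-set
# questions it addresses. Entries are illustrative, not real review data.
master_list = [
    {"reference": "Author A (2010)", "questions": [1, 3]},
    {"reference": "Author B (2015)", "questions": [2]},
    {"reference": "Author C (2018)", "questions": [1, 2, 6]},
]

sublists = {q: [] for q in range(1, 7)}  # one sublist per question
for item in master_list:
    for q in item["questions"]:          # an item can land in several sublists
        sublists[q].append(item["reference"])

for q, refs in sublists.items():
    print(f"Question {q}: {len(refs)} item(s)")
```

Empty sublists are informative too: a question with few or no items is itself one of the gaps the review is meant to surface.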
Writing the report
Our written report began with a description of our approach to, and process of, undertaking the RER. We then described the overall body of evidence, giving numbers and types of research for each of the six questions. A summary of key results for each of the six questions was provided. We concluded with a description of the gaps in the research and strategic possibilities for further inquiry.
Our RER, done.
The RER approach forces you to be quick and succinct. It is obviously not a process you want to use if you are seeking to get deep into the literatures. Even the usual systematic review takes longer than an RER. However the RER approach can be adapted – for example, for an initial scoping exercise of a field.
Maybe this is a process you could use. But a little reminder, mainly for my sake, not yours.
The Rapid Evidence Review is not a process suitable for all literature reviewing, although there is a family resemblance between all types of literatures work. Pretty well all lit reviews aim to sort, classify and summarise patterns. Questions and tables are often key to how this work gets done. However, a quick lit approach may not be what you want or need. But it could be helpful to you at particular times and for very particular tasks.
Gough, D., Oliver, S., & Thomas, J. (2013). Learning from research: Systematic reviews for informing policy decisions: A quick guide. The Alliance for Useful Evidence, 1-38.
The entire website in which this OA report is located was down as I revised this post, so I have temporarily linked to my copy: Alliance-FUE-reviews-booklet-3