Systematic reviews: polishing the turd?
Jinfo Blog

17th June 2010

By Joanna Ptolomey


Systematic reviews are usually considered to be the crème de la crème of research findings. They are summaries of large bodies of evidence, and the review process is generally scientific in strategy, in order to limit bias: literature is searched for, assembled and sorted, critically appraised and synthesised to address a specific question. Systematic reviews are generally used to inform decision making, plan research and establish policy. However, they are hardly useful if they are flawed.

Systematic reviews are considered to be the backbone of evidence-based medicine. You may be familiar with the Cochrane Collaboration (http://www.cochrane.org/), the 'gold standard' of clinical systematic reviews. However, they are also increasingly common in the social sciences. The Evidence for Policy and Practice Information Centre (EPPI, http://eppi.ioe.ac.uk/cms/) is a great resource for developing methods and systematic reviews in the social sciences.

With such importance placed on the role of the systematic review, it is imperative that the methodology is adhered to strictly. However, there seems to be a problem with one part of the methodology: the literature searching. A recent blog article from the Research Information Network (RIN, http://digbig.com/5bbstr) suggests that in the social sciences the systematic review may not be so systematic. Whilst the methodologies used for analysing papers may be systematic and of high quality, a less than rigorous search for evidence is producing reviews of inadequate quality. The RIN suspects that researchers' 'lack of knowledge' or 'laziness' in tracking down appropriate sources and databases raises the question: just how systematic are these reviews if they have not searched for all the evidence? Being an analyst myself, I always face the question 'have I considered every possible source?' when gathering evidence.
This means that you cannot limit your search to just one geographic region, such as the UK, US or Australia, and its heavily biased resources or databases. As we know, databases are highly selective in their coverage. For example, in a biomedical search I would always run both a Medline and an Embase search, even though they have a 50% overlap in content: Embase indexes many European journals that Medline does not cover. In a systematic review you can't be lazy or make assumptions. Medline is a fantastic database, but it is far from perfect.

Even in a very clinical field where randomised controlled trials are expected and required, there is still plenty of room for high quality qualitative and grey literature. Grey literature usually takes the form of reports published by many varied sources: think tanks, government departments, professional bodies, academic institutions and voluntary sector agencies. Such reports are hard to find, define and quantify, but immensely valuable.

The RIN article takes a swipe at low-paid post-graduate students who have to 'pay their dues' in the research team. With such thoughtless consideration given to an important job, research teams would be wise to remember that you 'can't polish a turd'. The methodologies of critically appraising and synthesising data can sometimes take over from the process of finding all the evidence in the first place. With so many great resources now seemingly at our fingertips, we forget to question our skills as investigators. Good quality information is just as hard to find these days as it ever was; there is just more crap to sort through.
