Intended for healthcare professionals

Education And Debate

Systematic Reviews: Rationale for systematic reviews

BMJ 1994; 309 doi: https://doi.org/10.1136/bmj.309.6954.597 (Published 03 September 1994) Cite this as: BMJ 1994;309:597
C D Mulrow
Divisions of General Medicine and Geriatrics, University of Texas Health Science Center, San Antonio, Texas 78284, USA.

Systematic literature reviews, including meta-analyses, are invaluable scientific activities. The rationale for such reviews is well established. Health care providers, researchers, and policy makers are inundated with unmanageable amounts of information; they need systematic reviews to efficiently integrate existing information and provide data for rational decision making. Systematic reviews establish whether scientific findings are consistent and can be generalised across populations, settings, and treatment variations, or whether findings vary significantly by particular subsets. Meta-analyses in particular can increase the power and precision of estimates of treatment effects and exposure risks. Finally, explicit methods used in systematic reviews limit bias and, hopefully, will improve the reliability and accuracy of conclusions.

    Systematic literature review is a fundamental scientific activity. Its rationale is grounded firmly in several premises. Firstly, large quantities of information must be reduced into palatable pieces for digestion. Over two million articles are published annually in the biomedical literature in over 20 000 journals1 - literally a small mountain of information. For example, about 4400 pages were devoted to approximately 1100 articles in the BMJ and New England Journal of Medicine, combined, in 1992. In a stack, two million such articles would rise 500 m. Clearly, systematic literature review is needed to refine these unmanageable amounts of information. Through critical exploration, evaluation, and synthesis the systematic review separates the insignificant, unsound, or redundant deadwood in the medical literature from the salient and critical studies that are worthy of reflection.2

    Secondly, various decision makers need to integrate the critical pieces of available biomedical information. Systematic reviews are used by more specialised integrators, such as economic and decision analysts, to estimate the variables and outcomes that are included in their evaluations. Both systematic and more specialised integrations are used by clinicians to keep abreast of the primary literature in a given field as well as to remain literate in broader aspects of medicine.3,4 Researchers use the review to identify, justify, and refine hypotheses; recognise and avoid pitfalls of previous work; estimate sample sizes; and delineate important ancillary or adverse effects and covariates that warrant consideration in future studies. Finally, health policy makers use systematic reviews to formulate guidelines and legislation concerning the use of certain diagnostic tests and treatment strategies.

    An efficient scientific technique

Thirdly, the systematic review is an efficient scientific technique. Although sometimes arduous and time consuming, a review is usually quicker and less costly than embarking on a new study. Just as important, a review can prevent meandering down an already explored path. Continuously updated literature review, as exemplified by the Oxford Database of Perinatal Trials, can shorten the time between medical research discoveries and clinical implementation of effective diagnostic or treatment strategies.5 A landmark example of cumulative meta-analysis and its benefits is shown in figure 1, which gives odds ratios and 95% confidence intervals for 33 trials that compared intravenous streptokinase with a placebo or no therapy in patients who had been hospitalised for acute myocardial infarction. The left side of the figure shows that the effect of treatment with streptokinase on mortality was favourable in 25 of the 33 trials, but in only six was statistical significance achieved. The overall pooled estimate of treatment effect given at the bottom significantly favoured treatment. The right side of the figure shows the same data presented as if a new or cumulative meta-analysis were performed each time the results of a new trial were reported. The years during which the treatment effect became statistically significant were 1971 for a two sided P value of <0.05, 1973 for a P value of <0.01, and 1977 for a P value of <0.001. This cumulative type of review indicated that intravenous streptokinase could have been shown to be life saving almost 20 years ago, long before its submission to and approval by the United States Food and Drug Administration and its general adoption in practice.

FIG 1

    Conventional and cumulative meta-analysis of 33 trials of intravenous streptokinase for acute myocardial infarction. Odds ratios and 95% confidence intervals for effect of treatment on mortality are shown on a logarithmic scale
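The arithmetic behind such a cumulative analysis is simple to sketch. The following Python fragment is only an illustration with invented 2x2 tables, not the streptokinase trial data: it pools log odds ratios with fixed effect, inverse variance weights and reports the pooled odds ratio, 95% confidence interval, and two sided P value after each successive trial.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical trials (NOT the streptokinase data): each tuple is
# (deaths, survivors) in the treated arm followed by (deaths, survivors)
# in the control arm.
trials = [
    (20, 80, 28, 72),
    (15, 85, 22, 78),
    (30, 170, 41, 159),
    (12, 88, 18, 82),
    (25, 225, 40, 210),
]

sum_w = 0.0   # running sum of inverse-variance weights
sum_wy = 0.0  # running weighted sum of log odds ratios
for i, (a, b, c, d) in enumerate(trials, start=1):
    log_or = math.log((a * d) / (b * c))   # log odds ratio for this trial
    var = 1/a + 1/b + 1/c + 1/d            # its approximate variance
    sum_w += 1.0 / var
    sum_wy += log_or / var
    pooled = sum_wy / sum_w                # fixed-effect pooled log odds ratio
    se = math.sqrt(1.0 / sum_w)            # standard error of the pooled estimate
    z = pooled / se
    p = 2.0 * (1.0 - phi(abs(z)))          # two-sided P value
    lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
    print(f"After trial {i}: OR = {math.exp(pooled):.2f} "
          f"(95% CI {lo:.2f} to {hi:.2f}), P = {p:.4f}")
```

Run in this cumulative fashion, the confidence interval narrows as each trial is added, and the first point at which it excludes an odds ratio of 1 corresponds to the moment at which significance was reached in the right hand column of figure 1.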

    Generalisability, consistency - and inconsistency

    Fourthly, the generalisability of scientific findings can be established in systematic reviews. The diversity of multiple reviewed studies provides an interpretive context not available in any one study.7 This is because studies addressing similar questions often use different eligibility criteria for participants, different definitions of disease, different methods of measuring or defining exposure, different variations of a treatment, and different study designs.8

Closely related to generalisability, a fifth reason for systematic reviews is to assess the consistency of relationships. Assessments of whether effects are in the same directions and of the same general magnitudes, given the variance in study protocols, can be made. More specifically, systematic reviews can determine consistency among studies of the same intervention or even among studies of different interventions (for example, varying doses or intensities or classes of therapeutic agents).9 Consistency of treatment effects across different diseases with a common underlying pathophysiology and consistency of risk factors across study populations can be ascertained.

Conversely, a sixth reason for systematic reviews is to explain inconsistencies and conflicts in the data. Whether a treatment strategy is effective in one setting and not another or among certain subjects and not others can be assessed. Furthermore, whether findings from a single study stand alone for any reason such as uniqueness of study population, study quality, or outcome measure can be explored.

    Power and precision

Seventhly, an often cited advantage of quantitative systematic reviews in particular is increased power. Quantitative reviews, or meta-analyses, have been likened to “a tower of statistical power that allows researchers to rise above the body of evidence, survey the landscape, and map out future directions.”10 An example of meta-analysis improving statistical power is shown in the Cochrane Collaboration's logo (fig 2), which depicts effect sizes of seven trials that evaluated the effects of a short course of corticosteroids given to women expected to give birth prematurely. Only two trials had clear cut, statistically significant effects, but when data from all of the studies were pooled the “sample size” and thus power increased, yielding a definitive significant combined effect size that indicated strongly that corticosteroids reduce the risk of babies dying from complications of immaturity. The advantage of increased power is particularly relevant for conditions with relatively low event rates or when small effects are being assessed.

    FIG 2

    The Cochrane Collaboration logo shows how pooling data reveals the significance of treatment effects
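The gain in power from pooling can be made concrete with a rough calculation. The event rate, relative risk reduction, and trial sizes below are assumptions chosen for illustration, not the corticosteroid data, and the pooled trials are treated as if they formed one large trial, which is approximately what a fixed effect meta-analysis of similar studies achieves.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p_control, p_treated, n_per_arm):
    """Approximate power of a two-arm trial to detect the difference between
    two event rates (normal approximation, two sided test at the 5% level)."""
    z_crit = 1.96
    delta = p_control - p_treated
    se = math.sqrt(p_control * (1 - p_control) / n_per_arm
                   + p_treated * (1 - p_treated) / n_per_arm)
    return phi(delta / se - z_crit)

p_control = 0.20          # assumed control event rate
p_treated = 0.20 * 0.70   # assumed 30% relative risk reduction

print(f"Single trial, 150 per arm:        power = "
      f"{power_two_proportions(p_control, p_treated, 150):.2f}")
print(f"Seven such trials pooled (1050):  power = "
      f"{power_two_proportions(p_control, p_treated, 1050):.2f}")
```

With these assumed figures a single trial has power of only about 0.3, whereas the pooled sample has power above 0.9, mirroring the pattern in figure 2: most individual trials miss conventional significance while the combined estimate does not.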

Eighthly, quantitative systematic reviews allow increased precision in estimates of risk or effect size. On the right side of figure 1 the cumulative meta-analysis shows that increasing sample size from temporally consecutive studies resulted in continued narrowing of confidence intervals even though efficacy had been established in the early 1970s.6 Particularly noteworthy, two very large trials - the 1986 study of the Gruppo Italiano per lo Studio della Streptochinasi nell'Infarto Miocardico (GISSI) involving 11 712 subjects and the 1988 second international study of infarct survival (ISIS-2) involving 17 187 subjects - did not change the already established evidence of efficacy, though they increased precision by narrowing the confidence intervals slightly.

    Accurate assessment

A final rationale for systematic reviews is accuracy, or at least an improved reflection of reality. Traditional reviews have been criticised as haphazard and biased, subject to the idiosyncratic impressions of the individual reviewer.11 Systematic reviews and meta-analyses apply explicit scientific principles aimed at reducing random error and systematic bias.12 But whether such reviews will lead to greater reliability, and by inference greater accuracy, is not yet established clearly.8

At the very least, the use of explicit methods allows assessment of what was done and thus increases the ability to replicate results and to understand why the results and conclusions of some reviews differ. In addition, reviewers using traditional methods are less likely to detect small but significant effects than are reviewers using formal systematic and statistical techniques.13 Finally, traditional review recommendations lag behind and sometimes vary significantly from continuously updated or cumulative meta-analyses.14 Figure 3 shows that pooled data from 15 randomised trials published before 1990 found no mortality benefit associated with prophylactic lidocaine for acute myocardial infarction. Despite this evidence, most pertinent traditional reviews continued to recommend prophylactic lidocaine. Antman et al have also shown that many effective treatments for reducing mortality due to acute myocardial infarction, such as intravenous magnesium, are not being recommended as often as they might be.6,14

FIG 3

Cumulative meta-analysis by year of publication of randomised controlled trials of prophylactic lidocaine for acute myocardial infarction, and recommendations of clinical expert reviewers (adapted from Antman et al14)

    Summary

There are myriad reasons to herald systematic literature reviews, including meta-analyses. The hundreds of hours spent conducting a scientific study ultimately contribute only a piece of an enormous puzzle. The value of any single study is derived from how it fits with and expands previous work, as well as from the study's intrinsic properties.15 Through systematic review the puzzle's intricacies may be disentangled.

The vast amount of available information underscores the value of systematic reviews. As T S Eliot asked in his poem “The Rock,” “Where is the knowledge we have lost in information?” Moreover, decision makers of various types are inundated with unmanageable amounts of information. They have great need for systematic reviews that separate the known from the unknown and that save them from the position of knowing less than has been proved.16

    Advantages of the systematic review are many. Whether scientific findings are consistent and can be generalised across populations, settings, and treatment variations or whether findings vary significantly by particular subsets can be gleaned. Unique advantages of quantitative systematic reviews or meta-analyses are increased power and precision in estimating effects and risks. Hopefully, both qualitative and quantitative systematic reviews, with their explicit methods, will limit bias and improve the reliability and accuracy of recommendations.

    I thank Dr Rosalva M Solis for her assistance in the preparation of this article.

    References
