Systematic reviews are a valuable tool to inform healthcare decision-making.1 2 While a single randomised controlled trial (RCT) is insufficient to definitively guide healthcare decisions, a systematic review synthesising multiple RCTs can overcome this limitation. The results of rigorous systematic reviews possess wide-ranging applicability to numerous stakeholders within the evidence-based medicine ‘ecosystem’. Clinicians consult systematic reviews to inform their clinical decisions.3 Researchers rely on systematic reviews to identify knowledge gaps in existing literature.4 Health policymakers use systematic review evidence to inform practice guidelines and legislation.5 6 Journal editors often prioritise systematic reviews for their impact on readership attention and journal metrics.7 Finally, patients are empowered by systematic reviews that assess the beneficial and harmful patient-important outcomes of available management strategies.8 Evidently, systematic review authors have an important responsibility to ensure their findings provide the most accurate results possible.
The biomedical literature expands by 22 systematic reviews daily,9 with no evidence that production is waning. More systematic reviews are desirable if they identify and inform important research questions that improve patient care.10 However, production of this magnitude is problematic when systematic reviews offer ‘extensive redundancy, little value, misleading claims and/or vested interests’.11 As we outlined in part 1, bias is a systematic deviation from the truth in the results of a research study due to limitations in study design, conduct, or analysis.2 Deviations may either overestimate or underestimate a study’s true findings, depending on the type and magnitude of bias. Because the results of a systematic review are only as valid as the studies it includes, pooling biased results from different studies can compromise the credibility of systematic review findings when no assessment, or a poor assessment, of risk of bias is performed.3 12
Twitter @peanutbuttner, @marinuswinters, @EamonnDelahunt, @clare_ardern
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.