As BJSM readers know, systematic reviews identify, critique and summarise evidence relevant to a specific question. Most importantly, systematic reviews should be transparent and follow a predefined protocol to reduce bias. Since many articles in the scientific literature mislabel systematic reviews,1 journals including BJSM are improving systematic review reporting by adhering to standards such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines.2 This editorial provides practical tips for systematic review authors in sports and exercise medicine: guidance on four common problems and how to address them.
Key steps for quality systematic reviews
There are six key steps that you should follow for a systematic review:
State a clear question (specifying the Population studied, Intervention(s) or exposure(s), Comparisons (if any) and Outcomes—the PICO).
Conduct a systematic, replicable search for evidence using a prespecified search strategy.
Select studies for the review based on predefined inclusion and exclusion criteria.
Assess the internal validity or ‘risk of bias’ of studies included.
Extract data using predefined variables and coding forms.
Synthesise the characteristics and findings of the included studies (using quantitative (meta-analysis) and/or qualitative methods).
Types of systematic reviews
The most straightforward type of systematic review has a focused question and combines summary data from randomised controlled trials that measure outcomes using the same metrics (eg, does the FIFA 11+ programme reduce lower extremity injuries in football?).3 The Cochrane Handbook for Systematic Reviews of Interventions provides guidance for every step of the systematic review process4 that you can apply to systematic reviews of randomised controlled trials studying the effects of exercise, surgery or training on injury prevention, recovery or fitness. More complex systematic reviews may address multiple questions with multiple interventions and outcomes reported in different ways and synthesise observational or qualitative studies (eg, what is the efficacy of different injection therapies for plantar fasciopathy?).5
Four common pitfalls in systematic reviews
Lack of a prospective systematic review protocol
Protocols reduce bias in the systematic review process by making any changes in decisions transparent. They allow editors, peer reviewers and readers to judge whether decisions about which studies to include or which data to extract were made after the review had started. This ensures that any changes to your systematic review methods are justified and do not introduce bias into the selection of data for your review. The protocol for your systematic review should describe all of the steps through to the analysis. If the proposed methods change (eg, a meta-analysis was planned but it was not possible to combine the data), the reason for the change should be documented.
Protocols for Cochrane Reviews are published in The Cochrane Library. A number of journals publish systematic review protocols (such as BMJ Open or Systematic Reviews), and protocols can also be registered on the PROSPERO database.6
Inadequate search strategy
A comprehensive search uses a prespecified search strategy and includes an exhaustive list of possible search terms. It involves searching multiple electronic databases and the reference lists of all studies that meet the prespecified inclusion criteria. Otherwise, you will miss studies. If you only search one database (eg, MEDLINE), you might miss 70% of the randomised controlled trials relevant to your systematic review.7 Observational studies may be even harder to identify as they include different study types and may not be indexed as accurately. Cochrane recommends including non-English language studies as language restrictions limit the number of eligible studies identified. Unpublished studies should be identified by talking to researchers in the field and searching the grey literature and trial registries.8 While the comprehensiveness of your search must be balanced with the resources available, consulting with an information specialist or librarian is a worthwhile investment. Even when a comprehensive search is conducted, you should include an estimate of publication bias because failure to publish entire studies or all outcomes from a study is common.
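The editorial does not prescribe a particular method for estimating publication bias; one widely used option, assuming effect estimates and standard errors have been extracted from the included studies, is Egger's regression test for funnel-plot asymmetry. A minimal sketch, using entirely hypothetical study data for illustration:

```python
import math

# Egger's regression test for funnel-plot asymmetry, one common screen for
# publication bias. The log risk ratios and standard errors below are
# hypothetical, chosen so that smaller (less precise) studies show larger effects.
log_rrs = [math.log(r) for r in (0.55, 0.62, 0.70, 0.78, 0.85, 0.95)]
ses = [0.30, 0.25, 0.20, 0.15, 0.12, 0.08]

# Regress the standard normal deviate (estimate / SE) on precision (1 / SE).
y = [e / s for e, s in zip(log_rrs, ses)]
x = [1 / s for s in ses]

# Ordinary least squares by hand; an intercept far from zero suggests asymmetry.
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
        sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_y - slope * mean_x
print(f"Egger intercept = {intercept:.2f} (values far from 0 suggest small-study effects)")
```

In practice you would also compute a confidence interval or p value for the intercept; established meta-analysis software reports this directly.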
Lack of (or inadequate) assessment of risk of bias of the included studies
Assessing the internal validity (risk of bias) of included studies is a fundamental step. Cochrane has a widely applied tool for assessing risk of bias in randomised controlled trials.9 Developing valid, reliable and widely accepted tools for assessing bias in observational studies of different designs (such as cohort, case-control or cross-sectional studies) is an active area of research. Many tools are available, and when selecting one for your review, the tool’s reliability, validity and prior application in the field are important considerations. BJSM is currently working on guidelines and recommendations for authors assessing risk of bias in systematic reviews.
Once you have selected a tool, it should be applied by at least two reviewers independently, to all included studies. Reliability of risk of bias assessments is improved when assessors are trained and when dual assessors with a mix of methodological and content area expertise are involved.7 Individual bias domains should be reported for each study rather than a quality ‘score’, because such scores do not discriminate between high-quality and low-quality studies.10 You must also decide how you will use the risk of bias assessments. Studies that are at high risk of bias could be excluded from further analysis. If you conduct a meta-analysis, you might consider a subgroup analysis by level of risk of bias to determine if summary estimates differ for high versus low risk of bias studies.
Combining studies that should not be combined in a meta-analysis
Methods for data synthesis are becoming increasingly sophisticated.11 Although it is tempting to plug the findings of the included studies into a statistical programme that produces a nice forest plot (figure 1), extreme caution is required at this step. Dissimilar studies should not be combined in a meta-analysis. Randomised and non-randomised studies should never be combined in a meta-analysis.
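To make concrete what a forest plot summarises, the standard fixed-effect inverse-variance pooling of effect estimates can be sketched as follows. The study names, risk ratios and standard errors are hypothetical, purely for illustration:

```python
import math

# Minimal fixed-effect inverse-variance pooling of log risk ratios.
# All study data below are hypothetical, for illustration only.
studies = [
    {"name": "Trial A", "log_rr": math.log(0.70), "se": 0.15},
    {"name": "Trial B", "log_rr": math.log(0.85), "se": 0.10},
    {"name": "Trial C", "log_rr": math.log(0.60), "se": 0.20},
]

# Each study is weighted by the inverse of its variance (1 / SE^2),
# so larger, more precise studies contribute more to the pooled estimate.
weights = [1 / s["se"] ** 2 for s in studies]
pooled_log_rr = sum(w * s["log_rr"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval on the pooled risk ratio
ci_low = math.exp(pooled_log_rr - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"Pooled RR = {math.exp(pooled_log_rr):.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

This is exactly the arithmetic a forest plot displays, which is why the cautions above matter: the formula will happily pool any numbers you give it, whether or not the studies belong together.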
Decisions about whether to combine studies can be guided by the population studied, the intervention or exposure, the comparisons made and the outcomes measured. In the context of your review, is it reasonable to combine data from children and adolescents? Or data from athletes and non-athletes? If the intervention is physical activity, can studies with different intensity or duration of intervention be combined? Can studies comparing 3 days of 30 min walks per week to ‘normal activity’ be combined with studies comparing 3 days of 30 min walks per week with 3 days of 30 min swims? Can studies assessing the effects of walking on a given health outcome that use different walking protocols in terms of duration, intensity or frequency be combined? If quality of life is the primary outcome, can studies that used different measures for quality of life be combined?
Being specific in your inclusion and exclusion criteria, so that the populations, interventions or exposures, comparisons and outcomes vary only within acceptable limits (eg, walking protocols must be 30-60 min in duration and performed three times per week), can improve the potential for combining studies in a meta-analysis.
When studies are combined in a meta-analysis, you must assess heterogeneity. Statistical heterogeneity is variability in the effect estimates of studies (assessed with statistical tests such as the I² statistic). Clinical heterogeneity is variability in the population, interventions or outcomes of studies. Examining clinical heterogeneity requires careful examination of all study characteristics. Heterogeneity can be explored through prespecified subgroup analysis. For example, if a stationary cycling exercise intervention is being examined, studies that tested high-intensity cycling could be analysed separately from studies that examined low-intensity cycling. Subgroup analyses should be specified a priori, limited in number and interpreted cautiously. The ‘adequate’ number of subgroup analyses depends on the systematic review.
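The I² statistic is derived from Cochran's Q, the weighted sum of squared deviations of each study's estimate from the pooled estimate. A minimal sketch, again using hypothetical per-study estimates:

```python
import math

# Cochran's Q and the I^2 statistic for statistical heterogeneity.
# The per-study log risk ratios and standard errors are hypothetical.
log_rrs = [math.log(0.70), math.log(0.85), math.log(0.60)]
ses = [0.15, 0.10, 0.20]

# Inverse-variance weights and the fixed-effect pooled estimate.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, log_rrs)) / sum(weights)

# Q sums the weighted squared deviations of each study from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, log_rrs))
df = len(log_rrs) - 1

# I^2: the percentage of variability attributable to heterogeneity rather
# than chance; it is truncated at 0 when Q falls below its degrees of freedom.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

Conventional rough thresholds treat I² around 25%, 50% and 75% as low, moderate and high heterogeneity, but the Cochrane guidance stresses interpreting I² alongside clinical heterogeneity rather than mechanically.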
When it is inappropriate to combine data using meta-analysis, you must still synthesise the findings from the studies included in the systematic review. Graphical or tabular presentation of the findings from individual studies is helpful. Studies can be grouped by the characteristics of their interventions or outcomes. Similarities and differences in the studies and how these appear to be related to the findings can also be outlined. This type of analysis requires a deep dive into the methods of each included study. It is tempting to ‘vote count’ (five studies found an association, four did not), but this is not helpful unless the findings are put in the context of the similarities and differences among the studies.
Making sense of it all
Systematic reviews are powerful tools for synthesising the best available evidence on topics in sports and exercise medicine. Although this list of potential problems with systematic reviews is not exhaustive, adhering to all the steps in the systematic review process and preventing some of the most common problems in conducting reviews should considerably improve the quality of your systematic review. Understanding the basics of conducting a systematic review is an important first step to master before tackling more complex, next-generation systematic reviews.11
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.