
Systematic review hacks for the sports and exercise clinician: five essential methodological elements
Clare L Ardern1,2,3

1 Aspetar Orthopaedic & Sports Medicine Hospital, Doha, Qatar
2 Division of Physiotherapy, Linköping University, Linköping, Sweden
3 School of Allied Health, La Trobe University, Melbourne, Australia

Correspondence to Dr Clare L Ardern, Aspetar Orthopaedic & Sports Medicine Hospital, PO Box 29222, Doha, Qatar; c.ardern@latrobe.edu.au

Karen is a keen middle-aged tennis player who presents at your busy practice after her family physician told her she has chronic lateral elbow tendinopathy. Karen is anxious to be back playing at her best for a big regional tournament in 6 weeks' time. She has consulted with Dr Google and read about a promising treatment called ‘PRP’ (platelet-rich plasma). She asks you what you know about this treatment and whether you think it could help her get back on court.

You recall an abstract you scanned some months ago; patients who received PRP injections for chronic lateral epicondylitis (sic) had less pain at 6 months than those who received an active control treatment (needling under local anaesthetic).1 Promising. However, you know that evidence from more than one study is needed to help you and Karen make an optimal treatment decision. Searching PubMed that evening, you find a randomised, double-blind and placebo-controlled study. Injections of PRP were no better than injections of saline for reducing pain in patients with lateral elbow tendinopathy.2 So what now? How do you resolve this conflict? Are these the only two relevant articles or is there additional evidence either for or against PRP?

PubMed identifies 87 articles reporting on PRP and lateral elbow tendinopathy. Nightmare. But hooray! There is a recent systematic review!3 Owing to their structured methodological approach to collating all available evidence that fits a specific, predefined research question, systematic reviews are an attractive and practical way for busy clinicians to keep abreast of new developments. However, not all systematic reviews are equal; a systematic review is only as good as the quality of the studies included in it—if those included studies are open to a large amount of bias, the systematic review might be at risk too.4

This editorial aims to highlight five important methodological elements of a systematic review and to illustrate the detailed process behind one, so that the clinician knows what to look for when reading and using systematic reviews. Being able to judge the quality of a systematic review is an important part of making a quality clinical decision.

Five essential methodological elements of a systematic review

  1. Clearly stated aim and clinical research question. Both of these will help the clinician decide whether the findings of a systematic review can be applied to his or her patient. Like any study, without a clear aim and research question it is impossible to know what the results of a systematic review mean, or how they might be used. A clinical research question should address the Patient (or population); Intervention (or treatment; may also be the ‘indicator’ in questions regarding diagnostic tests); Comparison group (if appropriate; may also be the ‘control’) and clinical Outcome/s. It may be narrow or broad in focus. For example: ‘Are platelet-rich plasma injections superior to no injection for return to play in active middle-aged people with lateral elbow tendinopathy?’ For more on PICO, see the web appendix.

  2. Unambiguous eligibility criteria that address the research question. Sound selection criteria ensure that articles are selected without fear or favour, and are essential for the clinician to evaluate whether the selected articles apply to the clinical setting. The criteria must be clearly stated and should address the elements of the research question. Other aspects, including length of follow-up, study design (remember that systematic reviews are not limited to randomised trials), language or publication status (eg, theses, ‘grey literature’), may also be important. There should be a clear rationale for each criterion. Systematic reviews with inadequate selection criteria may fail to include all articles relevant to the clinical question, and this increases the potential for misleading findings.

  3. Thorough literature search to avoid missing key articles. The main sources of literature are electronic databases; at least two major medical literature databases (eg, PubMed, EMBASE, MEDLINE), plus at least one topic-specific database if appropriate (eg, SPORTDiscus for sports medicine, CINAHL for allied health, PsycINFO for sports psychology), should be searched because no single database covers every relevant article. Additional methods of identifying articles that may have been missed in the electronic database search include manually searching the reference lists of included articles, manually searching the ePublication lists of key journals and clinical trial registries, and forward citation tracking using a database such as Web of Science. Search terms should include the relevant PICO elements so that the clinician can determine whether the findings apply to the patient at hand. The search should be appropriate to the systematic review—a good systematic review and literature search may not address every element of PICO. For example: in professional athletes, what is the return to sport rate following ACL reconstruction? This clinical question addresses the P (professional athletes), I (ACL reconstruction) and O (return to sport rate) elements of PICO (note that the C element is not included). Have regional differences in spelling and phrasing, and multiple clinical terms (eg, tennis elbow, lateral epicondylitis, lateral epicondylar tendinopathy) been considered? Has the search used the appropriate database language, including Medical Subject Headings (MeSH) and text words/keywords, and combined terms appropriately with the Boolean operators ‘OR’ and ‘AND’? Ask the questions: is there a chance that something could have been missed, and could I replicate this search? (An illustrative search sketch follows this list.)

  4. Risk of bias within and between individual articles. Risk of bias is the chance that an article is affected by factors that make it uncertain whether its results are true. For example, in randomised trials, ideally the participants and the assessors measuring the clinical outcomes do not know who is receiving the treatment and who is receiving the control; if the participants and/or assessors find out, it can artificially inflate the apparent treatment effect.5 Three frequently used checklists for assessing bias in rehabilitation systematic reviews are (1) the PEDro scale (for therapy/treatment reviews), (2) Downs and Black's checklist for assessing methodological quality, and (3) the QUADAS-2 tool (for diagnostic accuracy reviews). The cumulative evidence of a systematic review may also be influenced by at least two factors. Publication bias refers to the fact that studies with positive findings are more likely to be published than studies that do not find an effect (a sketch of one way to probe this follows this list). Selective reporting occurs when only some of the clinical outcomes that were measured in a study are reported. If the risk of bias is not adequately assessed or reported in a systematic review, it is difficult to know whether it is reasonable to combine the results of the included studies, or how trustworthy the results are. A search may identify a large number of articles that could be included, but only a very small proportion of these articles may be of high quality. For example, a systematic review investigating groin injury management included 72 articles, yet only 4 were of high quality.6 In a quality systematic review, the authors should report how they assessed risk of bias and how the results were used in the systematic review (such as whether this influenced the decision to combine data or not).

  5. Clearly described and predetermined plan of how the results of each article in the systematic review will be combined. In some systematic reviews, the results are presented descriptively (sometimes referred to as a qualitative or narrative synthesis; not to be confused with a narrative review), where the results of each paper are grouped, summarised and often presented in themes. In others, a meta-analysis is performed (sometimes referred to as a quantitative synthesis), where the results of each article are pooled/combined using statistical models to produce one overall estimate of a clinical outcome (a minimal pooling sketch follows this list). Any assumptions, additional calculations or decisions made should be clearly described.
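
To make the search element concrete, the sketch below shows one way a Boolean PubMed search could be run and documented programmatically. It is a minimal sketch only: it assumes the Biopython package is installed, and the search string, field tags and contact email are illustrative assumptions rather than the strategy used by any review cited here.

    # Minimal sketch of a reproducible PubMed search (assumes Biopython is installed).
    # The search string is illustrative only, not a published review strategy.
    from Bio import Entrez

    Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

    # Synonyms combined with OR; the P and I concepts combined with AND.
    search_term = (
        '("tennis elbow"[MeSH Terms] OR "lateral epicondylitis"[Title/Abstract] '
        'OR "lateral elbow tendinopathy"[Title/Abstract]) '
        'AND ("platelet-rich plasma"[MeSH Terms] OR "PRP"[Title/Abstract])'
    )

    handle = Entrez.esearch(db="pubmed", term=search_term, retmax=200)
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"], "records found")
    print(record["IdList"])  # PubMed IDs to screen against the eligibility criteria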
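
Publication bias is often probed with a funnel plot or with Egger's regression test, which regresses each study's standardised effect on its precision and inspects the intercept. The editorial does not prescribe a particular test, so the snippet below is only a minimal sketch of that common approach; the numbers are invented, the statsmodels package is assumed, and with only a handful of studies the test has little power.

    # Minimal sketch of Egger's regression test for funnel plot asymmetry.
    # Effect sizes and standard errors are invented for illustration.
    import numpy as np
    import statsmodels.api as sm

    effects = np.array([-1.2, -0.4, 0.1, -0.8, -0.2])  # hypothetical effect estimates
    se = np.array([0.5, 0.3, 0.6, 0.4, 0.25])          # hypothetical standard errors

    y = effects / se               # standard normal deviate for each study
    X = sm.add_constant(1.0 / se)  # precision, plus an intercept column
    fit = sm.OLS(y, X).fit()

    # An intercept far from zero suggests small-study effects such as publication bias.
    print(f"Egger intercept: {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")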
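
Finally, the simplest statistical model for pooling results is fixed-effect, inverse-variance weighting, sketched below with invented numbers; a real meta-analysis would also examine heterogeneity and usually consider a random-effects model.

    # Minimal sketch of a fixed-effect, inverse-variance meta-analysis.
    # Mean differences and standard errors are invented for illustration.
    import numpy as np

    effects = np.array([-1.2, -0.4, 0.1])  # hypothetical mean differences in pain score
    se = np.array([0.5, 0.3, 0.6])         # hypothetical standard errors

    weights = 1.0 / se**2                  # weight each trial by its precision
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

    print(f"Pooled mean difference: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")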

How does the busy clinician use this information to make a quality clinical decision?

Had your grandmother been treating Karen, she (your grandmother) might have asked her colleague, Dr Cyriax, for his opinion regarding the best treatment. Your mother might have discussed new treatments with colleagues at the trade display of the annual national orthopaedic conference. However, you have the benefit of over 20 years of Dr Sackett's work in evidence-based practice (figure 1).7 To answer Karen's question, you might draw on the findings of the most recent systematic review evaluating PRP for chronic lateral elbow tendinopathy—there was strong evidence that PRP does not work.3 You also found that a multimodal approach (eg, eccentric exercise, mobilisation, massage, bracing) is advocated for treatment of lateral elbow tendinopathy.8 So, in the interests of facilitating optimal shared decision-making, you search for systematic review evidence of the effectiveness of eccentric exercises. Huzzah! Recent evidence suggests eccentric exercises work.8 At your next consultation with Karen you summarise the evidence for optimal treatment of chronic lateral elbow tendinopathy, and together make the decision to avoid PRP and start with an eccentric exercise programme.

Figure 1

Sackett et al's model of evidence-based practice.7

There are two key documents that may help you assess the quality of a systematic review: (1) the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, a 27-item checklist of the important elements that should be reported in a systematic review; and (2) A MeaSurement Tool to Assess systematic Reviews (AMSTAR), an instrument specifically designed for assessing the methodological quality of systematic reviews.

Summary

Systematic reviews are the highest level of evidence, and core foundations of evidence-based practice. However, a systematic review is only as good as the quality of the studies that are included. When using systematic reviews to make a quality clinical decision, check whether key methodological elements have been addressed. If they have, you might decide that the findings apply to your patient—and this enhances your confidence in treating your patient.

References


Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Twitter Follow Clare Ardern at @clare_ardern

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
