Unravelling confusion in sports medicine and sports science practice: a systematic approach to using the best of research and practice-based evidence to make a quality decision
  1. Clare L Ardern 1,2
  2. Gregory Dupont 3,4
  3. Franco M Impellizzeri 5
  4. Gary O’Driscoll 6
  5. Guus Reurink 7
  6. Colin Lewin 6
  7. Alan McCall 3,6

Affiliations:
  1. Division of Physiotherapy, Linköping University, Linköping, Sweden
  2. School of Allied Health, La Trobe University, Melbourne, Victoria, Australia
  3. Department of Research and Development, Edinburgh Napier University, Edinburgh, UK
  4. University of Lille, Lille, France
  5. Schulthess Clinic, Zurich, Switzerland
  6. Arsenal Football Club, London, UK
  7. The Sports Physician Group, OLVG, Amsterdam, The Netherlands

Correspondence to Dr Clare L Ardern, Division of Physiotherapy, Linköping University, Linköping, Sweden; c.ardern@latrobe.edu.au

The quantity of clinical information available (scientific articles, videos, podcasts and so on), and the ways to access it, are increasing exponentially, far more rapidly than one can differentiate valuable information (signal) from non-useful or even misleading information (noise).1 Your job is to select information that is both accurate and useful, and to convert it into knowledge to make a quality decision.

How do you make a quality decision amid the noise?

Making a quality decision in practice should involve three key steps: (1) systematically searching and assessing the quality of the published literature, (2) combining quality research evidence with quality clinical evidence, and (3) considering the feasibility of use in the practical setting. When reviewing the research evidence, you might choose to consult systematic reviews (SRs), which are expected to provide a high-quality and comprehensive summary of the body of research regarding a particular intervention, technique, procedure or technology. However, while you might expect an SR to be accurate, remember that the glittering thing is not always gold.2 Some poorly conducted SRs use complex data pooling to synthesise articles with a high risk of bias, producing a publication with a tantalising title but misleading and confusing results, discussion and conclusions.2 The number of SRs being produced is growing rapidly, yet many are redundant, misleading or unnecessary.3 This raises two questions: how do you make sense of conflicting SR evidence, and how do you know which review to trust?

The real-world example

You are the team clinician and your star player sustains a time-loss injury (tendinopathy, hamstring strain, shoulder dislocation, ACL rupture); the injury specifics are not important here. What is important is what you do next. How do you gather and synthesise quality information to help the player make an informed choice about treatment? You are not short of information to inform a decision, but how do you make sense of it? What should …

Footnotes

  • Contributors GD, AMC and CLA developed the concept for the manuscript. FMI conducted the literature search. CLA and AMC selected the articles, completed the AMSTAR checklist, extracted the data and wrote the first draft. All authors contributed to the graded recommendation and revised the manuscript for important intellectual content.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
