Unravelling confusion in sports medicine and sports science practice: a systematic approach to using the best of research and practice-based evidence to make a quality decision
  1. Clare L Ardern (1,2)
  2. Gregory Dupont (3,4)
  3. Franco M Impellizzeri (5)
  4. Gary O’Driscoll (6)
  5. Guus Reurink (7)
  6. Colin Lewin (6)
  7. Alan McCall (3,6)

Affiliations:
  1. Division of Physiotherapy, Linköping University, Linköping, Sweden
  2. School of Allied Health, La Trobe University, Melbourne, Victoria, Australia
  3. Department of Research and Development, Edinburgh Napier University, Edinburgh, UK
  4. University of Lille, Lille, France
  5. Schulthess Clinic, Zurich, Switzerland
  6. Arsenal Football Club, London, UK
  7. The Sports Physician Group, OLVG, Amsterdam, The Netherlands

Correspondence to: Dr Clare L Ardern, Division of Physiotherapy, Linköping University, Linköping, Sweden; c.ardern@latrobe.edu.au

The quantity of clinical information available (scientific articles, videos, podcasts and so on), and the ways to access it, are increasing exponentially, far more rapidly than one can differentiate valuable information (signal) from useless or even misleading information (noise).1 Your job is to select information that is both accurate and useful, and to convert it into the knowledge needed to make a quality decision.

How do you make a quality decision amid the noise?

Making a quality decision in practice involves three key steps: (1) systematically searching for and assessing the quality of the published literature, (2) combining quality research evidence with quality clinical evidence, and (3) considering the feasibility of applying that evidence in the practical setting. When reviewing the research evidence, you might choose to consult systematic reviews (SRs), which are expected to provide a high-quality and comprehensive summary of the body of research regarding a particular intervention, technique, procedure or technology. However, while you might expect an SR to be accurate, remember that all that glitters is not gold.2 Some poorly conducted SRs use complex data pooling to synthesise articles with a high risk of bias, producing a publication with a tantalising title but misleading and confusing results and conclusions.2 The number of SRs being produced is growing rapidly, yet many are redundant, misleading or unnecessary.3 This raises the questions: How do you make sense of conflicting SR evidence? How do you know which review to trust?

The real-world example

You are the team clinician and your star player sustains a time-loss injury (tendinopathy, hamstring strain, shoulder dislocation, ACL rupture); the specifics of the injury are not important here. What is important is what you do next. How do you gather and synthesise quality information to help the player make an informed choice about treatment? You are not short of information to inform a decision, but how do you make sense of it? What should …
