

Unravelling confusion in sports medicine and sports science practice: a systematic approach to using the best of research and practice-based evidence to make a quality decision
  1. Clare L Ardern1,2,
  2. Gregory Dupont3,4,
  3. Franco M Impellizzeri5,
  4. Gary O’Driscoll6,
  5. Guus Reurink7,
  6. Colin Lewin6,
  7. Alan McCall3,6
  1. 1 Division of Physiotherapy, Linköping University, Linköping, Sweden
  2. 2 School of Allied Health, La Trobe University, Melbourne, Victoria, Australia
  3. 3 Department of Research and Development, Edinburgh Napier University, Edinburgh, UK
  4. 4 University of Lille, Lille, France
  5. 5 Schulthess Clinic, Zurich, Switzerland
  6. 6 Arsenal Football Club, London, UK
  7. 7 The Sports Physician Group, OLVG, Amsterdam, The Netherlands
  1. Correspondence to Dr Clare L Ardern, Division of Physiotherapy, Linköping University, Linköping, Sweden; c.ardern{at}


The quantity of clinical information (scientific articles, videos, podcasts, and so on) available, and ways to access this information are increasing exponentially—far more rapidly than one can differentiate valuable information (signal) from non-useful or even misleading information (noise).1 Your job is to select information that is both accurate and useful, and to convert it into knowledge to make a quality decision.

How do you make a quality decision amid the noise?

Making a quality decision in practice should involve three key steps: (1) systematically searching and assessing the quality of published literature, (2) combining quality research evidence with quality clinical evidence, and (3) considering the feasibility of use in the practical setting. When reviewing the research evidence, you might choose to consult systematic reviews (SRs)—expected to be a high-quality and comprehensive summary of the body of research regarding a particular intervention, technique, procedure or technology. However, while you might expect that a SR is accurate, remember that the glittering thing is not always gold.2 Some poorly conducted SRs use complex data pooling to synthesise articles with high risk of bias, creating a publication with a tantalising title while the results and discussion/conclusion are misleading and confusing.2 The number of SRs being produced is growing rapidly, but many of these are redundant, misleading or unnecessary.3 This raises the question: How do you make sense of conflicting SR research evidence? How do you know which one to trust?

The real-world example

You are the team clinician and your star player sustains a time-loss injury—tendinopathy, hamstring strain, shoulder dislocation, ACL rupture—the injury specifics are not important here. But what is important is what you do next. How do you gather and synthesise quality information to help the player make an informed choice about treatment? You are not short of information to help inform a decision, but how do you make sense of it? What should you do?

The use of biological products such as platelet-rich plasma (PRP) is controversial, and it is a hot topic often faced by clinicians who work in elite sport. So your question might be: ‘Should I consider platelet-rich plasma as a treatment option for an athlete with Achilles tendinopathy?’ The team clinician is often under pressure to get the star player back on the pitch. Pressure comes from many angles, often about a fabulous new wonder treatment trumpeted by the media.4 Players are asking about this treatment because they have seen another world-class player receive it, or they have been treated with it at their previous clubs. An agent is threatening to take a player abroad for a celebrity-endorsed miracle cure.5 Dealing with cultural beliefs about a treatment can be a big challenge. Temporarily fashionable treatments gather momentum without sound evidence,6 and once you agree to a treatment for one player, you might quickly find yourself with a queue of players asking for the same thing. The player does not usually have the skills to consider whether a treatment is ‘evidence-based.’ As long as nothing ‘bad’ happened, the player probably considers it a ‘good’ treatment (regardless of whether it actually worked or not). On top of this, the manager just wants his or her star player back as soon as possible for the ‘BIG’ match. Under these pressures, how do you make a quality decision and confident recommendation to the player and coach?

Steps in reaching a quality decision

Using a method originally created for medication prescription,7 and now adapted for use in the sport setting,7 8 we arrived at a graded recommendation using a combination of research evidence and practice-based knowledge and experience to answer the question: Should I use PRP to treat the player with Achilles tendinopathy? The main purpose of this article is to demonstrate how to apply the three key steps (systematically search the published literature and assess the quality of research, interpret research evidence and combine it with knowledge from practice, consider feasibility in practice) for quality decision making to any clinical/practical question (figure 1).

Figure 1

Three steps in using evidence for quality decision making in practice (reproduced from Harbour R, Miller J. BMJ 2001;323:334,7 with permission from BMJ Publishing Group Ltd).

Step 1: Reviewing and assessing research-based evidence

To make sure we identified all available SRs that might help answer our question in an unbiased way, we systematically searched three electronic databases (PubMed, Cochrane Library and Web of Knowledge) on 23 June 2016 for published SRs reporting on the use of PRP injections (any preparation) for treating Achilles tendinopathy in humans. We excluded reviews that reported on the use of PRP in combination with other treatment modalities (eg, surgery), evaluated the effectiveness of a range of conservative treatments without separating the data regarding PRP, or did not report separate data and conclusions for Achilles tendinopathy. We limited the search to articles published in English, French, German or Italian languages since 1 January 2010. We used the following search strategy, adapting the Medical Subject Heading terms according to the database: (1) ‘PRP’ or ‘platelet’ and ‘rich’ and ‘plasma’; (2) ‘tendon*’ or ‘tendin*’; (3) ‘review’ or ‘meta analysis’ or ‘meta-analysis’; (1) and (2) and (3).
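The three-part search strategy above can be sketched as a simple query builder. This is an illustrative sketch only: the quoting and boolean syntax would need adapting to each database's own operators and Medical Subject Heading terms, as the authors note.

```python
def build_query() -> str:
    """Compose the three-part boolean search strategy described in the text:
    (1) PRP terms, (2) tendon terms, (3) review terms, joined with AND."""
    part1 = '("PRP" OR ("platelet" AND "rich" AND "plasma"))'
    part2 = '("tendon*" OR "tendin*")'
    part3 = '("review" OR "meta analysis" OR "meta-analysis")'
    return f"{part1} AND {part2} AND {part3}"

print(build_query())
```

Keeping the strategy in one place like this makes it easy to rerun the identical search across PubMed, the Cochrane Library and Web of Knowledge, and to report it exactly in the methods.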

To make sure that the article selection process was as rigorous as possible, and to make sure that nothing relevant was missed, two reviewers independently screened the titles and abstracts of each record identified in the electronic database search to judge whether we should disregard the record, or not (two reviewers is the ideal (eg, one picks up things missed by the other), but not mandatory, and may not always be possible in a clinical setting). Any article where it was unclear from the title and/or abstract whether the article should be disregarded or not, was obtained in full text, and independently screened by the two reviewers. Any discrepancies were resolved by consensus.

To make sure that we made appropriate conclusions about how trustworthy the research results were and how strongly we could rely on the research evidence, two reviewers (again, the ideal, but not mandatory) also independently assessed the methodological quality of the included SRs using the Assessing Methodological Quality of Systematic Reviews (AMSTAR) tool.9 Finally, two reviewers independently extracted the specific conclusions of the included SRs regarding the use of PRP to treat Achilles tendinopathy, and classified the conclusion of each SR as supporting, refuting, qualified support or inconclusive regarding the use of PRP to treat Achilles tendinopathy. Any discrepancies in assessment of quality and data extraction were resolved by consensus. Qualified support for PRP was where a SR concluded that the treatment might be beneficial, but that more research was necessary to confirm the efficacy (or lack of efficacy) of PRP.

Putting this together, we graded the SRs based on study design and the AMSTAR score.7 This was a two-step process: first we assigned a level of evidence: SRs that only included randomised controlled trials were rated as level 1 (highest) evidence; SRs of mixed study designs were rated as level 2 evidence. Then we rated the quality (high or low) and assessed the risk of bias, based on the AMSTAR score: high quality, very low risk of bias (AMSTAR ≥10); high quality, low risk of bias (AMSTAR 8–9); low quality, high risk of bias (AMSTAR 3–7); low quality, very high risk of bias (AMSTAR ≤2). Finally, the two reviewers discussed the SRs and collaboratively classified them as redundant and unnecessary; flawed beyond repair; decent but not clinically useful; might be clinically useful; or decent and clinically useful,3 based on predefined criteria that we developed considering the quality, risk of bias and presence of a definitive conclusion regarding the use of PRP to treat tendon problems (see table 1 for description of our classification system). The four classifications have been used to classify published healthcare SRs.3 We added the descriptors to help us consistently apply the classifications to sports medicine SRs.
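The two-step grading described above can be expressed as a small function. The cut-offs follow the AMSTAR thresholds stated in the text; the function name and return shape are our own illustration, not part of the published method.

```python
def grade_sr(only_rcts: bool, amstar: int) -> tuple[int, str]:
    """Grade a systematic review per the two-step process in the text.

    Step 1: level of evidence from study design
      - level 1 (highest): SR including only randomised controlled trials
      - level 2: SR of mixed study designs
    Step 2: quality and risk-of-bias rating from the AMSTAR score (out of 11).
    """
    level = 1 if only_rcts else 2
    if amstar >= 10:
        rating = "high quality, very low risk of bias"
    elif amstar >= 8:
        rating = "high quality, low risk of bias"
    elif amstar >= 3:
        rating = "low quality, high risk of bias"
    else:  # AMSTAR <= 2
        rating = "low quality, very high risk of bias"
    return level, rating
```

For example, an SR of RCTs only that fulfils 10 AMSTAR items would be rated level 1 evidence, high quality with very low risk of bias.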

Table 1

Explanation of clinical usefulness/uselessness of SR; classification based on Ioannidis’3 suggestions

Steps 2 and 3: Interpreting the research-based evidence and combining with practice-based evidence, and considering the feasibility of use in the practical setting

To make sure we identified and correctly interpreted the key, high-quality findings from the research evidence (step 2), and accurately assessed the feasibility in practice (step 3), seven sports medicine, sports science and applied research experts met for an expert consensus round-table discussion. Using their combined knowledge and experience, and considering the results from step 1, the expert group focused on the financial cost of PRP and whether there were any perceived benefits and/or risks associated with using PRP. A final graded recommendation was assigned according to the following classification: A—high, B—acceptable, C—weak, D—insufficient evidence to provide a recommendation or sufficient evidence to reject use/implementation. We are fortunate to work in a team setting where we can gather a large expert consensus group. We recognise that this may not be possible for solo practitioners, or practitioners working in smaller team settings. Most important is to gather views from other colleagues. This may be via face-to-face meetings of as few as two or three people; it may also include email or telephone correspondence with experts in the field (we regularly contact experts by email when applying our decision-making approach).


Regarding step 1, our search identified 313 unique records, and we ultimately assessed six SRs (figure 2). The included SRs fulfilled a median 3 out of 11 AMSTAR items (range 1 to 10) (table 2 and online supplementary table 1). All but one of the SRs were assessed as low quality and high or very high risk of bias (table 2). We classified one review as redundant and unnecessary; three as flawed beyond repair; one as might be clinically useful; and one as decent and clinically useful (table 2). The two reviews that were classified as clinically (or potentially clinically) useful concluded that there was insufficient evidence to support the use of PRP to treat Achilles tendinopathy.

Supplementary file 1

Figure 2

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram outlining the process of article selection.

Table 2

Summary of quality and risk of bias of included SRs

Regarding steps 2 and 3 (combining the research and practice-based evidence, and assessing evidence quality and feasibility for a considered judgement and final recommendation), the use of PRP to treat Achilles tendinopathy was assigned a graded recommendation D—insufficient evidence to currently recommend its use (table 3).

Table 3

Assigning a graded recommendation for the use of PRP to treat tendon problems

I’ve got the information; now how do I interpret and use it in practice?

Our search strategy identified six SRs that could help us answer our clinical question about using PRP to treat Achilles tendinopathy. Overall, the quality of these SRs was poor, the risk of bias high and the conclusions often conflicting. These issues meant that there was limited research-based evidence for us to rely on. One thing we could take from the research-based evidence was that the two SRs that we assessed to be (or to potentially be) clinically useful concluded that there was insufficient evidence to support the use of PRP to treat Achilles tendinopathy. Finally, when we considered our practice-based evidence, the applicability to clinical practice was low. With so many unanswered questions about the treatment, we concluded that it cannot currently be recommended (graded recommendation D). This is important because, while an individual SR with insufficient evidence about whether a treatment is effective may not be clinically useful on its own, in combination with other SRs and evidence in the graded recommendation approach we build a broader body of evidence on which to base our decision.

We aimed to illustrate how the practitioner might combine research evidence and practical experience to make a quality decision. As we acknowledged at the beginning of this article, this is our approach to a clinical/practical question, and we have the luxury of specialised support to do this. But there are ways to adapt this approach to your practice without specialised research support, and we will outline some tips further on in this section. Also look out for future articles in this BJSM series on searching and critically appraising evidence.

While in this article we focused on PRP for treating Achilles tendinopathy, and assessed research-based evidence from SRs, the principles of this decision-making process can be applied to any technique, technology, treatment or training intervention. Research-based evidence from any appropriate study design can be used and the level of evidence adjusted accordingly—this is the principal purpose of our article. Unfortunately, our findings support the assertion that most SRs in healthcare are of low quality, have many flaws and are probably of little use in practice.3 But this should not be seen as an excuse to ignore SRs or research evidence, rather as a warning of the need to carefully assess the quality of research evidence you use when making decisions in practice. This is the challenge for the practitioner in using research evidence to make a decision (and usually make it fast) regarding an athlete’s management.

How can the graded recommendation approach help your decision making in practice?

Our graded recommendation approach involved three steps: (1) reviewing and assessing research-based evidence, (2) interpreting the research-based evidence by considering methodological quality and biases and (3) combining the consensus from research with practice-based experience (including assessing current practice), and the feasibility of use in the practical setting. Depending where you work, the decision-making team may be a solo practitioner, or it might be a group of sports medicine and science practitioners who may or may not be supported by embedded applied researchers. The advantage of our graded recommendation approach is that it is applicable to all levels of practice, can be completed by the individual practitioner or a group of practitioners and involves equal consideration and weighting of quality research-based evidence and practice-based experience—two key pillars in evidence-based practice.13

How to make a fast but effective decision on the quality of research without specific research support

To get the most out of our graded recommendation process might require specific research skills (especially in literature searching, reviewing and quality assessment) that you may not have. However, there are reputable tools available to the clinician/practitioner for each of these key tasks. Here are a few options that you might like to try.

Fast and effective literature searching

One useful source of reputable research evidence is the PubMed website, a free-to-search database that indexes most research articles published in the medical (including sports medicine) field. The ‘Clinical Queries’ search option (found on the PubMed home page) is a specialised search query designed to help you quickly identify clinical studies, SRs, meta-analyses, consensus statements and clinical practice guidelines related to aetiology, diagnosis, therapy, prognosis and clinical prediction. For a search on PRP in the management of Achilles tendinopathy, we could use the search terms ‘platelet rich plasma’ and ‘Achilles tendinopathy’ (figure 3). Keep in mind that while the Clinical Queries search option is useful for identifying research related to specific clinical research areas, and it helpfully separates search results into clinical study and SR (including meta-analysis, clinical practice guideline, consensus statement) categories, it may not identify all relevant articles on a topic. For a more comprehensive search, it is still advisable to search the full PubMed website.

Figure 3

Searching PubMed Clinical Queries for research evidence on platelet-rich plasma for the management of Achilles tendinopathy.
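A search along these lines can also be scripted via PubMed's E-utilities interface. The sketch below only composes the request URL; the `systematic[sb]` subset filter is an assumption approximating the SR category that the Clinical Queries web interface applies behind the scenes.

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint (free, no account required for light use)
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def clinical_query_url(topic_terms: list[str]) -> str:
    """Compose an ESearch URL joining the topic terms with AND and
    restricting results to PubMed's systematic review subset."""
    term = " AND ".join(topic_terms) + " AND systematic[sb]"
    return BASE + "?" + urlencode({"db": "pubmed", "term": term})

url = clinical_query_url(["platelet rich plasma", "Achilles tendinopathy"])
print(url)
```

Fetching that URL returns an XML list of matching PubMed IDs, which you could then screen for relevance exactly as in step 1 above.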

There are many subscriber-based clinical support tools available to support your literature searching (and most have the added benefit that they provide a summary of the evidence), including Dynamed, the Physiotherapy Evidence Database (PEDro), BMJ Best Practice, the TRIP Database and the Rehabilitation Reference Centre. You might also have access to evidence-based practice tools through your workplace: National Health Service clinicians in the UK can access NHS Evidence Search via the National Institute for Health and Care Excellence (NICE).

Fast and effective critical appraisal of research

Where possible, look to identify quality SR evidence to help with your decision making. When you identify a potentially interesting and useful SR, there are five key aspects of the SR methods (review question, approach to literature searching, article selection, assessment of risk of bias and data synthesis) that you can check (for more information see ref 14). The Oxford Centre for Evidence-Based Medicine also provides a free, six-question critical appraisal tool (plus information on where to find the relevant information in a SR, and what information should be presented), guiding you step by step through a critical assessment of SR methods. Another useful free tool is the 10-question Critical Appraisal Skills Programme SR checklist.

Fast and effective application of research evidence in practice

As well as specific research skills, getting the most out of the graded recommendation process might also require support from a second assessor/colleague. Duplicate article selection, assessment of study quality (risk of bias) and data extraction are methodological approaches to minimise errors in SRs. We had two people independently complete each of these tasks to make sure our approach was as rigorous as possible. While this duplication of tasks can be thought of as the ‘gold standard,’ it might not always be feasible in practice. If it is not possible to recruit a colleague to help with this process, it is still a worthwhile approach, even when not completed in duplicate.

An interesting component of the graded recommendation approach, particularly for the novice practitioner (or those working in solo practice), may be the inclusion of an ‘expert opinion’ regarding the specific technique, strategy or technology that is the focus of your clinical question. We have the benefit of working in a multidisciplinary team, and being able to apply a shared decision-making approach. But what if you work in solo practice? Maintain strong links with professional organisations, senior colleagues and mentors who can be called upon to provide advice when needed. We often contact experts from outside our team for advice, which we then consider in our graded approach. This is important because assessing current practice (both your own current practice and the current practice of other practitioners around the world) is a key component of our suggested decision-making approach. The best research evidence might not be feasible in practice, or research might still be playing ‘catch up’ to the best practice.

Researchers, with great power comes great responsibility

Evidence-based practitioners look to the research world, trusting that research will be high quality with low bias, and provide an accurate and truthful answer. We identified six systematic reviews, all attempting to answer the same clinical question. A useful SR could be considered a kind of ‘super article’—giving an accurate picture of what we know about a given topic, and highlighting biases, limitations and conflicts in individual articles. However, our six SRs were conflicting, generally poor quality, often at high risk of bias and made no mention of conflicts of interest in the included original articles. Misleading recommendations may result in the practitioner adopting a technique, technology or strategy that does not work and/or could be harmful. Conflicting messages can create more confusion. The risk is that this could turn the practitioner away from using an evidence-based approach if he or she cannot trust research. There is a danger of widening the gap between research and practice rather than bringing these two closer together.

Many of the same original research articles were included in these SRs, yet often the SRs arrived at different conclusions. How can this be possible? The sceptical reader might point out that this means authors are trying to find an answer that suits their agenda (ie, trying to reverse engineer a research question to support what they wanted to find). There is a danger that in many cases, SRs serve as a marketing tool—conducted by authors with a vested interest in a product or finding.3 For this reason, conflicts of interest must be transparently declared and assessed for individual articles included in a SR (AMSTAR item 11). This might be especially important when evaluating biological/pharmaceutical industry products.3

Alternatively, if the reader takes off his or her ‘sceptics hat’ he or she might see the conflicting results as reflecting differing SR methods. Authors may have made different choices about inclusion and exclusion of articles, whether to include unpublished research and how to combine data (eg, whether to do a meta-analysis or not).15 Decisions about how data were synthesised must be clearly explained so that the reader can assess whether this was a reasonable approach and take these into consideration when interpreting the results.

It is the responsibility of authors, editors and peer reviewers to ensure quality methods, accurate, unbiased interpretations and conclusions that have meaning for the practitioner. Which brings us to our call to researchers and authors of scientific articles: we implore you to make your SRs practical, with accurate, clear and clinically relevant reporting. Beware of flawed designs and bias creeping in.15 An important scientific aim of SRs is to highlight the flaws in the current literature and to stimulate new, high-level research that provides these answers. So the conclusion of ‘more research is needed’ does not automatically mean that the SR is useless from a scientific perspective. However, a sole conclusion that more research is needed does not help the practitioner.


Being a practitioner is challenging enough—supporting athletes to achieve their potential, reducing injury, and so on—without having to spend all your time trying to make sense of conflicting research articles or risking being sent in the wrong direction by misleading research. At best you might end up using something that does not work; at worst, it may do harm. Research must get its act together and provide quality evidence with clinical and practical meaning.

Although the quantity of scientific papers being published is increasing, there is a big gap for the practitioner between the amount of information representing the noise and the amount of useful information characterising the signal. Our graded recommendation approach could help researchers to close this gap—using the methodological checklists to assess the quality and risk of bias, and presenting clinically-relevant information. At the same time, the approach could support you to confidently identify useful information to support quality decision making for a range of common sports musculoskeletal problems.

Our systematic approach to searching the literature, assessing the quality and combining it with practical experience and feasibility might not be possible in all practical situations, particularly when an immediate decision must be made. Practitioners make dozens of clinical decisions every day; we do not suggest that every decision you make on the court/pitch/track/pool deck/sideline must go through this three-step process. At the moment, we lack a simple, practical solution for all decisions you make in day-to-day practice. Our approach is not a magic bullet solution (if nothing else, it takes time), so we suggest that it be used judiciously—it may be especially helpful for guiding decision making about the use of a technique/strategy/new treatment or purchase of a new technology, where you have slightly more time to decide. There are helpful tools, from reputable sources, such as the PubMed Clinical Queries search tool and the Oxford Centre for Evidence-Based Medicine critical appraisal tool, that might also support your decision making.

BJSM is here to help

Recognising that this graded approach could be seen as being more feasible in the professional sporting context (with the luxury of high-level research support when making decisions in practice) than a typical busy clinical practice, we have also provided practical tips for how the solo practitioner might approach searching for evidence, appraising the quality of evidence and synthesising information to make a quality decision when research support may not be as readily available. Subsequent articles will outline the key steps in performing a good literature search, and how to confidently appraise the quality of research evidence.


Our thanks to Rob Polsen, from the University of Stirling, for expert comments and suggestions regarding searching the scientific literature.



  • Contributors GD, AMC and CLA developed the concept for the manuscript. FMI conducted the literature search. CLA and AMC selected the articles, completed the AMSTAR checklist, extracted the data and wrote the first draft. All authors contributed to the graded recommendation, and revised the manuscript for important intellectual content.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
