A changing world
In the modern technological age, practitioners are exposed to a wealth of information from many diverse sources. Social media has resulted in rapid distribution of research evidence. As soon as an article appears on the journal website, those with the fastest fingers and thumbs will have the paper ‘posted’, ‘tweeted’ or ‘blogged’. Without doubt, social media has assisted researchers to distribute their findings to (hopefully) enhance translation to the ‘real world’. But how well does a single 140-character ‘tweet’ encapsulate the findings of a complete research study (which may range from 3000 to 5000 words)?
The cyclical continuum of evidence
Often research evidence is equivocal; for every study there will be another study reporting an opposing view, and quite often, factors other than empirical evidence shape our beliefs.1 Wherever possible, practitioners should examine their beliefs and the sources upon which those beliefs are formed. We should try to ascertain what is a hypothesis, what is evidence (myth, story, empirical), and, based on the strength of the hypothesis or evidence, how much weight should be placed on each. Below, we have provided some examples of how people obtain information and how some beliefs are formed. We have used training load and its relation to injury as a working example (figure 1).
Urban myths are fun but rarely true
Sports science and medicine has had a relatively short relationship with elite sport. Jeff Thomson was one of Australian cricket’s best fast bowlers in the 1970s and 1980s. In a 2013 interview, he stated that during his career he “didn’t really have any injuries” and that “all they did in my day was come up and say are you OK to play? And you said, sure I am.” He also indicated that he was just expected “to keep going.”2 Every sport has a legendary player such as Jeff Thomson, whose talents on the sporting field were unmatched, and who has an opinion on training load and how much harder athletes trained “in their day.”3,4 However, just because Thomson was an expert fast bowler does not necessarily make him an expert on fast bowling workloads and how they relate to injury. Equally, a key point to recognise is that, like most sports, Australian cricket only introduced official injury surveillance records, alongside complete bowling workload histories, in the past 5–10 years.5 Unfortunately, owing to recall error and the absence of official records, Thomson’s statements (along with the opinions of others) can only be considered ‘urban myths’. The further a player is removed from retirement, the greater the myth grows!
Stories are sticky, but are often just stories
The Jeff Thomson anecdote reflects the human tendency to prefer information delivered as relatable, meaningful stories rather than as the coldness of graphs and numbers in tables. Our beliefs are strongly shaped by our individual experiences and are influenced by biopsychosocial factors.6 For example, an athlete who has sustained an injury following the prescription of an inappropriate training programme may believe that “I can’t train too hard as my body cannot handle it.” It is irrelevant that the injury resulted from an inappropriate training programme; all the athlete believes is that when they train hard, they ‘break’. However, this is an individual ‘story’, not scientific evidence. While individual modifications may be necessary to ensure that workloads are maintained at the highest tolerable level,7 basing team training loads on the anecdotes of one individual is fraught with danger.1
Hypotheses should drive research questions
An important component of research is the development of ‘hypotheses’. Hypotheses are testable, evidence-driven propositions. In a recent insightful British Journal of Sports Medicine letter, Williams et al 8 hypothesised that, when determining injury risk, an exponentially weighted moving average (EWMA) offered a ‘better’ approach to calculating the acute:chronic workload ratio (ACWR) than rolling averages (RA). Their findings did indeed show that the two methods were different. However, it took a subsequent research investigation to determine that the EWMA model was more sensitive than the RA model for determining injury risk at higher ACWR values.9 Importantly, irrespective of the model used, athletes performing more training than they are prepared for are at an increased risk of injury. To date, this is the only study that has compared EWMA and RA models; to state that we now have a training monitoring system to prevent all sporting injuries would be somewhat premature.
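For readers unfamiliar with the two models being compared, the following is a minimal sketch of how the ACWR is commonly calculated under each approach. The window lengths (7-day acute, 28-day chronic) and the EWMA decay factor lambda = 2/(span + 1) reflect common practice in the training load literature, not an implementation from the studies cited here.

```python
def rolling_average_acwr(daily_loads, acute_days=7, chronic_days=28):
    """ACWR via rolling averages: every day within each window
    contributes equally to the mean."""
    acute = sum(daily_loads[-acute_days:]) / acute_days
    chronic = sum(daily_loads[-chronic_days:]) / chronic_days
    return acute / chronic


def ewma(daily_loads, span_days):
    """Exponentially weighted moving average: recent days are weighted
    more heavily, with decay lambda = 2 / (span + 1)."""
    lam = 2 / (span_days + 1)
    value = daily_loads[0]
    for load in daily_loads[1:]:
        value = lam * load + (1 - lam) * value
    return value


def ewma_acwr(daily_loads, acute_days=7, chronic_days=28):
    """ACWR via EWMA of the acute and chronic spans."""
    return ewma(daily_loads, acute_days) / ewma(daily_loads, chronic_days)
```

Under a perfectly constant load, both models return an ACWR of 1.0; the two diverge when load changes, because the EWMA discounts older days rather than weighting the whole window equally, which is the methodological difference at issue in the comparison above.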
Empirical evidence is the ultimate arbiter
For several years, higher training loads were thought to contribute to injuries.10 Recent research has demonstrated that high workloads are associated with fewer injuries, as long as those workloads are achieved safely.10 These findings have subsequently been replicated across multiple sports by several research groups, although it is important to note that a randomised controlled trial is required to confirm these associations. Most of the discrepancies between previous and present findings can be explained by differences in research methodologies and analytical techniques. Equally, as science continues to evolve, what we know today will almost certainly differ from what we know tomorrow.
Science is the book that never ends
We encourage practitioners to challenge the research and to use evidence (preferably scientific research rather than unreferenced blogs and ‘tweets’) to inform their practice. Good questions drive scientific research. However, it is much easier to ask questions than to conduct the research to answer those questions. The answers to good research questions drive further research and so on. Scientific research is, and always will be, a book with unfinished chapters. It is the ultimate Never Ending Story!
Footnotes
Contributors TJG drafted the original paper. PB provided feedback on subsequent drafts.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.