A substantial amount of research has tested the relationship between training load and injury.1 Given that sports injuries compromise team success,2 3 team administrators, players and coaches are now interested in this field. As team injury data are widely available through various internet sources (eg, Man Games Lost, https://www.mangameslost.com/), sports medicine staff are commonly evaluated based on the number of injuries sustained (or not sustained) by their playing rosters.
A search of the ‘PubMed’ database shows that in the past 18 years, there has been a rapid growth in ‘training load’ and ‘injury’ research, increasing from 9 papers in 2000 to 145 in 2017 (figure 1). Despite this growing body of literature, evidence-based guidelines to reduce workload-related injury are often poorly implemented, owing to the varying levels of expertise and understanding within the high-performance team (including skill coaches, strength and conditioning staff and medical staff) and to individual beliefs and experiences (cognitive biases, confirmation biases). This can lead to a disconnect between the evidence supporting training load and its role in injury, and the actual training programmes prescribed.4 Five common myths and misconceptions about training load and its role in injury and performance are reviewed in this paper.
Myth 1: load explains all injuries
The relationship between training, performance and injury has been of interest to researchers and practitioners for a considerable time.1 5–15 Both individual16 and team17–19 performance can be explained, at least in part, by training load, with higher training loads generally associated with better performance. Equally, a large body of evidence has emerged suggesting that inappropriately prescribed training load may increase injury risk20–25 and pain.26 Based on these findings, a myopic view would be that ‘load’ explains all injuries.
The multifactorial determinants of both performance and injury mean that adaptation to training is influenced (positively or negatively) by biomechanical factors,27 as well as various emotional and lifestyle stressors.5 28 For example, elevated academic and emotional stress,29 30 anxiety31 and the stress-related personality traits of ‘self-blame’32 and ‘perfectionism’33 all increase injury risk. Likewise, athletes who sleep fewer than 8 hours per night have 1.7 times greater risk of sporting injury than their counterparts who sleep for 8 or more hours.34 When increased training intensity and volume were coupled with shorter sleep in elite adolescent athletes, injury risk increased twofold (HR=2.3).35 The training, performance and injury relationship is complex and multifactorial.
Myth 2: the ‘10% rule’
A popular myth is that training load should not exceed 10% each week.36 This ‘10% rule’37 is a popular method of prescribing graded increases in training load, particularly in endurance sports. Although rapid changes in training load increase the risk of injury,38 39 there is no ‘10% rule’. In a study of novice runners, a standard 8-week training programme (control group) was compared against an adapted, graded, 13-week training programme (intervention group) on the risk of sustaining a running-related injury.40 Although training load increases were limited to 10% per week in the intervention group, there were no differences in injury prevalence between the graded training programme (21%) and standard training programme (20%) groups.40 In team sport athletes, we have previously observed large increases in injury risk when changes in training load increased from ≤10% per week (≤7.5% injury likelihood) to ≥15% per week (~21% injury likelihood).39 The likelihood of injury was as high as 38% when the weekly increase in training load was 50%. Nielsen et al 41 showed that novice runners who had >30% increases in weekly training volume were more likely to sustain injuries than those who had smaller increases (mean=22%) in training volume. They concluded that in novice runners, some individuals may tolerate weekly progressions around 20%–25%, at least for short periods of time.41
Although the general consensus is that large weekly changes in training load increase the risk of injury in both individual and team sport athletes, these changes in training load should be interpreted in relation to the chronic load of the individual athlete. For example, small weekly increases in training load (≤10%) in an athlete with low chronic training load will considerably delay the return of that athlete to full capacity, whereas an athlete with high chronic training load will likely tolerate much smaller increases in training load from week to week (figure 2). In this respect, limiting training load increases to 10% per week is, at best, a ‘guideline’ rather than a ‘code’.
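The interaction between a fixed percentage cap and chronic load can be illustrated with a simple calculation. The sketch below is not from any cited study; the starting loads, the target load and the arbitrary units are assumptions chosen purely to show why a ≤10% weekly cap delays an athlete with a low chronic load far more than one already training near capacity.

```python
# Illustrative sketch (hypothetical numbers): weeks of ≤10% progression
# needed to reach a target weekly load from two different chronic loads.

def weeks_to_target(start_load: float, target: float, weekly_cap: float = 0.10) -> int:
    """Count weeks of capped weekly progression until load reaches `target`."""
    load, weeks = start_load, 0
    while load < target:
        load *= 1 + weekly_cap   # apply the maximum allowed weekly increase
        weeks += 1
    return weeks

# An athlete rebuilding from a low chronic load (200 units) versus one
# already close to the target (600 units), both capped at 10% per week.
print(weeks_to_target(200, 800))  # 15 weeks from a low chronic load
print(weeks_to_target(600, 800))  # 4 weeks from a high chronic load
```

The same percentage cap therefore implies very different absolute progressions, which is why the 10% figure works better as a guideline interpreted against chronic load than as a universal rule.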
Myth 3: avoid ‘spikes’ and ‘troughs’ at all costs
The size of the current week’s training load (termed acute training load) in relation to longer term training load (termed chronic training load) determines the ‘acute:chronic workload ratio’ (ACWR, also referred to as ‘training-stress balance’).39 42 Across a wide range of sports (eg, cricket, rugby league, rugby union, Australian football, European football, Gaelic football, hurling, American football, basketball, handball and multisport athletes), rapid increases (ie, ‘spikes’) in workload have been associated with increased injury risk (table 1).43–69 When the ACWR was within the range of 0.8–1.3 (ie, the acute training load was approximately equal to the chronic training load), the risk of injury was relatively low. However, when the ACWR was ≥1.5 (ie, the acute training load was much greater than chronic training load), the risk of injury increased markedly (figure 3A).43 70
Given the link between ‘spikes’ in workload and injury risk, some have suggested that restricting ACWRs to ≤1.5 would prevent all injuries. However, due to the multifactorial nature of injuries (discussed in myth 1), some athletes will sustain injuries at ACWRs much lower than 1.5, while other athletes will tolerate ACWRs well above 1.5. I have found the phrase ‘risk does not equal rate’ to resonate with practitioners; just because an athlete is at risk (even high risk), the event may not occur.
Importantly, although it is intuitive that large ‘spikes’ in workload increase injury risk, it is possible that ‘troughs’ in workload (ie, too little training) may also increase injury risk.46 49 71 These data suggest that both overtraining and undertraining may be problematic. There are two possible explanations for this finding: (1) undertraining leaves athletes underprepared for competition demands, and (2) ‘troughs’ in workload generally precede ‘spikes’ in workload.
If large fluctuations in workload resulting in ‘spikes’ and ‘troughs’ increase injury risk, does this mean that practitioners should never rapidly increase training load or unload their athletes? The answer is clearly ‘No’! Good coaches regularly use intensified training blocks to elicit greater physiological adaptations.72–74 Consider the example of a 2-week high-intensity ‘shock’ microcycle in junior triathletes. Participants completed 15 high-intensity interval sessions within three 3-day training blocks. Training elicited improvements in time trial performance and peak power output. Similar large improvements in repeated-sprint ability and Yo-Yo intermittent recovery (level 2) performance were observed in soccer players following a 13-day ‘shock block’ involving 12 high-intensity sessions.74 A limitation of most of these studies of elite athletes in real training environments is the absence of a control group. Just as ‘shock blocks’ may elicit superior physiological adaptations, unloading periods can allow for supercompensation.75 Athletes who reduce their training load the most during the taper typically improve the most in performance.75 With this in mind, although ‘spikes’ and ‘troughs’ in workload may increase injury risk, at times they may also be necessary to elicit greater physiological adaptations for enhanced performance. High training monotony (representing low variability in the training stimulus) and high training strain (representing the product of high training loads and high monotony) have been associated with the greatest incidence of illness in competitive athletes76; hence the need for variability within training microcycles, mesocycles and macrocycles.
Myth 4: 1.5 is the magic ACWR
Confused about relative and absolute risk?
The BJSM reader understands the difference between absolute and relative risk of injury. The proportion of players injured at a given workload or acute:chronic workload ratio (ACWR) represents the absolute injury risk. For example, at an ACWR of 1.3, the absolute risk of injury is ~4%. When acute load is double that of chronic load (an ACWR of 2.0), the absolute risk of injury is ~16%. An alternative way of stating this is that players are 84% likely to remain injury free at an ACWR of 2.0. Injury likelihood can also be expressed in relative terms (ie, relative risk). In this respect, researchers can quantify the risk at a given workload (or ACWR) relative to a lower or higher workload. If the above example was used, the relative risk of injury at an ACWR of 2.0 (16%) would be four times greater than at an ACWR of 1.3 (4%).
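The arithmetic in the box above can be written out explicitly. The absolute-risk figures (~4% at an ACWR of 1.3, ~16% at 2.0) are the example values from the text; the helper function is simply the standard ratio definition of relative risk.

```python
# The box's worked example: absolute risks at two ACWR values, and the
# relative risk formed by their ratio.

def relative_risk(risk_exposed: float, risk_reference: float) -> float:
    """Relative risk = absolute risk in one group / absolute risk in another."""
    return risk_exposed / risk_reference

risk_at_1_3 = 0.04   # ~4% absolute injury risk at ACWR 1.3
risk_at_2_0 = 0.16   # ~16% absolute injury risk at ACWR 2.0

print(round(relative_risk(risk_at_2_0, risk_at_1_3), 2))  # 4.0: four times the risk
print(round(1 - risk_at_2_0, 2))  # 0.84: 84% likely to remain injury free
```

Note that a fourfold relative risk still corresponds to a modest absolute risk (16%), which is why 'at risk' and 'will be injured' must not be conflated.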
Although the likelihood of injury is increased at an ACWR of ≥1.5 (on average), it is important to recognise that an ACWR of 1.5 is not a ‘magic number’: increased risk does not guarantee that injury will occur (see myth 3).77 78 While recent attempts by researchers to use the ACWR to predict injuries have proven unsuccessful,63 64 66 given the multifactorial nature of injury, the fact that a single training monitoring variable was unable to predict injury events is to be expected.77 79 Even using a wide range of training load variables obtained from Global Positioning System units, accelerometers and player-perceived exertion ratings, injury prediction models (including regularised logistic regression, generalised estimating equations, random forests and support vector machines) were only able to predict non-contact injuries slightly better than random chance (area under the receiver operating characteristic curve=0.64–0.76).80–82 So what factors may explain some athletes sustaining injuries at an ACWR of ≤1.5, while others tolerate extremely high chronic workloads and ACWRs? The difference between robust and fragile athletes can largely be explained by moderators of the workload-injury relationship.47 54 83 84 A moderator acts to either increase or decrease injury risk at a given workload (figure 4).83
For example, both younger and older athletes,85 86 and those with poorly developed physical qualities (eg, aerobic fitness, speed, repeated-sprint ability and lower body strength),47 54 84 low heart rate variability87 and low chronic training load54 86 have increased risk with a given spike in workload (figure 5). Large changes in throwing load (>60% per week) were associated with a twofold increase in shoulder injury rate (HR=1.9), and the effects of moderate (20%–60% per week) and large (>60% per week) increases in training load were exacerbated in the presence of poor external rotational strength and scapular dyskinesis.51 Rather than focusing solely on the ACWR, practitioners are advised to stratify players according to known moderators of the workload-injury relationship (eg, age, training and injury history, physical qualities), and to interpret internal and external load variables in combination with well-being and physical readiness data, and factors known to influence the risk of injury (figure 6).28 77 Most practitioners working with elite teams combine knowledge of individual-level physical risk factors (eg, screening measures), tests of physical qualities (eg, strength, aerobic fitness) and training load data, with the aim of maximising performance and minimising injury risk in athletes.
Myth 5: it’s all about the ratio
Because ‘spikes’ in workload that result in a large ACWR increase injury risk, practitioners have embraced measurements of the ACWR. Australian football players with an ACWR of >2.0 had up to 11 times greater risk of injury than players with lower ACWRs.45 However, seemingly overlooked63 is the importance of chronic training load, and its role in keeping athletes injury free. For example, although Hulin et al 43 showed that ‘spikes’ in workload increased injury risk, players with greater chronic workloads had a fivefold lower risk of injury than those with low chronic workloads (figure 3B). These findings have been replicated across a wide range of sports (eg, rugby league, rugby union, European football, Australian football, Gaelic football, American football, cricket) and research groups (table 1).44 46–48 50 52 54 55 57 61 62 65 68 71 84 86 88–94
The protective effect of training appears to arise from two sources: (1) exposure to ‘load’ allows the body to tolerate ‘load’, and (2) training develops the physical qualities (eg, strength, prolonged high-intensity running ability and aerobic fitness) that are associated with a reduced injury risk.54 95 96
What’s next for practitioners and researchers?
Recent research has advanced our understanding of the link between training load, injury and performance; however, further research is required to continue to deconstruct the myths around training load.
Pushing the limits
Although the risk of injury is increased at an ACWR of ~1.5, one of the goals of training is to improve an athlete’s ability to tolerate training load. This would shift the ACWR-injury curve to the right, allowing athletes to have either (1) reduced injury risk at the same ACWR or (2) similar injury risk at a higher ACWR. The best approach to develop this robustness in athletes is currently unknown. For example, while it is clear that supra-‘threshold’ ‘spikes’ in workload increase the risk of subsequent injury, whether exposure to repeated sub-‘threshold’ ‘mini-spikes’ in workload increases robustness and resilience is unknown.
How much sport is too much?
Athletes from some sports (eg, basketball, ice hockey, European football, baseball) are regularly required to play three (eg, football) to five (basketball, ice hockey) games in 1 week. NBA and NHL seasons each consist of 82 games and Major League Baseball has 162 games. Although high chronic workloads are associated with lower injury rates, not all players can safely participate in all competitive games. Equally, one might expect that a minimum number of games (or minutes) should be played to maintain the health of athletes. In a study of elite rugby players over seven seasons, players who were involved in <15 or >35 matches in the preceding season were at increased risk of injury in the subsequent season.97 Playing back-to-back games away from home may be associated with higher injury rates in the NBA.98 99 With such demanding playing schedules, research into (1) the total number of games athletes can play without compromising their health and well-being, and (2) the optimum number of consecutive games before requiring rest is warranted. The reader will appreciate that not all matches (or minutes) are created equally. Capturing the intensity of competition would likely provide greater insight into injury risk than reporting the total number or frequency of matches in isolation.
When returning from injury, early loading is key
When rehabilitating players, medical and performance staff must balance the need to develop adequate chronic loads to prevent reinjury while also returning injured players as quickly and safely as possible. In Australian football players, completion of high rehabilitation workloads delayed return to play following lower limb non-contact injury. However, the longer rehabilitation time provided greater opportunity to build chronic sprint distance which protected against subsequent injury.88 In a recent study, Bayer et al 100 compared the effects of early (2 days postinjury) or delayed (9 days postinjury) rehabilitation of acute thigh and calf injuries. Starting rehabilitation 2 days after injury rather than waiting for 9 days shortened the return-to-play time by 3 weeks without increased risk of reinjury. Finally, most sporting injuries are not life or career threatening. It is therefore important that athletes receive education that any intolerance to training is likely to be temporary. Collectively, these findings suggest that (1) temporary training intolerance may occur following the initial injury insult and during periods of tissue repair, but this is unlikely to result in long-term intolerance or incapacity; (2) early staged loading can improve return-to-play time; (3) developing high chronic workloads that provide protection against subsequent injury risk may delay return to play; and (4) exposure to high sprinting workloads protects against subsequent injury, as long as these workloads have been attained gradually.
Response of different tissue types to different loading patterns
To date, the majority of workload-injury research has been conducted on single teams. This has resulted in a small number of different injuries to different tissue types being pooled as ‘workload-related injuries’. From the limited studies that have been performed on large data sets, it is known that the response of different tissue types will vary according to different loading patterns.101 Clearly, further long-term studies with large sample sizes are required to better understand the loading tolerances of different tissue types. In addition, it is possible that the workload variables and models used to calculate the ACWR could differ among injury types; investigations of the interaction among workload variables, loading patterns and specific injuries are warranted.
What about the possibility of multiple moderators?
Although recent systematic reviews have established a relationship between low chronic workloads, rapid increases in workloads and increased risk of injury,10–12 most but not all studies53 57 have been performed on senior-level athletes competing at the elite level. From the limited available research on adolescent athletes, higher daily,53 weekly and monthly57 training loads were associated with a greater risk of injury. A greater amount of acute load accumulated in high-speed running combined with a low high-speed running chronic load was also associated with a greater injury risk (relative risk=2.6).57 Blanch et al 85 demonstrated that young cricket fast bowlers (<22 years) were 3.7–6.7 times more likely to suffer a bone injury than other players, while older fast bowlers (>31 years) were 2.2–2.7 times more likely to sustain a tendon injury than younger bowlers.
To our knowledge, only one study has investigated the moderating effect of playing experience (as a surrogate measure for age) on the workload-injury relationship.54 Malone et al 54 demonstrated that Gaelic football players with less playing experience (1 year) were 2.2 times more likely to sustain an injury when exposed to ‘spikes’ in workload than a reference group (ie, players ≥7 years’ experience). Players with 2–3 years (OR=0.2) and 4–6 years (OR=0.2) playing experience had a lower risk of injury when exposed to workload ‘spikes’ than more experienced (≥7 years’ experience) players. Collectively, these results demonstrate that (1) the workload-injury relationships demonstrated in elite senior athletes may differ for junior athletes, (2) the injured tissue types may differ between younger and older athletes, and (3) both younger and older athletes are at greater risk of injury when exposed to workload ‘spikes’.
The mathematics and biostatistics relevant to this field of sport science
Several recent papers have explored the statistical approaches used in training load-injury analyses.102–104 While the ACWR provides a method of safely progressing and regressing acute training load in relation to the most recently performed chronic load, the ACWR variable,105–107 as well as its method of calculation,108 is not without its critics. It has been demonstrated that ACWRs calculated using an exponentially weighted moving average (EWMA) formula differ from those calculated using rolling averages (RA).109 Moreover, while ACWRs calculated from both EWMA and RA were associated with increased injury risk, the sensitivity to detect increases in injury risk was better using ACWRs calculated from EWMA.44 More recently, others105 have suggested that the acute and chronic load should be ‘uncoupled’ (ie, acute load excluded from the calculation of chronic load) rather than the more traditional ‘coupled’ (ie, acute load included in the calculation of chronic load) ACWR. Interestingly, both coupled and uncoupled ACWRs have been associated with increased injury risk.51 110 However, research comparing coupled and uncoupled ACWRs for detecting injury risk has yet to be conducted and warrants investigation.
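The RA and EWMA approaches contrasted above can be sketched side by side. This is an illustrative implementation only: the 7-day acute and 28-day chronic windows, the decay constant lambda = 2/(N + 1) commonly used for EWMA, and the example loading pattern are all assumptions, not prescriptions from the cited studies.

```python
# Sketch contrasting rolling-average (RA) and exponentially weighted moving
# average (EWMA) ACWR calculations on daily loads. Windows, decay constant
# and the example data are illustrative assumptions.

def ewma(series: list[float], n_days: int) -> float:
    """EWMA over `series` with decay lambda = 2 / (n_days + 1)."""
    lam = 2 / (n_days + 1)
    value = series[0]
    for load in series[1:]:
        value = load * lam + value * (1 - lam)  # recent days weighted most
    return value

def acwr_ra(daily: list[float], acute: int = 7, chronic: int = 28) -> float:
    """Coupled RA ACWR: mean of last 7 days / mean of last 28 days."""
    return (sum(daily[-acute:]) / acute) / (sum(daily[-chronic:]) / chronic)

def acwr_ewma(daily: list[float], acute: int = 7, chronic: int = 28) -> float:
    """EWMA ACWR: fast-decay EWMA / slow-decay EWMA over the same history."""
    return ewma(daily[-chronic:], acute) / ewma(daily[-chronic:], chronic)

# Hypothetical 28 days: three steady weeks, then a sharp final-week spike.
daily = [50.0] * 21 + [80.0] * 7
print(round(acwr_ra(daily), 2))    # 1.39
print(round(acwr_ewma(daily), 2))  # 1.23
```

For the same loading history the two methods return different ratios, which is precisely why the method of calculation matters when comparing studies or applying published 'thresholds'.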
Where is the ‘ceiling’?
Recently, Drew et al 111 presented evidence-based guidelines for practitioners to follow when prescribing training load to athletes. These included (1) establishing a moderate chronic training load, (2) minimising week-to-week changes in training load, (3) avoiding exceeding the ceiling of safety for the sport, (4) ensuring a minimum training load is maintained, (5) avoiding inconsistent loading patterns, (6) ensuring training loads are proportionate to the demands of the sport, and (7) monitoring the athlete throughout the latent period (ie, following the application of load or ‘spikes’ in load).
Although these points have been addressed throughout this paper, establishing the appropriate ‘ceiling’ of training load can, at times, be difficult for practitioners given that the demands of sport evolve over seasons and are influenced by several known contextual factors (eg, winning or losing, margin of victory or defeat, quality of opposition, and so on).112 The English Premier League has seen increases in the demands on players’ high-intensity running while in possession of the ball, sprint distance and number of sprints, the number of passes performed and received, and the number of short and medium passes performed in the period from 2006–2007 through 2012–2013.113 Researchers have also demonstrated increases in total ball-in-play time114 and decreases in recovery time115 in other team sports. Thus, the training load ‘ceiling’ may be a ‘moving target’. Regular monitoring of the physical demands of competition and reassessment of the ‘ceiling’ is required to ensure that athletes are adequately prepared for the most demanding passages of play.
In conclusion, in 2014 it was demonstrated that high chronic workloads were associated with lower injury risk as long as those workloads were achieved safely.43 This paper addresses some common myths and misconceptions about training load and provides an updated summary of the recent research examining training load and injury. In the past 5 years, a total of 38 studies from as many as 24 different research groups and 11 different sports have demonstrated that rapid increases in workload43–69 and low chronic workloads44 46–48 50 52 54 55 57 61 62 65 68 71 84 86 88–94 are associated with greater injury risk. The next advance in this field will be a randomised controlled trial where the intervention consists of carefully described, reproducible load management and the control group trains ‘as usual’. This will be a difficult study to execute at any level of sport.
The author wishes to thank the reviewers for their constructive comments during the review process.
Contributors TJG is responsible for the content in this manuscript.
Funding The author has not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests TJG works as a consultant to several high-performance organisations, including sporting teams, industry, military and higher education institutions. He serves in a voluntary capacity as Senior Associate Editor of BJSM.
Patient consent Not required.
Provenance and peer review Not commissioned; externally peer reviewed.