Shiri et al. conducted a meta-analysis to examine the effect of leisure time physical activity on non-specific low back pain (LBP) (1). The adjusted risk ratios (RRs) (95% confidence intervals) for frequent/chronic LBP in moderately/highly active, moderately active and highly active individuals, compared with individuals without regular physical activity, were 0.89 (0.82 to 0.97), 0.86 (0.79 to 0.94) and 0.84 (0.75 to 0.93), respectively. For LBP in the past 1-12 months, the adjusted RR did not reach significance at any level of physical activity. The authors concluded that leisure time physical activity might reduce the risk of chronic LBP by 11%-16%. I have some concerns about their study, and present below evidence that argues against a protective effect of physical activity on LBP.
First, Saragiotto et al. conducted a meta-analysis on the effectiveness of motor control exercise (MCE) in patients with non-specific LBP (2). MCE focuses on the activation of the deep trunk muscles and targets the restoration of control and coordination of these muscles. They concluded that MCE was probably more effective than a minimal intervention for reducing pain, but did not have an important effect on disability, in patients with chronic LBP. In addition, there was no clear difference between MCE and other forms of exercise or manual therapy for acute and chronic LBP. As there is as yet no definitive evidence to recommend MCE for non-specific LBP, further studies are needed to verify its interventional effects.
Second, Øverås et al. reviewed prospective studies to evaluate the associations between objectively measured physical behaviour and the risk or prognosis of neck pain (NP) and/or LBP (3). Eight of the 10 studies involved blue-collar workers, among whom increased sitting time at work reduced the risk of NP and LBP, whereas increased physical activity during work and/or leisure increased that risk. In addition, physical activity was not significantly associated with the prognosis of LBP. These findings are not consistent with the data of Shiri et al. (1), and type of job might be an important factor in the association.
Finally, Oliveira et al. reviewed prospective studies to investigate the prognostic role of physical activity in the course of LBP (4). Although low-quality evidence suggested that physical activity may not be a prognostic factor for pain and disability in patients with LBP, additional studies are needed to verify the association. I suspect that many interacting factors affect this relationship, making it a complicated one.
References
1. Shiri R, Falah-Hassani K. Does leisure time physical activity protect against low back pain? Systematic review and meta-analysis of 36 prospective cohort studies. Br J Sports Med. 2017;51(19):1410-1418. doi: 10.1136/bjsports-2016-097352
2. Saragiotto BT, Maher CG, Yamato TP, et al. Motor control exercise for nonspecific low back pain: A Cochrane review. Spine (Phila Pa 1976). 2016;41(16):1284-95. doi: 10.1097/BRS.0000000000001645
3. Øverås CK, Villumsen M, Axén I, et al. Association between objectively measured physical behaviour and neck- and/or low back pain: A systematic review. Eur J Pain. 2020 Feb 24. doi: 10.1002/ejp.1551
4. Oliveira CB, Pinheiro MB, Teixeira RJ, et al. Physical activity as a prognostic factor of pain intensity and disability in patients with low back pain: A systematic review. Eur J Pain. 2019;23(7):1251-1263. doi: 10.1002/ejp.1395
Allow me to take this opportunity to extend my appreciation to the BJSM for being a publication of high standing, bringing cutting-edge information to the sports medicine fraternity.
Thank you for the consensus statement of the International Olympic Committee describing the methods for recording and reporting of epidemiological data on injury and illness in sport 2020 (including STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)).[1] I found it both informative and useful.
I have a comment about the use of the word “Nervous” in the first column of Table 5. It is an adjective whereas the rest of the words in the column are nouns that more accurately describe the tissue type under discussion. It is possibly only a linguistic error, but I am of the opinion that it should be “Nerve” or “Neural tissue”.
Reference
1. Bahr R, Clarsen B, Derman W, et al. International Olympic Committee consensus statement: methods for recording and reporting of epidemiological data on injury and illness in sport 2020 (including STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)). Br J Sports Med Published Online First: 18 February 2020. doi: 10.1136/bjsports-2019-101969
We read with interest the recent International Olympic Committee consensus statement: methods for recording and reporting of epidemiological data on injury and illness in sport 2020 (including STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)).[1] While the statement helps to clarify aspects of recording and reporting epidemiological data, we believe that, based on the definitions it provides, some of the examples in Table 10 require clarification with regard to the recording of injuries and the calculation of time loss.
Consider the example for 'Delayed' time loss: Sunday injury, thigh contusion; the athlete is able to train on Monday and Tuesday but unable to train on Wednesday, and returns on Sunday (time loss starts on Wednesday even though the injury was on Sunday); time loss (days) = 3. Given the reported time loss of 3 days, and the definition provided whereby "time-loss days should be counted from the day after the onset that the athlete is unable to participate", we assume Wednesday is considered the day of onset (day 0), with the subsequent impact on Thursday, Friday and Saturday resulting in a 3-day time loss. Considering this example, we were then somewhat confused by the example for 'Intermittent' time loss: a boy with Osgood-Schlatter disease that is reported at the start of a training camp on Monday; the player may train fully on Monday, Tuesday and Thursday, but miss training on Wednesday and Friday (time loss counted as Wednesday and Friday only); time loss (days) = 2. Applying the time-loss definition provided in the consensus statement [1] and the logic of the delayed time loss example, should the Wednesday not be considered the onset of time loss and therefore counted as day 0? Based on the two examples and the time loss (days) provided for each, we feel that time loss (days) has been calculated differently in each case, and we ask the authors to clarify. We agree with recording time loss from the day after the injury when the injury occurs during training on that day; but if an athlete is unable to participate at all due to injury, we feel this could be considered day 1 of time loss.
In relation to the above examples, it is also unclear why these have been considered single injury reports (cases), given the definitions provided within the consensus statement: "subsequent injuries to the same location and tissue as the index injury are recurrences if the index injury was healed/fully recovered; they are exacerbations if the index injury was not yet healed/fully recovered", where healed/fully recovered means the "athlete is fully available for training and competition".[1] On this basis, we would interpret the delayed time loss and intermittent time loss examples as multiple injury reports (cases). For delayed time loss, the first case would open following the initial injury on the Sunday and close when the athlete is considered healed/fully recovered, on their return to full training on Monday; a subsequent recurrence on the Wednesday would open a second case, which closes on the Sunday (Figure 1a). Similarly, for intermittent time loss, the initial case opens with the initial injury report on Monday and closes when the athlete trains fully on Monday; a second case opens on Wednesday, when the athlete is unable to train, and closes on Thursday upon full return to training, before another case opens on the Friday (Figure 1b).
INSERT FIGURE 1A AND 1B ABOUT HERE
Moreover, if the intermittent time loss example should in fact be considered multiple injury reports (cases), then based on the time-loss methods suggested within the paper, both Wednesday and Friday would be considered the onset of time loss and counted as day 0. Therefore, although the athlete missed two days of training, a time loss of 0 days would be calculated (Figure 1b). Such an approach to calculating time loss may thus lead to under-reporting. We feel this supports our view that an athlete who reports injured at the start of training (and does not participate at all) incurs their first day of time loss that day; in this example, this results in two 1-day time-loss events linked to two subsequent injury reports (cases).
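The two counting conventions at issue can be made concrete with a short sketch (a hypothetical encoding of the rules as we read them, not code from the consensus statement): each injury case is modelled as a block of consecutive missed days, and the rules differ only in whether the first missed day of a case counts.

```python
# Two interpretations of time-loss counting, applied per injury case.
# A "case" is modelled simply as a count of consecutive days missed.

def time_loss_day0(case_lengths):
    """Consensus-style rule as we read it: the first missed day of each
    case is the onset (day 0) and is not counted."""
    return sum(max(n - 1, 0) for n in case_lengths)

def time_loss_day1(case_lengths):
    """Alternative rule argued for in this letter: every missed day
    counts, so the first missed day is day 1."""
    return sum(case_lengths)

# Delayed example: one case, unable to train Wed-Sat (4 missed days).
delayed = [4]
# Intermittent example treated as multiple cases: Wednesday and Friday
# are two separate one-day cases.
intermittent = [1, 1]

print(time_loss_day0(delayed), time_loss_day1(delayed))            # 3 4
print(time_loss_day0(intermittent), time_loss_day1(intermittent))  # 0 2
```

Under the day-0 rule, the delayed example yields the published 3 days, but the intermittent example, treated as two separate cases, yields 0 days rather than the published 2, which is the inconsistency described above.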
It is not our intention to challenge the authors, and indeed the updated consensus statement has provided valuable recommendations for injury surveillance. Additionally, we are pleased to see the inclusion of examples within the paper, as such scenarios are a regular occurrence in our areas of research. However, based on the two examples provided, we feel that there are discrepancies in the calculation of time loss (days) and a lack of clarity surrounding injury recording, specifically with respect to what keeps an injury report (case) open (the delayed and intermittent time loss examples). We therefore ask the authors to consider clarifying each point we raise in this letter.
References
1. Bahr R, Clarsen B, Derman W, et al. International Olympic Committee consensus statement: methods for recording and reporting of epidemiological data on injury and illness in sport 2020 (including STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)). Br J Sports Med Published Online First: 18 February 2020. doi: 10.1136/bjsports-2019-101969
Figure Descriptions
Figure 1: Injury examples from the International Olympic Committee consensus statement, with the inclusion of injury cases and associated time loss (days), based on the healed/fully recovered and time loss (days) definitions provided.
Tables and Figures
Figure available upon reasonable request
Whilst its principal message is clear, I wish to draw attention to three problems arising from the editorial authored by Caneiro et al.:
1. They say, “… pain is described as an altered state of a person’s knee health influenced by biopsychosocial factors, of which many can be modified.”
How is “knee health” different from “whole person health”?
Just how many biopsychosocial factors can be modified?
2. Contemporary evidence is said to support the proposition that “knee health” is “influenced by the interaction of different biopsychosocial factors” that have the property of “modulating inflammatory processes and tissue sensitivity”.
Is there any evidence that such an interaction actually takes place?
And furthermore, what are the postulated mechanisms for such interaction?
3. Their Infographic (“What should you know about knee osteoarthritis?”) contains the statement “rest and avoidance makes pain worse.” Presumably they are referring to avoidance of graded exercise. But even so, how do the authors justify their conclusion that avoidance of exercise or rest "per se" can “make pain worse”?
The Australian Sports Drug Medical Advisory Committee (ASDMAC) and Drug Free Sport New Zealand (DFSNZ) Therapeutic Use Exemption (TUE) committees welcome the recent discussion paper by our esteemed colleague Dr Ken Fitch entitled "Therapeutic Use Exemptions (TUEs) are essential in sport: but there is room for improvement." As the national bodies responsible for TUE assessment and processing in our respective nations, ASDMAC and DFSNZ agree that the integrity of the TUE process is sound and essential, but could be improved through a peer review process.
Although the World Anti-Doping Agency (WADA) does screen TUEs entered in the Anti-Doping Administration and Management System (ADAMS), the supplementary screening of TUE Committees (TUECs) themselves, including their members, processes and procedures, as suggested by Dr Fitch, would improve the reliability and standardisation of TUEs. In 2018 and 2019, ASDMAC and DFSNZ, with the support of the WADA TUE Expert Group, designed and conducted a TUE Peer Review Audit. This process included documentation of the proposed audit process, followed by reciprocal visits by each Chair to the other's TUEC meeting. During the visits, the Chairs assessed a number of TUE applications and outcomes to ensure that those granted had been approved in accordance with the WADA ISTUE and that the WADA Medical Information to Support TUEC Decisions had been appropriately interpreted. These visits also included meetings with other members of the National Anti-Doping Organisation (NADO), such as the TUE Secretariat and other staff in leadership and education roles. The entire process was presented and discussed at the WADA TUE Expert Group meetings.
Having conducted this peer review, the ASDMAC and DFSNZ TUECs would commend this process to all NADO and International Federation (IF) TUECs. There is great potential to use this peer review process to ensure quality and transparency in granting elite athlete TUEs around the world, across nations and sports, as well as to support smaller, less experienced TUECs in establishing robust processes for their work. The universal adoption of a TUEC peer review process would benefit all organisations involved in anti-doping, and the athletes and sports for whom the integrity of the TUE process reflects the integrity of sport and performance itself.
As part of this excellent summary editorial, you mentioned how important it is to tailor your education to the needs and preferences of the patient. Therapists also have diverse needs, so it would be an excellent resource to have this in a graphical format that could be displayed openly in any department, whether in Outpatients or in physiotherapy, for example. Such a display would serve as a visual reminder to clinicians, but would also be visible to patients, who could interrogate it to understand what to expect in their consultation. Creating these expectations would help drive forward better, more holistic assessment and care of patients.
I'd like to commend you on running a large RCT on such an important topic (assessing the purported effectiveness of concussion-reduction technologies). Unfortunately, I have some concerns about aspects of your data and analysis, particularly the as-treated analysis in Table 4 and your reported adherence numbers. I am hoping you can clarify these concerns and redo parts of your analysis.
1. In the as-treated analysis you have a very strange result. Your multivariate risk ratio (which is actually a rate ratio) is 0.63 for everyone overall, 0.64 for females, and 0.93 for males. The result for everyone should be between the results for males and females. Can you please clarify how you got these results, including the exact model(s) you used and how you calculated the rate ratios? Did you use a group*sex interaction term to get the sex-specific results?
2. How you defined the as-treated group is concerning. You state that you only re-classified a subject if they spent >50% of their time in their non-assigned group OR if they were concussed while in their non-assigned group. This approach will bias the results of your as-treated analysis as you are deliberately misclassifying the AEs of people who do not get hurt and the non-concussed AEs of those who do. You need to classify every AE, rather than each athlete, as headgear or no headgear and repeat the as-treated analysis. Otherwise this analysis is highly questionable and should be removed from the paper.
3. You report extremely high adherence (99.53%). Does this take into account all non-adherence, or only non-adherence from students who did not adhere for a majority of their AEs or who suffered a concussion while non-adherent? It would be very helpful to see total non-adherence, since it sounds like your ATs recorded that. I would like to emphasize again that this total non-adherence is what should be used in your as-treated analysis.
If 0.47% is the total non-adherence for all participants, would you be willing to share some strategies you used to secure such great adherence?
4. Finally, it seems there is an extremely high rate of concussions among the non-adherent AEs. Per Table 4 and the text of your paper, there were at least 7 concussions among the 711 non-adherent AEs for a rate of 9.85 per 1,000 AEs. For the adherent group this leaves 123 concussions in 150,466 AEs for a rate of 0.82 per 1,000 AEs. This suggests the rate of concussions in the non-adherent was at least 12-fold that in the adherent (regardless of whether these involved wearing headgear or not). This is a very strong effect. Do you have any explanation for this vast difference? It is possible that this difference will shrink or disappear if you correctly count all non-adherent AEs as non-adherent.
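The arithmetic behind this point can be reproduced from the counts reported in the paper (150,466 adherent and 711 non-adherent AEs, with 123 and 7 concussions respectively):

```python
# Concussion rates per 1,000 athlete-exposures (AEs), from the
# counts reported in the paper and Table 4.
adherent_aes, adherent_src = 150_466, 123
nonadherent_aes, nonadherent_src = 711, 7

rate_adherent = adherent_src / adherent_aes * 1000           # ~0.82 per 1,000 AEs
rate_nonadherent = nonadherent_src / nonadherent_aes * 1000  # ~9.85 per 1,000 AEs
rate_ratio = rate_nonadherent / rate_adherent                # ~12-fold

# Overall adherence implied by the same counts: ~99.53% of all AEs.
adherence = adherent_aes / (adherent_aes + nonadherent_aes)

print(round(rate_adherent, 2), round(rate_nonadherent, 2),
      round(rate_ratio, 1), round(adherence * 100, 2))
```

These crude rates carry no adjustment for sex, school, or other covariates; they simply confirm the roughly 12-fold difference quoted above.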
Thank you again for conducting this trial and for your kind attention to these questions!
We are grateful for Dr. Binney’s interest in our study and his consideration of a portion of the results presented in the manuscript.
Listed below are our responses to each of the concerns raised in the letter.
1. In the as-treated analysis you have a very strange result. Your multivariate risk ratio (which is actually a rate ratio) is 0.63 for everyone overall, 0.64 for females, and 0.93 for males. The result for everyone should be between the results for males and females. Can you please clarify how you got these results, including the exact model(s) you used and how you calculated the rate ratios? Did you use a group*sex interaction term to get the sex-specific results?
Response: We thank you for noticing the mathematical inconsistency in the Table 4 rate ratio results for the as-treated analyses. You are correct that if these results came from one model, the overall rate ratio estimate would need to be between the male and female estimates. We should note that these were actually three separate mixed-effects models: (1) the overall model adjusting for all variables, including sex; (2) the female subgroup model adjusting for all variables excluding sex; and (3) the male subgroup model adjusting for all variables excluding sex. We apologize that the footnote in the table is unclear in this regard. We did attempt to use interaction models for this analysis, but did not achieve consistent convergence; as such, we opted for subgroup analysis. It is curious that the overall rate ratio estimate does not fall within the bounds of the male and female rate ratio estimates; this is something we continue to explore. We regret not publishing the univariable results alongside the multivariable results in Table 4. As shown below, the univariable results lead to estimates in which the overall estimate does fall between those of the subgroup analyses. The cluster-adjusted univariable rate ratios [RR (95% CI), p value] and Cox proportional hazard ratios [HR (95% CI), p value] comparing the incidence of SRC between the HG and No HG groups are as follows:
Per-protocol analyses:
All: No HG n = 1545, SRCs = 68 (4.4%); HG n = 1505, SRCs = 62 (4.1%); RR = 0.96 (0.63 to 1.46), p = 0.855; HR = 0.99 (0.65 to 1.50), p = 0.951.
Males: No HG n = 546, SRCs = 8 (1.5%); HG n = 474, SRCs = 14 (3.0%); RR = 1.81 (0.63 to 5.18), p = 0.271; HR = 2.02 (0.70 to 5.80), p = 0.286.
Females: No HG n = 999, SRCs = 60 (6.0%); HG n = 1031, SRCs = 48 (4.7%); RR = 0.87 (0.53 to 1.42), p = 0.582; HR = 0.83 (0.53 to 1.32), p = 0.442.
As-treated analyses:
All: No HG n = 1546, SRCs = 75 (4.9%); HG n = 1504, SRCs = 55 (3.7%); RR = 0.66 (0.41 to 1.08), p = 0.097; HR = 0.80 (0.51 to 1.24), p = 0.315.
Males: No HG n = 548, SRCs = 10 (1.8%); HG n = 472, SRCs = 12 (2.5%); RR = 0.97 (0.30 to 3.20), p = 0.966; HR = 1.40 (0.49 to 4.06), p = 0.531.
Females: No HG n = 998, SRCs = 65 (6.5%); HG n = 1032, SRCs = 43 (4.2%); RR = 0.62 (0.37 to 1.06), p = 0.078; HR = 0.69 (0.42 to 1.12), p = 0.134.
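As a rough cross-check, crude risk ratios computed directly from the as-treated counts above (ignoring clustering and all covariates, so they will not match the adjusted values reported) do show the expected ordering, with the overall estimate between the sex-specific ones:

```python
# Crude risk ratios from the as-treated counts given above
# (HG = headgear group). These ignore clustering and covariates, so
# they differ from the cluster-adjusted RRs reported in the text.
def crude_rr(src_hg, n_hg, src_nohg, n_nohg):
    return (src_hg / n_hg) / (src_nohg / n_nohg)

rr_all = crude_rr(55, 1504, 75, 1546)     # ~0.75
rr_male = crude_rr(12, 472, 10, 548)      # ~1.39
rr_female = crude_rr(43, 1032, 65, 998)   # ~0.64

# In the crude figures, the overall estimate lies between the
# subgroup estimates, as expected.
assert rr_female < rr_all < rr_male
print(round(rr_all, 2), round(rr_male, 2), round(rr_female, 2))
```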
The fact that the univariable results are plausible suggests that something about adjusting for other variables, or about having a random effect in each separate model, is causing the overall 'as-treated' HG estimate to fall outside the estimates from the male and female subgroup models. We will continue to examine issues of sparsity, multicollinearity, exchangeability, etc. that may be causing this disparity in the multivariable analyses.
2. How you defined the as-treated group is concerning. You state that you only re-classified a subject if they spent >50% of their time in their non-assigned group OR if they were concussed while in their non-assigned group. This approach will bias the results of your as-treated analysis as you are deliberately misclassifying the AEs of people who do not get hurt and the non-concussed AEs of those who do. You need to classify every AE, rather than each athlete, as headgear or no headgear and repeat the as-treated analysis. Otherwise this analysis is highly questionable and should be removed from the paper.
Response: For this study we considered subjects without SRC to be adherent if they participated in more than 50% of their AEs with their assigned group allocation. There are certainly limitations to this decision. One could argue that our study team should have set the threshold higher than the 50% we used. The 50% threshold may appear arbitrary, but we felt it gave us the most realistic picture of what was occurring at the team level during the study. We recognize that we could have set this threshold at 60%, 70%, 80% or even 90% of athletic exposures, and doing so would alter the distribution of the as-treated groups; however, given the high rate of compliance, we felt that our results would not change meaningfully with this decision, as the distribution of compliance would have to be very different. In fact, we looked into this and determined that the classification of adherent versus non-adherent groups would not change drastically based on our definition of "as-treated". The sensitivity of the classification approach for all participants is as follows. ITT HG group (n = 1505): 0% adherent n = 0; 1%-10% n = 0; 11%-20% n = 0; 21%-30% n = 0; 41%-50% n = 0; 51%-60% n = 0; 61%-70% n = 3 (0.2%); 71%-80% n = 2 (0.1%); 81%-90% n = 33 (2.2%) (note: 7 of these 33 subjects sustained an SRC during one of their non-adherent AEs); 91%-100% n = 1467 (97.5%).
ITT No HG group (n = 1545): 0% adherent n = 5 (0.3%); 1%-10% n = 0; 11%-20% n = 0; 21%-30% n = 0; 41%-50% n = 1 (0.06%); 51%-60% n = 0; 61%-70% n = 0; 71%-80% n = 0; 81%-90% n = 0; 91%-100% n = 1539 (99.6%). Note: the 6 non-adherent participants in this group were changed to the HG group in the as-treated analyses due to non-adherence (they wore their own headgear, not provided by the study team).
In the distribution results above, very few athletes relative to the entire sample were affected by our definition of <50% adherence. The second component that could change the analyzed group assignment for the as-treated analysis was having a non-adherent headgear status for the athletic exposure in which an SRC occurred. There were only 13 athletes whose group assignment differed between the ITT and as-treated analyses: 6 changed from No HG to HG based on adherence (seen above), and the other 7 changed from HG to No HG because they were not wearing their headgear at the time of SRC. We agree that it is a little puzzling that all 7 who changed status from HG to No HG did so because of an SRC. Could this be due to data collection issues? We explored this possibility and discuss it further below.
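The per-athlete reclassification rule described here can be sketched as follows (a hypothetical encoding for illustration, not the study's actual code):

```python
# Sketch of the per-athlete as-treated classification rule as we
# understand it (hypothetical function, not the study's actual code).
def classify_athlete(assigned, frac_in_assigned, src_in_assigned):
    """Keep the assigned group unless <50% of AEs were spent in it,
    or the athlete's SRC occurred in the non-assigned condition.
    src_in_assigned is True/False for concussed athletes, None if no SRC."""
    other = "NoHG" if assigned == "HG" else "HG"
    if frac_in_assigned < 0.5:
        return other
    if src_in_assigned is False:  # SRC sustained in non-assigned condition
        return other
    return assigned

# Athlete randomised to HG, 90% adherent, concussed without headgear:
# reclassified to No HG, and all of their AEs move with them.
assert classify_athlete("HG", 0.9, False) == "NoHG"
# The same athlete, uninjured, stays in HG, so their ~10% unhelmeted
# AEs count as headgear exposure; this is the asymmetry raised in the letter.
assert classify_athlete("HG", 0.9, None) == "HG"
```

Classifying each AE individually, as suggested in the letter, removes this asymmetry because the exposure denominators no longer move with the athlete's overall label.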
We also acknowledge that our 'as-treated' groupings may not be optimal, and that classifying each athletic exposure, rather than each student, according to whether headgear was worn that day could be a preferred way to further analyze the as-treated group. As suggested, we analyzed the data in this way for the as-treated analysis. The cluster-adjusted univariable odds ratios [UOR (95% CI), p value] and multivariable odds ratios [MVOR (95% CI), p value] comparing the incidence of SRC between the HG and No HG groups, using the athletic exposure as the individual unit of analysis, are as follows. All HG: UOR = 0.70 (0.45 to 1.07), p = 0.099; MVOR = 0.67 (0.42 to 1.06), p = 0.089. Males HG: UOR = 1.09 (0.40 to 2.94), p = 0.871; MVOR = 1.06 (0.38 to 2.97), p = 0.911. Females HG: UOR = 0.65 (0.40 to 1.06), p = 0.083; MVOR = 0.67 (0.41 to 1.08), p = 0.101. The cluster models were adjusted at the school level only, due to singularity issues when additionally adjusting at the subject-ID level.
Note that this analysis method does not apply to an ITT type of analysis, because it allows the group assignment to change while an ITT analysis does not. Also, this method estimates an odds ratio rather than a rate ratio, since athletic exposures are the individual unit. Because there are no repeat concussions in our data, the OR should be similar to the RR. Comparing these results with the published Table 4 as-treated results does not show an appreciable difference. We have shown through these responses that the data can be analyzed in multiple different ways without the results changing in a manner that should bring the methods selected for publication into major scrutiny.
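The point that the OR approximates the RR when the outcome is rare can be checked with a quick sketch. The counts below are rounded, illustrative versions of the as-treated totals above, not the exact study data:

```python
# Illustrative counts only: SRCs are rare (~1 per 1,000 AEs), so the
# odds ratio and the simple risk ratio nearly coincide.
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a=SRC, b=no SRC (exposed); c, d (unexposed)."""
    return (a / b) / (c / d)

def risk_ratio(a, n1, c, n0):
    """Ratio of event proportions between exposed and unexposed groups."""
    return (a / n1) / (c / n0)

a, n1 = 55, 72_000  # headgear group: SRCs, total AEs (rounded)
c, n0 = 68, 78_000  # no-headgear group: SRCs, total AEs (rounded)

or_est = odds_ratio(a, n1 - a, c, n0 - c)
rr_est = risk_ratio(a, n1, c, n0)
print(round(or_est, 4), round(rr_est, 4))  # nearly identical estimates
```

With events this rare, the two estimates agree to roughly three decimal places, which is why the OR from the AE-level models can be read as an approximate rate ratio here.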
3. You report extremely high adherence (99.53%). Does this take into account any non-adherence, or only non-adherence from students who were non-adherent for a majority of their AEs or who suffered a concussion while non-adherent? It would be very helpful to see total non-adherence, since it sounds like your ATs reported that. I would like to emphasize again that this total non-adherence is what should be used in your as-treated analysis. If 0.47% is the total non-adherence for all participants, would you be willing to share some strategies you used to secure such great adherence?
Response: We readily acknowledge the high compliance once participants were enrolled in the study, and we believe, to the best of our knowledge, this to be an accurate representation of adherence to the study protocol. That said, there are some selection effects that may have contributed to this unusually high compliance rate.
For any RCT, researchers must be concerned with subjects' compliance with their assigned treatment or control group. This is even more of a concern when the subjects are members of athletic teams and/or adolescents. It is entirely unrealistic to assume that each subject would be compliant for 100% of their athletic exposures, and this study is no different. In fact, given our previous work with adolescent athletes, the entire study team recognized during initial planning that we needed a plan to monitor and encourage compliance for subjects in the HG group.
To encourage compliance, our plan was to stress the importance of the study during all phases of team and subject recruitment. Once the study began, we also stressed to coaches and athletic trainers the importance of continually monitoring and reporting how many subjects were compliant with their assigned group.
This high compliance could be attributed to several factors, but is most likely due to the fact that the coaches who actually agreed to participate were highly motivated (and independently encouraged their athletes to fully comply with the study protocol). Even though we contacted 537 teams, only 33% of the female coaches and 23% of the male coaches agreed to take part in the study (see Figure 1 in the manuscript).
Anecdotally, coaches who did not want the risk of being randomized to either the HG or NoHG group did not agree to participate at the onset of the study, resulting in a self-selected group of potentially highly motivated schools/coaches being randomized. The motivation of the study participants should also be mentioned: once a school agreed to allow its teams to participate, 60% of the players in the NoHG group enrolled in the study, while only 50% of the players from schools allocated to the HG group did so (Figure 1).
At the soccer team recruitment and enrollment meetings, core study team members stressed the magnitude of the study. Potential participants (and their parents) were told how important compliance was, given the costs and logistics involved in the data collection process. In many cases the coaches stressed these points as well. Many coaches repeated that, while participation was voluntary, they expected all subjects who enrolled to be compliant on a daily basis throughout the season. We anticipate, but cannot fully prove, that as a result of these statements the subjects most likely to be compliant enrolled in the study while those unlikely to be compliant did not.
In addition, we allowed the subjects in the HG group to choose which brand of headgear to wear during the season. Each of the brands utilized in the study had unique characteristics, was made of different materials, and conformed differently to various head shapes and sizes. As a result, the headgear did not fit each subject with the same degree of comfort. HG group participants were encouraged to try on each brand and pick the model they felt was the most comfortable, as they would be wearing it the entire season for every practice and competition. By allowing participants to individually choose the HG model they would wear, we felt we could increase the likelihood that they would remain compliant throughout the season.
During the run-up to and throughout the study, the PI contacted each AT collecting data, reminding them of the importance of compliance and asking them to let the study team know if participants were not compliant. Coaches were contacted personally by the study PI if the school AT reported that compliance was a concern for their team's participants. In Figure 1, we note that of the n=1599 initially enrolled in the HG arm of the study, only n=1505 began participation (i.e., n=94 (6%) HG participants dropped out of the study after enrollment). This differed in the NoHG arm, where only n=15 (1%) did not participate with the team after enrollment. Presumably, most of these individuals left because they were cut from the team or quit before data collection began.
Finally, n = 59 participants chose to drop out of the study during the season. These individuals are effectively censored in the analysis (disproportionately in the HG group), and only their data up until the point of withdrawal are included in the n=3050. Study oversight rules and regulations prohibited us from contacting any players who decided to stop participating. However, at the time of withdrawal, ATs recorded the reason for withdrawing from the study, if known. Most (69%) did not list a reason. We cannot say for sure, but it can be presumed that a majority of the HG athletes withdrew because they no longer wanted to wear the HG. Further information on study drop-out by headgear group is as follows:
Headgear group: n = 50 (3%): non-soccer-related illness (n=1); quit team (n=10); did not want to wear the headgear (n=3); reason not listed (n=36). No headgear group: n = 9 (0.6%): quit the team (n=4); reason not listed (n=5).
Thus, although we do have high adherence once subjects started participating in the study, a number of eligible subjects did not enroll and/or exited the study after enrollment. We readily acknowledged this in the limitations section: we are very aware that a selection effect for study participants may be present. This selection effect, along with coaches and school ATs stressing compliance during the study and subjects being allowed to individually choose the headgear brand they would wear, is most likely the reason for the high compliance, although we have no way to definitively answer this question.
Further, we only analyzed the data reported to us by the school personnel regarding their participants. One could argue that, to be most accurate, a study team member (not school personnel) should have been present at each school site to monitor compliance. While that type of oversight may be desirable, it is entirely unrealistic given the size and scope of this study. We did perform weekly queries and periodically questioned individual school ATs if we felt the data they reported were incongruent with the data reported from other teams. We are assuming that the ATs, who were incentivized for their efforts (note: the soccer programs themselves were also incentivized), accurately recorded HG use and injuries throughout the season. Although we feel relatively confident in this assumption, once we had the final data we felt we had to analyze them as reported to us.
4. Finally, it seems there is an extremely high rate of concussions among the non-adherent AEs. Per Table 4 and the text of your paper, there were at least 7 concussions among the 711 non-adherent AEs for a rate of 9.85 per 1,000 AEs. For the adherent group this leaves 123 concussions in 150,466 AEs for a rate of 0.82 per 1,000 AEs. This suggests the rate of concussions in the non-adherent was at least 12-fold that in the adherent (regardless of whether these involved wearing headgear or not). This is a very strong effect. Do you have any explanation for this vast difference? It is possible that this difference will shrink or disappear if you correctly count all non-adherent AEs as non-adherent.
Response: The difference between the estimated rates of concussion per 1,000 AEs for non-adherent (1000*7/711 = 9.8) vs adherent (1000*123/150,446 = 0.8) exposures is indeed large, and worth exploring further. The distribution of concussions by ITT and as-treated assignment is as follows: ITT NoHG and as-treated NoHG: no SRC n = 78,558, SRC n = 68. ITT NoHG and as-treated HG: no SRC n = 218, SRC n = 0. ITT HG and as-treated NoHG: no SRC n = 486, SRC n = 7. ITT HG and as-treated HG: no SRC n = 71,765, SRC n = 55.
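The per-1,000-AE arithmetic behind these rates can be reproduced directly (using the reviewer's counts; the adherent denominator appears with a small typographical discrepancy between the comment and the response, which does not affect the conclusion):

```python
def rate_per_1000(events, exposures):
    """Concussions per 1,000 athlete-exposures (AEs)."""
    return 1000 * events / exposures

non_adherent = rate_per_1000(7, 711)      # SRCs during non-adherent AEs
adherent = rate_per_1000(123, 150_466)    # SRCs during adherent AEs

print(round(non_adherent, 2))             # 9.85
print(round(adherent, 2))                 # 0.82
print(round(non_adherent / adherent, 1))  # 12.0 -- the "12-fold" gap
```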
In addition, here is the breakdown for all 7 subjects who switched from HG to NoHG in the as-treated analysis because they suffered an SRC while not wearing HG.
Player A (School 07): 100% adherent from 8/15 – 09/01, then only ~50% adherent from 09/02 – 09/29 (n=13 total AEs; n=7 non-adherent). Suffered SRC on 09/29; 100% adherent after return to play.
Player B (School 28): 100% compliant from 03/20 – 05/01, then 0% compliant from 05/02 – 05/09 (n=5 AEs). Suffered SRC on 05/09; did not return prior to end of season.
Player C (School 13): 100% adherent from 03/20 – 04/13, then 0% compliant from 04/17 – 04/18 (n=2 AEs). Suffered SRC on 04/18; did not return prior to end of season.
Player D (School 11): 100% adherent from 03/20 – 03/30, then 0% adherent from 03/31 – 04/04 (n=3 AEs), then 100% adherent after return to play.
Player E (School 22): 100% adherent from 03/23 – 05/01, then 36% adherent from 05/02 – 05/16 (n=11 AEs; 4 non-adherent), then 100% adherent after return to play.
Player F (School 29): 100% adherent from 03/20 – 04/06, then 0% adherent from 04/07 – 04/11 (n=3 AEs), then 100% adherent from 04/12 – 05/15, then 0% adherent from 05/15 – 05/18 (n=4 AEs). Suffered SRC on 05/18 and did not return prior to end of season.
Player G (School 18): 100% adherent from 08/14 – 08/21, then 50% adherent from 08/22 – 09/05 (n=10 AEs; 5 non-adherent). Suffered SRC on 09/05; 100% adherent from return to play to end of season.
All 7 of these athletes were non-adherent for 10-20% of their total AEs. The 6 athletes who were non-adherent but did not suffer an SRC all came from different schools (01, 03, 05, 10, 13, and 33). Five of them were non-adherent for the entire season, and all had at least 30 AEs. The other was non-adherent (i.e., wore HG but was assigned NoHG) for 33 of 57 AEs (57.9%).
From the information above, we note that 12 of the 13 athletes came from different schools (so this anomaly is not athletic trainer specific). All 7 who had an SRC had periods of non-adherence prior to sustaining their SRC, which could suggest that the anomaly is not due to non-adherence being reported only on days when an SRC occurred. It might be that ATs were not as compliant as we expected in recording compliance at each practice and game; however, given our efforts to promote and ensure compliance, we have no reason to distrust the accuracy of our results.
Thank you again for allowing us to respond to these comments.
Thank you for your thorough response to my initial comment.
I am wondering if you could help me understand the new AE-level as-treated analysis you have done in response to Point 2. This accounts for all non-compliant AEs among all athletes, correct? If I understood you correctly, there were somewhat more non-compliant AEs than the 711 reported in the paper and in your response to Point 4, correct?
What would be very helpful to see is a.) the number of AEs and b.) the number of SRCs that occurred during those AEs for each of the following groups when considering any non-compliant AE, not just ones from athletes who suffered an SRC while non-compliant or were non-compliant >50% of the time:
Assigned HG/Did Not Wear:
Assigned HG/Did Wear:
Assigned No HG/Did Not Wear:
Assigned No HG/Did Wear:
If it's not too late, the authors may wish to correct their referencing throughout the paper. I noticed that the 3rd paragraph in the Introduction provides references that do not support the statements made, e.g. the Biswas et al paper did NOT assess the impact of PROLONGED sitting. Evidence on bouts of sitting is still very unclear, and none of these interventions has been shown to have a meaningful impact.
It is also unclear how the authors can make their statement about 30 min/day being “likely to be clinically meaningful” – when it probably depends on what the sitting was replaced with (i.e. with standing vs. movement, etc) and the isotemporal substitution paper that is cited to support this assertion is based on a ‘theoretical’ shift of sitting to light activity from a cross sectional study (with risk of reverse causation). This seems to be selling a story that really isn't there.
A more reasoned conclusion might be that VERY SMALL reductions in TOTAL sitting per day seem possible (a drop in the bucket?) with interventions that require significant resources (notably, not too dissimilar to PA interventions), but whether or not such shifts in sitting per se would make any meaningful difference for health outcomes/biomarkers remains very unclear. Not much good if there is no efficacy for outcomes. The reducing sitting story comes across as more of an hypothesis and 'feel good' story, but the evidence upon which it is based (in terms of efficacy) for health outcomes remains VERY shaky indeed (or mostly null) at present. More critical probing is needed, particularly if it begins to detract from messages around physical activity per se - which probably make any purported effects of sitting go away pretty quickly (particularly when measured objectively, unlike the Ekelund paper). There is already ample evidence that physical inactivity is bad for health, etc. Much of the research on sitting is simply a good sales exercise (good for getting grants) and is splitting hairs, when the goal is really just to get people to move more. We'd really welcome more critical and reflective discussion on this as a whole.
Allow me to make use of this opportunity to extend my appreciation to the BJSM for being a publication of high standing, bringing cutting-edge information to the sports medicine fraternity.
Thank you for the consensus statement of the International Olympic Committee describing the methods for recording and reporting of epidemiological data on injury and illness in sport 2020 (including the STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)).[1] I found it both informative and useful.
I have a comment about the use of the word “Nervous” in the first column of Table 5. It is an adjective whereas the rest of the words in the column are nouns that more accurately describe the tissue type under discussion. It is possibly only a linguistic error, but I am of the opinion that it should be “Nerve” or “Neural tissue”.
Reference
1. Bahr R, Clarsen B, Derman W, et al. International Olympic Committee consensus statement: methods for recording and reporting of epidemiological data on injury and illness in sport 2020 (including STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)). Br J Sports Med Published Online First: 18 February 2020. doi: 10.1136/bjsports-2019-101969
We read with interest the recent International Olympic Committee consensus statement: methods for recording and reporting of epidemiological data on injury and illness in sport 2020 (including STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)).[1] While the statement helps to clarify aspects of recording and reporting epidemiological data, based on the definitions it includes we believe that some of the examples in Table 10 require clarification with regard to the recording of injuries and the calculation of time loss.
Consider the example for 'Delayed' time loss: Sunday injury, thigh contusion, able to train on Monday and Tuesday but unable to train on Wednesday, returning on Sunday (time loss starts on Wednesday even though the injury was on Sunday); time loss (days): 3. Given the recommended reported time loss of 3 days, and the definition provided whereby "time-loss days should be counted from the day after the onset that the athlete is unable to participate", we assume Wednesday is considered the day of onset (day 0), with subsequent impact on Thursday, Friday and Saturday resulting in a 3-day time loss. When considering this example, we were then somewhat confused by the example for 'Intermittent' time loss: a boy with Osgood-Schlatter disease that gets reported at the start of a training camp on Monday. The player may train fully on Monday, Tuesday and Thursday, but miss training on Wednesday and Friday (time loss co...
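Our reading of the 'Delayed' example can be expressed as simple date arithmetic. This is a sketch of our interpretation of the counting rule, not an algorithm given in the consensus statement, and the calendar dates are illustrative:

```python
from datetime import date

def time_loss_days(first_day_unable, return_day):
    """Count time-loss days from the day AFTER the onset of inability
    up to (but not including) the day of return to full participation."""
    return (return_day - first_day_unable).days - 1

# 'Delayed' example: injured Sunday, first unable on Wednesday, returns Sunday.
first_unable = date(2020, 3, 4)  # a Wednesday (illustrative date)
returns = date(2020, 3, 8)       # the following Sunday
print(time_loss_days(first_unable, returns))  # 3 (Thursday, Friday, Saturday)
```

Under this interpretation Wednesday is day 0, so Thursday, Friday and Saturday are the three counted days, matching the 3-day time loss reported in Table 10.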
Whilst its principal message is clear, I wish to draw attention to three problems arising from the editorial authored by Caneiro et al.:
1. They say, “… pain is described as an altered state of a person’s knee health influenced by biopsychosocial factors, of which many can be modified.”
How is “knee health” different from “whole person health”?
Just how many biopsychosocial factors can be modified?
2. Contemporary evidence is said to support the proposition that “knee health” is “influenced by the interaction of different biopsychosocial factors” that have the property of “modulating inflammatory processes and tissue sensitivity”.
Is there any evidence that such an interaction actually takes place?
And furthermore, what are the postulated mechanisms for such interaction?
3. Their Infographic (“What should you know about knee osteoarthritis?”) contains the statement “rest and avoidance makes pain worse.” Presumably they are referring to avoidance of graded exercise. But even so, how do the authors justify their conclusion that avoidance of exercise or rest "per se" can “make pain worse”?
The Australian Sports Drug Medical Advisory Committee (ASDMAC) and Drug Free Sport New Zealand (DFSNZ) Therapeutic Use Exemption (TUE) committees welcome the recent discussion paper by our esteemed colleague Dr Ken Fitch entitled "Therapeutic Use Exemptions (TUEs) are essential in sport: but there is room for improvement." As the national bodies responsible for TUE assessment and processing in our respective nations, ASDMAC and DFSNZ agree that the integrity of the TUE process is sound and essential, but could be improved through a peer review process.
Although the World Anti-Doping Agency (WADA) does screen TUEs entered in the Anti-Doping Administration and Management System (ADAMS), the supplementary screening of TUE Committees themselves, including their members, processes and procedures, as suggested by Dr Fitch, would improve the reliability and standardisation of TUEs. In 2018 and 2019, ASDMAC and DFSNZ, with the support of the WADA TUE expert group, designed and conducted a TUE Peer Review Audit. This process included documenting the proposed audit process, followed by each Chair visiting the other's TUEC meeting. During the visits the Chairs assessed a number of TUE applications and outcomes to ensure that those granted were issued in accordance with the WADA ISTUE and that the WADA Medical Information to Support TUEC Decisions had been appropriately interpreted. These visits also includ...
As part of this excellent summary editorial, you mentioned how important it is to tailor your education to the needs and preferences of the patient. Therapists also have diverse needs, so it would be an excellent resource to have this in a graphical format that could be displayed openly in any department, whether in outpatients or in physiotherapy, as a visual reminder to clinicians. It would also be visible for patients to interrogate, so that they can understand what to expect in their consultation; setting these expectations will help drive forward better, more holistic assessment and care of patients.
Dear Dr. McGuine et al,
I'd like to commend you on running a large RCT on such an important topic (assessing the purported effectiveness of concussion-reduction technologies). Unfortunately, I have some concerns about some aspects of your data and analysis, particularly the as-treated analysis in Table 4 and your reported adherence numbers. I am hoping you can clarify these concerns and re-do parts of your analysis.
1. In the as-treated analysis you have a very strange result. Your multivariate risk ratio (which is actually a rate ratio) is 0.63 for everyone overall, 0.64 for females, and 0.93 for males. The result for everyone should be between the results for males and females. Can you please clarify how you got these results, including the exact model(s) you used and how you calculated the rate ratios? Did you use a group*sex interaction term to get the sex-specific results?
2. How you defined the as-treated group is concerning. You state that you only re-classified a subject if they spent >50% of their time in their non-assigned group OR if they were concussed while in their non-assigned group. This approach will bias the results of your as-treated analysis as you are deliberately misclassifying the AEs of people who do not get hurt and the non-concussed AEs of those who do. You need to classify every AE, rather than each athlete, as headgear or no headgear and repeat the as-treated analysis. Otherwise this analysis is highly questionable and...
To: The British Journal of Sports Medicine
We are grateful for Dr. Binney’s interest in our study and his consideration of a portion of the results presented in the manuscript.
Listed below are our responses to each of the concerns raised in the letter.
1. In the as-treated analysis you have a very strange result. Your multivariate risk ratio (which is actually a rate ratio) is 0.63 for everyone overall, 0.64 for females, and 0.93 for males. The result for everyone should be between the results for males and females. Can you please clarify how you got these results, including the exact model(s) you used and how you calculated the rate ratios? Did you use a group*sex interaction term to get the sex-specific results?
Response: We thank you for noticing the mathematical inconsistency in the Table 4 rate ratio results for the as-treated analyses. You are correct that if these results were from one model, the overall rate ratio estimate would need to be in between the male/female estimates. We should note that these were actually 3 separate mixed-effects models: (1) the overall model adjusting for all variables including sex, (2) a female sub-group model adjusting for all variables excluding sex, and (3) a male sub-group model adjusting for all variables excluding sex. We apologize that the footnote in the table is unclear in this regard. We did attempt to use interaction models for these analyses, but did not achieve consistent convergence. As such, we opt...
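The bracketing property the reviewer expected holds when a single stratified or pooled model is fitted. A toy Mantel-Haenszel pooled rate ratio illustrates this (the numbers are invented for illustration, not the study's data):

```python
# Toy person-time data, stratified by sex: (SRCs_HG, AEs_HG, SRCs_NoHG, AEs_NoHG)
strata = {
    "female": (10, 10_000, 20, 10_000),  # stratum rate ratio 0.50
    "male":   (9, 10_000, 10, 10_000),   # stratum rate ratio 0.90
}

def mantel_haenszel_rr(strata):
    """Mantel-Haenszel pooled rate ratio for person-time (rate) data."""
    num = den = 0.0
    for a, t1, c, t0 in strata.values():
        t = t1 + t0
        num += a * t0 / t  # exposed cases weighted by unexposed person-time
        den += c * t1 / t  # unexposed cases weighted by exposed person-time
    return num / den

pooled = mantel_haenszel_rr(strata)
print(round(pooled, 3))  # 0.633 -- between the stratum ratios 0.50 and 0.90
```

A single model that adjusts for sex behaves like this pooled estimate, whereas three separate models, as described above, carry no such constraint and need not produce an overall estimate between the sex-specific ones.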