Setting our minds to implementation
  1. Evert Verhagen1,
  2. Caroline F Finch2
  1. 1Department of Public and Occupational Health, EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
  2. 2Australian Centre for Research into Injury in Sport and its Prevention (ACRISP), Monash Injury Research Institute, Monash University, Melbourne, Victoria, Australia
  1. Correspondence to Dr Evert Verhagen, Department of Public and Occupational Health, EMGO Institute for Health and Care Research, VU University Medical Center, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands; e.verhagen@vumc.nl

It is now well accepted that to prevent sports injuries we need more intervention studies.1 It is therefore somewhat alarming that most sports injury studies still focus only on the first two steps of the four-step prevention sequence of van Mechelen et al2: counting injuries and describing causal factors. This has been clearly shown by Klügl et al,3 who reviewed 5274 original sports injury publications, of which only 492 set out to establish the preventive value of a measure or programme. Their review showed that, although the number of efficacy/effectiveness studies has slowly increased over the years, it still lags behind the approximately 4000 descriptive and aetiological studies.

A similar issue is apparent in relation to implementation research.4 Consider the hierarchy of research questions: efficacy questions sit at the fundamental scientific knowledge end, effectiveness questions in the middle, and implementation questions at the practical end of the spectrum. Klügl et al3 unmistakably show that, just as with the sequence of prevention, we are stuck somewhere halfway. The field is beginning to lay a solid foundation of efficacy evidence but has not yet progressed to addressing effectiveness questions, let alone those beyond.

This is where we face an important knowledge gap, because positive prevention study outcomes rarely get past the stage of being a beautiful, well-cited publication. Moreover, positive intervention outcomes do not necessarily lead to subsequent prevention of injuries. Interventions can only prevent injuries when they are actually adopted and used by the intended end users.5 6 This is where an understanding of key implementation factors needs to come in, and why BJSM has decided to give this prominence.7 Over recent years, several Dutch and Australian studies revolving around, or laying a base for, implementation research have been published (eg, references 5 and 8–10). These are gaining resonance in the field, as reflected in the growing exposure to implementation issues at the recent International Olympic Committee World Conference on Prevention of Injury and Illness in Sport, where leaders in sports injury prevention implementation research presented their take on the challenges ahead in a dedicated symposium.

Although the notion that implementation research is the way forward has taken hold,1 4 this progression is not easy. Efficacy and, to a lesser extent, effectiveness studies are still the standard, and the design of implementation research is inherently, and necessarily, different. To truly make implementation count, we must embrace these differences and acknowledge that implementation is not something we can do in the margins of an effectiveness study.1 8 10 True implementation questions, in contrast to efficacy questions, are concerned with the uptake of an intervention in a real-life situation, including its fidelity, continuance, need for adoption and outcomes.1 Such questions can only be properly answered when an efficacious or effective intervention is already available; the focus therefore does not need to be on establishing the preventive effect of the intervention again. As an example, a research team led by the principal author of this editorial is currently evaluating the implementation of a preventive programme in South African rugby: the BokSmart programme. This programme aims to prevent catastrophic injuries by providing rugby coaches and referees with evidence-based preventive knowledge and skills. The effectiveness of the approach on which BokSmart is based has previously been demonstrated through New Zealand's RugbySmart programme.11 Consequently, we now only need to evaluate whether the required knowledge and skills are actually acquired by coaches and referees and translated into their practices on the field. Once this is demonstrated, it can be assumed that injuries will also be prevented.

It is important to understand that implementation questions cannot be answered by efficacy studies. In an efficacy study, a researcher is interested in the effect of an intervention on a specific outcome. To achieve this goal, the intervention is delivered and evaluated under controlled conditions, which may include, for example, follow-up calls, incentives for participation or supervised preventive training. Moreover, external validity is low because participants in efficacy studies are a selected sample of the population of interest; typically, only highly motivated participants are included in efficacy trials. In short, the context in which efficacy trials are undertaken differs markedly from real life. Consequently, any conclusions regarding implementation that are derived from efficacy studies apply only to the study protocol, of which the actual intervention is only a single part.

Notwithstanding the controlled nature of efficacy studies, many studies have also been doomed to fail because their interventions were seldom developed with practical application in mind. Interventions developed solely to maximise health gain or risk reduction are not necessarily the interventions best suited to adoption in practice.8 The common conclusion that low adoption and intervention uptake warrant further efforts to increase implementation is, therefore, not always justified. It may well be that a well-intentioned intervention, designed to be efficacious, was simply not usable in practice. There is a clear need to think differently about intervention development when successful implementation is the goal.

One promising framework for constructing acceptable and evidence-based intervention programmes is the Intervention Mapping (IM) protocol,1 8 12 which has previously been used successfully to develop a school-based injury prevention programme in The Netherlands.10 13 The IM protocol maps the path from the recognition of a need or problem to the identification of a behavioural solution in a series of six consecutive steps, and provides the tools to do so in an evidence-based, multidisciplinary manner. The strength of IM lies in the fact that end users and others involved with the intervention programme are part of the development process. Although the resulting intervention programme may be less efficacious or effective than a ‘traditionally’ developed programme, this loss is offset by increased adoption among the target population.

Imagine a 100% effective programme that is adopted by only 10% of the population, as opposed to a 50% effective programme that is adopted by 90% of the population. From an implementation perspective, the latter programme has the greater true impact. Surely it is now time for all sports injury prevention researchers to rethink how they weigh up these key implementation factors when they design interventions and the studies that evaluate them.
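To make this comparison concrete, population impact can be approximated, as a deliberate simplification that ignores fidelity and maintenance, as the product of programme effectiveness and adoption:

$$\text{impact} \approx \text{effectiveness} \times \text{adoption}$$

$$1.00 \times 0.10 = 0.10 \qquad \text{versus} \qquad 0.50 \times 0.90 = 0.45$$

Under this rough model, the half-as-effective but widely adopted programme prevents about 4.5 times as many injuries across the population.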

Acknowledgments

CFF is funded by a National Health and Medical Research Council (NHMRC) Principal Research Fellowship. ACRISP is one of the International Research Centres for Prevention of Injury and Protection of Athlete Health supported by the International Olympic Committee (IOC).

References

Footnotes

  • Competing interests None.

  • Provenance and peer review Not commissioned; internally peer reviewed.