Using an expert consensus-based approach, a rugby union Video Analysis Consensus (RUVAC) group was formed to develop a framework for video analysis research in rugby union. The aim of the framework is to improve the consistency of video analysis work in rugby union and help enhance the overall quality of future research in the sport. To reach consensus, a systematic review and Delphi method study design was used. After a systematic search of the literature, 17 articles were used to develop the final framework that described and defined key actions and events in rugby union (rugby). Thereafter, a group of researchers and practitioners with experience and expertise in rugby video analysis formed the RUVAC group. Each member of the group examined the framework of descriptors and definitions and rated their level of agreement on a 5-point Likert scale (1: strongly disagree; 2: disagree; 3: neither agree nor disagree; 4: agree; 5: strongly agree). The mean rating of agreement was 4.6 (4.3–4.9), 4.6 (4.4–4.9), 4.7 (4.5–4.9), 4.8 (4.6–5.0) and 4.8 (4.6–5.0) for the tackle, ruck, scrum, line-out and maul, respectively. The RUVAC group recommends using this consensus as the starting framework when conducting rugby video analysis research. Which variables to use (if not all) depends on the objectives of the study. Furthermore, the intention of this consensus is to help integrate video data with other data (eg, injury surveillance).
- contact sports
- injury prevention
Video analysis in sport is the systematic observation and interpretation of video to improve objectivity and reduce the bias and subjectivity that are inherent in human observation.1 It is a branch of the sports science subdiscipline known as performance analysis that merges qualitative biomechanical methods and notational analysis.2 3 Arguably, an unintended consequence of the link between performance analysis and video analysis is that most analyses have a performance-orientated focus.
While the utility of video analysis beyond the scope of performance has been recognised before,4–6 its role in understanding injury mechanisms and risk factors, and assisting medical protocols (eg, head injury assessment)7 8 and referee decision making (eg, television match official reviewing dangerous play) in rugby has grown over the last 10 years. This growth has precipitated the need for us to rethink video analysis as a predominantly performance-focused subject. In most team settings and within national and international rugby organisations, routine video analysis work falls within the coaching/performance department, without necessarily satisfying the needs of the medical department in terms of understanding injury mechanisms, risk factors and assisting with medical decision making.
An important methodological aspect of video analysis is to clearly describe and define actions and events to reduce bias and improve reliability.9 10 Despite this, den Hollander et al 11 found that half of all video analysis studies in rugby did not provide clear, detailed descriptions and definitions, while those that did differed in how they defined certain actions and events, limiting opportunities to make interstudy comparisons. For example, some studies used the World Rugby (formerly the International Rugby Board) law definition of the tackle, which requires the player with the ball (ball-carrier) to go to ground before a tackle can be observed, while other studies defined the tackle as ‘any attempt to stop or impede the ball-carrier, irrespective of whether or not the ball-carrier was brought to ground’.12–16 For the field to evolve and become more integrated, a framework with clear descriptions and definitions of key actions and events needs to be developed. To address these concerns, the rugby union Video Analysis Consensus (RUVAC) group was formed to synthesise a framework of descriptors and definitions to improve the consistency and quality of video analysis work in rugby union.
A two-step process was used to develop the framework of descriptors and definitions and reach consensus. For step 1, we performed a systematic search of the literature using the methods described by den Hollander et al.11 Specific search terms were used to identify peer-reviewed articles in three electronic databases: Scopus, PubMed and Web of Science. The search terms were ‘rugby union’ in the title, keywords or abstracts linked with any of the following terms: ‘performance analysis’, ‘video analysis’, ‘tackle performance’, ‘video’, ‘notational analysis’, ‘match performance’, ‘match analysis’, ‘time motion analysis’, ‘attacking strategies’, ‘defensive strategies’, ‘performance indicators’, ‘injury risk’, ‘injury mechanisms’ or ‘injury rates’ anywhere in the text. Each of these terms was searched separately with ‘rugby union’ either in the title, keywords or abstracts (a total of 14 searches for each database). For example, in Scopus, the full electronic search strategy for the term ‘video analysis’ was: (TITLE-ABS-KEY(“Rugby Union”) AND ALL (“video analysis”)) AND PUBYEAR <2019 AND (LIMIT-TO (LANGUAGE,“English”)) AND (LIMIT-TO (SRCTYPE,“j”)). The results of all 14 searches were subsequently merged and duplicates removed. The time frame for the literature search was any study published up to 31 December 2018. The inclusion criteria were as follows: the article needed to use video analysis to study rugby union match footage and needed to be published in English and in a peer-reviewed journal. Inclusion criteria were applied at the title, abstract and full-text level, and any article not meeting the criteria was omitted from the review. The results from all three databases were merged, and duplicates were removed. Thereafter, a second reviewer applied the inclusion criteria at the title, abstract and full-text level. Any discrepancies between reviewer 1 and reviewer 2 were discussed and resolved before the article was included in or excluded from the final database.
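The merging and deduplication of search results described above can be sketched as follows. This is an illustrative sketch only, assuming each database export is a list of records with hypothetical ‘title’ and ‘doi’ fields; the paper does not specify how deduplication was performed.

```python
def merge_and_dedupe(search_results):
    """Merge records from multiple database searches, dropping duplicates.

    search_results: a list of result batches (one per search), each batch a
    list of dicts with assumed 'title' and 'doi' keys. Records are treated
    as duplicates when they share a DOI or, failing that, a normalised title.
    """
    seen, merged = set(), []
    for batch in search_results:
        for record in batch:
            key = record.get("doi") or record["title"].strip().lower()
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged

# Hypothetical results from two of the 14 searches
results = merge_and_dedupe([
    [{"title": "Tackle injuries in rugby union", "doi": "10.1000/x1"}],
    [{"title": "Tackle injuries in rugby union", "doi": "10.1000/x1"},
     {"title": "Ruck demands in rugby", "doi": None}],
])
```

In practice, reference-management software performs this step; the sketch simply shows the logic of collapsing the 14 per-database searches into one duplicate-free set.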
From this systematic search, papers that provided descriptors and definitions were identified. Only published papers that used video and provided descriptors and definitions of key contact events and outcomes were selected. If the same or similar descriptors and definitions were found in multiple papers, the original paper that provided the definition was cited. In addition to this, a meeting was held to discuss what descriptors are important for each contact event.
For step 2, a prominent group of researchers and practitioners with experience and expertise in all rugby video analysis objectives (performance, injury, medical/referee decision making and physical demands) collaborated to form the RUVAC group. No formal process was used to convene the group; however, consideration was given to having different national Unions represented. Once the group was formed, consensus on the descriptors and definitions developed in step 1 was attained using a Delphi consensus method.17 18 This method required each member to examine the framework of descriptors and definitions and rate their level of agreement on a 5-point Likert scale (1: strongly disagree; 2: disagree; 3: neither agree nor disagree; 4: agree; 5: strongly agree). Agreement ratings were obtained separately for each contact event, the match characteristics, the tackle technique scoring criteria and the video analysis tackle model. For each of these, the mean agreement (±95% CI) was calculated by summing all the ratings and dividing by the total number of respondents. After the first round of ratings, some comments and suggestions were proposed. Accordingly, these comments and suggestions were incorporated, and a second round of ratings was obtained based on the revised descriptors and definitions. The level of agreement after round 2 (n=17) for each contact event, the match characteristics, the tackle technique scoring criteria and the video analysis tackle model is reported in the results section.
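The mean agreement (±95% CI) calculation described above can be sketched as follows. The CI method is an assumption on our part (a normal approximation with z = 1.96); the paper does not state which interval method was used, and the ratings below are hypothetical.

```python
import math
import statistics

def mean_with_ci(ratings, z=1.96):
    """Mean Likert agreement rating with an approximate 95% CI.

    ratings: the 1-5 Likert scores from each respondent for one item.
    Uses a normal approximation for the CI (assumed, not from the paper).
    """
    n = len(ratings)
    mean = sum(ratings) / n
    sem = statistics.stdev(ratings) / math.sqrt(n)  # standard error of the mean
    return mean, (mean - z * sem, mean + z * sem)

# Hypothetical round-2 ratings from the 17 respondents for one contact event
ratings = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4]
mean, (lo, hi) = mean_with_ci(ratings)
```

With 17 respondents rating 4s and 5s, this yields a mean in the mid-4 range with a narrow interval, of the same form as the values reported in the results section.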
A total of 118 articles on video analysis in rugby matches had been published up to 31 December 2018 (figure 1). Fifty-one per cent (n=60) of these studies provided definitions. Seventeen articles were used to develop the final framework of descriptors and definitions and the technique criteria for the front-on tackle (tables 1–5 and online supplementary appendix 1 and 2).13–16 19–31
The mean rating of agreement was 4.6 (4.3–4.9), 4.6 (4.4–4.9), 4.7 (4.5–4.9), 4.8 (4.6–5.0) and 4.8 (4.6–5.0) for the tackle, ruck, scrum, line-out and maul, respectively. For match characteristics, the tackle technique criteria and tackle video analysis model (figure 2), the mean rating of agreement was 4.6 (4.4–4.9), 4.5 (4.2–4.9) and 4.2 (3.9–4.7), respectively.
The aim of this consensus is to improve the consistency and quality of video analysis work in rugby union. This list of descriptors and definitions was designed based on the deterministic and phase analysis models15 33–35 with the intention of being comprehensive, focused on key actions and versatile across video analysis objectives. Therefore, the RUVAC group recommends using these descriptors and definitions as the starting framework when conducting rugby video analysis research. Which variables to use (if not all) depends on the objectives of the study. For example, if it is a tackle-related study, only the tackle variables may apply. Also, which variables are ultimately captured and analysed depends on the frequency of occurrence of that action in the sample. Furthermore, although comprehensive, the list is finite, and variables can be added to meet the objectives of the study. For instance, to describe concussion injury mechanisms using video analysis, Hendricks et al 30 studied descriptors specific to concussion, such as ‘acceleration of the head’. It is also recommended that video analysis projects using one analyst be tested for intra-rater reliability and, if possible, inter-rater reliability as well. In cases where more than one analyst is used, testing inter-rater reliability is essential. Using video to observe body positions and actions is challenging, and reliability across the descriptors will differ.15 It may be difficult to achieve an acceptable level of reliability on the first round. However, reliability testing for a video analysis project should be considered a process. If acceptable intra-rater or inter-rater reliability is not attained during the first round of testing, improvements can be made by engaging the rater(s) to discuss and clarify any analysis issues.14 15
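One common statistic for the inter-rater reliability testing recommended above is Cohen's kappa, which corrects raw agreement for chance. The paper does not prescribe a specific statistic, so this is a hedged illustration; the two analysts' codes below are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters coding the same events.

    rater1/rater2: parallel lists of categorical codes (eg, tackle types)
    assigned by two analysts to the same set of video events.
    """
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / n ** 2  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical tackle-type codes from two analysts for six events
r1 = ["front", "side", "front", "smother", "front", "side"]
r2 = ["front", "side", "front", "side", "front", "side"]
kappa = cohens_kappa(r1, r2)
```

Running multiple rounds, as the text suggests, would mean recomputing kappa after the raters discuss and resolve coding disagreements, and proceeding once an acceptable threshold is reached.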
Integration with additional data sources
Information from external data sources (eg, injury surveillance data and wearable technology) should be integrated with video analysis. For example, observing an injury event using video is not straightforward. An injury in rugby is defined as ‘Any physical complaint, which was caused by a transfer of energy that exceeded the body’s ability to maintain its structural and/or functional integrity, that was sustained by a player during a rugby match or rugby training, irrespective of the need for medical attention or time-loss from rugby activities. An injury that results in a player receiving medical attention is referred to as a “medical-attention” injury and an injury that results in a player being unable to take a full part in future rugby training or match play as a “time-loss” injury’.36 This definition is consistently used in rugby injury surveillance studies and requires access to the team or tournament. Injury surveillance data alone, however, do not provide enough detail to design and develop injury prevention programmes, especially if player behaviour or player technique is the target of an intervention. Therefore, video analysis should be integrated with injury surveillance data of the team or the tournament. Beyond understanding injury mechanisms and risk factors,5 6 video analysis can also be used to assess the effectiveness of an injury prevention intervention (be it a training programme or law change). Also, video analysis can be used to determine a change in on-field player behaviour and to assist medical/referee decision making. For example, at the elite level, the Head Injury Assessment (HIA) is a process to manage the potentially serious sequelae of concussion. During the HIA process, video footage is available to the official match doctors to review and identify whether any concussive signs and symptoms are apparent at the time of injury, to decide whether to permanently remove the player from the match.8
Wearable microtechnology (eg, GPS) is a common feature in professional rugby union and has been successfully used to describe the physical demands of the sport.37 As with injury surveillance data, linking video analysis data to microtechnology data provides superior information compared with assessing the datasets in isolation. Microtechnology provides objective data on the physical demands. These data, along with standardised injury data and high-quality video analysis data, will advance the integration of injury, video and microtechnology data within rugby.37
Quality of video footage
The quality of the video footage may have a direct impact on the quality of the analysis.34 At the professional level, televised matches provide good quality video, from different angles, with the main camera stable and at a suitable vantage point. At lower levels of rugby (eg, community level), video footage of matches is typically recorded using one camera with a less professional set-up. Where resources for video recording are limited, a single camera angle may be sufficient for detailed analyses. Using one camera, recording should ideally take place on the half-way line at a height sufficient to capture the full field, with the camera mounted on a tripod for video stability. The ball should be kept at the centre of the camera view within a radius of approximately 10 m. If more than one camera is available, the aim of the camera set-up should be to increase the ability to distinguish body parts and players’ on-the-ball and off-the-ball actions. Off-the-ball actions are particularly important for postconcussive analysis and HIA review. Related to the quality of the video footage, the software program used to analyse the video should also allow for control over the time lapse during each movement, as well as the recording and saving of each coded instance into a database. During the analysis, the analyst should be able to pause, rewind and watch the footage in slow motion or frame by frame.
Another method to analyse rugby contact events is to score the technique of the player.20 22 24 The score is based on a list of observable actions that represents the ideal form of the technique, which coaches use during training (online supplementary appendix 2).35 38 To score a player’s technique, one point is awarded when an appropriate action is performed and zero when it is not. The sum of these points represents the technical ability of the player. To date, this method has demonstrated encouraging construct validity by differentiating between injury and non-injury tackles, head impact tackles and successful tackles, and has been shown to be stable across match quarters at the elite level.20 22 24 39 In this paper, we only provided technique criteria for the ball-carrier and tackler during the front-on tackle as an example; however, technique criteria for other types of tackles and contact events do exist.20 22 24 39
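The binary scoring method described above can be sketched as follows. The criterion names here are hypothetical stand-ins; the actual front-on tackle criteria are listed in the paper's online supplementary appendix 2.

```python
# Hypothetical subset of ball-carrier technique criteria for the front-on
# tackle (illustrative names only; see supplementary appendix 2 for the
# published criteria).
CRITERIA = [
    "ball_carried_in_two_hands",
    "body_position_lowered_before_contact",
    "leg_drive_on_contact",
    "ball_protection_on_contact",
]

def technique_score(observed_actions):
    """Award one point per criterion the analyst observed (zero otherwise);
    the sum represents the player's technical ability for that event."""
    return sum(1 for criterion in CRITERIA if criterion in observed_actions)

# A tackle in which the analyst observed two of the four criteria
score = technique_score({"ball_carried_in_two_hands", "leg_drive_on_contact"})
```

Scores computed this way can then be compared across event outcomes (eg, injury vs non-injury tackles), which is how the construct validity mentioned above was assessed.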
From a dynamical systems perspective, video analysis can also be used to study the complex and dynamic movement interactions between players (within the same team and with the opposition) and the environment,2 40 for example, how spatial and temporal interactions emerge between players based on field location.40 Limited work has been done on video analysis in rugby matches from a dynamical systems perspective, particularly in understanding injury, which highlights an avenue for future research. Another avenue for future research is the video analysis of training sessions. An appreciable amount of work has been documented on the physical demands of training sessions41 42; however, less is known about specific skill activities and coach behaviours during training. Systematic observation of rugby training sessions using video has a role to play in this regard.43 44
The aim of this consensus is to improve the consistency and quality of video analysis work in rugby union. The RUVAC group recommends using this consensus as the starting framework when conducting rugby video analysis research. Which variables to use (if not all) depends on the objectives of the study. Furthermore, the intention of this consensus is to help integrate video data with other data (eg, injury surveillance). This framework of descriptors and definitions can be used or developed further for other rugby football codes such as rugby sevens and rugby league.
What is already known
- Video analysis research beyond the scope of performance has grown in the last 10 years.
- In video analysis studies, descriptors and definitions have either been lacking or inconsistent between studies.
- Video analysis can be linked to injury surveillance data and directly used as part of an injury prevention strategy.
What are the new findings
- A consensus on a framework of descriptors and definitions for video analysis in rugby.
- The framework focuses on key actions and is versatile across video analysis objectives.
Twitter @Sharief_H, @KTConditioning, @steve_dh1, @drsimonkemp, @jonpatricios, @Rock_Andy1, @drkeithstokes, @kenquarrie, @23benjones
Contributors SH drafted the manuscript, and all other authors provided input to prepare the manuscript for publication. SdH conducted the literature search for the review.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement No data are available.