Abstract
Objective—To provide the basis for collecting rugby union injury data using a rigorously validated injury report form.
Methods—A seven stage process was used to develop and validate the rugby union injury report form, including assessment of its face, content, and criterion validity. A 22 member panel plus representatives from four sporting bodies assessed the form for face validity, and an expert panel assessed it for content and criterion validity. Panel members were consulted until consensus was reached. A yardstick developed by an expert panel using the Delphi technique was used to assess the reliability of the form. An independent panel of 10 viewed a series of five videotaped injuries three times over a five week period to assess inter-rater and intrarater reliability. The form was then trialled in situ by 40 raters during four games.
Results—The rugby union injury report form for games and training was developed, and its face, content, and criterion validity were successfully assessed. A seven step protocol for creating a yardstick was also developed to assist in the validation process. Both inter-rater and intrarater reliability results indicated 98% agreement. The 40 trialists who completed forms in situ during four games achieved an inter-rater agreement of 98% for the nine injuries recorded.
Conclusions—A measurement instrument for injury data collection in rugby union was successfully developed and validated, providing researchers with a basis for future studies in this area. A procedure to develop future injury data collection instruments in other sports was also developed.
- data collection
- validation
- injury prevention
- rugby union
Take home message
This study has provided researchers with the basis for collecting rugby union injury data using a rigorously validated injury report form. It has also provided a procedure to develop and validate injury data collection instruments in other sports.
Over one million sporting injuries occur annually in Australia, costing almost one billion dollars.1 Although injury rates in the rugby codes are relatively high in comparison with other sports,2 a meta-analysis of rugby injury studies conducted from 1974 to 1994 concluded that no study met all the desirable criteria necessary to ascertain the extent and nature of injury.3 Many different definitions of injury were also found in the literature. Most studies drew only on hospital databases or on injuries that caused players to miss at least one fixtured game, so the true incidence of injury was not evident.4 It has been widely recommended that injury definitions within rugby union and other sports be standardised.5,6 The importance of collecting injury data from both games and training is also emphasised, as all injuries have the potential to affect sporting performance.4
Following a review of injury definitions used in the published literature, four definitions were devised: minor, the player was able to return to the game or training in which the injury occurred; mild, the player missed one week; moderate, the player missed two weeks; severe, the player missed more than two weeks. The definitions include categories for all injuries and incorporate the most commonly used definition (player missed one week), to allow some comparison of results with previous and current sports injury studies.
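As an illustration only, these four definitions amount to a simple classification of time lost. The sketch below restates them in code; the function name and the use of whole weeks missed are assumptions for the example, not part of the published form.

```python
def classify_severity(returned_same_session: bool, weeks_missed: int) -> str:
    """Restate the study's four injury definitions as a classification of time lost.

    Illustrative only: severity is recorded directly on the form; this function
    simply mirrors the published definitions.
    """
    if returned_same_session:
        return "minor"     # returned to the game or training in which injured
    if weeks_missed <= 1:
        return "mild"      # missed one week
    if weeks_missed == 2:
        return "moderate"  # missed two weeks
    return "severe"        # missed more than two weeks


print(classify_severity(False, 3))  # severe
```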
Internationally, injuries in rugby union are receiving increased attention because of their high incidence. New Zealand established the Rugby Injury and Performance Project to reduce the high incidence, severity, and consequences of injury in that country.7 Speed and contact were cited as major risk factors for the high incidence of injury in Irish rugby union,8 and the risk of substantive injury in British rugby was found to be three times that in soccer.2 The incidence of injury in rugby union in Australia is also of concern, with at least half of all players sustaining an injury each season.1
There is limited injury research in rugby union and rugby league that provides accurate, reproducible data.7,9 The development of a standardised data collection instrument to assess incidence rates is critical to methodologically sound injury research.10
Valid and reliable injury data collection instruments increase the accuracy of study results and may assist in the identification of risk factors associated with sport.3,11–14 The extent to which an instrument measures what it is intended to measure is termed validity.15 Reliability is the extent to which a measurement instrument reproduces the same results on two or more occasions.16 Although many recommend the use of valid and reliable measurement instruments in rugby union,3,17,18 very few studies have discussed the validation of the data collection instruments used.2,19–21 An extensive literature review failed to locate a validated instrument for injury data collection in rugby union.
The aim of this study was to provide the basis from which rugby union injury data can be collected, using a rigorously validated injury report form.
Methodology
An extensive review of published literature and existing data collection instruments was conducted. On the basis of this research, a seven stage process was used to develop and validate the rugby union injury report form (table 1).
FACE, CONTENT, AND CRITERION VALIDITY
A 22 member panel assessed the form for face validity. In addition, representatives from four sporting bodies were asked to comment in writing on the suitability of the rugby union injury report form and its key to coding for data collection in rugby union. Content validity was assessed by experts: a panel of seven experts in the field of rugby union or injury prevention volunteered to assess the form using a Delphi technique.22 This technique was chosen because it can be performed entirely by correspondence, thereby negating the need for face to face meetings of panel members and minimising costs.
In the absence of a yardstick or gold standard against which to assess criterion validity, the expert panel also assessed the criterion validity of the instrument during the four round Delphi process used to assess content validity.23
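The Delphi rounds turn on a consensus rule applied to the panel's responses. The sketch below shows one simple way such a rule could be checked after each round; the 80% threshold and the data layout are assumptions for illustration, not the panel's published criterion.

```python
from collections import Counter


def consensus_reached(responses: dict, threshold: float = 0.8) -> bool:
    """Return True if, for every item, the most common panel answer meets the
    agreement threshold (illustrative assumption, not the study's actual rule)."""
    for item, answers in responses.items():
        most_common_count = Counter(answers).most_common(1)[0][1]
        if most_common_count / len(answers) < threshold:
            return False
    return True


# Example round: "injury site" has only 3/4 agreement, so another round is needed.
round_answers = {
    "injury site": ["knee", "knee", "knee", "ankle"],
    "mechanism": ["tackle", "tackle", "tackle", "tackle"],
}
print(consensus_reached(round_answers))  # False
```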
DEVELOPMENT OF A YARDSTICK
Very few studies have reported testing the reliability of their data collection forms, although this procedure is considered essential to injury data collection.20,24,25 To assist in the assessment of reliability, a yardstick was developed. There were no recommendations in the literature about the procedure required to establish a yardstick for data collection instruments in sport; however, there was ample evidence of such development in clinical medicine.26–29 Following a procedure used by Streiner and Norman16 (table 2), this study developed a videotape of five rugby union injuries, showing the mechanism of injury, to be used as the basis for assessing the reliability of the form. The procedure was both cost effective and a tested method of developing a yardstick.16
A panel was chosen to develop, using the Delphi technique,22 a yardstick completion of the form for each of the five injuries recorded on videotape. Panel members were consulted until consensus was reached. The yardstick was then validated by trained raters on a selected set of injuries to assess inter-rater and intrarater reliability.30 Altman31 provided a scale of acceptance levels for reliability scores in medical research, reporting that an agreement of 80% or over represented a high level of reliability.
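Percent agreement against the yardstick is the reliability statistic used throughout this study. A minimal sketch of the calculation, with Altman's 80% acceptance level as the comparison point, is given below; the function and variable names are illustrative assumptions.

```python
def percent_agreement(rater_answers: list, yardstick_answers: list) -> float:
    """Percentage of a rater's responses that match the yardstick, question by question."""
    matches = sum(r == y for r, y in zip(rater_answers, yardstick_answers))
    return 100.0 * matches / len(yardstick_answers)


ALTMAN_HIGH_RELIABILITY = 80.0  # agreement of 80% or over represents high reliability

# Hypothetical four-question comparison for one videotaped injury.
rater_form = ["tackle", "knee", "moderate", "second half"]
yardstick = ["tackle", "knee", "mild", "second half"]
score = percent_agreement(rater_form, yardstick)
print(score, score >= ALTMAN_HIGH_RELIABILITY)  # 75.0 False
```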
INTRARATER AND INTER-RATER RELIABILITY
Ten independent raters were asked to complete separate rugby union injury report forms for the five videotaped injuries. The ratings occurred three times over a five week period, with completed forms collected after each viewing. Forms from the first viewing were used to assess inter-rater reliability, with results indicating 98% agreement. Intrarater agreement of the raters, assessed against the yardstick devised by the panel, was also 98%.
INTER-RATER RELIABILITY OF THE FORM IN SITU
Forty raters were randomly selected from spectators present at four games (10 per game) to trial the form in situ. All raters were screened before selection to ensure that they had a fundamental knowledge of sport injury; that is, they understood the terms intrinsic and extrinsic in relation to the mechanism of injury. They then received at least half an hour's briefing on the use of the form. Maintaining total independence between raters was considered the most effective way to minimise bias.32 Raters were therefore asked to remain in one position during the game under review, thus blinding them to other raters' results. An inter-rater agreement of 98% was achieved for the nine injuries sustained during these four games.
Results
The design and content of the rugby union injury report form for games and training were devised from the recommendations for future research elicited from the literature reviewed and from a review of existing data collection instruments. Face, content, and criterion validity were successfully assessed. A seven step protocol for creating a yardstick was also developed to assist in the validation process. Across all five injuries, yardstick panel members agreed on 197 of a possible 200 responses (10 questions × 4 panel members × 5 injuries), a 98.5% agreement (fig 1).
Intrarater reliability results indicated 98% agreement (1472 of a possible 1500 responses; 10 questions × 10 raters × 3 viewings × 5 injuries). Inter-rater reliability results indicated 98% agreement (98 of a possible 100 responses; 10 questions × 10 raters × 1 injury). The inter-rater agreement of the 40 raters who completed forms in situ during four games was 98% for nine injuries (884 of a possible 900 responses: (10 questions × 10 raters × 5 injuries) + (10 questions × 10 raters × 1 injury) + (10 questions × 10 raters × 1 injury) + (10 questions × 10 raters × 2 injuries)).
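The agreement figures above are simple ratios of agreed responses to possible responses. The short worked calculation below reproduces them using the figures reported in this section.

```python
# Agreed responses / possible responses, using the figures reported above.
yardstick_panel = 197 / 200    # 10 questions x 4 panel members x 5 injuries
intrarater = 1472 / 1500       # 10 questions x 10 raters x 3 viewings x 5 injuries
interrater = 98 / 100          # 10 questions x 10 raters x 1 injury
in_situ = 884 / 900            # nine injuries over four games (5 + 1 + 1 + 2)

for label, value in [("yardstick panel", yardstick_panel),
                     ("intrarater", intrarater),
                     ("inter-rater", interrater),
                     ("in situ", in_situ)]:
    print(f"{label}: {value:.1%}")
# yardstick panel: 98.5%, intrarater: 98.1%, inter-rater: 98.0%, in situ: 98.2%
```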
The following information can be collected using the form: environmental conditions that have an impact on injury; mechanism of injury; phase of play or aspect of training; whether play was legal or illegal; position played, both specifically and in general; relationship of the ball to the injured player; severity of injury; time in the game at which the injury occurred; and whether the injury occurred during a game or training.
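For illustration, the items listed above could be held as one structured record per injury. The field names below are assumptions chosen to mirror that list, not the wording on the form itself.

```python
from dataclasses import dataclass


@dataclass
class InjuryRecord:
    """One completed rugby union injury report form (illustrative field names only)."""
    setting: str            # "game" or "training"
    game_time: str          # time in the game at which the injury occurred
    environment: str        # environmental conditions affecting the injury
    mechanism: str          # intrinsic or extrinsic mechanism of injury
    phase_of_play: str      # phase of play or aspect of training
    legal_play: bool        # whether play was legal
    position_specific: str  # specific position played
    position_general: str   # general position, e.g. forward or back
    ball_relationship: str  # relationship of the ball to the injured player
    severity: str           # minor, mild, moderate, or severe
```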
The front of the rugby union injury report form (fig 2A) comprises closed ended questions, and instructions indicate the need to merely circle an option for each question. The reverse of the form (fig 2B) allows space for a written record of assessment, treatment, and management of injury. A separate form is completed for each injury, and copies of all medical reports are attached to provide a complete record of the assessment, treatment, and management of each injury.
The reverse side of the form incorporates the Orchard sports injury classification system (OSICS, 1997) to streamline data input. The OSICS is currently used by the Australian Institute of Sport and the Australian Rules Football Commission, and is endorsed by the Australian Sports Medicine Federation.33
A key to coding the form was also developed to assist data input and analysis using a statistical computer program. The key was subjected to review by yardstick panel members and representatives from various sporting bodies.
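A key to coding simply maps each circled option to a numeric code before entry into a statistical package. The fragment below is hypothetical and illustrative only; it does not reproduce the study's actual key.

```python
# Hypothetical fragment of a key to coding: circled option -> numeric code.
SEVERITY_CODES = {"minor": 1, "mild": 2, "moderate": 3, "severe": 4}
SETTING_CODES = {"game": 1, "training": 2}


def encode(record: dict) -> dict:
    """Translate a completed form into coded values for statistical analysis."""
    return {
        "severity": SEVERITY_CODES[record["severity"]],
        "setting": SETTING_CODES[record["setting"]],
    }


print(encode({"severity": "mild", "setting": "training"}))  # {'severity': 2, 'setting': 2}
```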
Limitations
There were several limitations to this study. Information bias may arise when the method of collecting information differs between groups or raters.34 This study attempted to reduce information bias by ensuring that the measurement instrument under development was subjected to a rigorous face, content, and criterion validity process. To minimise this form of bias further, raters involved in the validation process using the yardstick were given the same written instruction, and raters who completed the form in situ were given the same verbal instruction.
Interview bias may be introduced into a study by systematic errors in data collection.32 In addition to the validation procedures, the measures devised to minimise the possibility of systematically incorrect results were standardisation of the injury definition, data collection forms, instructions, and information.
The researcher was the appointed medical officer in attendance at all games used to assess inter-rater reliability in situ. Although this had the potential to introduce bias into this part of the study, it was minimised by using ten raters at each game reviewed (n = 40). The forms completed by the medical officer were used only for comparison, and the responses were not included in the calculation of inter-rater reliability in situ.
Finally, participation in the study was voluntary, which had the potential to introduce volunteer bias. This was minimised by randomly recruiting, in all aspects of the study, more participants than the minimum suggested in the literature; there were no refusals following random recruitment.
Discussion
A comprehensive review of the published literature failed to find a valid measurement instrument for the collection of injury data in rugby union; this study therefore aimed to produce a validated measurement instrument. The major findings from the literature review indicated the need to investigate: study design; sample size and representativeness of the sample; statistical analysis; filtering; definition of injury; validity; and reliability. Of these seven criteria, the three that relate directly to instrument development are definition of injury, validity, and reliability.
One group reported that standardised definitions of injuries were essential to allow comparison of results between studies in the same sport and also studies of subpopulations within and between sports—for example, by age or gender.35 This view was supported by numerous authors.3,5,6
The four injury definitions devised for this study were: minor, player able to return to game or training in which injury occurred; mild, player missed one week; moderate, player missed two weeks; severe, player missed more than two weeks. Once definitions were determined, the next decision was whether to make provision for recording all injuries, or only those that occurred during game play. It is important to collect injury data from both games and training as all injuries have the potential to impact on sporting performance.4 If minor injuries are omitted, then the true incidence of injury relating to sport may be significantly underestimated. The most comprehensive sports injuries study conducted in Australia was commissioned by the National Better Health Program in 1990; it found that about 32% of injuries occurred at training.1
A study by the National Athletic Trainers' Association36 found that 39% of injuries occurred during games and the remaining 61% during training sessions. This finding was also supported in an additional trial of the rugby union injury report form using elite junior players over a 26 week period (Report to the Western Australian Junior Rugby Union: Under 15's and Under 16's 1997 state campaign; unpublished). The incidence of injury in this trial would have been underestimated by 34% using the most common current definition of recording only those injuries that caused the player to miss at least one game. On the basis of findings from the literature review, the form was designed to collect injury data from both games and training, in a standardised manner, ensuring that all injuries were recorded.
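The degree of underestimation follows directly from the proportion of injuries that a "missed at least one game" definition fails to capture. The sketch below shows the arithmetic with hypothetical counts; the 34% figure above comes from the unpublished junior trial, not from this example.

```python
# Hypothetical counts: all injuries recorded on the form versus only those
# severe enough to cause a player to miss at least one game.
all_injuries = 100
missed_a_game = 66

underestimation = (all_injuries - missed_a_game) / all_injuries
print(f"Incidence underestimated by {underestimation:.0%}")  # 34%
```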
The form was subjected to a rigorous validation process producing a data collection instrument with high face, content, and criterion validity. Representatives from groups who may be potential users of the validated form were systematically recruited on to a panel to assess face validity, in an endeavour to make the form acceptable and relevant to end users.5
This study assessed inter-rater and intrarater reliability of the form by using ten raters, over a five week period, to view the videotape of five injuries devised in the yardstick procedure. A 98% agreement level for inter-rater and intrarater reliability was achieved. According to Altman's scale,31 the form may be considered to have a high level of both inter-rater and intrarater reliability.
The use of trained raters in the field was found to be the most accurate data collection method.37 A trial of the form was conducted in situ to assess inter-rater reliability further. Results of this study indicated a high level of inter-rater reliability when raters in the field were used. Raters were chosen randomly from the spectators present at each game. All raters were screened before selection and only people who understood the terms intrinsic and extrinsic in relation to the mechanism of injury were used.
From informal discussions with raters before each game, it was found that all raters had a fundamental knowledge of rugby union and over 50% had actually played the game at one time. Almost 40% of raters had attended to rugby union injuries, although most had no formal qualifications to do so. It would, however, have been ideal to use covert observers to record each rater's behaviour during the trial, and report on the implications that this behaviour may have had on their findings.38 Covert observation is unobtrusive yet allows researchers to obtain a better understanding of the environment in which behaviours take place.
Raters were also asked to remain in one position during the game under review, thus blinding them to other raters' results, as Hennekens and Buring32 stated that this was the single most important way to minimise bias. This method has been used extensively in clinical medicine studies to increase the rigour of the validation process.39
Many researchers agree that validated data collection instruments are fundamental to injury prevention.1,3,5,7 The establishment of a yardstick increased the rigour of the validation process, as did the use of multiple raters, both expert and novice in the field of injury prevention and/or rugby union.
The major outcome of this research is a rigorous procedure for the development and validation of a measurement instrument for data collection in rugby union, with an inter-rater and intrarater reliability agreement of 98%. This validated form can now be confidently used in prospective studies for injury data collection in rugby union.
Conclusions and recommendations
This study has validated a measurement instrument for injury data collection in rugby union, thus providing injury researchers with a basis for future studies in this area, as well as a procedure for developing and refining future instruments.
The following are recommendations for use of the rugby union injury report form for games and training.
- Widespread adoption of the form. The form could be used for injury data collection at all levels of rugby union, from elite to social, and at all games and training sessions. This would ensure that all injuries sustained by players are accurately reported, as, no matter how minor, all injuries affect performance.
- Compatibility of results. As the form includes the OSICS,33 it will enable comparison of results between similar studies. Furthermore, the form can be used as a basis for comparison with injuries in other sports for which the OSICS is used.
- Assessing injury trends. The form could be used to collect data in longitudinal studies to identify injury trends over time. It facilitates this through the incorporation of a computer coding system for data analysis and the collection of injury data through all levels of rugby union.
- Calculation of incidence rates. The form could be used in conjunction with a player log or diary of game and training hours to calculate incidence rates based on exposure time, as sketched below.
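A common way to express exposure-based incidence is injuries per 1000 player hours. The sketch below shows such a calculation from a hypothetical player log; the rate formula is standard, but all figures are illustrative assumptions.

```python
# Injuries per 1000 player hours from a hypothetical log of exposure.
injuries = 9                        # injuries recorded on the form
game_hours = 4 * 15 * (80 / 60)     # 4 games x 15 players x 80 minute games
training_hours = 120                # player hours logged at training

exposure = game_hours + training_hours
rate = injuries / exposure * 1000
print(f"{rate:.1f} injuries per 1000 player hours")  # 45.0
```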
Acknowledgments
I would like to acknowledge the encouragement and guidance of Dr Donna Cross and Michael Rosenberg, Centre for Health Promotion Research, Curtin University, and Dr David Chalmers, Injury Prevention Research Unit, University of Otago, during the completion of this research. I would also like to thank Dr Tony Lower, School of Public Health, Curtin University for his critical appraisal of this manuscript.
Contributors: A McM formulated the primary study aims and objectives, conducted all components of this research, developed and validated the rugby union injury report form, collected, analysed and interpreted the data, prepared a thesis, detailed the research, and wrote the paper. Dr Tony Lower and Dr Donna Cross critically appraised the paper before submission. Dr David Chalmers chaired the yardstick panel and provided guidance in the initial stages of this research. Mr James Bridle, Mr John Tucker, and Mr David Hart were members of the panel. Michael Rosenberg supervised the initial stages of this research and assisted in the formulation of the research aims and objectives. Dr Donna Cross provided expert supervision in the later stages by editing and proofreading the thesis associated with this research.