Stages in the development of a research project: putting the idea together
Roger G Eston, Ann V Rowlands
School of Sport, Health and Exercise Sciences, University of Wales, Bangor LL57 2EN, Wales, United Kingdom
Correspondence to: Dr R G Eston.

Aim

The purpose of this paper is to provide an overview of the process and stages involved in developing a research idea from its inception to realisation. It is not designed to be an all-encompassing summary of the research process; rather, it provides a brief guide to the most common sequence of stages involved in the development of a research idea into a viable research proposal. Useful references for further reading on important issues that are beyond the scope of this article are also provided.

Familiarity with the topic

Research projects should be built on the foundations of knowledge accumulated from previous studies.1 A thorough understanding of the topic to be researched is essential. To advance the boundaries of knowledge within a specific area, it is necessary to know the status quo within that area. This will include familiarity with the theory and terminology underlying that topic. A literature search is normally carried out before the start of the study. It is unlikely that reference books will contain the latest information on a topic because of the time it takes for publication. The approach taken in the study will be influenced by an evaluation of the latest findings from journal articles and conferences. Fortunately, the technology available today means the literature can be searched, and often journal articles obtained, with relative ease.

Reviewing the literature

Most literature searches begin with one of the many electronic databases available, using the author's name or a combination of carefully selected keywords. Individual databases are limited as to which journals are listed. Within sports and exercise science, Sports Discus and Medline are widely used. To avoid missing pertinent articles, it is suggested that a number of electronic databases are searched. Medline covers biomedically orientated journals, Sports Discus covers sport and exercise orientated sources, and the Science Citation Index covers scientific articles in general. In addition, the National Sports Medicine Institute has recently released the Sports Medicine and Related Topics (SMART) database of journal articles, which covers all aspects of sports and exercise science and medicine from 1985. Addresses for these databases on the internet are: for Medline, http://www.ncbi.nlm.nih.gov/PubMed/medline.html; for the Science Citation Index (United Kingdom higher education institutions only), http://www.bids.ac.uk/; for SMART, http://smart.nsmi.org.uk.

Most universities offering courses in sports and exercise science have access to Sports Discus on a centrally accessed CD-ROM. Further relevant papers may be obtained by studying the reference lists of retrieved articles. Papers that provide a critical review of the topic are also very useful. Many journals such as Sports Medicine specialise in reviews, although reviews can also be found in other mainstream journals such as the British Journal of Sports Medicine, Medicine and Science in Sports and Exercise, and Journal of Sports Sciences. If the library does not hold a particular journal, it should be possible to obtain issues through the university's interlibrary loan service, although this is expensive and may limit the number of papers obtained.

When writing a review paper or conducting a more sophisticated empirically based review, such as a meta-analysis, it is important to endeavour to obtain all papers asking a given research question,2 whether they are published or not. Although this may not be practically possible, the researcher should be careful not to select only those papers that are easiest to obtain as this may bias the review. Hence, further steps should be taken to obtain more elusive papers and/or obtain a random selection of relevant papers. Rosenthal3 has written a comprehensive paper on bibliographic retrieval for researchers wishing to conduct a meta-analysis.

The research hypothesis and rationale

The typical empirical journal article and research proposal commences with a brief literature review to provide the background and rationale for the research. Questions that remain unanswered, or findings that need clarifying, are often highlighted here. This leads into explicit statements about the importance and necessity of the planned research.

Once the research problem has been stated, the hypothesis is normally presented.4 In practice, however, hypotheses are often not stated explicitly in research papers, primarily because authors assume that the reader can infer the hypotheses being tested from the description of the purpose or statement of the problem, which is most commonly given at the end of the introduction.1

The expected results form the research hypothesis. For example, it may be hypothesised that the mean cholesterol levels of trained men are lower than those of sedentary men. This is a research hypothesis, as it states what the results are expected to be. Conversely, the null hypothesis often states what the researcher does not expect to be the case. Its purpose is for use in the statistical test of reliability of results. It usually, although not always,1 states that there are no differences between treatments, or that there is no relation between variables.4 For example, the null hypothesis for the above study may state that there is no difference between the mean cholesterol levels of trained and sedentary men. If the null hypothesis were true, any observed differences would be due to chance alone, and the statistically non-significant differences that existed in the sample would not be inferred to exist in the population. Huck and Cormier1(chapters 7 and 8) give a detailed description of the different forms of hypotheses and the stages of hypothesis testing.
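To make the above example concrete, the following sketch tests the null hypothesis of no difference in mean cholesterol between trained and sedentary men with an independent groups t test. It is illustrative only: the Python code, variable names, and cholesterol values are invented for demonstration and are not taken from any of the studies cited here.

```python
# Hypothetical cholesterol concentrations (mmol/l); invented for illustration only.
from scipy import stats

trained = [4.2, 4.8, 4.5, 4.1, 4.6, 4.3, 4.7, 4.4]
sedentary = [5.1, 5.4, 4.9, 5.6, 5.2, 5.0, 5.5, 5.3]

# Two tailed independent groups t test of the null hypothesis of no difference
t_stat, p_value = stats.ttest_ind(trained, sedentary)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# If p < alpha (conventionally 0.05), the null hypothesis is rejected and the
# difference observed in the sample is inferred to exist in the population.
```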

The design of the study

A study should be designed to answer the research question being asked. A thorough evaluation of the literature can help the researcher avoid repeating design mistakes that have been made in the past. In theory, research studies should improve steadily over time as past mistakes are rectified and designs become more robust. In practice, however, this is generally not the case, as each study is a novel endeavour.5

TYPES OF STUDY

As in medical research, broadly speaking the types of study used in sports science research can be split into two groups: observational and experimental.5

Observational design

In observational studies, the participants receive no treatment or experimental manipulation. As the name suggests, the variables of interest are recorded from the participant with no attempt to influence the variables in any way. This is a descriptive study. In observational studies, the researcher analyses the data with the aim of determining differences or relations between variables and reasons why they do or do not exist. In experimental studies, the effect of treatment or manipulation of the independent variable is examined.5 Examples of observational studies include those that have recorded skinfold levels and other estimates of body fat,6–9 or the measurement of body fat and physical activity in children to assess whether there is a relation between the two.10,11 In the latter examples, if the participants had received an aerobic training programme to assess the effects on body fat, the independent variable in the study would have been directly manipulated and the study would be experimental in nature.12
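As a sketch of the kind of observational analysis described above, the following Python fragment examines whether a relation exists between habitual physical activity and body fat. The variable names and values are hypothetical and serve only to illustrate the approach; they are not data from the studies cited.

```python
# Hypothetical observational data: no variable is manipulated, only recorded.
from scipy import stats

physical_activity = [320, 410, 280, 500, 450, 390, 360, 300]   # e.g. mean daily activity counts
percent_body_fat = [24.0, 19.5, 26.1, 16.8, 18.2, 21.0, 22.4, 25.3]

# Pearson correlation between the two recorded variables
r, p = stats.pearsonr(physical_activity, percent_body_fat)
print(f"r = {r:.2f}, p = {p:.3f}")
```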

Experimental design

If the study has an experimental design—that is, one of the independent variables is to be manipulated—it is important to be sure that any observed changes in the dependent variable—for example, power output—are due to the experimental treatment—for example, creatine ingestion—and not due to chance, growth, learning, or other extraneous factors. For example, in a study to investigate the effects of creatine supplementation on maximal anaerobic capacity—for example, that of Worth et al13—a control group was necessary to separate the treatment effect from any other causes that may have improved performance. A placebo group was also included in the above study to determine whether any improvement in performance was due to the creatine supplementation or to a psychological effect.4

There are situations in which the inclusion of a placebo is not possible, for example in studies in which the treatment is obvious to the participant (and the investigator). An example of this is a study to examine the effects of cryotherapy on exercise induced muscle damage and the soreness that accompanies it—for example, the study of Eston and Peters.14 In this study, the control group had no treatment for the symptoms of delayed onset muscle soreness, whereas the treatment group received cryotherapy by immersing the damaged arm in cool water for a limited period on several occasions after the eccentric exercise bout. It was not possible to have a placebo group in this study because it would have been very clear to the participants what treatment they were receiving.

Repeated measures and independent groups design

Experimental studies can be conducted using separate groups for treatment, control, and placebo conditions (independent groups design) or by using the same group for all conditions (repeated measures design). The option chosen depends on the design of the experiment. There are advantages and disadvantages to each method.

REPEATED MEASURES DESIGN

In a repeated measures design, the same group is tested under all conditions. The experiment is more powerful, as the within group variability due to individual differences is removed,15 and thus the number of participants (n) in each condition can be smaller than if separate groups were required for each condition. However, the commitment required from each participant is greater. In addition, a substantial wash out period may be needed between conditions, because the effects of one condition may persist into the next. For example, if the treatment is a drug, it may remain in the participant's system after the drug course has finished. It is important that the drug is completely flushed out from the system or it may affect the results from the control or placebo condition. An example of this type of study is that of Head et al.16 In this study, all participants received two types of β-blocker and a placebo for five days in a double blind randomised cross over design. A minimum of two days was allowed for wash out.

For many studies a repeated measures design is the best tool for tackling the research question. In a study of this nature, in which the same group of participants are exposed to several conditions, it is essential that the order in which they are exposed to the conditions is randomised.4 This helps control for any learning effect or acclimatisation related to the testing procedure. For example, when the effects of practice in using ratings of perceived exertion (RPE) to regulate exercise intensity were assessed, healthy17 and blind participants18 performed bouts of exercise at randomised RPEs.
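A minimal sketch of how condition order might be randomised for each participant is given below in Python; the condition labels and the per-participant seeding scheme are hypothetical choices made for illustration.

```python
# Randomise the order of conditions independently for each participant in a
# repeated measures design. Condition labels are hypothetical.
import random

CONDITIONS = ["control", "placebo", "treatment"]

def randomised_order(participant_id, seed=2024):
    """Return an independently shuffled condition order for one participant."""
    rng = random.Random(seed + participant_id)   # reproducible per participant
    order = CONDITIONS.copy()
    rng.shuffle(order)
    return order

for pid in range(1, 6):
    print(f"Participant {pid}: {randomised_order(pid)}")
```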

INDEPENDENT GROUPS DESIGN

If two or more independent groups are used in a study, the groups should be similar except for the factor that is being investigated. For example, if the treatment group is comprised of young men, the control group should also consist of young men, not older men or young women. Ideally the participants should only differ with respect to the variable of interest. The method of allocating participants to groups must not be affected by the characteristics of the participants, therefore each participant should have an equal chance of being in any group. Bland5 (chapter 2) describes various methods of randomly allocating participants to groups. There are numerous examples of random assignment of participants to independent groups—for example, Doyle and Parfitt19 and Ehrlich and Haber.20
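One simple way of giving every participant an equal chance of entering any group is sketched below in Python; the participant codes, group labels, and seed are hypothetical, and more sophisticated schemes (for example, blocked or stratified randomisation) are described by Bland.5

```python
# Simple random allocation of participants to independent groups.
import random

participants = [f"P{i:02d}" for i in range(1, 25)]   # hypothetical participant codes
groups = ["treatment", "placebo", "control"]

rng = random.Random(42)      # fixed seed so the allocation can be reproduced
rng.shuffle(participants)

# Deal the shuffled participants out into equally sized groups
allocation = {group: participants[i::len(groups)] for i, group in enumerate(groups)}
for group, members in allocation.items():
    print(group, members)
```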

The equivalence of the groups with respect to various measures can be checked before treatment by simple independent groups t tests or, in the case of more than two groups, by a one way analysis of variance. Provided that there is sufficient power to detect differences that are meaningful, these tests can provide an assurance of the equivalence of the groups. Alternatively, but less commonly, the investigators may adjust the scores after treatment on the basis of differences in the groups' scores before the test by using analysis of covariance procedures—for example, the study of Eston et al.21 In this study, scores on muscle strength after treatment were adjusted for each group using the score obtained before the test as the covariate.22 This reduced the possibility of the scores obtained after treatment being influenced by initial group differences.
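A sketch of such a baseline equivalence check is given below in Python, using an independent groups t test for two groups and a one way analysis of variance for three; the baseline scores are invented for illustration.

```python
# Check baseline equivalence of groups before treatment. Scores are invented.
from scipy import stats

group_a = [52, 48, 55, 50, 47, 53]
group_b = [51, 49, 54, 46, 52, 50]
group_c = [50, 53, 48, 49, 51, 47]

t, p_two = stats.ttest_ind(group_a, group_b)            # two groups
f, p_multi = stats.f_oneway(group_a, group_b, group_c)  # three or more groups

print(f"t test: p = {p_two:.3f}; one way ANOVA: p = {p_multi:.3f}")
# Non-significant p values (given adequate power) support baseline equivalence.
```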

When independent groups are used, the commitment required from the participant is less. Normally, he/she will experience the procedure only once. The time taken is therefore less, as all groups may be studied simultaneously. However, the design is less powerful, as the within group variability is greater because of individual differences between groups.15 This implies that more participants per group are necessary (in comparison with a repeated measures design) for the design to have sufficient power.

MIXED MODEL DESIGN

Perhaps the most commonly used experimental design in sports and exercise science research is the mixed model analysis of variance. This contains at least one repeated measures factor and one independent groups factor. A typical example of this would be an experimental study that compares effects before and after treatment. For example, the effects of aerobics training on peak oxygen uptake and submaximal heart rate measures in girls,23 or the effects of a prophylactic anti-inflammatory drug on muscle soreness after strenuous eccentric exercise—for example, the study of Semark et al.24 In both of these studies, the participants were randomly assigned to an experimental group and a control group. In the latter study, the control group received a placebo. There are many other examples of the mixed model type of study.
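The sketch below shows one way the data from such a design might be laid out and analysed. It assumes the third party Python package pingouin is available, and the subject codes, group labels, and peak oxygen uptake values are entirely hypothetical; it is a sketch of the approach, not the analysis used in the cited studies.

```python
# Mixed model design: one independent groups factor (group) and one repeated
# measures factor (time). Data and package choice are illustrative only.
import pandas as pd
import pingouin as pg   # third party package; assumed to be installed

data = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["training"] * 6 + ["control"] * 6,
    "time":    ["pre", "post"] * 6,
    "vo2peak": [42.0, 46.5, 43.2, 47.0, 41.8, 45.9,
                41.5, 41.8, 42.7, 42.9, 43.0, 43.4],
})

# The group x time interaction is usually the effect of interest: did the
# training group change more from pre to post than the control group?
aov = pg.mixed_anova(data=data, dv="vo2peak", within="time",
                     subject="subject", between="group")
print(aov)
```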

Blind/double blind studies

In a single blind study, participants do not know whether they are receiving the placebo or the experimental treatment. In a double blind study, the tester also does not know which treatment the participant is receiving. This strengthens the design, as it reduces the tester's potential influence on the participants' results. Hence, neither the participant's nor the tester's expectations of the effects of the treatment should affect the outcome of the study. This is obviously important in studies to determine the effects of orally administered substances on performance, such as the study by Head et al,16 which assessed the effects of two different types of β-blocker on exercise metabolism, or studies assessing the effects of oral creatine supplementation on anaerobic capacity.13

Power of the study

There is increasing criticism of the lack of statistical power of papers published in sports and exercise science and psychology journals.25–27 Statistical power refers to the probability of rejecting a false null hypothesis—that is, the probability that the study will lead to significant results.26 If the null hypothesis is false but not rejected, a type 2 error is incurred. Cohen26 suggested that a power of 0.80 is adequate when alpha is set at 0.05—that is, the risk of a type 1 error, which is rejection of the null hypothesis when it is true, is 0.05. This means that the risk of a type 2 error is 0.20.

An important consideration in relation to the statistical power of the study is the magnitude of the relation or treatment effect. This is known as the effect size. When calculated a priori, this quantifies the degree to which the investigator believes the null hypothesis to be false.26 Each statistical test has an effect size index, which ranges from zero upwards and is scale free.26 For example, the effect size index for a correlation is simply r; no conversion is necessary. For assessment of the difference between two sample means, Cohen's d, Hedges' g, or Glass's Δ can be used. These divide the difference between two means by a standard deviation (see Rosenthal,28 p 35). Formulae are available for converting other test statistics—for example, t test, one way analysis of variance, and χ2 results—into effect size indexes (see Rosenthal,28 p 19).

To evaluate an effect size, some idea of its scale is needed.26 Effect sizes are often described as small, medium, and large. Correlations (r) equalling 0.1, 0.3, and 0.5 and Cohen's d equalling 0.2, 0.5, and 0.8 equate to small, medium, and large effect sizes respectively. A table detailing the magnitude of other effect size indexes equal to small, medium, and large effect sizes is presented in Cohen.26 The smaller the expected effect size, the larger the sample size necessary if the study is to have sufficient power to detect that effect size.
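As a brief worked example, the Python sketch below computes Cohen's d for the difference between two hypothetical sample means, using one common pooled standard deviation formulation; the data are invented and the function name is our own.

```python
# Cohen's d for the difference between two independent sample means,
# using a pooled standard deviation. Data are invented for illustration.
import statistics

def cohens_d(sample_1, sample_2):
    """Difference between the means divided by the pooled standard deviation."""
    n1, n2 = len(sample_1), len(sample_2)
    s1, s2 = statistics.stdev(sample_1), statistics.stdev(sample_2)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(sample_1) - statistics.mean(sample_2)) / pooled_sd

trained = [4.2, 4.8, 4.5, 4.1, 4.6, 4.3, 4.7, 4.4]
sedentary = [5.1, 5.4, 4.9, 5.6, 5.2, 5.0, 5.5, 5.3]

d = cohens_d(trained, sedentary)
print(f"d = {d:.2f}")   # |d| of about 0.2, 0.5, 0.8 = small, medium, large respectively
```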

An example of a study in which the effect size may be medium could be one to assess the effects of habitual physical activity on body fat in children—for example, that of Rowlands et al.10 In this study, there was a moderate correlation between habitual physical activity and body fat, corresponding to a medium effect size. A large effect size may be expected in a study to assess the effects of a very low energy diet on body fat in overweight women; an example is the study of Eston et al.29 In this study, a greatly reduced energy intake (1695 kJ a day for six weeks) resulted in a substantial decrease in total body mass and percentage body fat.

The effect size should be estimated during the design stage of a study. This allows the determination of the sample size required to give adequate power for a given alpha. Hence, the study can be designed to ensure it has sufficient power to detect the effect of interest—that is, minimising type 2 error. A simple table detailing sample sizes necessary to detect small, medium, and large effect sizes, with a power of 0.80 and an alpha of 0.05, is presented in Cohen.26 This table covers eight statistical tests including the difference between independent means, product-moment correlation, χ2, and one way analysis of variance. More detailed descriptions of power analysis and methods for determining the sample size necessary in more complex tests can be found in the texts by Cohen30 and Stevens.15 Power calculations can also be carried out on interactive sites on the internet—for example, http://members.aol.com/johnp71/javastat.html#Power.
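A sketch of this kind of a priori sample size calculation is shown below in Python, assuming the statsmodels package is available; the effect size, alpha, and power are simply the conventional values discussed above.

```python
# A priori sample size for an independent groups t test, assuming a medium
# effect size (d = 0.5), alpha = 0.05, and power = 0.80.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                    alternative="two-sided")
print(f"About {n_per_group:.0f} participants are needed per group.")

# Repeating the calculation with effect_size=0.2 (small) or 0.8 (large) shows
# how sharply the required sample size grows as the expected effect shrinks.
```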

When empirical data are available, they can sometimes be used to estimate the effect size for a study. For some research questions, however, the expected effect size is difficult to estimate, either because few studies provide empirical information on the topic or because insufficient detail is provided in the results of the relevant studies. To enable comparison of effect sizes from studies that differ in sample size, it is recommended that, in addition to the test statistic and p value, the appropriate effect size index is also reported.

A review of 108 articles published in the Australian Journal of Science and Medicine in Sport (AJMS; now The Journal of Science and Medicine in Sport) in 1996 or 1997 showed that the median power to detect small, medium, and large effect sizes was 0.10, 0.46, and 0.84 respectively.27 No study had adequate power to detect a small effect size, 38% had adequate power to detect a medium effect size, and 75% had adequate power to detect a large effect size. It is clear that, as recently as two to three years ago, the power of studies was often not being considered at the design stage, if at all.

Ethical considerations

A further consideration in the design of a study involves the ethics of the testing procedures. Some journals will not accept papers unless the study has had ethics approval from a recognised ethics committee. The ethical implications of the study are dependent on the procedures to be undertaken and the nature of the participants. For example, the British Association of Sport and Exercise Sciences (BASES) recommend that ethical clearance should be obtained before imposing unusual or severe psychological or physiological stress, administering any ergogenic aid, working with clients with disabilities, or using biopsy or venepuncture techniques.31 The above list is not complete, and where there is any doubt cases should be looked at individually. Certain procedures that may be approved for adult participants may not be approved for children. Children are recognised as a vulnerable group with a limited comprehension capacity.32 Consequently, they are unable legally to give consent. However, it is generally accepted that parents/guardians can give parental permission, and children who are old enough can choose whether or not to participate. Rowland32 (chapter 5) presents a thorough discussion on the ethical aspects of research with children.

Whether the participants are children or not, the relevance of each of the measures or treatments should be considered during the design stage of the research. There should be a clear and justifiable rationale for the necessity of invasive procedures, particularly if there are valid alternative and non-invasive measures available. The frequency of the invasive procedures and the effect this has on the participants should be considered.

A further ethical consideration involves the denial of potentially beneficial treatment. For example, in an experimental design there may be one group of participants who receive the treatment, one group who receive the placebo, and one group who receive the control treatment. Those who receive the placebo will, by definition, think they are receiving the treatment. In this case, both the placebo and control group have been denied the treatment. It is important to consider the ethics of denying these groups the treatment, particularly if the treatment is expected to be beneficial. This does not arise in a repeated measures design as all participants are exposed to all treatments.

A possible solution is to offer all groups the treatment after the study. This would not be possible with some studies—for example, when the purpose of the treatment is to reduce the symptoms of delayed onset muscle soreness, because the symptoms would have dissipated by the end of the study. However, if the effects of an ergogenic aid were studied, the participants may volunteer because they believe they will have the opportunity to benefit from the ergogenic aid. In this situation it would be possible to offer the aid to all participants after the study. Ethically, this may be preferable to withholding the treatment from two thirds of the volunteers. It may also prevent participants from withdrawing from a study which they consider is providing little or no benefit to them.

This issue becomes increasingly important if the treatment is for a medical condition or for rehabilitation. This is a common scenario in clinical trials. It has been argued that withholding a potentially beneficial treatment from patients is ethically justified, as any biologically active treatment is also potentially harmful. Hence, the benefits need to be conclusively demonstrated in properly controlled trials before general administration.5 The ethics of withholding treatment clearly depend on the type of treatment and participants involved.

Logistics

Before a study is embarked on, it is necessary to ensure that the study is viable. This involves making a realistic assessment of the costs, time, and availability of the participants. If there is an application for funding, these details have to be approved by the host institution and the funding body. Costs will be related to the sample size, duration of the study, equipment needed, consumables, research assistants or other staff, travel, conference presentations, and institutional overheads.

The facilities available for conducting the research also need to be considered. If the study is laboratory based, it may be necessary to book laboratory time relatively early, as many people typically share laboratory facilities. This can only be achieved if a reasonably accurate estimate of the laboratory time needed to conduct the research is known. A pilot study can help answer these questions, identify problems, and prevent or limit methodological faults in the main study. Piloting procedures are an essential part of preparing a study.4

Most studies within sports science and sports medicine require human participants. It is important to consider how these participants will be obtained and how representative they are of the relevant population. It may be necessary to advertise or send letters to request participation. If so, consideration should be given to where to advertise or where to obtain addresses of potential participants. For example, an advertisement in a leisure centre is likely to attract a different type of participant from a similar advertisement in a doctor's surgery or outpatient clinic. The methods of obtaining participants will be largely determined by the population the sample is supposed to represent. For some studies, it is acceptable to use the most convenient sample of sports and exercise science undergraduates, but this is not appropriate for all proposed research.

Expected outcomes

During the planning stages of the study, the potential benefits should be considered. The expected outcomes are strongly linked with the literature review, hypothesis, and rationale. A useful exercise is to plot a graph of the expected outcomes for each group, as sketched below. This also helps to identify the most appropriate statistical analysis of the prospective data. An assessment of the expected outcomes and their potential value will help show whether or not the study is worthwhile.
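A minimal sketch of such a plot is given below in Python, using matplotlib; the group labels, time points, and values are purely hypothetical and represent what the researcher anticipates, not measured data.

```python
# Plot hypothesised outcomes for each group before any data are collected.
import matplotlib.pyplot as plt

time_points = ["pre", "post"]
expected_treatment = [50, 58]   # hypothesised improvement in the treatment group
expected_control = [50, 51]     # little change expected in the control group

plt.plot(time_points, expected_treatment, marker="o", label="treatment")
plt.plot(time_points, expected_control, marker="o", label="control")
plt.ylabel("Outcome measure (arbitrary units)")
plt.title("Expected outcomes by group")
plt.legend()
plt.show()
```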

Summary

We have considered some of the most important factors involved in designing a viable study that will adequately address the research question. Although we do not profess to be experts in all aspects of the above, we have learned through experience that attention to many of the above points will help to avoid frustration during the experimental process and when the study is presented for external review and subsequent presentation and publication. Good luck in your research.

References
