Preparing for what the reporting checklists will not tell you: the PREPARE Trial guide for planning clinical research to avoid research waste
  1. Thomas Bandholm1,2,3,
  2. Robin Christensen4,
  3. Kristian Thorborg1,5,
  4. Shaun Treweek6,
  5. Marius Henriksen4,7
  1. Department of Occupational and Physical Therapy, Physical Medicine and Rehabilitation Research – Copenhagen (PMR-C), Amager-Hvidovre Hospital, University of Copenhagen, Copenhagen, Denmark
  2. Department of Orthopedic Surgery, Amager-Hvidovre Hospital, University of Copenhagen, Copenhagen, Denmark
  3. Clinical Research Centre, Amager-Hvidovre Hospital, University of Copenhagen, Copenhagen, Denmark
  4. The Parker Institute, Copenhagen University Hospital Bispebjerg-Frederiksberg, Copenhagen, Denmark
  5. Department of Orthopedic Surgery, Sports Orthopedic Research Centre – Copenhagen (SORC-C), Amager-Hvidovre Hospital, University of Copenhagen, Copenhagen, Denmark
  6. Health Services Research Unit, University of Aberdeen, Aberdeen, UK
  7. Department of Physical and Occupational Therapy, Copenhagen University Hospital Bispebjerg-Frederiksberg, Copenhagen, Denmark

  Correspondence to: Dr Thomas Bandholm, Copenhagen University Hospital, Hvidovre, Clinical Research Centre, Copenhagen DK-2650, Denmark; thomas.quaade.bandholm@regionh.dk


Introduction

For the early career researcher or PhD student about to take a leading role in a clinical trial, it would be a tremendous help to have a ‘How to’ guide consisting of advice and key literature to help the researcher move successfully along the initial trial pathway from research question to protocol completion. While the ideal trial team should include experienced trialists who participate in the preparatory research meetings, this support is not always available. The PREPARE Trial guide is intended to assist the early career researcher in this situation, not to replace an experienced trial team and regular research meetings. The guide may also serve as preparation for research meetings, or as part of curricula for courses on research methodology, to be used by an experienced PhD supervisor, trialist or lecturer.

Evidence from randomised controlled trials (RCTs) is often wasted1 2 (for a collection of papers relating to this issue, please see http://researchwaste.net/). Many trials are never published, and hence their results are not reported, which is indeed a waste. There are many reasons for this waste, but we believe that at least some trials are discontinued because of poor trial planning. Similarly, many completed and published trials suffer from fundamental flaws or poor conduct due to a lack of preparation and careful planning. Some flaws introduced in the conception or planning phase, prior to the finalised trial protocol and registration, are difficult to rectify once the trial starts. For example, patients may decline participation if the study involves delaying or withholding treatment, or if the experimental condition is deemed unfeasible by patients. This can be avoided, for example, by involving patients in the planning of the trial or by conducting pilot or feasibility studies prior to the full-scale trial. In many cases, the planning of a trial is at least as important as running or analysing it.

The PREPARE Trial guide provides information to assist in the preparation and planning of clinical trial research. The guide aims to reduce the number of circumstances in a trial where students and supervisors need to compensate for situations not considered during trial planning. The trial registration policy of the International Committee of Medical Journal Editors (ICMJE: http://www.icmje.org/) recommends that all medical journal editors require registration of clinical trials in a public trials registry at or before the time of first patient enrolment as a condition of consideration for publication. Therefore, key decisions relating to the methodological quality of the reported trial need to be made prior to protocol finalisation and trial registration. After this time, most trial changes have to be reported transparently in the trial report, and may ultimately limit the study’s conclusions. For example, if the choice of primary outcome for the trial was rushed due to deadlines and has to be changed after trial commencement, the sample size may not be reassessed or justified for the new primary outcome, limiting the study’s conclusions.

It is the aim of the PREPARE Trial guide to provide information to assist in the preparation and planning of clinical trial research to enhance research quality and reduce avoidable research waste. The main focus is the randomised controlled trial (hereafter ‘trial’), but the guide is applicable to other study types. To ease interpretation, practical examples are given from the paper by Kise and colleagues.3 Where examples are not provided, the intention is that the Kise paper, along with this guide, can be used, for example, for journal club discussions of trial concepts. This narrative guide will need to be updated over time in order to keep its relevance. Hence, we encourage all readers who have constructive feedback to use the PubMed Commons forum (http://www.ncbi.nlm.nih.gov/pubmedcommons/) to share opinions and information (eg, published key literature that postdates the PREPARE guide publication or links that are no longer functioning) relating to the guide, as this will then be linked to the guide’s PubMed entry. In this way, we will be able to take these comments into account in future updates.

The trial pathway

As an outline of the trial pathway, we recommend using the list of headline trial processes that was recently published as part of a report4 from the first Trial Forge platform (www.trialforge.org) meeting (please see below for more information on the Trial Forge platform). The list provides the trial nomenclature to facilitate trial discussions.

Research question, hypothesis and objective

Most trials start with a clinical problem, uncertainty about a treatment approach or frustration with clinical outcomes from routine practice.5 Whether a trial designed to address a clinical problem becomes a clinical ‘game changer’ depends on the ability of the researchers and clinicians to turn the clinical problem into a good research question that is relevant to the stakeholders: patients, clinicians, decision makers and others. Getting the research question right at this stage is imperative, as the question will guide the trial design, methods and analytic strategies.5

To help frame your research question, we suggest you use the PICOT approach5 in conjunction with the FINER criteria.6 The PICOT approach requires that you specify the target Population (P), the Intervention of interest (I), the Comparator intervention (C), key Outcomes (O) and the Time frame (T) over which the outcomes are assessed5 (figure 1). This approach will help you formulate a research question that can be rephrased in the reader’s own words.5 It will also help you consider design details such as initial piloting, the relevant primary outcome/endpoint and the choice of comparator. If, for example, you plan to run a trial investigating the effectiveness of exercise therapy to treat a degenerative meniscal tear, you will be guided to consider whether a gold standard outcome measure exists for this condition, and whether there is an established gold standard treatment or usual care intervention to be used as the comparator (control) intervention. You will also be guided to consider whether your question relates to the exercise therapy intervention being ‘better than’, ‘as good as’ or ‘no worse than’ the established gold standard. These topics are explained in more detail below. The FINER criteria (Feasible, Interesting, Novel, Ethical and Relevant) will help you define the desirable properties of your research question once formulated, and give some suggestions concerning how to achieve each attribute6 (figure 2). This includes obtaining stakeholder input, for example from patients, already at this stage to help qualify the research question and ensure its clinical relevance. Patient involvement is explained in more detail below.

Figure 1

The PICOT criteria to frame the research question (top) and an example of a research objective using the PICOT approach (bottom). Modified with permission from Farrugia et al.7

Figure 2

The FINER criteria for a good research question. Modified with permission from Farrugia et al.7

Once you have developed a good research question, it is time to develop the research hypothesis. The hypothesis will be based on your research question in a form that allows for statistical testing.7 The hypothesis should be formulated as a ‘null’ hypothesis, for example, no difference between exercise therapy and arthroscopic partial meniscectomy for degenerative meniscal tear (when you believe that there is), as this enables statistical testing. After you have formulated your null hypothesis, you can formulate an alternate hypothesis stating the nature or direction of any difference, for example, exercise therapy is superior to arthroscopic surgery.7 If your subsequent trial findings are statistically significant—and of a clinically relevant magnitude—you can reject the null hypothesis and accept the alternate. When the hypothesis has been formulated, you can proceed to the primary research objective, which defines the specific aims of your trial. The objective could be formulated as illustrated in figure 1.
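
As a minimal sketch of how this could be written formally for the example above (the notation and subscripts are ours, and the primary outcome is left generic rather than taken from the Kise trial), with $\mu$ denoting the mean of the primary outcome in each group:

    H_0:\ \mu_{\text{exercise}} - \mu_{\text{surgery}} = 0 \qquad \text{versus} \qquad H_1:\ \mu_{\text{exercise}} - \mu_{\text{surgery}} \neq 0

Rejecting $H_0$ on the basis of a statistically significant difference of clinically relevant magnitude is what allows you to accept the alternate hypothesis.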

Please see the following papers for more information on how to develop your research question, hypothesis and objective using the PICOT approach and the FINER criteria: refs 5 7 8.

Feasibility or pilot studies

Feasibility or pilot studies may be worth considering prior to a full-scale trial. They are smaller studies that precede a full-scale trial in order to assess the feasibility of important aspects of a full-scale trial (eg, inclusion rate or intervention acceptability). It could be valuable to assess the feasibility of randomising patients to exercise therapy or arthroscopic surgery before starting the full-scale trial. In this way, it will become clear if patients actually accept being randomised to either of the two treatments. Again, this is an example of an issue where patient involvement in the design phase may help qualify the trial. In many cases, data from a pilot or feasibility study will help to attract funding for the full-scale trial. This will help to demonstrate the chances of success and how worthwhile an investment the project is for funders.

Pilot or feasibility studies have received increasing attention in recent years—evidenced by the birth of the scientific journal ‘Pilot and Feasibility Studies’ for the publication of stand-alone pilot or feasibility studies.9 A Consolidated Standards of Reporting Trials (CONSORT) extension for reporting of randomised pilot and feasibility studies also exists,10 and is freely available from the Enhancing the Quality and Transparency of Health Research (EQUATOR) website (http://www.equator-network.org/). If you consider undertaking a stand-alone pilot or feasibility study, we suggest you use the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist (please see ‘Writing the trial protocol’ below) and register the study (please see ‘Trial registration’ below).

The following papers provide more detailed information on pilot and feasibility studies: refs 9 11 12.

Writing the trial protocol

The SPIRIT checklist as the backbone of the overall work frame

The trial protocol is a detailed description of what you intend to do for the next couple of years, so it is a piece of work that you will want to get right.

Besides having experienced trialists on your trial team, we suggest you use a combination of the SPIRIT checklist for trial protocols,13 the CONSORT checklist for reporting trials14 and the introduction-methods-results-and-discussion structure for a scientific paper15 as the overall work frame. Consider using the SPIRIT checklist as the backbone. We suggest you also consult the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) efficacy guideline concerning the design, conduct, safety and reporting of clinical trials. Although the ICH guideline is mandatory only for drug trials, it contains excellent information for the planning of a non-drug trial. The guideline is freely available for download from the ICH website (http://www.ich.org/home.html). Likewise, the SPIRIT and CONSORT checklists are freely available for download from the EQUATOR network’s website (www.equator-network.org). The checklists come with explanatory papers that elaborate in depth on the specific checklist items and give you examples of good reporting.16 17 We have added information below concerning some of the items in the checklists, which we think may be useful to consult in addition to the explanatory papers.

The Trial Forge platform (www.trialforge.org) provides an evidence base for trial decisions

The Trial Forge platform aims to increase the evidence base for trial decision making and efficiency.4 The initiative will collate or guide to existing high-quality evidence on key trial processes, such as how to tailor recruitment strategies to particular contexts or how to effectively train trial staff. You will have to decide on these issues and many more before the start of the trial, and you will need to describe them in your trial protocol.

A key Trial Forge tool to support trial planning is PRECIS-2,18 which is a graphical tool to help you match design decisions to the needs of the people you expect to use the trial results.

The following web resources provide more detailed information on the evidence base for trial decisions: www.trialforge.org, www.precis-2.org.

Consult the reporting checklist when planning the trial

Reporting checklists are evidence-based recommendations on minimum reporting standards and are mandatory requirements for trial reports in many scientific journals. It is therefore helpful to know already in the planning phase what the minimum reporting requirements will be when you eventually write the trial report. The SPIRIT checklist is intended mainly for preparing protocols for RCTs. It is intended to facilitate the subsequent reporting of the trial using the CONSORT reporting checklist. However, different types of RCTs exist, for which specific CONSORT extensions have been created (for an introduction to superiority, equivalence and non-inferiority trials, please see refs 19 and 20). We therefore suggest you consult the relevant CONSORT extension, which can be found at the EQUATOR network’s website (http://www.equator-network.org/).

The following web resources provide more detailed information on reporting checklists: http://www.consort-statement.org/, http://www.equator-network.org/.

Study background and choice of comparator

When you argue in your protocol for the need to undertake the trial, the SPIRIT checklist requires you to summarise relevant studies, as does the CONSORT reporting checklist.14 Both checklists strongly suggest that an up-to-date systematic review of relevant studies be summarised and cited in both the protocol and the paper.13 14 By doing this already in the planning phase of your trial, you will reduce the risk of unnecessary research duplication and waste.21 It is also likely to help you choose your study comparator and indicate how likely your intervention is to work.

The following web resource provides more detailed information on systematic reviews with regard to fully utilising previous research and making your research evidence-based: www.ebrnetwork.org; and the following paper provides more information on how to select a control intervention for a trial: ref 22.

Trial design

The design of your trial relates very closely to your research question, hypothesis and objective. The two-group parallel design is the most common type, in which an intervention is measured against a comparator (eg, placebo), or two active interventions are compared. This design is often used, as the statistical inference (the process of drawing conclusions about populations or scientific truths from a data sample) is clearer when associated with a contrast between only two groups.

The framework of your trial is also important to consider. This relates to your research question. Are you aiming to assess if an intervention is better than (superiority trial), is at least not worse than (non-inferiority trial) or whether it is equally effective as (equivalence trial) a comparator? These are important considerations, particularly for the sample size estimation and for the conclusion that can be drawn. The trial designed to show superiority is the most common, and if there is a statistically significant and clinically important difference between groups (in the hypothesised direction) you can conclude that the intervention of interest is superior to the comparator. However, in many cases, trials do not confirm the prespecified hypothesis of superiority because the results turn out statistically non-significant. Hence, you cannot conclude better performance or superiority of the intervention in relation to the comparator. Does that then mean that the intervention and comparator are equally good? The answer is ‘No’. The absence of evidence of superiority is not evidence of equivalence or non-inferiority. There will always be some small difference in the estimated effects that should be confirmed in a new trial with a (new) prespecified hypothesis of equivalence and an appropriate sample size estimation, which makes it possible to confirm small differences as true.19 In contrast, if your non-inferiority trial successfully documents that two interventions are not inferior to each other, it is possible to test for superiority of one of the interventions on the same data, that is, a new trial may not be necessary.19
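
As a compact, generic summary of these three frameworks (a sketch in our own notation, not taken from any specific trial), let $\mu_T$ and $\mu_C$ denote the mean outcomes under the test and comparator interventions (higher values better) and $\Delta$ a prespecified, clinically justified margin:

    \text{Superiority:}\quad H_0:\ \mu_T - \mu_C = 0 \quad \text{vs} \quad H_1:\ \mu_T - \mu_C \neq 0
    \text{Non-inferiority:}\quad H_0:\ \mu_T - \mu_C \le -\Delta \quad \text{vs} \quad H_1:\ \mu_T - \mu_C > -\Delta
    \text{Equivalence:}\quad H_0:\ |\mu_T - \mu_C| \ge \Delta \quad \text{vs} \quad H_1:\ |\mu_T - \mu_C| < \Delta

Both the framework and the margin $\Delta$ must be chosen before the sample size estimation, because each framework implies a different null hypothesis to be rejected.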

You should also consider whether your research question and the results you are expecting are confirmatory or exploratory.23 This has implications for the nature and strength of the conclusions you can draw. Confirmatory research tests predefined hypotheses, usually derived from a theory or the results of previous studies, and can be used to draw firm and often meaningful conclusions. This is because the researcher knows exactly what to look for and is confident in where to look for it. In contrast, exploratory research is less specific because the researcher may not know what to look for or where to look, and it typically generates hypotheses post hoc rather than testing predefined ones. Thus, exploratory research has fewer predefined hypotheses, if any, due to a lack of theory or previous research. Exploratory research therefore entails an increased risk of finding false associations, making the conclusions drawn correspondingly weak. A friendly note of caution is warranted here: HARKing (Hypothesising After the Results are Known)24 is a natural part of exploratory research, provided you subsequently test the HARKed hypothesis as a predefined hypothesis in a new confirmatory trial. You should be careful not to report your exploratory research as confirmatory.

A further important categorisation is the difference between a pragmatic and an explanatory trial. In the latter, many conditions surrounding the trial will be controlled in order to better understand the outcome(s). In contrast, the pragmatic trial aims at describing the effects of the intervention under conditions that reflect real-life or clinical practice as much as possible. The pragmatic trial design is necessary in certain analyses, such as cost-effectiveness analyses.

The following papers provide more detailed information on different types of trial designs: refs 19 25 26.

Eligibility criteria and recruitment

Specific research areas may have detailed recommendations for aspects of the protocol dealing with recruitment and enrolment (eg, diagnostic criteria). We suggest you do a systematic search within your specific research area. In general, care is required when selecting eligibility criteria as many trials exclude participants who are potential candidates for the treatment under evaluation, which raises questions about external validity.27 28 Regarding recruitment, Trial Forge (http://www.trialforge.org/) is currently preparing a Trinity Package, which is a comprehensive summary of what is known about trial recruitment.29 30

The following paper and web resource provide more detailed information on methods to improve recruitment to trials: ref 29 and http://www.trialforge.org/.

Interventions

Because the evaluation of interventions is often the primary objective in a trial, the quality of the intervention description is very important. Without a complete and thorough description, trial personnel will have difficulty in delivering the intervention during the trial. When the trial report becomes available, a poor description will mean that others may find the intervention difficult to implement in clinical work or replicate for research. Recently, the Template for Intervention Description and Replication (TIDieR) checklist was developed to facilitate a complete and thorough generic description of interventions.31 The TIDieR checklist is an extension of the CONSORT checklist and the SPIRIT guideline for the items dealing with intervention description. Complex interventions (such as rehabilitation or exercise interventions) are particularly prone to incomplete reporting.31 32 As suggested previously, it is recommended to consult this reporting checklist during the planning phase of the trial, so that you will be able to make protocol choices accordingly. As the TIDieR checklist is generic, it is possible that you will need to supplement the intervention description with research area-specific information. As with the CONSORT checklist, the TIDieR checklist also comes with an informative explanatory paper.31

The following web resources provide more detailed information on the TIDieR checklist: www.consort-statement.org, www.equator-network.org.

Patient involvement

Collaboration between patients and healthcare professionals (including researchers) seems to be an appropriate means of capturing the patient perspective33 and helps prevent a potential mismatch between patient preferences and the scientific focus.34 There are many reasons why it is important to consider patient involvement in the planning phase. To highlight one, the relevance of your research will likely increase. For example, patients will be able to provide you with valuable information on the symptoms they consider most disabling, which can help you choose the primary outcome for your trial. When confronted with an outline of the time requirements for an intervention or the trial measurement schedule, patients will also be able to give you an idea as to whether the intervention is feasible in a real-life setting.

In order to collaborate with patients when writing the PREPARE Trial guide, we asked the following question to two patients who had just participated in a clinical trial: ‘We – as researchers – are preparing a ‘how-to’ guide for other researchers, in which we list issues that we feel are important for researchers to consider carefully before they start including participants into a clinical trial. The overall aim of these considerations is to increase research quality to benefit both patients and society. Are there issues that you – as a trial participant – feel are important for researchers to consider when they plan a clinical trial?’

To summarise the feedback, the patients stated that the following was important to consider: (1) what patients may gain from the trial findings; (2) what the time requirements are for participants, preferably with an illustration; (3) if participation is compatible with normal working hours; (4) what is the plan for disseminating the trial results to the participants; (5) that the researchers are friendly and helpful; and (6) that the researchers are familiar with the condition. This information would not have been part of the PREPARE Trial guide had we not interviewed trial participants.

The following papers provide more detailed information on patient involvement in research: refs 33 34.

Bias

Bias is something that you do not want in your trial but can be challenging to avoid. It refers to systematic error in results or inferences, that is, a deviation from the truth. Biases can vary in magnitude: some are small and trivial compared with the observed effect, and some are so substantial that an observed effect may be entirely due to bias. It is usually impossible to know to what extent biases have affected the results of a particular study, although there is good empirical evidence that certain flaws in the design, conduct and analysis of randomised clinical trials lead to bias.35–37 We suggest you plan your efforts to reduce bias, as if your trial was to be included in a systematic review, and accordingly assessed for risk of bias using the Cochrane Collaboration’s risk of bias tool.38

The following papers provide more detailed information on identifying and avoiding bias in research: refs 38 39.

Sequence generation and allocation concealment

Sequence generation and allocation concealment together cover a domain called selection bias.38 It refers to systematic differences between baseline characteristics of the groups that are compared. Successful randomisation prevents selection bias in allocating participants to interventions. Its success depends on several interrelated processes. You must specify a rule for allocating participants to interventions, based on some chance (random) process. We call this sequence generation, the sequence being, for example, a list of random 0s and 1s, by which participants are assigned to the groups of the trial. Protecting or concealing this sequence until participants have in fact been allocated to groups is referred to as allocation (sequence) concealment. This is done so that involved trial personnel or participants are kept unaware of the upcoming allocation.40 For example, the person who recruits trial participants cannot introduce bias by unintentionally recruiting patients with the perceived greatest needs every time he or she knows that the next participant will be allocated to the active treatment. If this bias was present, the treatment effect would most likely be overestimated.41
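
To make sequence generation concrete, the sketch below (our illustration, not a tool referenced in this guide) generates a simple 1:1 allocation sequence in permuted blocks; the block size, seed and group labels are arbitrary assumptions. In practice the sequence would be generated, and kept concealed, by someone independent of recruitment, for example via a central web-based service or sequentially numbered, opaque, sealed envelopes.

    # A minimal sketch of computer-generated sequence generation using permuted
    # blocks (block size, seed and labels are hypothetical choices for illustration).
    import random

    def block_randomisation(n_participants, block_size=4, seed=12345):
        """Return a 1:1 allocation sequence of 'A'/'B' labels using permuted blocks."""
        rng = random.Random(seed)  # fixed seed only to make the example reproducible
        sequence = []
        while len(sequence) < n_participants:
            block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
            rng.shuffle(block)  # random order within each block keeps the groups balanced
            sequence.extend(block)
        return sequence[:n_participants]

    allocations = block_randomisation(20)
    print(allocations)  # this list must remain concealed from the people who recruit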

The following papers provide more detailed information on the types and methods of generating allocation sequences, as well as the principles of stratified randomisation for randomised trials: refs 42 43.

Blinding

Blinding to minimise the risk of bias is a cornerstone of intervention evaluation.44 It is an attempt to keep involved trial personnel or participants unaware of the allocated intervention after allocation to trial arms. It can be difficult to achieve in some types of trials, such as trials evaluating non-pharmacological interventions. So, when you plan your trial, it is relevant to know of the different types of blinding that exist to blind participants, intervention providers or outcome assessors. These include, for example, hypothesis-blinding, sham procedures and similar attention-control interventions. You will be asked to describe the type of blinding (if used) when you eventually report your trial, in accordance with the CONSORT checklist.

The following papers provide more detailed information on blinding methods and how to report them: refs 42 44.

Outcomes

Choosing the most relevant primary outcome for your trial needs careful consideration. The primary outcome should reflect the needs of the end user (eg, patient) and have sound psychometric properties (ie, be reliable and valid). Patient involvement in the planning phase can assist in this process, especially if no core outcome set exists. Core outcome sets represent the minimum that should be measured and reported in a clinical trial of a specific patient population. A good place to start is the COMET (Core Outcome Measures in Effectiveness Trials) Initiative and database website (http://www.comet-initiative.org/). The COMET Initiative database is a repository of core outcome sets and can serve as an inspiration. It can be tempting to add a large number of secondary, supportive outcomes, but please remember that the primary objective and sample size estimation pertain to the primary outcome only. Adding a high number of secondary outcomes will likely make the trial less efficient.45

The outcome measure needs to measure what it is intended to measure (accuracy), to do so in a precise manner (reliability), and thereby to detect when relevant changes have occurred in the participants. When deciding on primary and secondary outcomes, it is important to consider whether the outcome is a clinically meaningful outcome that directly measures how patients feel, function or survive, or a surrogate measure. Surrogate measures do not measure these parameters in a direct way, although they may lie on a causal pathway to a clinically meaningful outcome. Surrogate measures can, however, be associated with several problems; an intervention that produces a positive surrogate response can have no effect, or even a harmful effect, on the clinical outcome, and at worst lead to the implementation of harmful treatments.46

Finally, you should consider the fact that you will need to report on all prespecified outcomes in your trial, and report transparently if you want to report new outcomes that were added after the trial began. We suggest you carefully plan and report your trial outcomes, as if your trial intent and report were to be investigated by the COMPare trials project (http://compare-trials.org/). The COMPare team systematically checked every trial published in the top 5 medical journals between October 2015 and January 2016, with the purpose of searching for misreported findings and outcome switching. This team’s effort revealed a large degree of inconsistency in outcome reporting.

With respect to the issue of selecting outcome measures of good methodological quality for a specific patient population, a fine resource to guide you is the Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) website (http://www.cosmin.nl/). The COSMIN initiative has developed a critical appraisal tool (a checklist) containing standards for evaluating the methodological quality of studies on the measurement properties of health-related outcome measures. When a possibly relevant outcome measure has been identified, it is a good idea to pilot the outcome measure first. Piloting your outcome measure is just as important as piloting your intervention or inclusion procedures. To master any test or outcome measure, hands-on experience in performing the measurement correctly, and knowledge of how the outcome measure behaves in different situations, will often give you a great advantage. Unnecessary ‘noise’ from your measurement can hopefully be avoided or reduced so that true between-group differences can be detected.

The following papers provide more detailed information on validity, reliability and responsiveness of health-related outcome measures: refs 47 48.

Sample size estimation

Estimating the number of participants to be included in your trial can be troublesome if you do not understand the fundamental principles. Consulting a biostatistician is highly recommended, but it is a good idea to be able to do the groundwork yourself. The sample size estimation should be made for your defined primary outcome to estimate how many participants you have to include for the trial to be large enough to provide a reliable answer in terms of the primary hypothesis and outcome. If for some reason the sample size is estimated on some other basis, we suggest you make this clear and provide a justification for this approach.49

In order to estimate and subsequently report your sample size sufficiently so that others can replicate it, you will need to specify several items, depending on your trial design. For a classic superiority trial with an intervention and control (comparator) group, these include the primary outcome (including whether it is a binary or continuous outcome); the test statistic (for example, a t-test or χ2 statistic); the minimal clinically important difference in response between groups, that is, the smallest effect of your intervention worth detecting; the probability of erroneously rejecting the null hypothesis or, in other words, the probability of a false-positive conclusion (‘α’, the type I error, commonly set at 0.05); and the probability of erroneously failing to reject the null hypothesis or, in other words, the probability of a false-negative conclusion (‘β’, the type II error, commonly set at 0.20, corresponding to a statistical power of 80%). Hence, an α of 0.05 and a β of 0.20 mean that you accept at most a 5% risk of a type I error and a 20% risk of a type II error for your trial.
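
As a worked illustration of how these quantities combine for a continuous primary outcome, the sketch below uses the standard normal-approximation formula, n = 2((z₁₋α/₂ + z₁₋β)·SD/MCID)² per group; the minimal clinically important difference (MCID) and SD are made-up numbers rather than values from any specific trial, and a biostatistician should always check the final estimation.

    # Minimal sketch: participants per group for a two-group superiority trial with
    # a continuous primary outcome (normal approximation). MCID and SD are hypothetical.
    import math
    from scipy.stats import norm

    def n_per_group(mcid, sd, alpha=0.05, power=0.80):
        z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for a two-sided alpha of 0.05
        z_beta = norm.ppf(power)           # 0.84 for 80% power (beta = 0.20)
        n = 2 * ((z_alpha + z_beta) * sd / mcid) ** 2
        return math.ceil(n)                # round up to whole participants

    # Example: MCID of 5 points and an expected SD of 10 points
    print(n_per_group(mcid=5, sd=10))      # about 63 per group, before allowing for dropout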

The following papers provide more detailed information on the principles of sample size estimations and how to report them: refs 50–52.

Statistical analysis plan

As stated above for the sample size estimation, consulting a biostatistician is highly recommended when you write your statistical analysis plan, but it is a good idea to be able to do the groundwork yourself. An important part of the trial protocol is the plan for the statistical analyses that you will run on your data. It can be provided in detail in the trial protocol, but it can also take the form of a separate document named ‘Statistical analysis plan’ that gives more details than expected of a trial protocol. Whichever format is chosen, the main thing is that you plan the statistical analyses for your trial data in detail before the trial starts, or at the latest make minor last-minute changes before breaking the trial blind. You also have the possibility of publishing your statistical analysis plan, either as part of a protocol or as a stand-alone document. Many journals now publish trial protocols and some publish statistical analysis plans as stand-alone documents.

As a frame for developing and writing your statistical analysis plan, we suggest you use the ICH-E9 guideline.49 The guideline is freely available for download from the ICH website (http://www.ich.org/home.html). In addition, the Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines53 will also be good to consult in the planning phase. The SAMPL guideline is freely available for download from the EQUATOR network’s homepage (http://www.equator-network.org/). Finally, a comprehensive guidance document for statistical analysis plan content is currently being developed.54

The statistical analysis plan for your trial should include all the principal features of the proposed confirmatory analysis for the primary outcome variable(s) and the way you intend to handle anticipated analysis problems, such as missing data. For a confirmatory analysis, the statistical analysis plan should specify the hypotheses that are to be tested and the treatment effects that are to be estimated in order to satisfy the primary objective of the trial. The statistical methods used to accomplish these tasks should be described for the primary (and preferably the key secondary) outcome variables, and the underlying statistical model should be made clear.

We recommend that you clearly distinguish between the primary analysis of the primary outcome variable and supporting secondary analyses of the primary or secondary outcome variables. For example, you may have decided that your primary analysis for the primary outcome variable is to be on the full analysis data set (intention-to-treat population), using multiple imputation for participants with missing data and without analysis adjustments. You may then want to supplement this primary analysis, which the reader will trust the most, with a supportive secondary analysis on the per-protocol data set, in which participants are excluded if they do not meet a predefined minimal treatment exposure. If, in this secondary analysis of the primary outcome, you also intend to use the baseline data to adjust the treatment estimates for potential baseline differences, this can then also be specified.

Different approaches exist by which you can create a full analysis data set for intention-to-treat analyses. The principle implies that the analysis includes all randomised participants. Compliance with this principle would necessitate complete follow-up of all randomised participants so that no outcome data were missing. This is difficult to achieve, as some trial participants are almost always lost to follow-up because they move, become sick or do not show up as scheduled. The term ‘full analysis set’ is used to describe the analysis data set that is as complete as possible and as close as possible to the intention-to-treat population. To create a full analysis set, you will have to replace missing values with substituted values, a process called imputation. Imputation techniques range from carrying forward the last observation (data point) to the use of complex mathematical models.55 The main thing is that you plan and describe how you intend to handle missing data in your trial, and that you justify the approach chosen. The same can be said if you plan to run subgroup (ie, stratified) analyses or adjust for the influence of covariates, as these procedures will most likely impact your sample size and increase the risk of bias.56
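
As a concrete illustration of the simplest technique mentioned above, the sketch below applies last observation carried forward to a made-up data set with one row per participant and one column per visit; it is shown only because it is easy to follow, and model-based approaches such as multiple imputation are generally preferable (ref 55).

    # Minimal sketch of last observation carried forward (LOCF) on hypothetical data.
    import numpy as np
    import pandas as pd

    outcomes = pd.DataFrame(
        {
            "baseline": [42.0, 55.0, 61.0],
            "week_12": [48.0, np.nan, 66.0],    # participant 2 missed this visit
            "week_52": [50.0, np.nan, np.nan],  # participants 2 and 3 lost to follow-up
        },
        index=["participant_1", "participant_2", "participant_3"],
    )

    # Carry each participant's last observed value forward across visits (row-wise)
    full_analysis_set = outcomes.ffill(axis=1)
    print(full_analysis_set)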

When you eventually get to report the statistical analyses in the trial report, descriptive or summary statistics are very important.14 Because the statistical analysis plan holds information on how to report descriptive statistics, it is important to plan this before the trial starts. The CONSORT explanation and elaboration paper is a great resource for this.17 It presents examples of tables of results as well as, for example, information on how and when to use SD, 95% CIs and p values. Please note that CONSORT emphasises contrasts between groups, also known as effect sizes. It is therefore much more important for you to plan a clear presentation of between-group changes (contrasts) than within-group changes.
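
To illustrate the emphasis on between-group contrasts, the sketch below (made-up change scores, not data from any trial) reports a mean between-group difference with a 95% CI using a Welch-type interval, rather than reporting within-group changes only.

    # Minimal sketch of a between-group contrast: mean difference in change scores
    # with a 95% CI (Welch's unequal-variance interval). All numbers are invented.
    import numpy as np
    from scipy import stats

    exercise = np.array([12.0, 8.0, 15.0, 10.0, 9.0, 14.0, 11.0, 13.0])
    comparator = np.array([7.0, 5.0, 9.0, 6.0, 8.0, 4.0, 10.0, 6.0])

    var_e = exercise.var(ddof=1) / exercise.size
    var_c = comparator.var(ddof=1) / comparator.size
    diff = exercise.mean() - comparator.mean()  # the between-group contrast
    se = np.sqrt(var_e + var_c)

    # Welch-Satterthwaite degrees of freedom for the unequal-variance interval
    df = (var_e + var_c) ** 2 / (var_e ** 2 / (exercise.size - 1) + var_c ** 2 / (comparator.size - 1))
    t_crit = stats.t.ppf(0.975, df)

    print(f"Mean difference {diff:.1f} (95% CI {diff - t_crit * se:.1f} to {diff + t_crit * se:.1f})")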

The following papers provide more detailed information on how to develop and write your statistical analysis plan: refs 17 49 53 55 57–59.

Trial registration

A trial registry is a web-based resource for patients, relatives, healthcare professionals and the public that provides access to information on public and privately supported trials. Essentially, the registry information is a summary of your trial protocol that becomes publicly accessible. At the same time, the registry assigns a unique identifier (trial number) to the trial, which can be used to track the trial and trial-related information. More and more journals require that clinical trials have been preregistered, that is, registered in a trial registry before enrolment of the first participant. This is also the trial registration policy of the ICMJE (please see the ICMJE website for more details: http://www.icmje.org/). We suggest you use a copy/paste approach for trial registration activities, so that the trial information is identical between the trial protocol and the registry. If you are in doubt as to whether your trial is a clinical trial and needs registration, we suggest you use the yes/no decision tree in the Hinman et al paper,60 or even better, register the trial regardless. The AllTrials campaign calls for registration of all types of trials (http://www.alltrials.net/).

Some trial registries have added an element to their platforms to collect data-sharing plans, which is also an item in the SPIRIT checklist.13 It is now the recommendation of the ICMJE that authors choose a registry that includes a data-sharing plan element.61 Although it may not become reality,62 the ICMJE recently proposed that trial report authors share the de-identified individual patient data underlying the results presented in the article no later than 6 months after publication to enhance trial transparency.61 Nevertheless, it is important for you to develop and register a data-sharing plan that also complies with national regulations.

The following web resource and papers provide more detailed information on registration and reporting of trial results: http://www.alltrials.net/ and refs 61 63 64.

Publishing the trial protocol

As the preregistration entry in a trial registry commonly holds only a summary of the trial protocol, it is possible to publish the full protocol in a scientific journal. The general idea is that once a trial protocol becomes public, there will be people waiting for the results, which in turn reduces the chance of non-publication of the results.65 It also means that readers can check for differences between what was planned and what is reported in the final trial report. Publication of the trial protocol also provides an opportunity to publish details that may not fit into the final study report. Many journals have word count and appendix restrictions, and if your intervention or other aspects of your trial methodology need elaboration, a published trial protocol serves as an excellent reference for these details. If you choose not to publish your trial protocol as a stand-alone paper, we suggest you consider making it publicly available as supplementary material to the trial report or in a permanent online repository.

The following papers provide more detailed information on publication of trial protocols: refs 65 66.

Summary

The preparatory steps in the trial pathway are very important for a good trial design and may increase the chance that your research is not wasted at later stages. The main point of the PREPARE Trial guide is to help you to prepare your research thoroughly, based on what you will face at later steps in the trial pathway. Figure 3 and online supplementary table 1 summarise the key issues that will help you make informed trial decisions by carefully considering them in the preparatory phase.

Supplementary file 1

Figure 3

Summary of key issues to consider when you prepare your trial. COMET, Core Outcome Measures in Effectiveness Trials; COSMIN, Consensus-based Standards for the Selection of Health Measurement Instruments; PICOT, Population, Intervention, Comparison group, Outcome of interest, Time; SPIRIT, Standard Protocol Items: Recommendations for Interventional Trials; TIDieR, Template for Intervention Description and Replication.

Acknowledgments

We thank Mette Merete Pedersen, Postdoc, PhD, and Rasmus Skov Husted, Research Assistant, MSc, Clinical Research Centre, Hvidovre Hospital, Copenhagen, for valuable comments to an early version of the manuscript, and Mikkel B Klausen, PhD student, Metropolitan University College, Copenhagen, for practical help with patient interviews. We also thank Jeanette Kirk, Postdoc, PhD, Clinical Research Centre, Hvidovre Hospital, Copenhagen and Rasmus S Husted, PhD student, Clinical Research Centre, Hvidovre Hospital, Copenhagen  for providing valuable stakeholder input and comments. Finally, we thank former trial participants Nadia W Houman and Benny Frederiksen for coproducing the PREPARE Trial guide by providing valuable patient-specific input.

References

Footnotes

  • Contributors TB conceived the idea for the guide and drafted the first manuscript. TB, RC, KT, ST and MH all provided intellectual input and contributed to drafting revisions of the manuscript. TB, RC, KT, ST and MH all read and approved the final version to be published.

  • Funding This research was supported in part by a donation from the Oak Foundation to The Parker Institute, Bispebjerg – Frederiksberg Hospital, Copenhagen (RC and MH).

  • Competing interests RC reports non-financial support from board membership, grants from consultancy (AbbVie, Amgen, Axellus A/S, Bristol-Myers Squibb, Cambridge Weight Plan, Celgene, Eli Lilly, Hospira, MSD, Norpharma, Novartis, Orkla Health, Pfizer, Roche, Sobi, Takeda), personal fees from employment (Research Unit for Musculoskeletal Function and Physiotherapy, Institute of Sports Science and Clinical Biomechanics, University of Southern Denmark), non-financial support from expert testimony, grants from grants/grants pending (Axellus A/S, AbbVie, Cambridge Weight Plan, Janssen, MSD, Mundipharma, Novartis and Roche), grants from payment for lectures including service on speakers bureaus (Abbott, Amgen, Axellus, Bayer HealthCare Pharmaceuticals, Biogen Idec, Bristol-Myers Squibb, Cambridge Weight Plan, Ipsen, Janssen, Laboratoires Expanscience, MSD, Mundipharma, Norpharma, Novartis, Pfizer, Roche, Rottapharm-Madaus, Sobi and Wyeth), grants from payment for manuscript preparation (Axellus, Bristol-Myers Squibb, and Cambridge Weight Plan, Aleris-Hamlet (via Norpharma)), non-financial support from patents (planned, pending or issued), non-financial support from royalties, grants from payment for development of educational presentations (Bristol-Myers Squibb, MSD, Pfizer), non-financial support from stock/stock options, grants from travel/accommodations/meeting expenses unrelated to activities listed (Abbott, AbbVie, Axellus, Bristol-Myers Squibb, Cambridge Weight Plan, Celgene, Laboratoires Expanscience, Norpharma, Novartis, Pfizer, Roche, Rottapharm-Madaus and Wyeth), non-financial support from other (err on the side of full disclosure), outside the submitted work; RC is involved in many healthcare initiatives and research that could benefit from wide uptake of this publication (including Cochrane, OMERACT, IDEOM, RADS and the GRADE Working Group). The Musculoskeletal Statistics Unit, The Parker Institute, is grateful for the financial support received from public and private foundations, companies and private individuals over the years. The Parker Institute is supported by a core grant from the Oak Foundation; the Oak Foundation is a group of philanthropic organisations that, since its establishment in 1983, has given grants to not-for-profit organisations around the world. ST reports grants from Chief Scientist Office during the conduct of the study. The Health Services Research Unit, University of Aberdeen, receives core funding from the Chief Scientist Office of the Scottish Government Health Directorates. MH reports grants from the Oak Foundation during the conduct of the study. KT and TB report no conflicts of interest.

  • Patient consent Obtained.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement The manuscript does not contain any data.
