What are network meta-analyses (NMAs)? A primer with four tips for clinicians who read NMAs and who perform them (methods matter series)
  1. Amin Doosti-Irani1,
  2. Maryam Nazemipour2,
  3. Mohammad Ali Mansournia3,4
  1. Department of Epidemiology, School of Public Health and Research Center for Health Sciences, Hamadan, Iran
  2. Psychosocial Health Research Institute, Iran University of Medical Sciences, Tehran, Iran
  3. Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
  4. Sports Medicine Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran

  Correspondence to Professor Mohammad Ali Mansournia, Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran; mansournia_ma@yahoo.com


One randomised controlled trial (RCT) of a treatment does not revolutionise clinical practice. Clinician readers of BJSM will have noticed the increasing frequency of meta-analyses of treatment studies: valuable syntheses of many RCTs. The classic pairwise meta-analysis compares two treatments, most commonly one treatment against a control. For example, a meta-analysis reports that exercise treatment reduces pain in osteoarthritis versus an education programme (control). Boom! Done.

But what about when there are many treatment options? We can immediately think of at least eight treatments for Achilles tendinopathy. Some of those treatments have not been compared against each other in RCTs (eg, heel raise inserts vs platelet-rich plasma). This is where it is time for the clinician to call 1-800 Network Meta-Analysis. Network meta-analysis (NMA) is a clever and popular statistical method for comparing numerous treatments for a condition. An NMA allows two treatments to be compared statistically, even if they have never been compared directly in an RCT, via a common comparator. NMA methods also allow researchers to rank a series of treatments by their effectiveness.1

Reporting guidelines from the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network, such as the Cochrane and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklists, are extremely helpful for conducting NMAs1 and reporting the results,2 but many published NMAs have limitations. In the spirit of improving the quality of research (methods matter), we share some tips on conducting and reporting NMA.

In this primer for clinicians we focus on: (1) searching to obtain the full suite of relevant articles, (2) considering the assumptions inherent in NMA methods, (3) data analysis and (4) using consensus templates to report NMA findings.

Seek and ye shall find: but you need to search thoroughly

NMA requires a multidisciplinary review team that includes experts in both the content area and statistical analysis. The first step for an NMA, and indeed for any systematic review, is a well-defined research question and a well-designed search strategy for that question. In many systematic reviews, the authors do not search all available databases;3 4 for example, they search PubMed but not other major databases such as Web of Science or Scopus, and so may miss papers published in journals that are not indexed in PubMed. Another problem with published systematic reviews, including NMAs, is restriction by publication date or by the language of individual studies.5 6 These limitations increase the risk of publication bias, and consequently the results of such NMA papers cannot provide a comprehensive picture of the effectiveness of all available treatments.

Two key assumptions that underpin NMA methods

There are two main assumptions underlying NMA: similarity (transitivity) and consistency.1 Similarity is not statistically testable; it requires content and methodological knowledge. Consider three treatments for Achilles tendinopathy: A (exercise), B (extracorporeal shock wave therapy) and C (surgery). Suppose the trials compared A versus B and A versus C, but not B versus C, and you are interested in the (indirect) effect estimate of B versus C (figure 1A). The similarity (transitivity) assumption implies that the distribution of the effect modifiers (explained below) is similar in trials comparing ‘A versus B’ and ‘A versus C’. An example of an effect modifier is the severity of the condition at baseline (say, measured by the Victorian Institute of Sports Assessment-Achilles (VISA-A) score); it will influence the effect of treatment on the outcome. The effect estimate for B versus C will be unbiased only if the distribution of disease severity is similar in the trials comparing ‘A versus B’ and ‘A versus C’. The similarity assumption can be checked clinically and epidemiologically by content and methodology experts, and the check should be reported (see below, reporting).1
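Under the transitivity assumption, the indirect estimate for B versus C is simply the difference of the two pooled ‘versus A’ estimates, and their variances add (the Bucher method). A minimal sketch in Python; all numbers are hypothetical, purely for illustration:

```python
import math

def indirect_effect(d_ab, se_ab, d_ac, se_ac):
    """Bucher indirect comparison via a common comparator A.

    d_ab: pooled effect of B relative to A (eg, mean difference in VISA-A score)
    d_ac: pooled effect of C relative to A
    Returns the indirect effect of C relative to B and its standard error.
    Valid only under the similarity (transitivity) assumption.
    """
    d_bc = d_ac - d_ab
    # variances of independent estimates add
    se_bc = math.sqrt(se_ab**2 + se_ac**2)
    return d_bc, se_bc

# Hypothetical pooled estimates (not real data):
d_bc, se_bc = indirect_effect(d_ab=5.0, se_ab=1.5, d_ac=8.0, se_ac=2.0)
ci_95 = (d_bc - 1.96 * se_bc, d_bc + 1.96 * se_bc)
```

Note that the software will happily run even when transitivity fails (say, the ‘A versus B’ and ‘A versus C’ trials recruited patients with very different baseline severity); the assumption lives in the formula, not in the code.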

Figure 1

Direct (solid lines) and indirect (dotted lines) comparisons of treatments A, B and C in a network meta-analysis: (A) B and C can only be compared indirectly, under the similarity assumption; (B) treatments A, B and C form a closed loop, so all pairs of treatments can be compared both directly and indirectly.

Consistency is a statistically testable assumption, which is met if the direct and indirect effect sizes are the same. Consider three treatments A, B and C that form a closed loop: trials comparing ‘A versus B’, ‘A versus C’ and ‘B versus C’ are all available (figure 1B).

So we can estimate the direct effect sizes for all comparisons, and, under the similarity assumption, the indirect effect sizes as well. If the direct and indirect effect sizes are the same for ‘B versus C’ (and for the other pairs of treatments), the consistency assumption is met. There are two approaches for testing consistency: the loop-specific approach and the design-by-treatment interaction model.1 If the assumption does not hold, the authors should follow alternative strategies, including re-checking the extracted data, exploring sources of inconsistency with meta-regression, fitting inconsistency models and potentially (if warranted!) abandoning the NMA.7 We recommend against submitting NMAs that suffer from unresolved inconsistency. Please report the consistency assessment explicitly.
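For a single loop, the loop-specific approach amounts to a z-test on the difference between the direct and indirect estimates of the same contrast. A hedged sketch with hypothetical numbers (real NMAs use dedicated software, eg, the netmeta R package):

```python
import math

def loop_inconsistency(d_direct, se_direct, d_indirect, se_indirect):
    """Loop-specific inconsistency test: compares the direct and indirect
    estimates of the same contrast (eg, B vs C) with a z-test."""
    diff = d_direct - d_indirect          # inconsistency factor
    se = math.sqrt(se_direct**2 + se_indirect**2)
    z = diff / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value (normal approx.)
    return diff, se, p

# Hypothetical: direct B vs C estimate 4.0 (SE 1.2) vs indirect 3.0 (SE 2.5)
diff, se, p = loop_inconsistency(4.0, 1.2, 3.0, 2.5)
```

A non-significant p-value does not prove consistency (indirect estimates are often imprecise, so power is low), which is why the design-by-treatment interaction model and clinical judgement are still needed.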

Data analysis

There are two analysis approaches for NMA: frequentist and Bayesian. A major concern in Bayesian methods is the selection of the prior distribution, which is unclear in some published NMA papers.5 Moreover, some NMAs use only non-informative prior distributions.6 The sources of the priors and the process of selecting them should be explicitly reported in the statistical methods of a Bayesian NMA.8 9
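Why the prior matters can be seen in a toy conjugate normal-normal update for a single pooled effect: a vague prior leaves the data essentially untouched, while an informative prior shrinks the estimate towards the prior mean. This is illustrative only; a real Bayesian NMA places priors on treatment effects and heterogeneity jointly, typically fitted by MCMC.

```python
import math

def normal_posterior(prior_mean, prior_sd, est, se):
    """Conjugate normal-normal update for one effect estimate.
    The posterior mean is a precision-weighted average of prior and data."""
    w_prior = 1 / prior_sd**2   # precision (inverse-variance) weights
    w_data = 1 / se**2
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, math.sqrt(post_var)

# Hypothetical pooled effect (mean difference 5.0, SE 1.5):
vague = normal_posterior(0.0, 100.0, 5.0, 1.5)      # posterior ~ data alone
informative = normal_posterior(0.0, 1.0, 5.0, 1.5)  # pulled towards 0
```

The two priors give visibly different answers from the same data, which is exactly why the prior and the rationale for it must be reported.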

Pooling the results of studies with different designs is another concern in data analysis. We recommend against combining the results of RCTs with those of observational studies in an NMA, because observational studies are prone to confounding bias; this violates the similarity assumption.10

Reporting findings: use the PRISMA checklist and guidelines!

Despite the invaluable PRISMA checklist for reporting NMAs, some published NMAs are not well reported.4 Based on the PRISMA guideline, 10 items should be presented in the results section of an NMA: (1) the process of study selection in a flow diagram; (2) the network structure in a network plot; (3) a summary of the network geometry; (4) the study characteristics; (5) risk of bias within the included studies; (6) results of individual studies; (7) results of the NMA in a league table and forest plot, together with the treatment ranking; (8) results of the assessment of the consistency assumption; (9) risk of bias across studies; and (10) results of additional analyses, including sensitivity analysis and meta-regression.2

References

Footnotes

  • Contributors All authors contributed to planning and reporting the manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.