Knowledge Translation Series - Guest Editor, Sharon Straus
Methodologies to evaluate the effectiveness of knowledge translation interventions: a primer for researchers and health care managers

https://doi.org/10.1016/j.jclinepi.2010.02.022

Abstract

Objective

The evidence base for interventions to change clinical practice is modest but growing. Given the large variation in their impact and costs, this review aims to highlight the importance of managers and researchers evaluating knowledge translation (KT) interventions.

Study Design and Setting

To meet its objectives, this review article discusses how the need for local or generalizable information, together with the resources available, should guide the evaluation. Furthermore, we discuss how study designs can focus on establishing either internal validity or applicability, and how that choice of focus shapes the design.

Results

This review argues that managers should routinely incorporate simple evaluation designs into program implementation to assess and increase the impact of their services, whereas researchers should focus on creating cumulative knowledge through rigorous designs and rich descriptions of context and implementation to explore reasons for success and failure.

Conclusion

We suggest that the combined effort of managers and researchers will provide a richer knowledge base to move the field of KT forward and increase the use of evidence-based implementation strategies to improve the quality of health care provided.

Introduction

What is new?

  1. Interventions to change clinical practice should be evaluated using the most rigorous design feasible, considering the need for local vs. generalizable knowledge.

  2. This article summarizes the rationale and methods for evaluating knowledge translation interventions, considering the perspectives of managers and researchers.

  3. Managers should routinely incorporate simple evaluation designs into program implementation to increase the impact of their services.

  4. Researchers should contribute to the development of cumulative knowledge of effectiveness of interventions by conducting rigorous evaluations with rich descriptions of context and implementation, exploring reasons for success and failure.

Evidence-based medicine should be complemented by evidence-based implementation—Richard Grol

The field of knowledge translation (KT) promotes the uptake of evidence-based practices, but the methods used to promote these practices are often not themselves evidence based [1], [2]. There is a dearth of information about which interventions work and under which circumstances [3]. This gap may encourage, and even pressure, people to attempt to improve the quality of care using common sense rather than evidence-based approaches [4]. The complexity of implementation research is daunting. It requires taking into account multiple levels: patients nested within a provider’s practice, nested within a multidisciplinary team, nested within a health facility, nested within local and national health care systems. The conceptual and methodological challenges are significant, and they are likely why the average impact of implementation interventions has so far been found to be modest [5]. Furthermore, the conclusions one can draw about which approaches are most effective and how they should be applied in a given setting are also limited [6]. For example, the Cochrane Collaboration has registered over 350,000 randomized controlled trials in clinical medicine [1] but only 2,400 experimental and quasi-experimental trials of interventions to improve health care delivery [7]. The complexity of changing the behavior of organizations and service providers, and the potential benefits of doing so, warrant a partial shift in effort from the development of new treatments toward developing approaches to consistently implement treatments that are already known to work [8]. For instance, a recent study showed that a checklist of simple, routine, evidence-based procedures for surgical safety had a profound effect on mortality and complication rates [9]. In addition to being successful, this intervention was extremely cost-effective and fairly simple to roll out. Shifting focus toward creating and evaluating interventions to implement evidence-based knowledge will be essential for improving health services.
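To make the practical consequence of this nesting concrete, the following sketch (our illustration, not drawn from the article; the cluster size and intraclass correlation are hypothetical) applies the standard design effect for cluster randomized designs, 1 + (m - 1) x ICC, to show how quickly the required number of patients and clinics grows once outcomes of patients in the same clinic are correlated.

    # Illustrative sketch: why nesting (patients within clinics) inflates the
    # sample size an implementation trial needs. The cluster size and ICC below
    # are hypothetical, chosen only to show the arithmetic.

    def design_effect(cluster_size: float, icc: float) -> float:
        """Variance inflation from randomizing clusters instead of individual patients."""
        return 1 + (cluster_size - 1) * icc

    n_individual = 400   # patients per arm an individually randomized trial would need (hypothetical)
    m = 50               # average number of patients per clinic (hypothetical)
    icc = 0.05           # intraclass correlation within a clinic (hypothetical)

    deff = design_effect(m, icc)
    n_required = n_individual * deff
    clinics_per_arm = n_required / m

    print(f"Design effect: {deff:.2f}")               # 3.45
    print(f"Patients per arm: {n_required:.0f}")      # 1380
    print(f"Clinics per arm: {clinics_per_arm:.1f}")  # 27.6

With these illustrative values, matching the power of a 400-patient-per-arm individually randomized trial requires roughly 1,380 patients and 28 clinics per arm, which is part of why rigorous implementation trials are logistically demanding.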

In this article, we outline the need for evaluation of KT initiatives and efforts, describe methodologies for studies of effectiveness, and suggest how to choose among them and how to combine them with qualitative studies to explain mechanisms of effect.


The need for evaluation

Given the limited evidence base to work from, both practitioners and researchers involved in quality improvement (QI) have a responsibility to evaluate the effectiveness of their efforts [10], not only because many interventions are ineffective and may lead to a waste of resources [3] but also because evaluation creates knowledge that may benefit others. When considering how to evaluate the impact of an intervention, the first issue is whether we are interested in local or generalizable

Evaluation study designs

Evaluations enable managers and researchers to understand whether the intervention works and perhaps provide some insight into why (or why not) that is the case. There are also some key features of an intervention that improve our ability to evaluate it and, thus, should be considered before choosing the study design. How the impact of an intervention is measured is crucial to getting a meaningful answer about whether it works. The most significant indicators are patient outcomes, which can be
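As a purely hypothetical illustration of the measurement point, the sketch below computes a process indicator, the proportion of eligible patients who received an evidence-based practice, before and after a KT intervention. The counts are invented, and a naive before/after contrast of this kind cannot separate the intervention effect from the secular trends discussed in the next section.

    # Hypothetical audit data: adherence to an evidence-based practice before and
    # after an intervention. A simple two-proportion comparison says whether
    # documented adherence changed, not why, and it ignores secular trends.
    from statsmodels.stats.proportion import proportion_confint, proportions_ztest

    received = [312, 388]   # patients who received the practice (before, after) -- invented
    eligible = [520, 540]   # eligible patients audited in each period -- invented

    for label, k, n in zip(["before", "after"], received, eligible):
        lo, hi = proportion_confint(k, n, method="wilson")
        print(f"{label}: {k / n:.1%} adherence (95% CI {lo:.1%} to {hi:.1%})")

    z, p = proportions_ztest(received, eligible)
    print(f"Two-proportion z-test: z = {z:.2f}, p = {p:.4f}")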

Establishing internal validity

There are many reasons why an intervention may appear effective when it is not. For example, a treatment for the common cold may seem to work because a person is cured a few days after taking it. The clinical improvement may be due to the effect of the treatment or to the natural course of a self-limiting disease that lasts a few days. Similarly, quality of care in a given department may improve after an intervention because the intervention was effective or because quality has been improving
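One common quantitative response to this secular trend problem is segmented regression of an interrupted time series, which estimates the pre-existing trend, the immediate level change at the intervention, and any change in slope separately. The sketch below is a minimal, simulated illustration (the data are invented, and a real analysis would also need to address autocorrelation and seasonality).

    # Segmented regression on a simulated interrupted time series of a monthly
    # quality indicator: 12 months before and 12 months after an intervention.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    months = np.arange(24)
    post = (months >= 12).astype(int)                 # 1 in the post-intervention period
    time_after = np.where(post == 1, months - 11, 0)  # months elapsed since the intervention

    # Simulated indicator: slow secular improvement plus an 8-point jump at month 12.
    adherence = 55 + 0.4 * months + 8 * post + rng.normal(0, 2, size=24)

    df = pd.DataFrame({"adherence": adherence, "month": months,
                       "post": post, "time_after": time_after})

    # 'month' captures the secular trend, 'post' the immediate level change,
    # and 'time_after' any change in slope after the intervention.
    model = smf.ols("adherence ~ month + post + time_after", data=df).fit()
    print(model.summary().tables[1])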

Establishing applicability

The types of studies mentioned previously vary in their ability to control for bias and ascertain whether an observed effect is the result of the intervention in question. This involves having a sufficient sample size, blinding of assessors and participants (where possible) to group allocation, and using a placebo when feasible. These all increase internal validity. However, even a perfectly valid study may not allow us to determine the degree to which a result is relevant to real world

Understanding successes and failures

Rigorous outcome studies control for bias, and pragmatic designs attempt to maximize the applicability of results, but even at their best they can only say whether a given intervention improved the measured outcome. On their own, these quantitative evaluation studies cannot explain why an intervention worked or did not work, nor can they tell us how the intervention could be improved in the future. Without understanding how an intervention was received by study participants, we cannot

Summary

Given the time and resources necessary to implement KT interventions, there is a need to evaluate them rigorously. Complex strategies of this nature should be pilot tested, and if promising, they should be evaluated in trials [27]. Randomized controlled designs are the least subject to bias but require a large number of units (patients, providers, and clinics), which may be logistically challenging. Pragmatic randomized designs also increase the confidence with which one can extrapolate to

References (27)

  • Woolf SH, et al. The break-even point: when medical advances are less important than improving the fidelity with which they are delivered. Ann Fam Med 2005.

  • Haynes AB, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009.

  • Lynn J, et al. The ethics of using quality improvement methods in health care. Ann Intern Med 2007.