Patient-reported outcome measures (PROMs): how should I interpret reports of measurement properties? A practical guide for clinicians and researchers who are not biostatisticians
Megan Davidson,1 Jennifer Keating2
  1. Department of Physiotherapy, School of Allied Health, La Trobe University, Melbourne, Victoria, Australia
  2. Department of Physiotherapy, School of Primary Health Care, Monash University, Frankston, Victoria, Australia
  Correspondence to: Dr Megan Davidson, Department of Physiotherapy, School of Allied Health, La Trobe University, Melbourne, VIC 3086, Australia; m.davidson{at}latrobe.edu.au

Abstract

This paper will help clinicians and researchers to understand studies of the validity, responsiveness and reliability of patient-reported outcome measures (PROMs) and to interpret the scores and change scores derived from these and other types of outcome measures. Validity studies assess whether an instrument adequately captures the underlying construct of interest. Responsiveness studies explore the longitudinal validity of a test and provide evidence that an instrument can detect change in the construct of interest. Reliability is commonly assessed with correlation indices, which indicate the stability of repeated measurements and the ‘noise’ or error in the measurement. Proposed indicators for the clinical interpretation of test scores are the minimum clinically important difference, the standard error of measurement and the minimum detectable change. Studies of the Victorian Institute of Sport Assessment questionnaire for patellar tendinopathy and other PROMs are used to illustrate these concepts.

  • Measurement
  • Evaluation
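
For readers who like to see the arithmetic behind two of the indicators named in the abstract, the sketch below computes the standard error of measurement (SEM) and the minimum detectable change at the 95% confidence level (MDC95) using the widely cited formulas SEM = SD × √(1 − reliability coefficient) and MDC95 = 1.96 × √2 × SEM. The scale range, baseline standard deviation and intraclass correlation coefficient used here are hypothetical values chosen only for illustration, not figures from the article.

```python
import math

def sem(sd_baseline: float, reliability: float) -> float:
    """Standard error of measurement: SD x sqrt(1 - reliability coefficient)."""
    return sd_baseline * math.sqrt(1.0 - reliability)

def mdc95(sem_value: float) -> float:
    """Minimum detectable change at 95% confidence: 1.96 x sqrt(2) x SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Hypothetical example: a 0-100 point PROM with a baseline standard
# deviation of 15 points and a test-retest ICC of 0.90.
s = sem(15.0, 0.90)
print(f"SEM   = {s:.1f} points")        # about 4.7 points
print(f"MDC95 = {mdc95(s):.1f} points")  # about 13.2 points
```

On these assumed values, an individual patient's score would need to change by roughly 13 points before the change could be distinguished from measurement error with 95% confidence.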
