
Surveillance alone is not the answer
  1. Barry Pless
  1. Dr B Pless, Clinical Research, Montreal Children’s Hospital, 2300 Tupper, F259, Montreal, Canada H3H 1P3; barry.pless@mcgill.ca


One popular theme in the injury prevention literature is the perceived need for more and better surveillance. This arises from the belief that surveillance is a prerequisite for preventive programs. I have serious reservations about this belief, and I would even argue that an undue emphasis on surveillance could be harmful. That admittedly extreme view applies when surveillance fails to achieve its most critical objective while consuming resources that could be better directed elsewhere. Two papers in this issue identify limitations in one such emergency department (ED)-based system, the Canadian Hospitals Injury Reporting and Prevention Program (CHIRPP), but neither speaks directly to my main concern.1 2

CHIRPP is modeled on a similar program (VISS) in Victoria, Australia.3 When I helped to initiate CHIRPP 18 years ago, our primary goal was to use the results to raise the profile of injuries among children.4 Because many more injured children are treated in EDs than die or are hospitalized, we naively assumed that once the public and policy makers became aware of the larger numbers, they would be moved to improve prevention.

Unfortunately, despite all the fanfare at its birth and the long interval since then, CHIRPP has prompted few preventive actions. I suspect the same is true for most other such systems. Consequently, I question whether there is any evidence that a surveillance system—even one that operates perfectly—actually contributes to prevention. If not, are there alternatives we should consider?

Before going further, let’s agree on the vocabulary: surveillance, surveys, and registries are closely related activities, but they are not identical. Unfortunately, the terms are often mistakenly used interchangeably. Surveys are one-off, episodic, or conducted at regular intervals. Most surveys are able to collect detailed data, although recall problems may compromise the accuracy of some of the details.5 Registries are “the file of data concerning all cases of a particular disease… in a defined population such that the cases can be related to a population base.”6 Unlike surveys, however, most registries contain few additional details. Importantly and in contrast, the definition of surveillance given by the CDC is “the ongoing systematic collection, analysis, and interpretation of health data, essential to the planning, implementation, and evaluation of health practice, closely integrated with the timely dissemination of these data to those who need to know. The final link in the surveillance chain is the application of these data to prevention and control.”7 (The emphasis on that final sentence is mine.)

To make this point more emphatically, I urge readers to consider what Robertson wrote about surveillance in his book, Injury epidemiology.8 He notes that the term refers to “collection of data on who, when, where, and sometimes how people become injured” (p 49). The purpose, he adds, is “to monitor trends” and “to target injury control measures” and, importantly, “to identify subsets in defined locations, (which) coupled with Haddon’s technical options for control, could lead to substantial reductions if implemented.” The section on hospital-based surveillance concludes: “There is a fundamental issue that people collecting surveillance data must address: how are the data being used? Are (taxpayers) getting anything for their money? What changes … have been made based on the data? Indeed, have the data been given to anyone in a position to do something to reduce injury incidence and severity?” To which I would add: and, if so, did anyone actually do anything?

The CDC definition suggests that an ideal surveillance system should provide more timely results than a survey, thus enabling urgent action to be taken. Injury systems cannot hope to be as rapidly responsive as infectious disease surveillance, and normally there is no reason why they need to be. Both the National Electronic Injury Surveillance System (NEISS) in the USA and CHIRPP appear to be at least 1 year behind in coding reports. (In fact, it is likely that NEISS is further behind than its website suggests.) Given these delays, the “timely” criterion can be set aside. It is the last part of the definition (…the application of these data to prevention…) that is the most critical, and it is the part that far too many surveillance proponents ignore. Instead, they focus on the mechanics of making the system run well.

Because I was one of CHIRPP’s grandparents, my comments about the papers in this issue may be biased. But my involvement could equally prompt me to be critical or laudatory. On the one hand, you might assume I would defend CHIRPP in spite of the shortcomings both papers1 2 reveal. On the other hand, as is true when a child disappoints its parents, my frustration at its failure to improve prevention might make me unduly critical. Although both papers may help some readers to develop “better” surveillance systems, they do little to enhance prevention. There is even the potential for harm if the existence of a surveillance system gives the impression that those responsible (governments, hospitals, etc) are genuinely concerned about the injury problem when, in fact, they have little intention of doing more than collecting data. Indeed, I question the value of the exercise even if results are disseminated to “those who need to know”, unless they are inclined and equipped to take action. In short, the central element in the CDC definition, “the application of these data to prevention and control”, is the ultimate test of a system’s value.

THE YORKHILL STORY

Yorkhill’s CHIRPP (Y-CHIRPP) system started with excellent intentions, struggled for 10 years, and then died.1 The authors list the components of a “good system”, and, using these criteria, they conclude that Y-CHIRPP was “at best, a partial success”. They then identify elements needed to maintain its viability. For example, the authors note that surveillance must be viewed by all, especially hospital personnel, as a service tool geared to improving prevention, not research. I agree.

But viability is not the name of the game. Consistent with my core message, they also propose that their system (and by implication, others) must include “an individual with responsibility for developing and/or lobbying for the implementation of preventive measures.”

The authors believe that Y-CHIRPP raised the profile of injury prevention in Scotland, and they point to six child safety initiatives the team developed. Yet it is hard to imagine any prevention program “based around leaflet and poster campaigns” being a genuine success. Thus, no matter how simple, flexible, etc, their system may have been, the bottom line is that the goal that really counts, prevention, was not achieved. Part of the explanation for this disappointment is that support for Y-CHIRPP appears to have come almost exclusively from the hospital and not from the public health department (local health authority). Hospitals are not famous for their services to prevention, whereas most public health departments view prevention as their core mission.

THE OTTAWA STUDY

The paper by Macpherson and her colleagues2 is an elegant, well-reported study of the sensitivity and representativeness of the Canadian version of CHIRPP as it operates in Ottawa. The results suggest that both elements are disappointing, although I suspect the results might be better at other sites. This paper also points to ways in which the system might be improved, but fails to address how the data are used for prevention.

CONCLUSIONS

One conclusion from both papers is that CHIRPP has many imperfections. This should not lead one to conclude that CHIRPP is any worse than other surveillance systems. On the contrary, it is probably better than most, precisely because it has been examined more closely and more often than other systems. Consequently, more of its problems have been identified and more improvements made.9–12

A more important conclusion prompted by these papers is that there seems to be little point in fussing over capture rates, coding, sensitivity, or non-representativeness when there is no evidence that the data are being used to prevent injuries. Macpherson et al2 make no mention of data use, no doubt because that is not the subject of the paper; Shipton and Stone1 tell us that local public health officials were “pleased to have the system in place”. But, to be blunt—so what?

These papers are like tinkering with the engine of a car to make it run more smoothly when neither the driver nor the passengers know where they want to go or have any expectation of getting there. In short, there is distressingly little evidence of more than “some” success of injury surveillance in Canada, England, Wales, Scotland, or Australia. The accounts provided in these papers are undoubtedly valuable for those who are keen to initiate a surveillance system or improve one already in place. But they only help make the engine run more smoothly.

Most injury surveillance systems do little more than collect data that may raise the profile of the injury problem, or provide some insights into causes. Being able to identify clusters is all well and good, but doing so matters only if mechanisms are in place to act on the findings. Certainly, that does not appear to have been the case at Yorkhill, nor is it so in Canada. In spite of operating under a health department with little inclination to act, CHIRPP may well have been responsible for some noteworthy accomplishments. For example, the ban on baby walkers, the establishment of improved playground standards, and the modification of building codes to prevent hot-water scalding all made use of CHIRPP data.13

Nevertheless, I maintain that these are the exceptions. Success for surveillance requires more than solid data collection, analysis, and dissemination, timely or not. CHIRPP totals have been waved in front of the public and politicians in the hope that they would attract some attention. There may even be health ministers somewhere gasping in surprise at the numbers, muttering to themselves, “Wow! This problem is much bigger than most other diseases we have been pouring money into.” Maybe, but I am not holding my breath. Even if the numbers are eye-catching, I know of no reports that persuade me that they have prompted more federal leadership, better coordination, and, above all, funding dedicated to prevention. The information may be well disseminated to those who need to know, but I have yet to discover recipients who have initiated important preventive actions as a result.

I increasingly doubt whether surveillance is worthwhile in the public health climate that now prevails in most countries, ie, one in which injury is a low priority. Obviously, the best solution is to improve how these systems operate and, critically, to take steps to ensure that their sponsors, usually health departments, are positioned to act on the findings. If this cannot be achieved, it is time to consider alternatives. Periodic surveys are likely to be more informative, and, although they are no more likely to enhance prevention, they usually cost less.14–16 If all that is wanted is numbers, a registry should suffice, and these too are far cheaper than a full-blown surveillance system. But I must be clear: the ideal solution is to make surveillance serve the goal of prevention. This requires the establishment of a permanent health department division with responsibility for injury prevention and control. Anything short of this is almost certain to lead to the same disappointments and frustrations I have repeatedly expressed.17

Ultimately, surveillance requires a recipient of the information who has the mandate, resources, and determination to take the appropriate action. This is the missing link, both in these papers and in the situation as a whole. Surveillance is sterile and pointless if it is not somehow tied to preventive interventions. I challenge readers to send me examples where this has happened, and I will humbly eat my words.

REFERENCES

Footnotes

  • Competing interests: None.