Original Article
Citation analysis of identical consensus statements revealed journal-related bias

https://doi.org/10.1016/j.jclinepi.2009.09.012

Abstract

Objective

To examine whether the prestige of a journal, measured by its impact factor, influences the numbers of citations obtained by published articles, independently of their scientific merit.

Study Design and Setting

In this cohort study, citation counts were retrieved for articles describing consensus statements that were published in multiple journals and were correlated with the impact factors of the source journals.

Results

Four consensus statements were published in multiple copies: QUOROM (QUality Of Reporting Of Meta-analyses) was published in three journals, CONSORT (CONsolidated Standards Of Reporting Trials) in eight journals, STARD (STAndards for Reporting of Diagnostic accuracy) in 14 journals, and STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) in eight journals. For each consensus statement, the impact factor of the source journal and the number of citations were highly correlated (Spearman correlation coefficients: QUOROM, 1.00; CONSORT, 0.88; STARD, 0.65; and STROBE, 0.81—all P < 0.02). When adjusted for time since publication, each logarithm unit of impact factor predicted an increase of 1.0 logarithm unit of citations (95% confidence interval: 0.7–1.3, P < 0.001), and the variance explained was 66% (adjusted r2 = 0.66).
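The coefficients above are Spearman rank correlations, which measure how well the ordering of journals by impact factor agrees with their ordering by citation count. A minimal pure-Python sketch of the statistic, using hypothetical impact factors and citation counts (not the study's data), where the rankings agree perfectly:

```python
def ranks(values):
    # Rank each value from 1..n, smallest first (no ties in this illustrative data)
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    # Spearman's rho is the Pearson correlation computed on the ranks
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical journal impact factors and citation counts (NOT the study's data)
impact_factor = [1.5, 3.2, 9.1, 25.0]
citations = [12, 40, 95, 310]
print(spearman(impact_factor, citations))  # 1.0: the rankings are perfectly concordant
```

A coefficient of 1.0, as observed for QUOROM, means the journal with the kth highest impact factor also received the kth highest citation count.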

Conclusions

The prominence of the journal where an article is published, measured by its impact factor, influences the number of citations that the article will gather over time. Citation counts are not purely a reflection of scientific merit.

Introduction

What is new?

Key findings

  1. When identical articles published in multiple journals were considered, the impact factor of source journals predicted the number of citations per article.

  2. The association was strong: if the impact factor was twice as high, the article received twice as many citations.

What this adds to what was known
  1. By definition, citations to articles drive journal impact factors, but it appears that the converse is also true.

What is the implication and what should change?
  1. Citation counts are not purely a reflection of scientific merit.

  2. Journal impact factors are in part self-perpetuated.

  3. Other indicators of scientific values should be considered.
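The doubling relationship in key finding 2 follows directly from the regression slope of 1.0 on the log-log scale reported in the Results. A minimal sketch with hypothetical, exactly proportional data (not the study's data) shows how that slope is obtained and why it implies "twice the impact factor, twice the citations":

```python
import math

# Hypothetical data in which citations are exactly proportional to the
# impact factor (citations = 30 * IF); illustrative only, NOT the study's data.
impact_factor = [2.0, 4.0, 8.0, 16.0, 32.0]
citations = [30.0 * f for f in impact_factor]

# Ordinary least squares on the log-log scale: log(C) = a + b * log(IF)
x = [math.log(f) for f in impact_factor]
y = [math.log(c) for c in citations]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)

print(round(b, 2))       # 1.0: each log unit of impact factor adds one log unit of citations
print(round(2 ** b, 2))  # 2.0: doubling the impact factor doubles predicted citations
```

In general, with slope b, multiplying the impact factor by k multiplies predicted citations by k**b; it is the reported slope of 1.0 that makes the doubling statement hold exactly.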

The assessment of the research output of individuals and institutions often relies on citation counts [1], [2], [3]. The assumption is that better research is cited more often than less worthy work, but variables other than research quality also influence the number of citations [2]. In particular, the prestige of the journal in which an article is published may drive citations, independently of the scientific worth of the article. If this were the case, research assessment would be distorted, and researchers might be tempted to devote a disproportionate amount of energy to finding the most prestigious channel of communication in addition to (or perhaps instead of) doing the best possible research.

Several studies have shown that articles published in journals with high impact factors are cited more often than those published in less prominent journals [4], [5], [6], [7], [8], but this may simply reflect the ability of the best journals to attract the best articles [4]. By definition, citations to articles drive journals' impact factors; the question addressed here is whether the opposite relationship also holds. To show whether the prominence of the journal influences citation counts, one would need to randomize articles to high- or low-profile journals or to publish the same article in several journals, neither of which is practical. However, a recent phenomenon in academic publishing provides an opportunity to clarify this issue: the concurrent publication of the same article in several journals, typically a consensus statement about the reporting of a specific type of research. Anyone who wants to cite such a guideline can choose among several equivalent citations, which differ only in the source journal. If citations to high-profile journals were preferred, this would support the existence of a journal-related citation bias.

In this cohort study, I identified consensus statements that were published in multiple journals and explored the associations between the journals' prominence, measured by their impact factors [9], and the numbers of citations to the published articles.


Methods

Consensus statements about research reporting were identified from the Equator Network Web site (http://www.equator-network.org). A consensus statement was eligible if it was published in at least three copies by the same core group of authors. Editorials or commentaries that raised awareness of the statements or provided an endorsement or a critique were not included, and neither were translations or secondary articles that included additional explanations of the consensus statement. Finally,

Results

The following consensus statements were published three times or more: QUOROM (QUality Of Reporting Of Meta-analyses) was published three times, the revised CONSORT (CONsolidated Standards Of Reporting Trials) eight times, STARD (STAndards for Reporting of Diagnostic accuracy) 14 times, and STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) eight times (Table 1). In addition, SQUIRE (Standards for QUality Improvement Reporting Excellence) was published five times

Discussion

This analysis shows that the prominence of the journal where an article is published, measured by its impact factor, is positively correlated to the number of citations that the article will gather over time. Because identical articles published in different journals were compared, the characteristics of the articles themselves (be it quality of writing, scientific originality, or repute of the authors) could not have explained the observed differences. Hence, these results reflect pure

Acknowledgment

Cynthia Lokker, PhD, McMaster University, provided an unpublished analysis cited in the discussion.

References (29)

  • J.F. Etter et al.

    Citations to trials of nicotine replacement therapy were biased toward positive results and high impact-factor journals

    J Clin Epidemiol

    (2009)
  • P. Nieminen et al.

    Statistically significant papers in psychiatry were cited more often than others

    J Clin Epidemiol

    (2007)
  • D. Hendrix

    An analysis of bibliometric indicators, National Institutes of Health funding, and faculty size at Association of American Medical Colleges medical schools, 1997-2007

    J Med Libr Assoc

    (2008)
  • L. Bornmann et al.

    What do citation counts measure? A review of studies on citing behavior

    J Doc

    (2008)
  • H.F. Moed

    Citation analysis in research evaluation

    (2005)
  • M.L. Callaham et al.

    Journal prestige, publication bias, and other characteristics associated with citation of published studies in peer-reviewed journals

    JAMA

    (2002)
  • P.O. Seglen

    Causal relationship between article citedness and journal impact

    J Am Soc Inform Sci

    (1994)
  • P. Nieminen et al.

    The relationship between quality of research and citation frequency

    BMC Med Res Methodol

    (2006)
  • K.B. Filion et al.

    Factors related to the frequency of citation of epidemiologic publications

    Epidemiol Perspect Innov

    (2008)
  • S. Saha et al.

    Impact factor: a valid measure of journal quality?

    J Med Libr Assoc

    (2003)
  • Y. Huang et al.

    Bibliometric analysis of pentachlorophenol remediation methods during the period of 1994 to 2005

    Scientometrics

    (2008)
  • C. Lokker et al.

    Prediction of citation counts for clinical articles at two years using data available within three weeks of publication: retrospective cohort study

    BMJ

    (2008)
  • D.W. Aksnes

    Citation rates and perceptions of scientific contribution

    J Am Soc Inform Sci Technol

    (2006)
  • W.R. Shadish et al.

    Author judgements about works they cite: three studies from psychology journals

    Soc Stud Sci

    (1995)
    Competing interests: The author has no conflict of interest related to this article.

    Contributions: T.V.P. conceived the idea of the study, performed the data retrieval and analysis, interpreted the results, and wrote the article.

    Funding: The study received no specific funding.

    Data access: T.V.P. had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

    Ethics committee approval: Approval was not sought.
