Author information
- Anthony N. DeMaria, MD, MACC*
- *Address correspondence to: Anthony N. DeMaria, MD, MACC, Editor-in-Chief, Journal of the American College of Cardiology, 3655 Nobel Drive, Suite 400, San Diego, California 92122, USA.
It is the nature of contemporary society to be constantly engaged in rating and comparing things. In some cases the process is straightforward and easy. The question of which professional baseball team is best is readily resolved by the winner of the World Series. There are, however, many entities for which evaluation criteria are neither quantitative nor precise. Interestingly, the lack of objective measures in no way diminishes the attempts at ranking and comparison. Thus, debates continue as to which colleges and universities are predominant or, in fact, which hospitals and doctors are “the best.”
Medical journals fit into the latter of the above categories. Despite the lack of clear, quantitative, objective criteria for assessment, the inherent tendency to rate journals continues. Some journals are generally recognized as more prestigious or “top tier” than others. However, the criteria for such a categorization are often in the eye of the beholder. The desire for an objective report card has led to attempts to devise quantitative methods to rank journals, the most prominent of which is the impact factor.
The absence of a universally agreed upon scorecard is probably related to the different parties involved in producing and using medical journals. Stakeholders include contributing authors, readers, and even the organizations that sponsor or own the journals. Based on surveys and focus groups with which I am familiar, these parties often seek different characteristics from a journal. Authors are primarily interested in the prestige of a journal and use this as the primary criterion in deciding where to submit a manuscript. In regard to prestige, new original research articles, particularly of a basic science nature, figure importantly. The size of the potential readership and the rapidity with which a decision on acceptance is reached are also important to authors. However, I am convinced that many would choose to publish articles in a journal perceived to be top tier, even if it went largely unread by those to whom it was most directly applicable. In contrast to authors, many readers seek information that is immediately usable and presented in a compact format. Review-type manuscripts are often more highly valued than original research. Sponsoring organizations often focus intensely on financial issues while seeking a journal that is prestigious and widely read.
Not surprisingly, data have been derived to indicate how well a journal fulfills each of these goals. Because a journal must be financially viable to exist, the business aspects of the publication are of basic importance. In fact, journals often contribute to the financing of the sponsoring organization. Subscriptions and advertisements are the major sources of revenue, and data regarding these metrics are catalogued and closely watched. Most journals can relate how well they are competing for advertising dollars compared with other publications in their field. In this regard, the number of subscriptions is not the only consideration when seeking advertising: the degree to which subscribers actually read the journal also matters. A method has evolved by which it can be estimated how often and how completely a journal is read; in fact, similar data can be obtained for individual issues or manuscripts. Thus, the number of subscribers, the degree to which those subscribers actually read the publication, and the amount of advertising revenue are all important metrics for a journal. Fortunately, JACC excels in all categories.
Another readily available measure of the vitality of a journal is the number of articles it receives for publication. The number of submissions reflects the popularity and image of the journal with authors. In addition, the greater the number of manuscripts received, the more selective the publication can be in choosing only the very best material. Of course, if the number of available pages is fixed, more submissions will mean more rejections and a greater number of unhappy authors. However, on balance, the quantity of submissions is a measure of success. Last year, JACC received nearly 25% more manuscripts than in the previous year.
The primary objective of a medical journal is to publish important new information that influences clinical care or subsequent research. The degree to which this goal is achieved is probably best assessed only years later, when the changes in practice or investigation induced by publications can be determined. Several indexes have been developed to provide a more immediate evaluation of success, the most commonly used of which is the “impact factor,” an index of citations produced by the Institute for Scientific Information (ISI), a firm based in Philadelphia. Specifically, the impact factor of a journal for any year is the number of times that articles published in that journal in the previous two years were cited during that year, divided by the total number of original research and review manuscripts published in those same two years. The concept is that the citation of a manuscript is evidence of its influence on practice or research activities. The impact factor is proffered as a measure of the quality of a journal or of a manuscript or author. The ISI proposes that the impact factor is useful to librarians in managing journal collections, to advertisers in selecting venues, and “in the process of academic evaluation.” Although of less prominence in the U.S., the impact factor is assigned major importance in other countries. As has been true for some time, JACC has the third highest impact factor of cardiovascular journals.
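To make the definition concrete, the calculation can be written out with hypothetical numbers (the figures below are purely illustrative and do not represent actual data for JACC or any journal):

```latex
% Impact factor of a journal for year Y:
%   numerator   = citations received in year Y by items the journal
%                 published in years Y-1 and Y-2
%   denominator = citable items (original research and reviews)
%                 the journal published in years Y-1 and Y-2
\mathrm{IF}_{Y} \;=\; \frac{C_{Y}(Y\!-\!1) + C_{Y}(Y\!-\!2)}{N_{Y-1} + N_{Y-2}}

% Hypothetical example: 1,500 citations in 2003 to articles the
% journal published in 2001--2002, against 500 citable articles
% published in those two years:
\mathrm{IF}_{2003} \;=\; \frac{1{,}200 + 300}{250 + 250} \;=\; \frac{1{,}500}{500} \;=\; 3.0
```

Note that letters and editorials can add citations to the numerator without counting in the denominator, which is why the format of a journal can move this number independently of the quality of its research articles.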
In the absence of any comprehensive quantitative parameter reflecting the overall success of a journal, the easily accessible impact factor has come to represent the simplest numerical indicator of comparability, the equivalent of ejection fraction for left ventricular function. In fact, it is widely touted by some journals and sponsoring organizations as evidence of their prestige. Following the release this June of the most recent annual data, I am aware of at least one publication that notified its Editorial Board members of an increase in impact factor by e-mail. Nevertheless, the impact factor includes a number of variables, has been the subject of controversy (1,2), and is, I believe, overrated as a measure of quality. In addition, over-emphasis on the impact factor could provide an inappropriate incentive for favoring some types of articles over others.
The limitations of the impact factor have been highlighted in an earlier article that listed 19 problems associated with this parameter (1). As is apparent from its definition, the impact factor has a numerator and a denominator, either of which can influence its value: the impact factor will rise either if citations increase in number or if published articles decrease in number. This is especially relevant because not all types of publications (e.g., letters, editorials) are counted in the denominator. Moreover, review articles are cited much more frequently than original research articles because they summarize numerous previous findings. Thus, some journals with a format consisting of a small number of original research articles, frequent editorials and reviews, and an extensive “Letters to the Editor” section have extremely high impact factors. The fact that self-citations are counted in determining the impact factor provides an incentive for journals to favor articles citing their own publications. Research fields undergoing major developments tend to generate large numbers of citations, favoring the journals in those fields. The nature of a journal is an additional variable that influences the impact factor, because basic science studies are usually cited by clinical articles but not vice versa. Although the impact factor may be the most available quantitative measure of journal quality, it is clearly flawed.
Given the foregoing limitations, it is surprising that the impact factor has received so much publicity and emphasis. A former mentor, joking about the process of academic advancement, once quipped that the promotion committee first placed all CV/bibliographies on a scale and then evaluated only those exceeding a certain weight. Although the impact factor may not be as gross a measure of quality as weight, it certainly falls short of encapsulating the value of a journal in a single number. Moreover, if all journals began competing for the highest impact factor, this could lead to stilted publications containing only a small number of manuscripts most likely to be cited in the next 24 months, along with review articles, reports from committees and working groups, and correspondence.
Just as is true for all endeavors, it is clear that we need a method of assessing the efficacy of medical journals. The citation-based impact factor represents a limited start toward quantifying the amount of published material that affects clinical care and research. However, over-emphasis on the impact factor may serve to obscure these limitations, thereby inhibiting improvement and fostering the formatting of journals to maximize this metric (the equivalent of teaching to the test). A report card is needed, but it should evaluate all aspects of the journal, including financial status, readership size and satisfaction, number and breadth of submissions, and the quality of research and non-research material published. Fortunately, considering the current status of JACC, we can be open to almost all formulations.
- American College of Cardiology Foundation