Author + information
- Valentin Fuster, MD, PhD∗
- Zena and Michael A. Wiener Cardiovascular Institute, Icahn School of Medicine at Mount Sinai, New York, New York
- ∗Address for correspondence:
Dr. Valentin Fuster, Zena and Michael A. Wiener Cardiovascular Institute, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place, New York, New York 10029.
In 2014, when the Journal first reached number 1 in impact factor (IF) for cardiovascular journals, I wrote that an editor’s first responsibility is to always publish original research and review articles that will have the greatest impact on our readers and on their clinical practice (1). I pointed out that IF remains an imperfect metric, but is one by which we are judged externally (1). Since that time, I have often said that IF does not drive the decision-making of the JACC Editorial Board when selecting manuscripts or creating new content strategies, as we constantly seek to publish papers that will have the most relevance for the clinician and the clinical investigator. I am proud to say that we continue to achieve this goal on a weekly basis.
In 2013, the 3 top cardiovascular journals were separated by a differential of less than 0.65 points (1). When the 2016 IF was released in June, we learned that the same 3 journals were separated by a differential of less than 0.587 points—which makes me question even more the credibility of these IF rankings, given the infinitesimal difference between the journals each year. As a clinical comparison, if a patient presented 1 year with a blood urea nitrogen level of 8.1 mg/dl, and presents the following year with 8.9 mg/dl, would you change your approach to that patient? What I am trying to say is that with such slight differences among the top journals, we need another way—a better way—to determine their value to us as cardiovascular clinicians and investigators. Thus, I would encourage you to look at the strategies of our major cardiovascular journals, and at how these journals garner citations, to determine their value in your professional lives.
Journals that simply focus on IF to determine their editorial decisions can alter the numbers drastically. For instance, 1 journal in the cardiovascular field increased its IF by 35% between 2011 and 2012, and it has continued to climb the rankings through 2 distinct strategies: publishing more guidelines and scientific statements, while publishing fewer original manuscripts (often defined as scholarly output). By 2012, guidelines contributed 18% of the citations for this journal, and that percentage has not dipped since, based on Web of Science (2). In fact, 1 very highly cited guideline helped this journal increase its IF by 31% between 2015 and 2016—aided by the fact that it maintains the lowest overall scholarly output among its competitors. Comparatively, the breakdown for the 2016 IF for JACC was 72.9% for original articles, 15.2% for State-of-the-Art Reviews and Review Topics of the Week, and 12% for guidelines. We have chosen to keep the number of original manuscripts, State-of-the-Art Reviews, and Review Topics of the Week that we publish each week consistent, so as not to change the denominator. However, watching a journal succeed through this type of IF-focused strategy has truly made us question the value of IF. It is interesting to see that the gold standards in the field of general medicine, namely the New England Journal of Medicine and The Lancet, have succeeded without publishing guidelines, but rather by publishing high-quality original and review papers. The Lancet, in particular, has found great success and relevance in its focus seminars, which tackle such relevant topics as obesity, pollution, and global health considerations.
Having gained a greater understanding of how IF works, I can now see more clearly why new metrics are emerging to challenge the dominance of IF, such as CiteScore (which incorporates citations from 3 years of published documents), Eigenfactor (which seeks to quantify a journal’s usage), the Altmetric Score (which tracks how papers perform in the media and on social media), and SciVal (a relatively new suite of bibliometric measures) (3). I do not know how these new metrics will fare in competing with the dominance of IF, but they are trends we should watch.
I am realistic about IF—it may be a flawed metric, but it is here to stay, especially because many deans, government agencies, and employment panels use it as a performance measure, particularly outside of the United States (4,5). However, I am more determined than ever to stay on the course that we established when I became Editor of JACC. I realize that this means we may not always be number 1, based on IF. Irrespective of that ranking, I am proud of the journal we publish each week, as we aim for JACC’s original papers and scholarly output to define the journal and to aid clinicians and cardiovascular investigators in taking care of patients and moving their research forward. Our vision going forward is also to launch a series of scientific panels and focus seminars that will seek to inform these audiences in highly relevant clinical areas, such as cardiovascular health promotion.
Finally, we will have to accept the reality of the dilemma that I posed 3 years ago in my Editor’s Page (1): Does IF truly determine the impact on the reader?
- Fuster V.
- Web of Science. Available at: https://login.webofknowledge.com. Accessed August 10, 2017.
- Huggett S. Impact factor ethics for editors. How impact factor engineering can damage a journal’s reputation. June 4, 2012. Available at: https://www.elsevier.com/editors-update/story/journal-metrics/impact-factor-ethics-for-editors. Accessed August 1, 2017.
- Abbasi K.