Author Information
- ∗University of Minnesota School of Medicine and VA Medical Center, Minneapolis, Minnesota
- †Icahn School of Medicine at Mount Sinai, New York, New York
- ∗Reprint requests and correspondence: Dr. Jagat Narula, Icahn School of Medicine at Mount Sinai, One Gustave L. Levy Place, Box 1030, New York, New York 10029.
The landscape of scientific publishing is changing rapidly, as are the challenges faced by journal editors. While estimates are imprecise, approximately 8.5 million scientific, technical, and medical (STM) researchers published more than 1.8 million articles in almost 22,000 peer-reviewed journals in 2012 (1). The total corpus of published knowledge exceeds 50 million papers (2). Older studies predicted 3% yearly growth, with output doubling over 24 years (3), but medical publications may actually have been growing faster than predicted, at a 5.6% annual rate that doubles output every 13 years. The explosion of newer ways of creating and sharing content online will continue to accelerate this pattern. The volume of research data available is projected to grow from 0.8 trillion gigabytes to more than 35 trillion gigabytes within a decade (4). What are the new challenges to authors, reviewers, editors, and readers in this new landscape?
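The doubling times quoted above follow directly from compound growth: a constant annual rate r doubles output in ln 2 / ln(1 + r) years. A quick illustrative check (the rates are from the text; the code itself is only a sketch):

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

print(round(doubling_time(0.03), 1))   # ~23.4 years, close to the cited 24
print(round(doubling_time(0.056), 1))  # ~12.7 years, close to the cited 13
```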
Role of a Journal: Eye of the Beholder
Society wants published science to advance mankind with believable progress. Medical journals are responsible for maintaining relevant content and moving science forward in a meaningful way. Readers are interested in staying informed, keeping up with the torrent of information, and finding their “go-to” source. The younger generation of readers, having grown up in the digital age, may ask even more pointed questions, including: should editors decide what is presented to them, or should consumers freely access unvetted content and decide on their own? This new wave of thinking might eventually creep into medicine as well. The ultimate “Holy Grail” for the reader is the ability to seamlessly access and personalize published information, and perhaps even to create content they can store or share.
Research and Quest for Truth: Is Science a Sisyphean Task?
Editors face major hurdles in the quest to publish perfectly believable and translatable, yet impactful, content. Science deals with uncertainty, and pure scientific truth is like the speed of light: something we can move toward but never reach. Ioannidis and colleagues have concluded that a great many papers are likely to be wrong because of a variety of inherent issues (5), and even randomized controlled trials might be disproven or their impact minimized by subsequent investigation (6). Many studies might not be repeatable, and conclusions change over time, as with cholesterol guidelines or hormone replacement therapy in women. Most highly regarded papers do their best to provide a practical truth, usable with maximum benefit and minimal detriment, rather than an absolute truth. This delicate balance is difficult to understand and can influence societal support for the scientific method.
Perfection in an Imperfect World
Given the imperfections in research, what can make science believable to stakeholders? Publication of high-quality papers is the obvious answer. This is a big hurdle, since there is no good universal metric for quality. A working paradigm could be to publish “novelty that moves the field forward.” Often the flawed metric of impact factor creeps into decision-making. Publishing in a high impact factor journal may signify a higher-quality paper, but such journals also have a higher number of retractions (7). The review process attempts to ensure that the data are robust, unbiased, and obtained with a rigorous, repeatable methodology, but it cannot guarantee that the data are true. Can editors arbitrate between believable and unbelievable science? Should they withhold science they do not believe, especially when the “unbelievable part” might have the potential to be the groundbreaking science? Editors sometimes have a nagging feeling that preconceived notions might cloud their judgment. Time and again, unbelievable data have turned out to be very important; the early papers on vasodilator therapy or the use of beta-blockers in heart failure provide examples. Editorial boards face a daunting task in distinguishing among inaccuracies that arise from negligence or academic sloppiness, outright intentional fraud, and genuinely revolutionary data. Editorial misjudgments about where a controversial paper falls in this spectrum can have significant consequences, and most editors tread carefully in this realm. Their decisions might affect public acceptance of science and patient compliance, and could erode support for science. This is evolving into a major issue (8), and the seemingly increased reporting of academic malpractice and journal retractions makes it a minefield for editors.
How Common is Malpractice in Medical Research?
Media headlines make it seem that medical journals are rife with unhealthy science (9,10). However, is scientific research fraud really on the rise, or are only the most egregious cases being reported (10)? It appears that while retractions are on the rise (11), the most exhaustive study in this area found misconduct in roughly 1 in 10,000 papers; of these, only 25% were due to error, and the rest were due to fraud of varying degrees (7). It might also simply be a case of discovering fraud more easily under greater scrutiny and automatic digital sleuthing (e.g., CrossCheck). However, some feel that fraudulent reporting is underestimated (12). Of the many possible avenues of falsehood in manuscripts submitted for publication, some are easier to detect (e.g., plagiarism) than others (fabrication or falsification). Plagiarism-detecting software, image-matching software (11), or even crowd wisdom on social media (13) can help, but editors ultimately have to rely on experience, the depth of expertise of the board members, and, most importantly, eagle-eyed peer reviewers.
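As a rough illustration of the idea behind text-matching tools (this is a toy shingling sketch, not the actual algorithm used by CrossCheck or any commercial product), overlap between word n-grams can flag verbatim reuse between two documents:

```python
def ngram_set(text: str, n: int = 3) -> set:
    """Set of word n-grams (shingles) from lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of two shingle sets; 1.0 means identical shingles."""
    sa, sb = ngram_set(a, n), ngram_set(b, n)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical example: identical passages score 1.0; unrelated text scores near 0.
original = "the review process attempts to ensure that the data are robust"
suspect = "the review process attempts to ensure that the data are robust"
print(jaccard_similarity(original, suspect))  # 1.0 for verbatim text
```

Real systems add normalization, indexing at scale, and human review of flagged matches; the score alone never establishes misconduct.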
Is Science Based on “Trust” the Culprit? Do We Need a Sarbanes-Oxley Act?
The current paradigm of science, in which editors review papers at face value and honesty is assumed, allows unwanted elasticity in ethical publishing. However, we do not foresee any other system replacing this process. Much like Churchill’s observation about democracy, it may be the best of all imperfect systems, since editors cannot police research data all of the time. We strongly feel that nipping bad data at the source is more effective than catching it at the journal stage, and that this kind of supervision should be left primarily to universities and research facilities. We are also cautiously intrigued by the idea of having someone other than the investigator oversee and certify the veracity of the process at its source before submission for publication; this could even become a “just in time” step after the paper is accepted. This may better ensure that the submitted research conformed to current standards, a situation akin to good laboratory practice (GLP) studies. We also realize the pitfalls: it may not be possible for administrators to detect subtle dishonesty, and the requirement might impose a burden on investigators. Regardless, something must be done to restore confidence in the entire research enterprise. A similar requirement exists in the financial world under the Sarbanes-Oxley Act, and there is an ongoing debate about whether such scrutiny of the reporting company has resulted in enhanced honesty. This experience should guide us when considering such an option in research. However, the implications of dishonest research in medicine are significantly more devastating than a company misreporting its financial performance; whereas there are civil and criminal remedies for those affected by financial misconduct, nearly none exist for victims of biomedical research misconduct. There need to be serious discussions about certifying the veracity of the effort and about determining who, in addition to the vested investigator, should be involved to ensure the apt reporting of science at the source.
Do Changes in the Publishing Landscape Encourage Loose Science?
Journals, in the traditional sense, are meant to foster scholarly communication while ensuring some form of “quality” via robust, expert peer review. At the same time, STM publishing is also a big business (1): English-language STM journals are estimated to have generated $9.4 billion in 2011, and the entire market was worth $23.5 billion. This has naturally attracted a number of less reliable players. There were approximately 8,115 open-access journals in 2012, and this number is said to be growing at the rate of 3.5 new journals every day. Only a small proportion of these are fully peer reviewed in the traditional sense, leading to abuses such as pay to play (14), ghostwriting (15), or selling authorship (16). Because aggregated sources, including repositories, may not be able to discern quality and process differences among journals, these kinds of publications are more likely to destroy veracity in the scientific research process than premeditated fraud in peer-reviewed journals.
Does This Need a Systemic Solution?
Within the democratic scientific enterprise, science will, at least for the foreseeable future, remain an endeavor of trust. However, we need to upgrade this to “trust, but verify.” A number of suggestions have been made on how to enhance stakeholder confidence that scientific papers are indeed useful and accurate (17). One recommendation is increased training for investigators (18), which has met with mixed results (19); international harmonization of training in good research practices might help. In the end, we are left with self-correcting mechanisms: more thorough peer review, including disclosing comments and/or the reviewers; post-publication comments and critiques; efforts to replicate studies; and the wisdom of the readership to cast sunshine on papers. Mandatory contribution of data to repositories might enhance widespread availability, and scrutiny. Data-mining software might reveal less obvious inconsistencies. Future steps might include provision of raw data for editorial scrutiny, periodic audits of the peer-review process, and more comprehensive disclosures. A strong correspondence section that relaxes the statute of limitations for debate might be useful. Publishing negative reports might diminish the pressure to slant experiments toward a positive, publishable result. Statements from the Committee on Publication Ethics and the U.K. concordat are trying to highlight the problem and suggest solutions.
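As one hypothetical example of the kind of screen data-mining software could run over submitted raw data (Benford’s-law leading-digit analysis is sometimes proposed for spotting fabricated numbers; the text does not endorse this specific test, and deviations are a prompt for questions, never proof of fraud):

```python
import math
from collections import Counter

def benford_expected(d: int) -> float:
    """Benford's-law expected frequency of leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

def leading_digit_freqs(values):
    """Observed leading-digit frequencies among nonzero numbers."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    total = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Under Benford's law, a leading 1 should appear ~30.1% of the time.
print(round(benford_expected(1), 3))
```

An auditor would compare the observed frequencies from a dataset against the expected curve and flag large departures for closer human review.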
An alternate route is to address the root causes of suboptimal science. In the last few decades, the reward structure of medical research has changed significantly; the pressure to publish and winner-takes-all formats have made research an extremely challenging career. There is greater collaboration with industry, bringing with it conflicts of interest and pressure to align questions, protocols, and, in some instances, the format of publication with those of a nonindependent funding source. Even guidelines, once considered the pure, filtered collective wisdom of experts, are now looked upon with suspicion (20). With the advent of the Internet and greater emphasis on publication productivity as a metric for rewards, the pressure to publish positive studies has created more relaxed avenues for publication. Correcting these problems, while important, is a long-term undertaking. The two of us personally believe that the most urgent change should be to encourage nonpunitive redressing of flawed data. Journals and regulatory agencies should create mechanisms that provide meaningful immunity for self-disclosure of scientific dishonesty, even if done anonymously. Authors should be allowed to retract data without stigma if they believe they can no longer vouch for its veracity. Cleaning up science is so vital that it should take precedence over punishing perpetrators. Our personal opinion is that this will help clean up science more effectively than all possible regulatory or paperwork solutions.
Ultimately, clean and honest science vacuums up inaccurate science. Early findings carry the greatest uncertainty and froth; dubious research is discovered only with the passage of time, when the ferment of science settles down and other eyes and experiments get to scrutinize it. As Warren Buffett says, “Only when the tide goes out do you discover who’s been swimming naked.” One wishes there were a more expeditious way to discover flawed science than this route, expensive in time and wasted resources, but it is currently the best pathway that avoids creating a policing milieu and scaring investigators away from science altogether. Journals play an important role in preventing dubious research and in disseminating the truth; demystifying science and scientific communication will become even more important in the future. Encouraging data sharing, requiring detailed raw data files, increasing access via open repositories, using best-practice guidance, and introducing GLP-like requirements will help. Stopping misconduct at its source may be more effective than at the publication stage. Failure to stem this scourge will destroy society’s faith in scientific research and may shackle science by inviting burdensome, ineffective regulations. We need to bend the curve of imperfect science, and we must do it relatively soon.
- American College of Cardiology Foundation
- ↵Ware M, Mabe M. The STM Report: An Overview of Scientific and Scholarly Journals Publishing. International Association of Scientific, Technical and Medical Publishers, 2009. Available at: http://www.stm-assoc.org/2009_10_13_MWC_STM_Report.pdf. Accessed July 2014.
- ↵Tenopir CW, King DW. The growth of journals publishing. In: Cope B, Phillips A (editors). The Future of the Academic Journal. Chandos Publishing/Woodhead Publishing Ltd.; 2009.
- ↵MarkWare Consulting. Unlocking the Value of Research Data. Available at: http://www.markwareconsulting.com/science-2-0/unlocking-the-value-of-research-data/. Accessed July 2014.
- Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci U S A 2012;109:17028–33.
- ↵Freedman DH. Lies, damned lies and medical science. Atlantic Monthly November 2010. Available at: http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/#disqus_thread. Accessed July 2014.
- ↵Research fraud on the rise: AP October 1, 2012. Available at: http://www.cbsnews.com/news/study-scientific-research-fraud-on-the-rise/. Accessed July 2014.
- Rodgers G.
- ↵(2013) Random samples. Science 339:888–889.
- ↵Normile D. Whistleblower Uses YouTube to Assert Claims of Scientific Misconduct, 2012. Available at: http://news.sciencemag.org/people-events/2012/01/whistleblower-uses-youtube-assert-claims-scientific-misconduct. Accessed July 2014.
- Bohannon J. Who’s afraid of peer review? Science 2013;342:60–5.
- ↵Looks good on paper. A flawed system for judging research is leading to academic fraud. The Economist, September 28, 2013.
- Casadevall A, Fang FC.
- Steneck N.H.