Author information
- Robert L. Jesse, MD, PhD*
- *Reprint requests and correspondence:
Dr. Robert L. Jesse, Veterans Health Administration and Virginia Commonwealth University Health System, Cardiology Division, 810 Vermont Avenue NW, Room 804, Washington, DC 20420
I had recently commented publicly that “when troponin was a lousy assay it was a great test, but now that it's becoming a great assay, it's getting to be a lousy test.” And my penance for that utterance is to write this editorial.
So, let's think about that statement…
What constitutes a great laboratory test? Is it something inherent or unique in the data themselves? Is it the information that it provides? Or, is it the knowledge that it brings, and by that I mean, does it provide information that can be put to productive use? Said bluntly, does knowing a particular test result impact our decision making in some substantive way, or change how we would manage the patient? Does it have a clear and measurable clinical implication?
And what constitutes a great assay? Is it the accuracy of the result or the reproducibility of the number? Is it the coefficient of variation (CV), and at what cut point is this important? (We have declared that the ideal troponin [Tn] assay will have a CV of <10% at or below the 99th percentile cutoff [1].) But how do we determine the appropriate population from which to derive that 99th percentile? Is it 1-size-fits-all, or does it need to be parsed by age, sex, race, body mass, and so forth? Are there temporal constraints such that the value of the information decays over time? In other words, how important is turnaround time?
Finally, what is the clinical context for which the test was performed? Does the cutoff value change, or is it impacted by specific clinical circumstances? Would the reference ranges be different if the values were measured in true normals, versus all persons presenting to the emergency department, versus only those who were being admitted for presumed acute coronary syndromes? Couch potatoes versus marathoners? Symptomatic versus asymptomatic? History of coronary disease versus none? And, how do we manage problems introduced by comorbidities such as renal failure?
Answers to many of these questions can be found by looking at the history of Tn development over the past 2 decades.
A history lesson
When the early cardiac troponin T (TnT) and troponin I (TnI) assays were first being evaluated in the early 1990s, the initial target populations invariably included patients presenting with ST-segment elevation myocardial infarction (MI). This was a good cohort to study, but clinically irrelevant: the laboratory test added virtually no value in this population, because the diagnosis had been made, and the clinical action dictated, by the presenting electrocardiogram (ECG). Then an interesting finding arose: although TnT and TnI were confirmed to be specific to the myocardium, the specificity of the assays was poor relative to expectations. In the landmark study by Katus et al. (2), despite very high sensitivity, the specificity of TnT for acute MI was only 78% when using a cutoff of 0.5 μg/l. However, if patients with the clinical diagnosis of unstable angina were excluded from the calculation, the specificity improved to 95%. Why? Simply because the gold standard, creatine kinase-myocardial band (CK-MB), was itself imperfect. Why? Because the CK-MB mass assay was referenced to the normal population distribution, which is very broad, resulting in a relatively high 99th percentile cutoff value. (It could also be confounded by the presence of skeletal muscle damage.) This allowed for detectable Tn in patients who had “normal” CK-MB levels, even with the relatively insensitive early assays. The obvious conclusion must be that either these are false positive Tn values, or that Tn is more sensitive than CK-MB. This question has been answered by the reproducible observation that patients who are Tn positive but CK-MB negative have, on average, worse outcomes compared with patients who are negative for both markers (3).
A history lesson
The discordance between Tn and CK-MB was initially addressed in the context of existing definitions. The logical conclusion at first was that Tn could diagnose unstable angina (i.e., acute ischemic heart disease with MI ruled out by serial negative CK-MBs). As an initial attempt to reconcile this, in 1999 the National Academy of Clinical Biochemistry published the Standards of Laboratory Practice recommending 2 cutoffs for Tn (4). The first was at the 95th percentile, which for the first-generation assays roughly correlated to the CK-MB cutoffs, and would be diagnostic of acute MI. The second would be at the 99th percentile, or lower limit of detectability. The range between this lower cutoff and the “diagnostic” cutoff was frequently referred to at the time as a “gray zone,” but because it did provide clear prognostic information, the feeling was that it should be formally recognized, and a new diagnostic category was recommended: “minor myocardial damage” and “minimal myocardial injury” were terms frequently being offered at the time. A different approach was taken in 2000 by the European Society of Cardiology/American College of Cardiology Joint Committee on the Redefinition of Myocardial Infarction (1). The new definition was “myocardial necrosis caused by ischemia,” and it designated Tn as the “preferred” biochemical marker for detecting necrosis, noting that it had “…nearly absolute myocardial tissue specificity, as well as high sensitivity….” It went on to set the cutoff at the 99th percentile, but cautioned that the acceptable imprecision (the CV) at this cutoff should be ≤10%. This was followed shortly by an editorial from Jaffe et al. (5), arguing that “it's time for a change to a troponin standard.”
A history lesson
The first-generation Tn assays were relatively insensitive. The second-generation Tn assays were more sensitive, but still relatively insensitive. The third-generation Tn assays are even more sensitive, and yes, still relatively insensitive. An important consideration is that as the assays have improved in their sensitivity, the analytical performance has also been improving. So as not to belabor the point, we will accept as true the premise that any detectable Tn by current commercial assays is in fact indicative of irreversible myocardial injury, and as such carries prognostic information. However, there are now emerging state-of-the-art research assays that can measure Tn in the normal population, including the individual day-to-day variability (6). These are what I would consider highly sensitive assays. Up to this point, the absence of detectable Tn was considered “normal,” and with each successive iteration of the assay, the normal level was reset. But soon, for the first time, we will actually be able to measure a normal Tn in clinical practice. This is a mixed blessing: whereas the 99th percentile will actually be a true representation of the normal population, we will soon have to make decisions about what reference population is appropriate. In effect, Tn will then be like CK-MB was, and we will have to seriously consider the relationship of age, body mass, sex, comorbidities, and so forth, when interpreting the data. In effect, Tn has appeared to be a very sensitive assay simply because it was more sensitive than CK-MB, the existing but flawed gold standard at the time. However, regardless of the diagnostic cutoff, with each successive improvement in the assay, the fundamental hypothesis that myocardial necrosis is a bad prognostic indicator has held fast.
Which brings us, finally, to the article by Bonaca et al. (7) in this issue of the Journal describing the prognostic relationship of a “current generation” Tn assay in patients presenting with presumed acute coronary syndromes who were enrolled in the MERLIN–TIMI 36 (Metabolic Efficiency With Ranolazine for Less Ischemia in Non-ST Elevation Acute Coronary–Thrombolysis In Myocardial Infarction 36) trial. The assay used was TnI-Ultra (ADVIA Centaur, Siemens Healthcare Diagnostics, Deerfield, Illinois), which has a lower limit of detection of 0.006 μg/l and a 99th percentile reference limit of 0.04 μg/l, with a 10% CV at 0.03 μg/l. In a nutshell, patients with TnI ≥0.04 μg/l and <0.1 μg/l had significantly more adverse outcomes (death/MI) at 30 days and 1 year compared with patients with <0.04 μg/l, thus confirming the prognostic significance of an ever-lower Tn cutoff using an ultrasensitive assay with good performance characteristics. (In fairness, it should be mentioned that a statistically significant increase in adverse outcomes was not demonstrated for patients with values between the lower limit of detection, 0.006 μg/l, and 0.04 μg/l.)
A history lesson
Despite the redefinition of MI to a Tn standard, detection of Tn alone does not equal the diagnosis of MI. The redefinition of myocardial infarction (1) explicitly states that only myocardial necrosis secondary to ischemia constitutes an MI. To further complicate matters, not all MIs (Tn elevations secondary to ischemia) are due to acute coronary syndromes. To address this and the particularly confusing issue of procedural Tn elevations, the “universal definition of myocardial infarction” was published in 2007 by the joint European Society of Cardiology/American College of Cardiology Foundation/American Heart Association/World Heart Federation (ESC/ACCF/AHA/WHF) task force on the redefinition of myocardial infarction (8). This describes the 5 categories of MI, inclusive of sudden death, the typical acute coronary syndrome, other supply-demand mismatch situations, and procedural infarcts (both percutaneous coronary intervention and coronary artery bypass graft surgery). Absent an improvement in our ability to detect ischemia, the increasingly sensitive Tn assays will continue to challenge the distinction between MI and other etiologies of myocardial necrosis.
In that regard, it should be noted that the study by Bonaca et al. (7) was performed in a highly selected and enriched population. Entry criteria defined those enrolled as having unstable angina on the basis of objective data (ECG, biomarkers) or as being at high risk for it (diabetes mellitus, TIMI risk score). This is important because any positive Tn would not be unexpected, and thus would be assumed at face value to be true, and hence diagnostic of MI. The other issue is that the Tn obtained was a single point value drawn sometime after enrollment, which was an average of 23 h after the onset of symptoms. While this may or may not affect the results of the study per se, it would be an important consideration in the clinical assessment of a patient in the emergency department.
A history lesson
Chapter 5, Modern Times (2010)
Although the diagnosis of MI is now crystal clear and based on a Tn standard, we have done little to resolve the confusion around nonischemic Tn elevations. These are frequently referred to as “false positive troponins”—or by some as “[expletive] false positive troponins.” The consequence of ever more sensitive assays will be the finding of more and more patients with small Tn elevations lacking the supportive clinical findings to assign the diagnosis of MI. This is compounded by the tendency to order Tn tests on increasing numbers of patients lacking clinical signs or symptoms suggestive of acute coronary events (those with a low pre-test probability). The result is an increasing number of undifferentiated patients with low-level Tn elevations, and consequently, a growing distrust of the test by many practitioners who fail to appreciate the clinical significance of the findings absent a diagnosis of MI. First and foremost, we must always remember that elevated Tn does not necessarily equate to an acute coronary event, and it has been that singular issue that accounts for most of the confusion.
And herein lies the crux of the issue: what is the added value of a positive Tn finding? Make no mistake, while the finding of elevated Tn imparts prognostic information, knowing that information does not necessarily change the clinical approach to the patient in a way that improves care or outcome. In fact, one could argue that many of the current low-level Tn results may in fact negatively impact patient care by promoting over-reaction on the part of clinicians. Said simply, when it comes to Tn, and for that matter, all cardiac biomarkers, it is easy to show prognosis; it is very difficult to show prognostic value. This is true in part because prognosis is generally demonstrated in large populations, whereas prognostic value relates to clinical effects on individual patients.
In the original World Health Organization definition of MI, published in 1979, the diagnostic criteria were based on the triad of ECG, clinical findings, and serial cardiac enzymes (9). A major consequence of the increasingly sensitive and specific Tn in the context of the “redefinition to a troponin standard” has been the erosion of the importance of the clinical findings, a diminished value of the ECG, and most importantly a marginalization of the value of serial biomarker measurements. As the Tn assays become more and more sensitive, and analytical performance improves, the clinical context in which results are interpreted will be increasingly important.
A history lesson
The more things change the more they stay the same. In addition to factoring in the ECG and clinical situation when making a diagnosis of MI, the temporal rise and fall in cardiac markers remains an important component of the diagnosis. This dates back to the original WHO definition, and has been reiterated in both the 2000 ESC/ACC myocardial infarction redefined consensus paper and again by the universal definition of myocardial infarction ESC/ACCF/AHA/WHF task force. Dr. Jaffe has been reminding us for well over 10 years about the importance of the temporal information derived from serial biomarker measurements. This was very eloquently reiterated as the key to differentiating low-level Tn elevations for the diagnosis of an acute coronary event versus prognostic implications in other conditions in a 2006 editorial entitled “How low can you go if you can see the rise?” (10). This was an important prognostication; as the current generation assays have become more and more sensitive and have improved analytical characteristics, the discrimination between Tn elevations that represent acute coronary events versus other pathological conditions will be based on the temporal changes and/or interpretation of the clinical findings. So, for instance, in some renal failure patients, a small Tn elevation that rises and falls back to baseline over 8 to 12 h may represent an MI, whereas a small elevation that remains constant over several days will likely not.
In conclusion, when Tn was a lousy assay, it really was a great test because when used in the appropriate context it markedly improved on the existing gold standard, CK-MB, for the diagnosis of MI. In effect, the diagnosis of MI had become a technical decision based predominantly on a laboratory result because of its apparent sensitivity and highly reliable cardiac specificity. However, as the Tn assays have improved, in both detection threshold and imprecision, higher sensitivity has become a double-edged sword. While the prognostic value of an elevated Tn continues to appear valid regardless of how low the threshold at which we are able to detect it falls—as is again demonstrated for the latest generation Tn assay by Bonaca et al. (7)—we will most certainly find more and more clinical instances where Tn can be detected outside of situations that clinically constitute MI. That will continue until such time as commercial assays are accurately measuring Tn in the true normal range with acceptable CVs. At that point, Tn will truly be a great assay, and it may well then be a really lousy test unless we can factor in “true normal” when interpreting the result. And at that time, the diagnosis of MI will again become a cognitive decision integrating both objective and clinical data.
But for now, as long as the absence of Tn is the only normal, and a positive Tn can confidently be considered abnormal, it will likely continue to contribute to prognosis regardless of etiology. With each subsequent improvement to the Tn assay, the question then will be, as it is now: does it continue to provide prognostic value? That alone is the true measure of a great test.
* Editorials published in the Journal of the American College of Cardiology reflect the views of the authors and do not necessarily represent the views of JACC or the American College of Cardiology.
References
1. Alpert J.S., Thygesen K., Bassand J.P., et al.
2. Katus H.A., Remppis A., Neumann F.J., et al.
3. Kontos M.C., Shah R., Fritz L.M., et al.
4. Wu A.H.B., Apple F.S., Warshaw M.M., Valdes R. Jr., Jesse R.L., Gibler W.B.
5. Jaffe A.S., Ravkilde J., Roberts R., et al.
6. Wu A.H., Quynh A.L., Todd J., Moecks J., Wians F.
7. Bonaca M., Scirica B., Sabatine M., et al.
8. Thygesen K., Alpert J.S., White H.D.
9. Nomenclature and Criteria for Diagnosis of Ischemic Heart Disease
10. Jaffe A.S.