Author information
- Robert J. Myerburg, MD, FACC⁎
- Agustin Castellanos, MD, FACC
- ⁎Reprint requests and correspondence:
Dr. Robert J. Myerburg, University of Miami School of Medicine, D-39, P.O. Box 016960, Miami, Florida 33101-6960.
The evolution of scientific evidence supporting the benefit of implantable cardioverter defibrillators (ICDs) began with a 16-year hiatus between the first clinical implant in 1980 and the publication of the first randomized clinical trial (RCT) results in December 1996 (1). During that interval, important perceptions developed from a combination of case-control studies, observational data, and expert consensus. Although these sources lacked the scientific authority of RCTs, they were sufficient to achieve U.S. Food and Drug Administration approval for ICDs by 1985, and in 1986 Medicare approved funding for patients surviving cardiac arrest or experiencing life-threatening arrhythmias. Calculations based on numbers of implants and the cost of devices suggest that the ICD industry approached sales of $1 billion per year by the time the first RCT was published.
Since December 1996, the results of a substantial series of RCTs have been published, targeted to both secondary and primary prevention strategies. During most of that time interval, the major debates have centered around the primary prevention strategies, given the large numbers of potential candidates and the consequent impact on health care costs. After the earlier of these studies, Medicare, and subsequently the Centers for Medicare and Medicaid Services (CMS), took a rather restrictive policy position on approval of device implants, but after publication of the results of the Sudden Cardiac Death-Heart Failure Trial (SCD-HeFT) (2), CMS proposed a much more permissive posture (3). The potential impact of that decision on health care costs may be as high as $10 to $15 billion according to CMS estimates (3). With figures of this magnitude in play, against the backdrop of a fiscally stressed health care system, it behooves clinicians, clinical investigators, and health services administrators to clarify the potential for benefit within the broad groups of ICD candidates.
Clinical trial designs
The earliest among the primary prevention clinical trials had very rigorous entry criteria. The Multicenter Automatic Defibrillator Implantation Trial (MADIT) required four clinical features for entry into the study: an ejection fraction (EF) ≤35%, ambient arrhythmias after a myocardial infarction, inducibility of a sustained arrhythmia in the electrophysiology laboratory, and failure to suppress inducibility by an antiarrhythmic drug (4). The Multicenter UnSustained Tachycardia Trial (MUSTT), which had similar qualifying requirements, was designed to determine the value of antiarrhythmic strategies for patients with ambient arrhythmias and inducible arrhythmias, and a low EF after myocardial infarction (5). One of the antiarrhythmic treatment options in the complex algorithm was ICD implantation. The ICD group fared better by far than antiarrhythmic drug-treated subjects and untreated control patients, and MUSTT was accepted as further support for ICD benefit even though ICD therapy was not randomized.
In contrast to the initial studies, MADIT II did not require ambient arrhythmias or electrophysiological inducibility, simply an EF ≤30% after a recent myocardial infarction (6). Another study, Defibrillators in Nonischemic Cardiomyopathy Treatment Evaluation (DEFINITE), included only patients with non-ischemic cardiomyopathy, PVCs or non-sustained VT, and an EF ≤35% (7). The absolute benefit, measured as reduction in total mortality, was smaller in the MADIT II and DEFINITE studies than in the more selectively designed MADIT and MUSTT studies (Table 1). At first glance, this might not seem surprising, given the less stringent requirements for entry into the MADIT II and DEFINITE studies, but it runs counter to the fact that the EFs of subjects enrolled in the MADIT II and DEFINITE studies were somewhat lower than in the studies with greater frequencies of outcome events (Table 2). This apparent paradox suggests that influences in study design that are additive to the power of EF, but are yet to be identified, contributed significantly to the net outcome experience.
Clinical application of the MADIT II study criteria
Although there has been no substantial debate about the MADIT II study in regard to its fundamental design and primary end point outcome, there were debates about its general applicability, particularly in regard to appropriate clinical applications of secondary subgroup analyses. These included the question of whether the observed clustering of benefit in patients with wide QRS complexes was a valid basis for limiting device use in patients with normal QRS durations, and whether electrophysiological inducibility might identify a subgroup that would achieve greater benefit than those without induced arrhythmias, as suggested by the MUSTT data (8). With the advent of data from the SCD-HeFT (2), designed for patients with class II or class III heart failure of ischemic or non-ischemic etiology and an EF ≤35%, the QRS duration issue faded, based in part on observations in that study and on CMS's subsequent actions. Inducibility as a discriminating factor still remained uncertain.
In this issue of the Journal, Daubert et al. (9) provide a retrospective analysis of a subgroup of patients enrolled into the MADIT II study who received ICDs after having had an electrophysiological study before entry. The study design did not permit the results of electrophysiological testing to influence the randomization process, but one wonders whether it influenced patient-physician discussions about enrolling. Regardless, because a substantial proportion of the MADIT II study enrollees who received ICDs did, indeed, undergo prior electrophysiological testing, the investigators had a database sufficiently large to permit a valid analysis. The primary end point of this substudy was an appropriate ICD shock during follow-up, analyzed as a function of pre-implantation inducibility status. Definitions of induced arrhythmias included monomorphic ventricular tachycardia (MonoVT), polymorphic ventricular tachycardia (PolyVT) and ventricular fibrillation (VF).
The investigators carried out three parallel analyses based on different definitions of induced arrhythmias. The standard definition was MonoVT or PolyVT induced by ≤3 extrastimuli, or VF induced by ≤2 extrastimuli. This was compared with a narrow definition of MonoVT only, and a broad definition that allowed MonoVT, PolyVT, or VF induced by ≤3 extrastimuli. The narrow definition group had a higher likelihood of ICD discharges than the standard definition group, and the broad definition group had a weaker association than the standard group. Interestingly, patients inducible into MonoVT or PolyVT had a lower mortality rate than non-inducible patients. Therefore, the study suggests that inducibility into more stable arrhythmias (MonoVT) predicts the occurrence of spontaneous stable arrhythmias, but that it does not provide a marker for mortality compared with non-inducible patients. The study also supports the notion that inducibility of VF, particularly with aggressive protocols, is non-specific. This is in contrast to the MUSTT substudy, which showed an increased risk of all-cause mortality in inducible patients and found that inducibility predicted a higher likelihood that a subsequent death would be by an arrhythmic mechanism (8).
This retrospective analysis from the MADIT II study does not provide data supporting the notion that inducibility in an electrophysiology laboratory will differentiate those patients within the general MADIT II study criteria who would benefit from ICD implantation. However, the primary end point in the MADIT II study, and the basis for ICD use derived from the study, was a mortality benefit, and the design of this subgroup analysis was not intended to address the mortality issue as a primary end point. Thus, the question of identification of a higher mortality risk subgroup remains unanswered. Nonetheless, it remains a pressing question. If the estimates from CMS and other sources are correct, identification of subgroups within a substantially large population of potential candidates is both clinically and economically relevant.
Practical versus optimal study designs
It is useful to look at the MADIT II study and this substudy as part of the universe of trials addressing primary mortality benefit. Unfortunately, interpretation of the entire series of clinical trials for ICD implantation, particularly among the primary prevention strategies, is hampered by design features that permit broad general conclusions, but limited specific insights. An example is the pattern of EF observations in each of the clinical trials. There is no trial in which EFs were stratified in an attempt to derive valid scientific answers to the question of whether varying levels of EFs correlate with the magnitude of clinical benefit. Table 2 lists a series of primary prevention trials and shows that the upper limit of EF ranged from 30% to 40% among the various studies, with all but two set at ≤35%. The mean or median EF values were considerably below those upper limits. Averaging down from an upper limit is a mathematical expectation when entry is so defined, but the magnitude of the deviation in these studies is striking. This pattern is relevant because once the results of a trial are accepted into clinical practice, it is the entry criterion, and not the group actually studied, that has driven practice guidelines. For each of the trials, it is likely that stratifying EFs into at least two or three ranges would have provided far more specific information on the applicability of the conclusions derived from these studies than is achievable from more general outcome figures. A hint about this dilemma was seen in a group analysis of the Antiarrhythmics Versus Implantable Defibrillators (AVID) study (a secondary prevention trial), suggesting that individuals who survive cardiac arrest and have EFs >35% fared no better with an ICD than with amiodarone (10).
As a retrospective subgroup analysis, this observation is hypothesis-generating rather than outcome-defining and has not had any impact on ICD guidelines; only a prospective study could confirm that interpretation. It could be correctly argued that stratification of the primary prevention trials into multiple tiers of EF would have been far more costly and taken longer to complete. However, viewed from the perspective of the continuing cost to society of indications that may or may not be valid, accurate insights into the outcomes of clinical trials have a far more enduring fiscal impact than a one-time increment in the cost of a major research project.
Where do we go from here?
There are two mathematical constructs that are relevant to the interpretation of any clinical trial, including the MADIT II study:
The benefit of a therapy based on relative (population domain) and absolute (individual domain) improvements in outcome
The efficiency of a therapy based on the proportion of treated patients who will have an event for which the therapy is intended
In regard to the more general primary prevention ICD study designs (e.g., MADIT II, SCD-HeFT), the relative and absolute risk reductions are in acceptable ranges, but the rates of appropriate device use raise concern about their efficiency. The cumulative fraction of ICD-treated patients who experienced appropriate shocks ranged from 5% to 12% per year (from 21% over 5 years in the SCD-HeFT study to 35% over 3 years in the MADIT II study). Thus, the majority of ICD recipients for primary prevention indications, using the MADIT II, DEFINITE, and SCD-HeFT study indications, did not experience appropriate shocks during the course of those studies. This observation underscores the need for strategies that can identify subgroups within the general target population with better predictive accuracy.
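To make the two constructs concrete, the relative risk reduction (population domain), absolute risk reduction (individual domain), and the derived number needed to treat can be computed from a trial's arm-level event rates. The sketch below uses purely hypothetical mortality figures for illustration; it is not drawn from any of the trials discussed here.

```python
def risk_metrics(control_rate, treated_rate):
    """Given event (e.g., mortality) rates in the control and treated
    arms over a trial's follow-up, return the relative risk reduction
    (RRR), absolute risk reduction (ARR), and number needed to treat
    (NNT = 1/ARR) to prevent one event."""
    arr = control_rate - treated_rate      # absolute (individual domain)
    rrr = arr / control_rate               # relative (population domain)
    nnt = 1.0 / arr                        # patients treated per event averted
    return rrr, arr, nnt

# Hypothetical example: 20% mortality in the control arm vs. 14% with
# the therapy. A large RRR can coexist with a modest ARR and a
# sizable NNT, which is the efficiency concern raised in the text.
rrr, arr, nnt = risk_metrics(0.20, 0.14)
print(f"RRR = {rrr:.0%}, ARR = {arr:.0%}, NNT = {nnt:.1f}")
```

The same arithmetic explains why a therapy with an acceptable relative benefit can still be inefficient: if only a minority of treated patients would ever have the event the therapy prevents, most implants never deliver their intended benefit.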
Despite this need, it will be difficult to perform additional clinical trials to fine-tune the efficacy and efficiency of primary prevention ICD strategies. Accordingly, CMS has mandated that, concomitant with more relaxed approval criteria, registry data for ICDs be collected post hoc to identify segments of the recently approved general population who may not achieve benefit, and thereby validate a future modification of approved implantation criteria. This is an ambitious chore, but one that is not likely to yield a data set of sufficient scientific quality for decisions of this importance. Effective enforcement of this mandate remains uncertain. Control of data quality, thoroughness of data subsets, and selection biases are all factors that might impede translation of registry results into such a strategy (11,12). Furthermore, despite the reluctance to conduct additional primary prevention studies, the cost of a registry might not differ much from that of additional prospective studies designed to answer the remaining imponderables.
The larger perspective
It is a general reality that clinical electrophysiology resides within a larger economy of health care, which, in turn, is part of society’s macroeconomy. Although we in the field of electrophysiology recognize and passionately support the clinical opportunities of ICD therapy and its important value for patient care, such knowledge is diluted in the larger pool of societal needs, and our views will not prevail with marginal or uncertain cost benefits. It is necessary that physicians, hospitals, and industry, in cooperation with organizations that control the flow of research funds, work toward the common goal of achieving greater efficacy and accuracy in ICD indications. Short of this, non-scientific influences will eventually exert control over ICD availability. These concerns derive from the experience of the economics of health care in the 1980s, during a period when the medical enterprise had warnings about cost escalations that it did not heed (13,14). As predicted, a disproportionate share of the nation’s economic wealth allocated to health care served as a catalyst for the disastrous health care reform policies of the late 1980s and early 1990s. Unfortunately, even these reforms did not improve circumstances.
As future technologies emerge, we should take heed of the lessons learned from the ICD trials. Controllers of the funding for clinical trials, whether governmental, industrial, or organizational, must be encouraged to recognize that quick, less expensive general answers to complex scientific questions might look good on one year’s balance sheet, but will not likely provide an enduring benefit to patients, physicians, or industry, as health care delivery issues become more complicated and costly. As we should have learned in the 1980s, a societal revolution against perceived excesses, once started, will be beyond control by any elements of the medical complex (13). Society’s voice, expressed through political forces, will ultimately prevail.
Will we have an opportunity to respond effectively to the specific issue addressed in this overview? We believe so, but that is for another discussion.
Dr. Myerburg is supported in part by the American Heart Association Chair in Cardiovascular Research and the Cardiovascular Genetics Center funded by the Miami Heart Research Institute.
⁎ Editorials published in the Journal of the American College of Cardiology reflect the views of the authors and do not necessarily reflect the views of JACC or the American College of Cardiology.
- Buxton A.E., Lee K.L., Fisher J.D., Josephson M.E., Prystowsky E.N., Hafley G., Multicenter Unsustained Tachycardia Trial Investigators
- Buxton A.E., Lee K.L., Hafley G.E., et al., MUSTT Investigators
- Daubert J.P., Zareba W., Hall W.J., et al.
- Domanski M.J., Sakseena S., Epstein A.E., et al., AVID Investigators
- Myerburg R.J., Mitrani R., Interian A. Jr., Castellanos A.
- Huikuri H.V., Makikallio T.H., Raatikainen M.J., Perkiomaki J., Castellanos A., Myerburg R.J.