Author and article information
- Received April 1, 2013
- Revision received May 22, 2013
- Accepted May 28, 2013
- Published online August 13, 2013.
- Paul S. Chan, MD, MS∗,†,
- Robert A. Berg, MD‡,§,
- John A. Spertus, MD, MPH∗,†,
- Lee H. Schwamm, MD⋮,
- Deepak L. Bhatt, MD, MPH¶,
- Gregg C. Fonarow, MD#,
- Paul A. Heidenreich, MD, MS∗∗,
- Brahmajee K. Nallamothu, MD, MPH††,
- Fengming Tang, MS†,
- Raina M. Merchant, MD, MSHP§,
- AHA GWTG-Resuscitation Investigators
- ∗Saint Luke's Mid America Heart Institute, Kansas City, Missouri
- †University of Missouri, Kansas City, Missouri
- ‡Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- §University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania
- ⋮Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
- ¶VA Boston Healthcare System, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts
- #Ronald Reagan-UCLA Medical Center, Los Angeles, California
- ∗∗VA Palo Alto Health Care System, Palo Alto, California
- ††VA Health Services Research and Development Center of Excellence, VA Ann Arbor Healthcare System, Department of Internal Medicine and Center for Healthcare Outcomes and Policy, University of Michigan, Ann Arbor, Michigan
- ∗Reprint requests and correspondence:
Dr. Paul S. Chan, Mid America Heart Institute, 5th Floor, 4401 Wornall Road, Kansas City, Missouri 64111.
Objectives The purpose of this study is to develop a method for risk-standardizing hospital survival after cardiac arrest.
Background A foundation for hospital quality improvement is the ability to benchmark risk-adjusted performance against other hospitals, which currently cannot be done for survival after in-hospital cardiac arrest.
Methods Within the Get With The Guidelines (GWTG)-Resuscitation registry, we identified 48,841 patients admitted between 2007 and 2010 with an in-hospital cardiac arrest. Using hierarchical logistic regression, we derived and validated a model for survival to hospital discharge and calculated risk-standardized survival rates (RSSRs) for 272 hospitals with at least 10 cardiac arrest cases.
Results The survival rate was 21.0% and 21.2% for the derivation and validation cohorts, respectively. The model had good discrimination (C-statistic 0.74) and excellent calibration. Eighteen variables were associated with survival to discharge, and a parsimonious model contained 9 variables with minimal change in model discrimination. Before risk adjustment, the median hospital survival rate was 20% (interquartile range: 14% to 26%), with a wide range (0% to 85%). After adjustment, the distribution of RSSRs was substantially narrower: median of 21% (interquartile range: 19% to 23%; range 11% to 35%). More than half (143 [52.6%]) of hospitals had at least a 10% positive or negative absolute change in percentile rank after risk standardization, and 50 (23.2%) had a ≥20% absolute change in percentile rank.
Conclusions We have derived and validated a model to risk-standardize hospital rates of survival for in-hospital cardiac arrest. Use of this model can support efforts to compare hospitals in resuscitation outcomes as a foundation for quality assessment and improvement.
In-hospital cardiac arrest is common, affecting approximately 200,000 patients annually in the United States (1). Rates of survival, however, can vary substantially across hospitals (2). As a foundation for improving quality in their cardiovascular registries, the American Heart Association (AHA) and the American College of Cardiology have developed methods to risk-standardize hospital outcomes for other conditions and procedures. More recently, the Joint Commission and the AHA have expressed interest in developing performance metrics for in-hospital cardiac arrest to facilitate benchmarking and comparison of survival outcomes among hospitals.
Unlike process-of-care measures for resuscitation (e.g., timely defibrillation), whose performance should be independent of patient characteristics and which therefore do not require risk adjustment, survival measures require risk standardization to account for variation in patient case-mix across sites and permit fair comparisons across hospitals (3). Although risk-adjustment models for survival already exist for other medical conditions, such as acute myocardial infarction, heart failure, and community-acquired pneumonia (4,5), a validated model to risk-standardize survival after in-hospital cardiac arrest has not been developed. This methodological gap is a significant barrier to identifying high- and low-performing hospitals in order to disseminate best practices and promote quality improvement.
To address this gap in knowledge, we derived and validated a hierarchical regression model to calculate risk-standardized hospital rates of survival after in-hospital cardiac arrest, using data from Get With The Guidelines (GWTG)-Resuscitation, the largest repository of data on hospitalized patients with cardiac arrest. We also assessed the stability of the model over time by examining its performance across multiple years and different time periods. This outcome model can support ongoing quality assessment and improvement efforts.
GWTG-Resuscitation, formerly known as the National Registry of Cardiopulmonary Resuscitation, is a large, prospective, national quality-improvement registry of in-hospital cardiac arrest sponsored by the AHA. Its design has been described in detail previously (6). In brief, trained quality-improvement hospital personnel enroll all patients with a cardiac arrest (defined as the absence of a palpable central pulse, apnea, and unresponsiveness) who are treated with resuscitation efforts and do not have do-not-resuscitate (DNR) orders. Cases are identified by multiple methods, including centralized collection of cardiac arrest flow sheets, reviews of hospital paging system logs, and routine checks of code carts, pharmacy tracer drug records, and hospital billing charges for resuscitation medications (6). The registry uses standardized “Utstein-style” definitions for all patient variables and outcomes to facilitate uniform reporting across hospitals (7,8). Data accuracy is ensured by rigorous certification of hospital staff and by standardized software with built-in checks for completeness; a prior report determined an error rate in data abstraction of 2.4% (6).
From 2000 to 2010, a total of 122,746 patients 18 years of age or older with an index in-hospital cardiac arrest were enrolled in GWTG-Resuscitation. Since in-hospital survival rates have improved over time (9), we restricted our study population to 48,841 patients from 356 hospitals enrolled between 2007 and 2010 to ensure that our risk models were based on a contemporary cohort of patients.
Study outcome and variables
The primary outcome of interest was survival to hospital discharge, which was obtained from the GWTG-Resuscitation registry.
In all, 26 baseline characteristics were screened as candidate predictors for the study outcome. These included age (categorized in 10-year intervals of <50, 50 to 59, 60 to 69, 70 to 79, and ≥80), sex, location of arrest (categorized as intensive care, monitored unit, nonmonitored unit, emergency room, procedural/surgical area, and other), and initial cardiac arrest rhythm (ventricular fibrillation, pulseless ventricular tachycardia, asystole, pulseless electrical activity). In addition, the following comorbidities or medical conditions present before cardiac arrest were evaluated for the model: heart failure, myocardial infarction, or diabetes mellitus; renal, hepatic, or respiratory insufficiency; baseline evidence of motor, cognitive, or functional deficits (CNS depression); acute stroke; acute non-stroke neurologic disorder; pneumonia; hypotension; sepsis; major trauma; metabolic or electrolyte abnormality; and metastatic or hematologic malignancy. Finally, we considered for model inclusion several critical care interventions (mechanical ventilation, intravenous vasopressor support, pulmonary artery catheter, intra-aortic balloon pump, or dialysis) already in place at the time of cardiac arrest. Race was not considered for model inclusion, as prior studies have found that racial differences in survival after in-hospital cardiac arrest are partly mediated by differences in hospital care quality for blacks and whites (3,10).
Model development and validation
We randomly selected two-thirds of the study population for the derivation cohort and one-third for the validation cohort. We confirmed that a similar proportion of patients from each hospital and calendar year were represented in the derivation and validation cohorts. Baseline differences between patients in the derivation and validation cohorts were evaluated using chi-square tests for categorical variables and Student t tests for continuous variables. Because of the large sample size, we also evaluated for significant differences between the 2 cohorts by computing standardized differences for each covariate. Based on prior work, a standardized difference of >10 was used to define a significant difference (11).
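For a binary covariate, the standardized difference referred to above is the between-cohort difference in prevalence divided by the pooled standard deviation, expressed in percent. A minimal sketch (the function name and example prevalences are our illustration, not study data):

```python
import math

def std_diff_binary(p1: float, p2: float) -> float:
    """Standardized difference (in percent) between two cohorts'
    prevalences of a binary covariate (e.g., prior heart failure)."""
    pooled_var = (p1 * (1 - p1) + p2 * (1 - p2)) / 2.0
    if pooled_var == 0:
        return 0.0
    return 100.0 * abs(p1 - p2) / math.sqrt(pooled_var)

# e.g., 24% prevalence in the derivation cohort vs 23% in validation
d = std_diff_binary(0.24, 0.23)
print(round(d, 1))  # → 2.4, well under the >10 threshold
```

Unlike a p-value, this quantity does not shrink toward significance as the sample grows, which is why it is the preferred balance check in very large cohorts.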
Within the derivation sample, multivariable models were constructed to identify significant predictors of in-hospital survival. Because our primary objective was to derive risk-standardized survival rates for each hospital, which would require us to account for clustering of observations within hospitals, we used hierarchical logistic regression models for our analyses (12). By using hierarchical models to estimate the log-odds of in-hospital survival as a function of demographic and clinical variables (both fixed effects) and a random effect for each hospital, this approach allowed us to assess for hospital variation in risk-standardized survival rates after accounting for patient case-mix.
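The random-intercept structure described above can be written compactly (notation ours, not from the original report):

```latex
\[
\operatorname{logit}\,\Pr(Y_{ij} = 1) \;=\; \alpha \;+\; \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} \;+\; u_j,
\qquad u_j \sim N(0, \tau^2),
\]
```

where \(Y_{ij}\) indicates survival to discharge for patient \(i\) at hospital \(j\), \(\mathbf{x}_{ij}\) collects the demographic and clinical covariates (fixed effects), and \(u_j\) is hospital \(j\)'s random intercept, whose variance \(\tau^2\) captures between-hospital variation after accounting for case-mix.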
We considered for model inclusion the candidate variables previously described in the Study Outcome and Variables section. Multicollinearity between covariates was assessed for each variable before inclusion (13). To ensure parsimony and inclusion of only those variables that provided incremental prognostic value, we employed the approximation of full model methodology for model reduction (14). The contribution of each significant model predictor was ranked, and variables with the smallest contribution to the model were sequentially eliminated. This process was iterated until further variable elimination would have produced a greater than 5% loss in model prediction compared with the initial full model.
Model discrimination was assessed with the C-statistic, and model validation was performed in the remaining one-third of the study cohort by examining observed versus predicted plots. We also evaluated the robustness of our findings by reconstructing the models with data from: 1) only 2010; 2) 2009 to 2010; and 3) 2008 to 2010, and comparing the predictors and estimates of these models with that from the main study period (from 2007 to 2010). On validation of the model, we pooled patients from the derivation and validation cohorts and reconstructed a final hierarchical regression model to derive estimates from the entire study sample for risk standardization.
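The C-statistic used here to assess discrimination is the probability that the model assigns a higher predicted survival to a randomly chosen survivor than to a randomly chosen non-survivor, with ties counted as one-half. A minimal pairwise sketch (function name and toy scores are ours; real implementations use faster rank-based formulas):

```python
def c_statistic(scores_pos, scores_neg):
    """C-statistic: P(predicted survival of a random survivor >
    that of a random non-survivor), counting ties as 0.5."""
    concordant = 0.0
    for p in scores_pos:          # predicted probabilities for survivors
        for n in scores_neg:      # predicted probabilities for non-survivors
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(scores_pos) * len(scores_neg))

# Toy example: 3 survivors, 2 non-survivors
print(round(c_statistic([0.9, 0.7, 0.6], [0.8, 0.4]), 3))  # → 0.667
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination; the study's 0.74 sits in the range conventionally described as good.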
Hospital risk-standardized survival rates
Using the hospital-specific estimates (i.e., random intercepts) from the hierarchical models, we then calculated risk-standardized survival rates for the 272 hospitals with at least 10 cardiac arrest cases by multiplying the registry's unadjusted survival rate by the ratio of a hospital's predicted to expected survival rate. We used the ratio of predicted to expected outcomes (described in the following text) instead of the ratio of observed to expected outcomes to overcome analytical issues that have been described for the latter approach (15–17). Specifically, our approach ensured that all hospitals, including those with relatively small case volumes, would have appropriate risk standardization of their cardiac arrest survival rates.
For these calculations, the expected number of cardiac arrest survivors at a hospital is the number of survivors expected if the hospital's patients had been treated at a “reference” hospital (i.e., one with the average hospital-level intercept from all hospitals in GWTG-Resuscitation). This was determined by regressing patients' risk factors and characteristics on in-hospital survival across all hospitals in the sample, applying the estimated regression coefficients to the patient characteristics observed at a given hospital, and then summing the expected number of survivors. In effect, the expected rate is a form of indirect standardization. In contrast, the predicted hospital outcome is the number of survivors at a specific hospital. It is determined in the same way as the expected number of survivors, except that the hospital's individual random-effect intercept is used. The risk-standardized survival rate was then calculated as the ratio of the predicted to the expected survival rate, multiplied by the unadjusted rate for the entire study sample.
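The predicted-to-expected calculation can be sketched as follows. The function, toy linear predictors, and intercept value are our illustration; 0.211 stands in for the registry's overall unadjusted survival rate:

```python
import math

def inv_logit(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def risk_standardized_rate(patient_logits, hospital_intercept, overall_rate):
    """Risk-standardized survival rate for one hospital.

    patient_logits: fixed-effect linear predictor (x'beta) for each of the
      hospital's cardiac arrest patients, from the pooled model.
    hospital_intercept: the hospital's estimated random intercept.
    overall_rate: unadjusted survival rate for the whole registry.
    """
    # Predicted survivors: apply the hospital's own random intercept.
    predicted = sum(inv_logit(xb + hospital_intercept) for xb in patient_logits)
    # Expected survivors: same patients at the "reference" hospital
    # (random intercept of 0, i.e., the average hospital).
    expected = sum(inv_logit(xb) for xb in patient_logits)
    return overall_rate * predicted / expected

# Toy example: 4 patients at a hospital performing above average (+0.3 log-odds)
print(risk_standardized_rate([-1.2, -0.8, -1.5, 0.1], 0.3, 0.211))
```

Note that a hospital with an average random intercept (0) recovers the registry-wide rate exactly, which is the property that keeps low-volume hospitals from being penalized by small-sample noise.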
The effects of risk standardization on unadjusted hospital rates of survival were then illustrated with descriptive plots and statistics. In addition, we examined the absolute change (positive or negative) in percentile rank for each hospital after risk standardization. This approach overcomes the inherent limitation of examining only the proportion of hospitals reclassified out of the top quintile with risk standardization, as some hospitals may be reclassified with only a 1% decrease in percentile rank (e.g., from the 80th to the 79th percentile), whereas other hospitals would require up to a 20% decrease in percentile rank to be reclassified (e.g., hospitals at the unadjusted 99th percentile).
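The percentile-rank comparison can be sketched as below; the hospital rates and the tie-handling rule are hypothetical illustrations, not study data:

```python
def percentile_ranks(rates):
    """Percentile rank (0 to 100) of each hospital within the list, by
    ascending survival rate; ties take the lowest rank. Needs >= 2 hospitals."""
    n = len(rates)
    ordered = sorted(rates)
    return [100.0 * ordered.index(r) / (n - 1) for r in rates]

unadjusted = [10, 20, 30, 15, 25]     # hypothetical crude survival rates (%)
standardized = [22, 20, 18, 24, 21]   # hypothetical risk-standardized rates (%)

before = percentile_ranks(unadjusted)
after = percentile_ranks(standardized)
changes = [abs(a - b) for a, b in zip(after, before)]
shifted = sum(c >= 10 for c in changes)
print(shifted, "of", len(changes), "hospitals shifted >= 10 percentile points")
```

In the study this comparison showed that over half of hospitals shifted by at least 10 percentile points once case-mix was accounted for.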
Because rates of do-not-resuscitate (DNR) orders may vary across hospitals and influence rates of in-hospital cardiac arrest survival, we conducted the following sensitivity analysis to examine the robustness of our findings. For hospitals in the lower 2 quartiles of risk-standardized survival, we assumed that the rate of DNR status for all admissions was 5%. We then assigned DNR rates at hospitals in the top and second highest quartiles to be 100% and 50% greater, respectively, than that of the lower 2 quartiles. We assumed the rate of in-hospital cardiac arrest for DNR patients to be 5% and calculated the number of cardiac arrests at each hospital that would have occurred if no patients were made DNR. For instance, for a hospital in the highest quartile of survival with 10,000 annual admissions, an additional 50 cardiac arrests (10,000 × 0.10 [DNR rate] × 0.05 [rate of cardiac arrest]) were added to the denominator for each year of data submission.
For each of these “imputed” patients, we assigned an age of ≥80 years and 1 of the following characteristics: renal insufficiency, cancer, or hypotension. We then recalculated risk-standardized survival rates for the entire hospital sample and examined what proportion of hospitals in the original analysis was no longer classified in their quartile of risk-standardized hospital survival rates. If only a minority of hospitals were recategorized into a different quartile, that would suggest that our classification of hospitals in the top 2 quartiles was robust and persisted despite a higher DNR rate for their admitted patients.
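The sensitivity analysis's arithmetic can be reproduced directly; the function below simply encodes the stated assumptions (5% baseline DNR rate, 100% and 50% higher rates in the top two quartiles, 5% arrest rate among DNR patients):

```python
def imputed_arrests(annual_admissions: int, quartile: int) -> float:
    """Additional cardiac arrests added per year to a hospital's
    denominator under the DNR sensitivity analysis's assumptions."""
    base_dnr_rate = 0.05  # assumed DNR rate, lower two survival quartiles
    # Top quartile: 100% greater; second quartile: 50% greater; else unchanged.
    multiplier = {1: 2.0, 2: 1.5, 3: 1.0, 4: 1.0}[quartile]
    arrest_rate_among_dnr = 0.05  # assumed arrest rate had DNR patients been full code
    return annual_admissions * base_dnr_rate * multiplier * arrest_rate_among_dnr

print(imputed_arrests(10_000, 1))  # → 50.0, matching the worked example above
```

Each such imputed patient is then assigned the high-risk profile described above (age ≥80 years plus renal insufficiency, cancer, or hypotension) before rates are recalculated.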
All study analyses were performed with SAS version 9.2 (SAS Institute, Cary, North Carolina) and R version 2.10.0 (18). The hierarchical models were fitted with the use of the GLIMMIX macro in SAS.
Dr. Chan had full access to the data and takes responsibility for its integrity. All authors have read and agree to the manuscript as written. The institutional review board of the Mid America Heart Institute waived the requirement of informed consent, and the AHA approved the final manuscript draft.
Of 48,841 patients in the study cohort, 32,560 were randomly selected for the derivation cohort and 16,281 for the validation cohort. Baseline characteristics of the patients in the derivation and validation cohorts were similar, based on comparisons of both p-values and standardized differences (Table 1). The mean patient age in the overall cohort was 65.6 ± 16.1 years, 58% were male, and 21% were black. More than 80% of patients had a nonshockable cardiac arrest rhythm of asystole or pulseless electrical activity, and nearly half were already in an intensive care unit during the arrest. Respiratory insufficiency and renal insufficiency were the most prevalent comorbidities, whereas one-quarter of patients were hypotensive and one-third were receiving mechanical ventilation at the time of cardiac arrest.
Overall, 10,290 (21.1%) patients with an in-hospital cardiac arrest survived to hospital discharge. The survival rates were similar in the derivation (n = 6,844; 21.0%) and validation cohorts (n = 3,446; 21.2%). A comparison of baseline characteristics between patients who survived and did not survive to hospital discharge is provided in Online Table 1. In general, patients who survived were younger, more frequently white, more likely to have an initial cardiac arrest rhythm of ventricular fibrillation or pulseless ventricular tachycardia, and to have fewer comorbidities or interventions in place (e.g., intravenous vasopressors) at the time of cardiac arrest.
Initially, 18 independent predictors were identified in the derivation cohort with the multivariable model, resulting in a model C-statistic of 0.738 (Table 2; see Online Table 2 for variable definitions). After model reduction to generate a parsimonious model with no more than 5% loss in model prediction, our final model comprised 9 variables, with only a small change in the C-statistic (0.734). The predictors in the final model included age, initial cardiac arrest rhythm, hospital location of arrest, hypotension, septicemia, metastatic or hematologic malignancy, hepatic insufficiency, and requirement for mechanical ventilation or intravenous vasopressor before cardiac arrest. The beta-coefficient estimates and adjusted odds ratios are summarized in Table 3. Importantly, there was no evidence of multicollinearity between any of these variables (all variance inflation factors <1.5).
When the model was tested in the independent validation cohort, model discrimination was similar (C-statistic of 0.737). Calibration was confirmed with observed versus predicted plots in both the derivation and validation cohorts (R2 of 0.99 for both). When we repeated the analyses using data from year 2010 only, 2009 to 2010, and 2008 to 2010, our model predictors were unchanged, and the estimates of effect for each predictor were similar.
Figure 1 depicts the unadjusted and risk-standardized distribution of hospital rates of cardiac arrest survival (see Online Table 3 for calculations of the risk-standardized rates). The mean unadjusted hospital survival rate was 21 ± 13%, whereas the mean risk-standardized hospital survival rate of 21 ± 4% showed a much narrower distribution. Similarly, the median unadjusted hospital survival rate was 20% (interquartile range 14% to 26%; range 0% to 85%), whereas the interquartile range and range for the risk-standardized hospital survival rates were substantially smaller: median of 21% (interquartile range: 19% to 23%; range 11% to 35%). Nine (3.3%) of the 272 hospitals had risk-standardized survival rates of ≥30%, or ∼50% higher than the average hospital.
To examine the effect of risk standardization at individual hospitals, the change in percentile rank for each hospital was examined (Fig. 2). Of 272 hospitals, 143 (52.6%) had at least a 10% positive or negative absolute change in percentile rank after risk standardization (e.g., a hospital ranked at the 39th percentile before and the 53rd percentile after risk standardization). Moreover, 50 hospitals (23.2%) had a substantial ≥20% absolute change in percentile rank, with 24 having a ≥20% increase and 26 having a ≥20% decrease.
Finally, we found that our study findings were unlikely to be influenced by higher rates of DNR at hospitals with higher risk-standardized survival. Only 1 of 68 hospitals in the top quartile of risk-standardized survival was reclassified to a different quartile, even after assuming that hospitals in the top quartile had DNR rates that were twice the DNR rate of the lower 2 quartiles. Similarly, only 1 of 68 hospitals in the second highest quartile of risk-standardized survival was reclassified, even after assuming that these hospitals had DNR rates that were 50% higher than those in the lower 2 quartiles (Online Table 4).
Within a large national registry, we derived and validated a risk-adjustment model for survival after in-hospital cardiac arrest. The model was based on 9 clinical variables that are easy to identify and collect. Moreover, the model had good discrimination and excellent calibration. Importantly, our model adhered to recommended standards for models employed in public reporting, including the use of hierarchical models, timely and high-quality data, and a clearly defined study population and outcomes (3). As a result, we believe this model provides a mechanism to generate risk-standardized survival rates that facilitate more accurate comparisons of resuscitation outcomes across hospitals.
Because substantial variation in hospital survival rates after in-hospital cardiac arrest exists (2), there are currently efforts to measure hospital performance for this condition. The Joint Commission, for instance, is developing a number of metrics to assess hospital performance in resuscitation. The AHA's GWTG-Resuscitation national registry has also developed a number of target benchmarks to highlight hospitals with exceptional performance. Most of these performance metrics are process-oriented, such as time to defibrillation and time to initiation of cardiopulmonary resuscitation, and are therefore independent of confounding by patient case-mix. However, both organizations also plan to profile survival outcomes after cardiac arrest.
In contrast to process measures, several key challenges exist in comparing survival outcomes across hospitals. First, and most important, hospital variation in survival may simply reflect heterogeneity in patients' case-mix: hospitals whose cardiac arrest patients have higher illness acuity may have lower survival rates. To date, a risk-adjustment model that uses appropriate analytical techniques to account for nesting of data within hospitals (i.e., hierarchical models) has not been derived and validated. Although several multivariable models for in-hospital cardiac arrest exist (19,20), these have not been validated, were based on less contemporary cohorts of patients, and used analytical approaches that do not adequately account for clustering of patients within hospitals. Therefore, these other models may have underestimated standard errors, which can lead to type I errors in inferences regarding statistical significance and inappropriately label certain hospitals as performing better, or worse, than average (21). Moreover, unlike the hierarchical models used in this study, these other approaches have no mechanism to weight the number of observations contributed by each hospital to account for differences in sample sizes across hospitals.
Second, prior efforts in risk standardization for other disease conditions have been based on the ratio of observed to expected outcomes. This approach has significant limitations (16,17), especially the inability to risk-standardize rates for sites with low case volumes. In this study, we overcame both of these barriers by deriving and validating a risk-adjustment model using hierarchical random-effects models and basing our risk standardization on the ratio of predicted to expected outcomes (15), thereby allowing us to generate risk-standardized rates for hospitals in the study.
Without risk standardization, differences in hospital survival rates for in-hospital cardiac arrest may be due to differences in 1) patient case-mix and 2) quality of care between hospitals. From a quality perspective, only the latter is of interest. With our risk-standardization approach, which controlled for differences in patient case-mix across hospitals, the range of hospital survival rates narrowed substantially, with the interquartile range decreasing from 12% to 4%. Even more important, more than half of hospitals changed in percentile rank by at least 10%, and nearly a quarter changed by 20% or more, indicating that risk standardization (to account for differences in case-mix) has a significant impact on the assessment of a hospital's survival outcomes for in-hospital cardiac arrest. Both findings suggest that simple comparisons of unadjusted hospital survival rates would be problematic and likely to lead to incorrect inferences.
Importantly, despite the reduction in variability with our risk-adjustment methodology, notable differences in risk-standardized rates of survival remained, suggesting that some hospitals achieved higher survival rates than others. For instance, 9 of 272 (3.3%) hospitals had risk-standardized survival rates of ≥30%, or ∼50% higher than the average hospital. The hospital factors and quality improvement initiatives associated with higher survival at these hospitals remain unknown. Therefore, identifying best practices at these top-performing hospitals should be a priority (22), as their dissemination has the potential to significantly improve survival for all patients with in-hospital cardiac arrest.
Our study should be interpreted in the context of the following limitations. First, although our risk model was able to account for a number of clinical variables, unmeasured confounding may exist. Specifically, our model did not have information on some prognostic factors, such as creatinine or the severity level for each comorbid condition. In addition, thorough documentation of patients' case-mix (e.g., comorbidities) and access to telemetry and intensive care unit monitoring may differ across sites, which could account for some of the hospital variation in risk-standardized survival rates. Second, our model did not adjust for intra-arrest variables (such as quality of cardiopulmonary resuscitation and time to defibrillation) which are known to influence survival outcomes. However, because these latter variables are attributes specific to a hospital's performance, their inclusion in a model developed to profile hospitals for resuscitation performance would be improper (3). Third, we did not have information on DNR status for all admitted patients or the proportion of deaths with attempted resuscitation at each hospital, and this rate is likely to vary across hospitals. Such variation is likely to affect a hospital's crude rank performance for cardiac arrest survival. However, in our sensitivity analyses, we found that a hospital's risk-standardized rank performance was relatively unaffected by variation in DNR rates across sites, thus underscoring the importance of risk standardization for meaningful comparisons of in-hospital cardiac arrest survival across hospitals.
Fourth, our study population was limited to hospitals participating within the AHA's GWTG-Resuscitation program. Therefore, our findings may not apply to nonparticipating hospitals. Fifth, our model was developed in patients with in-hospital cardiac arrest. Because the reasons for cardiac arrest and comorbidity burden differ for patients with out-of-hospital cardiac arrest, our findings do not apply to cardiac arrests occurring outside hospitals. Finally, we have not developed a model for survival with good neurological outcome. Although this is an important consideration for patients with in-hospital cardiac arrest and should be the focus of a future study, our goal was to develop a risk-standardization model for in-hospital survival, as this is the outcome proposed by national organizations for a performance measure.
Given poor survival outcomes for in-hospital cardiac arrest, there is growing national interest in developing performance metrics to benchmark hospital survival for this condition. In this study, we have developed and validated a model to risk-standardize hospital rates of survival for in-hospital cardiac arrest. We believe that use of this model to adjust for patient case-mix represents an advance in ongoing efforts to profile hospitals in resuscitation outcomes, with the hope that clinicians and administrators will be stimulated to develop novel and effective quality improvement strategies to improve their hospital's performance.
For a list of the AHA GWTG-Resuscitation (formerly, the National Registry of Cardiopulmonary Resuscitation) investigators and supplementary tables, please see the online version of this article.
The American Heart Association (AHA) Get With The Guidelines-Resuscitation Investigators (formerly, the National Registry of Cardiopulmonary Resuscitation) are listed in the Online Appendix. The underlying research reported in the article was funded by the U.S. National Institutes of Health. Drs. Chan (K23HL102224) and Merchant (K23109083) are supported by Career Development Grant Awards from the National Heart, Lung, and Blood Institute (NHLBI). Dr. Chan is also supported by funding from the AHA. GWTG-Resuscitation is sponsored by the AHA. Dr. Schwamm is the Chair of the AHA's GWTG National Steering Committee. Dr. Bhatt is on the advisory board of Medscape Cardiology and the Board of Directors of Boston VA Research Institute and the Society of Chest Pain Centers; is Chair of the AHA GWTG Science Subcommittee; has received honoraria from the American College of Cardiology (Editor, Clinical Trials, Cardiosource), Duke Clinical Research Institute (clinical trial steering committees), Slack Publications (Chief Medical Editor, Cardiology Today Intervention), and WebMD (CME steering committees); is the Senior Associate Editor, Journal of Invasive Cardiology; has received research grants from Amarin, AstraZeneca, Bristol-Myers Squibb, Eisai, Ethicon, Medtronic, Sanofi Aventis, and The Medicines Company; and has conducted unfunded research for FlowCo, PLx Pharma, and Takeda. Dr. Fonarow has received grant funding from the NHLBI and AHRQ and has consulted for Novartis and Medtronic. Dr. Spertus has received grant funding from the NIH, AHA, Lilly, Amorcyte, and Genentech; serves on Scientific Advisory Boards for United Healthcare, St. Jude Medical, and Genentech; serves as a paid editor for Circulation: Cardiovascular Quality and Outcomes; has intellectual property rights for the Seattle Angina Questionnaire, Kansas City Cardiomyopathy Questionnaire, and Peripheral Artery Questionnaire; and has an equity interest in Health Outcomes Sciences. Dr. Merchant has received grant funding from the NIH (K23 Grant 10714038), Physio-Control, Zoll Medical, Cardiac Science, and Philips Medical. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose.
- Abbreviations and Acronyms
- AHA: American Heart Association
- DNR: do not resuscitate
- GWTG: Get With The Guidelines
R Development Core Team (2008). R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Available at: http://www.R-project.org.