Author + information
- Received January 21, 2011
- Revision received April 14, 2011
- Accepted May 10, 2011
- Published online August 2, 2011.
- Tracy Y. Wang, MD, MHS, MSc⁎,
- David Dai, PhD, MS⁎,
- Adrian F. Hernandez, MD, MHS⁎,
- Deepak L. Bhatt, MD, MPH†,
- Paul A. Heidenreich, MD‡,
- Gregg C. Fonarow, MD§ and
- Eric D. Peterson, MD, MPH⁎
- ⁎Reprint requests and correspondence:
Dr. Tracy Y. Wang, Duke Clinical Research Institute, 2400 Pratt Street, Durham, North Carolina 27705
Objectives This study examined the degree to which hospital performance on acute myocardial infarction (AMI) and heart failure (HF) care processes is correlated.
Background Although AMI and HF care processes may be amenable to similar quality improvement interventions, whether performance on the two is indeed correlated, and whether hospitals with consistently superior performance on both care metrics have the best outcomes, remain unknown.
Methods We compared hospital performance of the Centers for Medicare & Medicaid Services AMI and HF core measures in 283 hospitals submitting 10 or more patients to the Get With The Guidelines AMI and HF programs between January 2005 and April 2009.
Results Median hospital adherence to the AMI and HF composite measures was 93% (interquartile range: 87% to 97%) and 92% (interquartile range: 85% to 96%), respectively, with only a modest correlation between hospital performance on these 2 composite metrics (r = 0.50; 95% confidence interval: 0.41 to 0.58). Hospitals with superior performance on both AMI and HF processes had a significantly longer duration of Get With The Guidelines participation and lower adjusted in-hospital mortality (odds ratio: 0.79; 95% confidence interval: 0.63 to 0.99) for AMI and HF patients, whereas hospitals with superior adherence to either measure alone had mortality rates similar to those of hospitals with superior adherence to neither.
Conclusions Hospitals with consistently superior performance on both AMI and HF care had significantly lower risk-adjusted mortality than those with superior performance on either measure alone or on neither. Whether a single scoring system assessing global, rather than condition-specific, quality of cardiovascular care would facilitate more consistent quality improvement and optimize patient outcomes merits further investigation.
Despite recent advances, acute myocardial infarction (AMI) and heart failure (HF) remain associated with significant morbidity and mortality. Studies have demonstrated that these outcomes can be improved with appropriate treatments, which have been summarized into national guidelines and performance standards. These care processes are targets of hospital quality improvement initiatives (e.g., the American Heart Association's Get With The Guidelines [GWTG] program) and are used by accreditation agencies (e.g., the Joint Commission) and payers (e.g., the Centers for Medicare & Medicaid Services [CMS]) to assess hospital quality of care (1,2).
Nevertheless, there are limited data on how well AMI and HF process performances are associated with one another. Specifically, do hospitals that perform well on AMI metrics also perform well on HF metrics? Because many of these care processes are amenable to similar hospital quality improvement interventions, one would expect their performance to be strongly correlated, yet the relationship between these care processes has not been previously characterized. Additionally, studies linking care process performance with outcomes have been limited. Although a few prior studies have found that hospitals with superior adherence to AMI guideline recommendations and performance measures have modestly lower risk-adjusted in-hospital mortality rates (3,4), results from HF studies have been more mixed (5–8).
For purposes of this study, we examined hospitals participating in the GWTG program with the following objectives: (1) to assess whether hospital-level AMI and HF process performance were associated with each other; (2) to describe the patient and hospital features of those centers that provided consistently superior AMI and HF process performance; and (3) to evaluate whether centers with consistently superior process performance also had superior AMI and HF patient outcomes.
The GWTG program is a large, national, observational registry started in 2000 to support and facilitate quality improvement in the care of patients with cardiovascular disease. Details of GWTG have been described previously (9,10). In brief, the GWTG-coronary artery disease (CAD) program enrolls patients hospitalized with a confirmed diagnosis of CAD (International Classification of Diseases-Ninth Edition codes 410 through 414), whereas the GWTG-HF program enrolls patients with a confirmed diagnosis of HF (International Classification of Diseases-Ninth Edition code 428). Participation in GWTG is voluntary, but sites use the data to support their performance reporting to the Joint Commission and CMS. As such, GWTG hospitals adhere to the CMS standard for core measure reporting, including submission of consecutive eligible patients, to each database. However, CMS does allow hospitals with more than 75 admissions per quarter to use random sampling; thus, there are a few large centers that choose to report a random selection of cases on a quarterly basis. A prior study evaluated the representativeness of the Organized Program to Initiate Lifesaving Treatment in Hospitalized Patients with Heart Failure registry, the precursor to the GWTG-HF program (renamed after a sponsorship change) with largely overlapping hospitals. That study compared registry patients with all other fee-for-service Medicare beneficiaries hospitalized nationwide for HF in the same time frame and found similar patient characteristics and in-hospital outcomes, without evidence of selection bias (11).
Trained data abstractors at participating GWTG hospitals collect detailed information on baseline demographic and clinical characteristics, in-hospital care processes and outcomes, and discharge treatment using a standardized set of data elements and definitions. All GWTG study personnel were provided with training, overseen and audited by American Heart Association GWTG program personnel, on case ascertainment, data standards, and quality control. Data are collected via a web-based patient management tool that provides decision support with real-time online reporting features. Using this data entry system, data quality is monitored to assure completeness and accuracy of the submitted data. Predefined logic criteria are used to identify any potential double patient entry, in which case the participating site is queried for verification or removal of redundant data. The data abstraction tool also supports accurate data collection by integrating logic features (e.g., edit and range checks) and user alerts to identify potentially invalid format or values entry. Required fields are structured so that valid data must be entered before the data can be saved as a complete record and entered into the database. Sites also receive individual data quality reports each quarter to promote data completeness and accuracy. Only sites and variables with a high degree of completeness were used in analyses. Data auditing for a random 5% sample of Organized Program to Initiate Lifesaving Treatment in Hospitalized Patients with Heart Failure (which had similar variables and coding instructions) showed 95% or more concordance in more than 90% of data fields when compared with source documentation.
Because collected data are used primarily for institutional quality improvement and because de-identified patient information is collected anonymously through retrospective chart review, individual informed consent is not required under the common rule. However, participation in GWTG requires compliance with local regulatory and privacy guidelines and the approval of the institutional review board of each hospital. For this study, the Duke Clinical Research Institute served as the data analysis center and analyzed aggregate de-identified data for research purposes.
From January 1, 2005, through April 1, 2009, 301 hospitals in GWTG submitted data to both the CAD and HF programs, totaling 400,574 patients treated for AMI or HF. We restricted this analysis to GWTG hospitals that submitted at least 10 CAD and 10 HF patient records to define a threshold for stable hospital-level performance assessment of each set of measures (18 hospitals; 7,351 patients excluded). This yielded a final population of 283 hospitals that varied in size, teaching status, and surgical capability from all census regions of the United States. A total of 393,223 patients (205,656 patients hospitalized for AMI and 194,989 hospitalized for HF) were treated at these hospitals and included in the assessment of quality measure performance.
We evaluated hospital-level performance of core quality measures for AMI and HF with numerators and denominators for each measure as defined by Joint Commission and CMS (12); these measures are outlined in the Appendix. Composite process measure performance for each hospital was calculated as the number of times the selected care process was provided to eligible patients divided by the total number of eligible patients (i.e., absence of contraindications). Patients who died during the hospitalization (n = 14,204) or were transferred to another institution for inpatient care (n = 16,917) were excluded from the discharge process performance assessment.
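As a minimal sketch of the composite calculation described above (synthetic data; the actual numerators and denominators follow the Joint Commission/CMS measure specifications), the opportunity-based adherence score can be expressed as:

```python
# Illustrative sketch of the opportunity-based composite score:
# adherence = (care processes delivered) / (eligible opportunities).
# Data are synthetic, not from the GWTG registry.

def composite_adherence(records):
    """records: list of (eligible, received) flags, one per
    patient-measure opportunity. Patients with contraindications
    (eligible=False) are excluded from the denominator."""
    eligible = [r for r in records if r[0]]
    if not eligible:
        return None
    delivered = sum(1 for _, received in eligible if received)
    return delivered / len(eligible)

# Hypothetical hospital: 20 opportunities, 2 contraindicated, 16 delivered
records = [(True, True)] * 16 + [(True, False)] * 2 + [(False, False)] * 2
print(round(composite_adherence(records), 3))  # 16/18 -> 0.889
```

Note that this treats each patient-measure opportunity equally, so measures with larger eligible populations weigh more heavily in the composite.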
Descriptive statistics were used to characterize hospital-level performance of AMI and HF core measures. The correlation between hospital performance in AMI measures and hospital performance in HF measures was determined using Spearman correlation coefficients. Superior adherence was defined as the upper tertile of performance for each set of metrics. Hospitals then were stratified into 4 groups: 1) superior adherence to both sets of metrics; 2) superior adherence to AMI measures alone; 3) superior adherence to HF measures alone; and 4) superior adherence to neither measure. Hospital and patient characteristics were compared across these groups, with median (interquartile range) values and frequencies being reported for continuous and categorical variables, respectively. Comparisons between groups were assessed with Pearson chi-square tests for all categorical variables and Kruskal-Wallis tests for all continuous or ordinal variables.
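The hospital-level analysis above (Spearman correlation between the two composite scores, then upper-tertile stratification into 4 groups) can be sketched as follows; the scores and the rank implementation are illustrative, not the study's actual SAS code:

```python
# Spearman correlation = Pearson correlation computed on ranks,
# followed by 4-group stratification using the upper tertile of each
# composite. Hospital scores below are synthetic.

def ranks(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of tied positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

ami = [0.97, 0.93, 0.88, 0.95, 0.80, 0.91]  # synthetic composites
hf = [0.96, 0.85, 0.90, 0.92, 0.82, 0.89]
rho = spearman(ami, hf)

# "Superior" adherence = upper tertile of each metric.
def cut(xs):
    return sorted(xs)[len(xs) * 2 // 3]  # lowest score in the top third

groups = [("both" if a >= cut(ami) and h >= cut(hf) else
           "AMI only" if a >= cut(ami) else
           "HF only" if h >= cut(hf) else "neither")
          for a, h in zip(ami, hf)]
```

With no ties, `spearman` agrees with the textbook formula 1 − 6Σd²/(n(n²−1)); the average-rank handling matters only when hospitals share identical composite scores.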
Multivariate logistic regression analysis using the generalized estimating equation approach (13) was performed to compare adjusted mortality for each group and was referenced to the group of hospitals with superior adherence to neither AMI nor HF core measures. This method accounted for within-hospital correlation of responses (i.e., patients at the same hospital were more likely to have similar responses relative to patients in other hospitals). Variables for adjustment included patient characteristics adapted from a previously developed mortality risk model in this data set (14) (i.e., age, sex, race, insurance status, body mass index, prior myocardial infarction, prior CAD, prior HF, prior stroke or transient ischemic attack, peripheral vascular disease, atrial fibrillation or flutter, diabetes, hyperlipidemia, hypertension, chronic lung disease, renal insufficiency, chronic dialysis, anemia, tobacco use, depression, and systolic blood pressure at admission) and hospital characteristics (i.e., bed size, teaching hospital, region, surgical capability, and transplant capability). The c-statistic for this model was 0.74 in the overall population, 0.77 among AMI patients, and 0.69 among HF patients. As a secondary approach, we created a propensity model with scores representing the estimated probability of a patient being treated at a hospital with good adherence to both core measure sets compared with a hospital with good adherence to neither measure using the above patient and hospital variables. Inverse probability-weighted modeling was performed to compare risk-adjusted mortality outcomes between these 2 groups. The results of this analysis were similar to those using the generalized estimating equation logistic regression approach.
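The paper's mortality comparison uses covariate-adjusted GEE models that require the full patient-level data; as a hedged sketch of the underlying quantity only, the following computes an unadjusted odds ratio with a Wald 95% confidence interval from hypothetical group counts:

```python
import math

# Unadjusted odds ratio with a Wald 95% CI. The paper's estimate
# (OR 0.79) is GEE-adjusted for patient and hospital covariates;
# the counts below are hypothetical, chosen to mirror the reported
# 3.3% vs. 4.2% observed mortality rates.

def odds_ratio_ci(d1, n1, d0, n0, z=1.96):
    """d = deaths, n = patients; group 1 vs. reference group 0."""
    a, b = d1, n1 - d1          # comparison group: events / non-events
    c, d = d0, n0 - d0          # reference group: events / non-events
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(330, 10000, 420, 10000)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 0.78 (95% CI 0.67-0.90)
```

The GEE step adds covariate adjustment and a working correlation structure for patients clustered within hospitals, which this sketch deliberately omits.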
Variables incorporated into the models were missing in less than 7% of patients, except for body mass index, which was missing in 11% of patients and thus was imputed to gender-specific median values. Patients who were transferred to another institution for inpatient care (n = 16,917) were excluded from the denominator of the mortality assessment because outcomes after transfer cannot be determined under U.S. privacy laws. A 2-sided p value of < 0.05 was considered statistically significant for all tests. No adjustments were made for multiple comparisons. All analyses were performed using SAS software version 9.2 (SAS Institute, Cary, North Carolina).
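The gender-specific median imputation of body mass index described above can be sketched as follows (synthetic records; the study's analyses were performed in SAS):

```python
# Single-variable imputation: missing BMI replaced with the median
# BMI of patients of the same sex. Records are synthetic.
from statistics import median

patients = [
    {"sex": "F", "bmi": 27.1}, {"sex": "F", "bmi": None},
    {"sex": "F", "bmi": 24.3}, {"sex": "M", "bmi": 29.0},
    {"sex": "M", "bmi": None}, {"sex": "M", "bmi": 31.4},
]

# Median of the observed values, computed separately per sex
med = {s: median(p["bmi"] for p in patients
                 if p["sex"] == s and p["bmi"] is not None)
       for s in {"F", "M"}}

for p in patients:
    if p["bmi"] is None:
        p["bmi"] = med[p["sex"]]
```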
Among 283 hospitals participating in both the GWTG-CAD and GWTG-HF programs, median hospital performance of the composite AMI core measures was 93.2%, with an interquartile range of 87.4% to 96.5% (Fig. 1). Hospital adherence to the composite HF core measures was slightly lower, with a median of 92.1% and an interquartile range of 85.3% to 96.0%. When hospitals were divided into tertiles based on their performance on each set of core measures, there were significant differences in performance between the highest, middle, and lowest tertiles (Table 1).
As shown in Figure 2, there was only a modest correlation between hospital performance on one set of metrics and its performance on the other (r = 0.50; 95% confidence interval: 0.41 to 0.58). With superior adherence defined as the upper tertile of performance for each set of metrics, hospitals with superior adherence to AMI core measures performed better on AMI than on HF core measures (97.2% vs. 92.8%; p < 0.001). Similarly, hospitals with superior adherence to HF core measures performed better on HF than on AMI core measures (97.2% vs. 94.6%; p < 0.001). When hospitals were divided into 4 groups based on performance, 50 hospitals (18%) had superior adherence to both AMI and HF core measures, and 145 hospitals (51%) had superior adherence to neither. The remaining hospitals were split evenly between the 44 (16%) hospitals with superior adherence to AMI measures only and the 44 (16%) hospitals with superior adherence to HF measures only.
Table 2 shows that hospitals with superior adherence to both AMI and HF measures were larger and were more likely to be teaching hospitals compared with those that had superior adherence to neither measure. Hospitals with superior adherence to both AMI and HF measures were likely to have participated longer in the GWTG program than hospitals with superior adherence to neither measure. Hospitals with superior adherence to both AMI and HF measures treated patients who more frequently were younger, male, and white compared with hospitals without superior adherence to either. Only small differences were seen in other patient characteristics (such as presence of comorbid conditions) between groups.
Table 3 shows the performance of composite and individual care measures in each group. For the composite AMI measure, hospitals with superior adherence to both AMI and HF core measures had better performance than hospitals with superior adherence to AMI measures alone (p = 0.001). Nevertheless, reperfusion process performance was best among hospitals with superior adherence to AMI only (p = 0.006 for fibrinolysis; p < 0.0001 for primary percutaneous coronary intervention compared with the other 3 groups). For the composite HF measure, again hospitals with superior adherence to both AMI and HF core measures had better performance than hospitals with superior adherence to HF measures alone (p = 0.015). Hospitals with superior adherence to AMI measures only performed well on most HF measures, except for providing HF discharge instructions (p < 0.0001 compared with hospitals with good adherence to HF or both); however, hospitals with superior adherence to HF measures lagged on almost all measures of AMI care (p < 0.001 for all measures compared with hospitals with good adherence to AMI or both), except smoking cessation counseling. Interestingly, although discharge beta-blocker and angiotensin-converting enzyme inhibitor/angiotensin receptor blocker use are common to both sets of performance measures, hospitals with superior adherence to HF measures alone did not perform as well on these measures for their AMI patients (p < 0.0001 for both measures compared with hospitals with good adherence to AMI or both).
Observed in-hospital mortality rates were lower in hospitals with superior adherence to both AMI and HF measures compared with hospitals with superior adherence to neither measure (3.3% vs. 4.2%; p < 0.0001). After adjustment for patient and hospital characteristics, overall patient mortality remained significantly lower in hospitals with superior adherence to both (odds ratio: 0.79; 95% confidence interval: 0.63 to 0.99) compared with superior adherence to neither measure. For AMI and HF patients individually, hospitals with superior adherence to both also had the lowest mortality rates (Fig. 3) and trended toward lower adjusted mortality, although this trend did not reach statistical significance. In contrast, superior adherence to only one set of metrics (either AMI or HF) was not associated with improved mortality.
The GWTG program provided a unique opportunity to examine a broad range of hospitals providing cardiovascular care in the United States. Our study found that hospitals that performed well on one set of cardiovascular metrics (e.g., AMI care) did not necessarily do well on another (e.g., HF care). Yet the centers that excelled at both sets of metrics generally performed better than the centers that excelled at either alone. Furthermore, these hospitals had lower risk-adjusted mortality for cardiovascular disease patients compared with other hospitals.
Our results showed that hospitals participating in GWTG generally performed well on the AMI and HF core measures (median adherence: 93% and 92%, respectively), which reflects the success of guideline dissemination and quality improvement initiatives to date. Because many of the AMI and HF care processes are amenable to similar quality improvement interventions, one might expect a spillover effect in which strong performance in one disease state should also lead to superior performance in a separate, albeit related, cardiovascular disease state. Despite this expectation, our study found only a modest correlation between hospital performance of AMI and HF core measures (r = 0.50). A potential explanation for this surprising finding may be that the types of efforts, resources, and personnel needed to implement performance improvement vary with the particular care process selected (15). For example, hospitals that excelled at performing AMI core measures did not excel at providing HF discharge instructions, whereas hospitals that excelled at the HF measures did not perform as well on reperfusion-related measures. Along the same lines, a previous analysis of the GWTG program showed that hospitals that performed well on cardiovascular measures did not necessarily do well on CMS pneumonia or surgical infection composite performances (16). These data emphasize that dedicated efforts (e.g., trained personnel, specialized resources) are needed to drive disease-specific care improvement.
A second major finding of this study was that hospitals that excelled in both AMI and HF measures generally performed better on these metrics than hospitals that had superior adherence to neither measure or superior adherence to one set of measures alone. In part, this may be mediated by the longer participation in the GWTG program observed for the hospitals who had superior adherence to both measures compared with hospitals with superior adherence to neither measure. The GWTG program tracks and regularly provides performance feedback on process of care measures, quality improvement tools, and process redesign support to improve care iteratively (17). Previous studies have shown that hospitals enrolled in a quality improvement initiative have better processes of care than nonparticipating hospitals (18). As such, superior adherence to both AMI and HF measures simply may be a marker of hospitals with early interest and action in quality improvement.
Alternatively, superior adherence to both AMI and HF measures may reflect more organized, synergistic hospital care processes, rather than the additive effect of individual AMI and HF care measures. An examination of the metrics common to both sets of core measures helps to illustrate this point. Although common to both AMI and HF core measures, discharge beta-blockers are provided variably to AMI and HF patients. Hospitals with superior adherence to AMI measures do well with discharge beta-blocker use for AMI patients but do not perform as well on this metric for HF patients, whereas hospitals with superior adherence to both AMI and HF measures perform well on this metric for both AMI and HF patients, and in fact do better than hospitals with good adherence to either alone. These results extend earlier findings by Eagle et al. (19) and Fonarow et al. (20), who discovered that centers with disease-specific standardized care algorithms, order sets, and discharge processes are more likely to prescribe recommended treatments whenever indicated.
Most importantly, hospitals with superior performance on both AMI and HF measures had lower risk-adjusted mortality compared with hospitals adherent to neither or either alone. Part of this improvement in patient outcomes may be accounted for by the more consistent use of therapies, although previous studies found variable association between current performance measures and outcomes (5,8). For AMI patients, Peterson et al. (3) reported an association between guideline adherence and mortality (i.e., every 10% increase in hospital performance was associated with a 10% decrease in in-hospital mortality). Similarly, Bradley et al. (4) showed that AMI process measure performance was associated with improved outcomes, yet noted that only a small component (6%) of hospital-level variation in mortality was explained by these care differences. For HF, the results were somewhat mixed. The Hospital Quality Alliance program found that higher hospital performance for both AMI and HF patients had modestly improved outcomes (8,21). However, Fonarow et al. (5) and Patterson et al. (7) found that the current American College of Cardiology/American Heart Association HF performance measures were not tightly associated with mortality, although an emerging measure (use of beta-blockers in patients with reduced left ventricular ejection fraction) was tightly associated (6).
A third explanation for the lower mortality in hospitals with superior performance on both AMI and HF measures may be a better global quality of care, in contrast to hospitals with superior adherence to neither, AMI alone, or HF alone. Each set of performance measures captures information in a select group of patients with a given condition and measures discrete disease-related care processes. However, risk-adjusted mortality and other outcomes are likely influenced by factors that are not routinely assessed in a registry database but are closely associated with the overall quality of hospital care (22). In a survey of high-performing and low-performing hospitals, high performers had more consistent and committed leadership infrastructure, greater investment in information technology, and clearer mechanisms to ensure clinical accountability (23). From this, we postulate that hospitals able to achieve superior adherence to both AMI and HF measures reflect an optimal hospital culture (i.e., resource allocation, active quality oversight committees, staffing, and other processes) that elevates the global quality of care and outcomes.
Our results should be interpreted in light of several considerations. First, although the observational nature of this study permits real-world assessment of care patterns, the association between care processes and outcomes in this observational study does not necessarily prove causality, and we cannot eliminate the possibility that confounding from unmeasured variables explains the difference in risk-adjusted mortality. Second, GWTG currently reports only in-hospital outcomes, so it will be important to assess the association of process measure improvement with longitudinal outcomes. Third, although GWTG represents a spectrum of hospital types and sizes, participation is voluntary, reflecting an inherent interest in quality improvement, and thus may not be representative of national care patterns and outcomes.
Our results have several important implications. First, it cannot be assumed that hospitals that excel in one set of cardiovascular metrics will necessarily perform well in another; thus, targeted performance feedback and improvement efforts are needed to elevate global hospital quality of care (24). However, the implementation of care processes in one particular therapeutic area should be examined for potential synergism in nurturing more widespread changes across quality indicators (25,26). Second, from a hospital management perspective, quality improvement projects entail organization of both clinical (e.g., prescription of a particular medication) and administrative (e.g., providing discharge HF education or smoking cessation counseling) activities of care (27,28). Finally, the quality of a hospital's care, and by extrapolation its patient outcomes, may be reflected more accurately by its performance on a combined set of cardiovascular disease measures than by a single set of performance measures. Given the need to assess and improve global quality of care, assessments of hospital performance may be better aligned with the overall quality of inpatient cardiovascular care, rather than the delivery of care within a specific therapeutic area. For example, if a pay-for-performance system is used to incentivize improvement in cardiovascular outcomes, a single scoring system assessing clinical and administrative care activities for all cardiovascular disease patients, rather than traditional separate AMI and HF quality measures, may better facilitate the identification of concrete quality improvement targets, support practice changes more universally across multiple disease states, and ultimately improve overall patient outcomes (29).
Our study found only a modest correlation between a hospital's performance on AMI measures and its performance on HF measures. However, centers that excel at both AMI and HF measures have better performance than centers that excel at neither or at either alone, and these centers have significantly better patient outcomes. Further investigation and refinement of quality improvement strategies are needed to optimize the consistency of hospital quality of cardiovascular care.
The authors thank Erin LoFrese for her editorial contributions to this manuscript. Ms. LoFrese did not receive compensation for her assistance, apart from her employment at the institution where the study was conducted.
For the hospital core quality measures for AMI and HF, please see the online version of this article.
Dr. Wang has received research grants from Bristol-Myers Squibb/Sanofi-Aventis Partnership, Merck/Schering-Plough, The Medicines Company, Heartscape, Canyon Pharmaceuticals, and Eli Lilly/Daiichi Sankyo Alliance; and is a consultant to Medco. Dr. Hernandez has received research grants from Johnson & Johnson, Merck & Co., and Proventys; and is a consultant to AstraZeneca, Corthera, Inc., and Medtronic, Inc. Dr. Bhatt has received research grants from Bristol-Myers Squibb, Eisai, Sanofi-Aventis, The Medicines Company, and AstraZeneca. Dr. Heidenreich has received research grants from Medtronic. Dr. Fonarow has received research grants from the National Heart, Lung, and Blood Institute and AHRQ; is a consultant to Novartis and Pfizer; and has received honoraria from Medtronic. Dr. Peterson has received research grants from Bristol-Myers Squibb/Sanofi-Aventis Partnership, Merck/Schering-Plough, Eli Lilly/Daiichi Sankyo Alliance, and Johnson & Johnson. All other authors have reported that they have no relationships to disclose. The Get With The Guidelines program is provided by the American Heart Association. The Get With The Guidelines-Heart Failure program is currently supported in part by Medtronic, Ortho-McNeil, and the American Heart Association Pharmaceutical Roundtable. The Get With The Guidelines-Heart Failure program has been funded in the past through support from GlaxoSmithKline and the Merck Schering-Plough Partnership. These industry sponsors did not participate in the design, analysis, preparation, review, or approval of this manuscript. The Get With The Guidelines-Coronary Artery Disease program was provided by the American Heart Association and was supported in part through the American Heart Association Pharmaceutical Roundtable and an unrestricted educational grant from Merck.
- Abbreviations and Acronyms
- AMI = acute myocardial infarction
- CAD = coronary artery disease
- CMS = Centers for Medicare & Medicaid Services
- GWTG = Get With The Guidelines
- HF = heart failure
- Specifications Manual for National Hospital Inpatient Quality Measures, version 3.2c. The Joint Commission web site: http://www.jointcommission.org/specifications_manual_for_national_hospital_inpatient_quality_measures. Accessed April 1, 2011.