Author information
- Received September 29, 2010
- Revision received March 24, 2011
- Accepted March 29, 2011
- Published online July 19, 2011.
- Margaret C. Fang, MD, MPH⁎,
- Alan S. Go, MD⁎,†,‡,
- Yuchiao Chang, PhD§,
- Leila H. Borowsky, MPH§,
- Niela K. Pomernacki, RD‡,
- Natalia Udaltsova, PhD‡ and
- Daniel E. Singer, MD§
- ⁎Reprint requests and correspondence: Dr. Margaret C. Fang, Division of Hospital Medicine, University of California, San Francisco, 505 Parnassus Avenue, Box 0131, San Francisco, California 94143
Objectives The purpose of this study was to develop a risk stratification score to predict warfarin-associated hemorrhage.
Background Optimal decision making regarding warfarin use for atrial fibrillation requires estimation of hemorrhage risk.
Methods We followed up 9,186 patients with atrial fibrillation contributing 32,888 person-years of follow-up on warfarin, obtaining data from clinical databases and validating hemorrhage events using medical record review. We used Cox regression models to develop a hemorrhage risk stratification score, selecting candidate variables using bootstrapping approaches. The final model was internally validated by split-sample testing and compared with 6 published hemorrhage risk schemes.
Results We observed 461 first major hemorrhages during follow-up (1.4% annually). Five independent variables were included in the final model and weighted by regression coefficients: anemia (3 points), severe renal disease (e.g., glomerular filtration rate <30 ml/min or dialysis-dependent, 3 points), age ≥75 years (2 points), prior bleeding (1 point), and hypertension (1 point). Major hemorrhage rates ranged from 0.4% (0 points) to 17.3% per year (10 points). Collapsed into a 3-category risk score, major hemorrhage rates were 0.8% for low risk (0 to 3 points), 2.6% for intermediate risk (4 points), and 5.8% for high risk (5 to 10 points). The c-index was 0.74 for the continuous risk score and 0.69 for the 3-category score, higher than for the other risk schemes. There was net reclassification improvement versus all 6 comparators (ranging from 27% to 56%).
Conclusions A simple 5-variable risk score was effective in quantifying the risk of warfarin-associated hemorrhage in a large community-based cohort of patients with atrial fibrillation.
Oral anticoagulants such as warfarin can substantially reduce the thromboembolic consequences of atrial fibrillation (1). However, anticoagulant-associated hemorrhage deters many clinicians from prescribing warfarin (2). Accurate risk stratification according to hemorrhage risk would facilitate the anticoagulation decision for individual patients, and could help control for varying hemorrhage risk across different studies or when comparing the safety of various antithrombotic agents. We describe the development and internal validation of a new hemorrhage risk stratification tool and compare its performance to other published hemorrhage risk schemes.
The ATRIA (Anticoagulation and Risk Factors in Atrial Fibrillation) study followed up 13,559 adults with nonvalvular, nontransient atrial fibrillation enrolled in Kaiser Permanente of Northern California, a large integrated healthcare system. Details of the cohort assembly have been described previously; briefly, subjects were identified by searching clinical databases for International Classification of Diseases-Ninth Revision, Clinical Modification (ICD-9) codes for atrial fibrillation between July 1, 1996, and December 31, 1997, and followed up through September 30, 2003 (3,4). Warfarin exposure was determined using a previously described and validated algorithm based on the number of days supplied per prescription, refill patterns, and intervening international normalized ratio measurements (3). Clinical variables were identified using ICD-9 codes, pharmacy prescriptions, and laboratory databases (3).
We identified 6 published, validated risk stratification schemes developed to predict warfarin-associated hemorrhage (Table 1) and searched for those specific risk factors in the ATRIA study cohort (5-10). Variables unavailable in the ATRIA cohort (e.g., patient genotype) or not directly applicable to atrial fibrillation (e.g., acute pulmonary embolism) were not included as potential variables. Prior bleeding history was defined as any prior outpatient or inpatient ICD-9 diagnosis code of hemorrhage, including by specific organ system (e.g., prior intracranial or gastrointestinal bleeding), in the aggregate (e.g., all-cause prior bleeding), and by timing (within 90 days or >90 days). High fall risk was defined as any prior diagnosis code indicating a mechanical fall, recorded in either the inpatient or outpatient setting.
Clinical laboratory databases were used to identify anemia (hemoglobin <13 g/dl in men and <12 g/dl in women), thrombocytopenia (platelet count <90,000), and renal insufficiency (measured by serum creatinine and estimated glomerular filtration rate) (11). Abnormal laboratory values were considered abnormal from 3 months before to 1 year after the date of the measurement, censored by a preceding or subsequent normal test value. If results were unavailable within the time window, the test was assumed to be normal, on the rationale that tests would have been ordered had there been clinical suspicion of an abnormality.
Clopidogrel and ticlopidine exposure was determined from pharmacy databases, with duration defined from the prescription start date to 2 months after the end of the medication supply. Accurate assessment of aspirin and nonsteroidal anti-inflammatory drug exposure was not possible because these medications were predominantly obtained without a prescription.
Major hemorrhage outcomes
We searched computerized databases for primary discharge ICD-9 codes for extracranial hemorrhages (i.e., gastrointestinal, genitourinary, retroperitoneal) and primary and secondary diagnoses of intracranial hemorrhage, including intracerebral, subarachnoid, or subdural hemorrhages (Online Appendix). Medical charts from potential hemorrhagic events were reviewed by a clinical outcomes committee using a formal study protocol. Only events that occurred during or within 5 days of preceding warfarin exposure were included. Hemorrhages not present on admission that occurred during the hospitalization or as a result of a procedure were excluded. We restricted the analysis to “major hemorrhages,” defined as fatal, requiring transfusion of ≥2 U of packed red blood cells, or hemorrhage into a critical anatomic site (e.g., intracranial, retroperitoneal).
All follow-up periods on warfarin were included in the analysis. Cox proportional hazards regression models using time-varying covariates were used to examine the relationships between potential risk factors and hemorrhage outcomes with time origins set at the beginning of each follow-up period. Risk factor values were updated over follow-up with the proviso that no values were changed within 7 days of an endpoint bleeding event.
The cohort was randomly divided into split-sample “derivation” and “validation” cohorts using a 2:1 ratio; models using time-varying covariates were developed in the derivation cohort and performance was tested in the validation cohort. Covariates associated with major hemorrhage with a hazard ratio ≥1.5 were considered for potential inclusion in the final multivariable model. Because variable selection procedures may produce unstable results, we applied backward elimination selection on 1,000 bootstrap samples from the derivation set, with a significance level of 0.05 for removing a variable. Final model variables were those selected in >50% of bootstrap samples (12). Model discrimination was evaluated using the c-index (13), and calibration using a goodness-of-fit test. Variables from the final multivariable Cox regression model were converted to a risk score, with points assigned to each predictor approximately proportional to the magnitude of the regression coefficients, rounded to the nearest integer.
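The bootstrap selection step can be sketched as follows. This is an illustration only: `select_fn` is a hypothetical stand-in for the backward-elimination Cox regression fit, which is not reproduced here.

```python
import random
from collections import Counter

def bootstrap_selection_frequency(rows, select_fn, n_boot=1000, seed=0):
    """Draw n_boot bootstrap samples (resampling rows with replacement),
    run the variable-selection procedure on each sample, and return the
    fraction of samples in which each candidate variable was selected."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_boot):
        sample = [rng.choice(rows) for _ in range(len(rows))]
        counts.update(select_fn(sample))  # select_fn returns variable names
    return {var: n / n_boot for var, n in counts.items()}

def stable_variables(frequencies, threshold=0.5):
    """Variables chosen in more than 50% of bootstrap samples
    enter the final model."""
    return sorted(v for v, f in frequencies.items() if f > threshold)
```

In the actual analysis, `select_fn` would be backward elimination on a Cox model with time-varying covariates (e.g., via a survival-analysis library); only the frequency-counting logic is shown here.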
The risk score was collapsed into “low,” “intermediate,” and “high” risk groups based on the observed major hemorrhage rate. Because there are no definitive or clinically determined cut-points for rates of major hemorrhage at which anticoagulation would be universally contraindicated, we chose thresholds in our point score that appeared to optimally aggregate low- and high-risk groups. We then applied the ATRIA study model and 6 other risk schemes using time-varying covariates to the ATRIA cohort to compare model performance using the c-index, risk stratification capacity (the proportion of the cohort assigned to clinically meaningful risk categories), and a recently published extension of the net reclassification improvement metric (14). For net reclassification improvement calculations, all schemes were compared using a low/intermediate/high categorization to provide a common scale. This study was approved by the respective institutional review boards.
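The study uses a survival-data extension of net reclassification improvement (14); the simpler binary-outcome form of the categorical calculation, shown here for intuition only, credits upward reclassification among patients who bled and downward reclassification among those who did not:

```python
def categorical_nri(old_cat, new_cat, had_event):
    """Binary-outcome categorical net reclassification improvement.
    old_cat/new_cat: risk categories per patient as ordinals
    (0 = low, 1 = intermediate, 2 = high); had_event: booleans
    indicating whether the patient had a major hemorrhage."""
    up_e = down_e = n_e = 0
    up_ne = down_ne = n_ne = 0
    for old, new, event in zip(old_cat, new_cat, had_event):
        if event:
            n_e += 1
            up_e += new > old      # events moved to higher risk: good
            down_e += new < old    # events moved to lower risk: bad
        else:
            n_ne += 1
            up_ne += new > old     # non-events moved up: bad
            down_ne += new < old   # non-events moved down: good
    return (up_e - down_e) / n_e - (up_ne - down_ne) / n_ne
```

The survival-analysis extension replaces the event/non-event proportions with Kaplan-Meier-based estimates, but the underlying bookkeeping is the same.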
There were 9,186 subjects in the ATRIA study cohort, contributing 32,888 person-years of warfarin exposure (median warfarin duration 3.5 years [interquartile range: 1.2 to 6.0 years]). Because anticoagulated patients could discontinue warfarin and subsequently resume therapy, individual patients could contribute multiple periods on warfarin; 2,790 patients (30%) had >1 period on warfarin and 709 patients (8%) had >2 periods on warfarin.
We identified 461 validated incident warfarin-associated major hemorrhages, a rate of 1.40% per year. The derivation cohort contained 307 major hemorrhages among 6,123 patients, and the validation cohort 154 major hemorrhages among 3,063 patients.
Table 2 compares the characteristics of subjects with and without major hemorrhage in the derivation cohort. Variables associated with major hemorrhage at a hazard ratio ≥1.5 on bivariate analysis were considered for the final model and tested in 1,000 bootstrap samples. Among the various definitions of renal disease and prior hemorrhage, “severe renal disease” (defined as estimated glomerular filtration rate <30 ml/min or dialysis dependent) and “any prior hemorrhage diagnosis (all-cause)” were selected over alternative definitions based on bootstrap analysis. Five final variables emerged in >50% of bootstrap samples: anemia, severe renal disease, age ≥75 years, any prior hemorrhage diagnosis, and diagnosed hypertension. Based on the final model's regression coefficients, anemia and severe renal disease were assigned 3 points, age ≥75 years 2 points, and prior hemorrhage and diagnosed hypertension 1 point each, resulting in a risk scheme with a possible range of 0 to 10 points (Table 3).
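The resulting point assignments can be expressed as a simple scoring function. This is a sketch; the parameter names are illustrative and not taken from any original study code.

```python
def atria_bleed_score(anemia, severe_renal_disease, age,
                      prior_bleed, hypertension):
    """ATRIA warfarin-associated hemorrhage risk score (0-10 points)."""
    score = 0
    if anemia:
        score += 3
    if severe_renal_disease:   # eGFR <30 ml/min or dialysis-dependent
        score += 3
    if age >= 75:
        score += 2
    if prior_bleed:            # any prior hemorrhage diagnosis (all-cause)
        score += 1
    if hypertension:           # diagnosed hypertension
        score += 1
    return score
```

For example, an 80-year-old patient with anemia and hypertension but no renal disease or prior bleeding would score 3 + 2 + 1 = 6 points.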
Model validation and performance
When applied to the validation set, the model generated regression coefficients similar to those in the derivation dataset, with good discrimination (c-index 0.74 [0.70 to 0.78]) and acceptable calibration by the goodness-of-fit test (p = 0.29). Bleeding rates in the combined cohort ranged from 0.4% to 17.3% per year (Table 4). The continuous risk score was collapsed to a 3-category scheme, where “low-risk” (0 to 3 points) patients had hemorrhage rates of <1% per year, and “high-risk” (5 to 10 points) patients had rates >5% per year (Table 4). The high-risk category effectively concentrated hemorrhage events such that 42% of hemorrhage events occurred in only 10.2% of cohort person-years. The vast majority of remaining patients and person-years were low risk (Fig. 1).
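The collapse of the 0-to-10-point score into the three risk strata described above can be sketched as:

```python
def atria_risk_category(score):
    """Map the 0-10 point ATRIA score to a 3-category risk stratum."""
    if score <= 3:
        return "low"           # observed major hemorrhage rate <1%/yr
    if score == 4:
        return "intermediate"  # observed rate 2.6%/yr
    return "high"              # 5-10 points; observed rate >5%/yr
```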
Compared with other risk schemes, the ATRIA study risk score had the highest c-index point estimates for both the full range of scores and the 3-category scale and identified a comparatively large proportion of the cohort as either low or high risk (Table 5). In contrast, other risk schemes either led to much smaller fractions of the cohort categorized as high risk or observed relatively low event rates in their high-risk category. The ATRIA study scheme led to sizable net reclassification improvement when compared with all other risk schemes, ranging from 27.7% to 56.6% improvement (Table 5).
Accurate prediction of hemorrhage risk on warfarin is vital to the anticoagulation therapy decision. Based on 5 easily available clinical variables, the ATRIA score reflects the experience of a large, diverse group of patients with atrial fibrillation assembled from community care and followed up for a longer period than in prior studies. The model development used rigorous contemporary methods, such as split-sample testing and bootstrap sampling, to support internal validity.
When collapsed into a 3-category risk score, the ATRIA study risk scheme was able to identify sizable proportions of patients who fell into the most clinically meaningful categories, namely, low or high risk for hemorrhage. The low-risk category, accounting for 83% of follow-up, had an observed major hemorrhage rate of <1% per year. The high-risk category represented only 10.2% of patient follow-up yet accounted for 42% of the major bleeding events. The ATRIA study scheme led to improvements in accurate net reclassification when compared to alternative schemes. The c-index of 0.74, while not representing perfect discrimination, indicates good performance for a prediction model and compares favorably to other widely used risk stratification schemes such as the CHADS2 (congestive heart failure, hypertension, age >75 years, diabetes mellitus, and stroke) stroke risk index, which has a c-index of ∼0.6 (15). Certainly, identifying novel predictors of bleeding and improving current methods of risk stratification are important areas of further investigation.
The variables in our model have each been linked to increased hemorrhage risk in prior studies (5-10). Anemia was strongly associated with future bleeding risk. Although we were unable to determine the mechanism of association, anemia may reflect a predisposition to hemorrhage or recent subclinical hemorrhage. Severe renal disease was also a powerful predictor of hemorrhage risk. All-cause prior bleeding was associated with future bleeding, and presumably identifies patients with a potential bleeding lesion or diathesis. Finally, older age and hypertension were independently associated with hemorrhage risk. Similar to other hemorrhage risk schemes, this analysis focused on all-cause major hemorrhage, both intracranial and extracranial. Although intracranial hemorrhages are the most important outcomes, the rarity of such events makes their risk prediction challenging (16). High-quality models to predict intracranial hemorrhage are vitally needed.
Our risk model is clinically applicable when counseling patients about the relative benefits and harms of anticoagulation therapy. Particularly as newer, easier to administer anticoagulants become available, accurate estimates of hemorrhage risk will strongly influence the anticoagulation decision. Our risk score may not affect the anticoagulation therapy decision for most patients at high risk for stroke, because they derive a large benefit from anticoagulation. However, bleeding risk is significantly more influential in patients at moderate or lower stroke risk. Our bleeding risk estimates can be incorporated into formal decision-analysis models or can be used to counsel individual patients about their estimated risks of stroke and bleeding. For such patients, providing estimates of the risk of bleeding on anticoagulation can be a very informative addition to individualized patient decision making.
There are several limitations to our analysis. Our assessment of clinical risk factors was based on computerized databases that did not have information on several covariates such as measurements of blood pressure and genotype. We lacked information about nonprescription use of aspirin or nonsteroidal anti-inflammatory drugs. Although the hemorrhage rate in the ATRIA study was generally lower than that described by the other risk schemes, the rates are similar to some recent randomized trials (17). Finally, it will be important to test the ATRIA study risk scheme in a separate population. Although internal validation reduces the likelihood of chance playing a major role in development of our model, external validity needs to be tested empirically.
The risk of anticoagulant-associated hemorrhage is a major deterrent to more widespread use of anticoagulants. Risk stratification schemes can help clinicians estimate the magnitude of hemorrhage risk when prescribing or continuing anticoagulant therapy. Such schemes can also provide important information for comparing the hemorrhage risk of patients enrolled in clinical studies or when comparing the safety of different anticoagulation strategies (18).
For a list of ICD-9 codes for hemorrhage outcomes, please see the online version of this article.
This study was supported by the National Institute on Aging (R01 AG15478 and K23 AG028978), the National Heart, Lung, and Blood Institute (U19 HL91179 and RC2HL101589), the Eliot B. and Edith C. Shoolman Fund of the Massachusetts General Hospital (Boston, Massachusetts), and a research grant from Daiichi Sankyo, Inc. The funding sources had no role in study design, data collection, data analysis, data interpretation, or preparation of this paper. Dr. Go has received research support from Johnson & Johnson, Inc. Dr. Singer has consulted for Boehringer Ingelheim, Daiichi Sankyo, Inc., Johnson & Johnson, Inc., Merck and Co., Bayer Schering Pharma, and Sanofi-Aventis, Inc., and has received research support from Daiichi Sankyo, Inc. All other authors have reported that they have no relationships to disclose.
- Abbreviations and Acronyms
- HEMORR₂HAGES — hepatic/renal disease, ethanol abuse, malignancy, older age, reduced platelet count, rebleeding risk, hypertension, anemia, genetic factors, excessive fall risk, and stroke
- ICD-9 — International Classification of Diseases-Ninth Revision, Clinical Modification
- American College of Cardiology Foundation