Author Information
- Gregory J. Dehmer, MD, Chair∗,
- Jonathan Jennings, BS, RN†,
- Ruth A. Madden, MPH, RN‡,
- David J. Malenka, MD§,
- Frederick A. Masoudi, MD, MSPH‖,
- Charles R. McKay, MD¶,
- Debra L. Ness, MS#,
- Sunil V. Rao, MD∗∗,
- Frederic S. Resnic, MD, MSc††,
- Michael E. Ring, MD‡‡,
- John S. Rumsfeld, MD, PhD§§,
- Marc E. Shelton, MD‖‖,
- Michael C. Simanowith, MD¶¶,
- Lara E. Slattery, MHS¶¶,
- William S. Weintraub, MD##,
- Ann Lovett, RN, MA∗∗∗ and
- Sharon-Lise Normand, PhD∗∗∗
- ∗Division of Cardiology, Baylor Scott & White Health, Texas A&M Health Science Center, Temple, Texas
- †Hospital Corporation of America, Nashville, Tennessee
- ‡Department of Cardiovascular Medicine, Cardiac Electrophysiology and Pacing Section, Cleveland Clinic, Cleveland, Ohio
- §Cardiology Division, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
- ‖Cardiology Division, University of Colorado, Anschutz Medical Campus, Denver, Colorado
- ¶Department of Medicine (Cardiology), Harbor-UCLA Medical Center, Torrance, California
- #National Partnership for Women and Families, Washington, DC
- ∗∗The Duke Clinical Research Institute, Durham, North Carolina
- ††Cardiology Division, Lahey Hospital and Medical Center, Tufts University School of Medicine, Burlington, Massachusetts
- ‡‡Providence Sacred Heart Medical Center, Spokane, Washington
- §§Cardiology Section, Denver VA Medical Center, Denver, Colorado
- ‖‖Prairie Cardiovascular, Springfield, Illinois
- ¶¶American College of Cardiology, Washington, DC
- ##Christiana Care Health Services, Center for Heart & Vascular Health, Newark, Delaware
- ∗∗∗Department of Health Care Policy, Harvard Medical School, and Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- ∗Reprint requests and correspondence: Dr. Gregory J. Dehmer, Cardiology Division (MS-33-ST156), Baylor Scott & White Health, 2401 South 31st Street, Temple, Texas 76508.
Public reporting of health care data continues to proliferate as consumers and other stakeholders seek information on the quality and outcomes of care. Medicare’s Hospital Compare website, the U.S. News & World Report hospital rankings, and several state-level programs are well known. Many rely heavily on administrative data as a surrogate to reflect clinical reality. Clinical data are traditionally more difficult and costly to collect, but more accurately reflect patients’ clinical status, thus enhancing the validity of quality metrics. We describe the public reporting effort being launched by the American College of Cardiology and partnering professional organizations using clinical data from the National Cardiovascular Data Registry (NCDR) programs. This hospital-level voluntary effort will initially report process of care measures from the percutaneous coronary intervention (CathPCI) and implantable cardioverter-defibrillator (ICD) registries of the NCDR. Over time, additional process, outcomes, and composite performance metrics will be reported.
Public reports of health care data, quality metrics, and outcomes have been available for over a decade (1). The Federal government, some state departments of health, and numerous private organizations regularly report measures of health care–related quality, including outcomes, using publicly available financial, administrative, and descriptive data, often applying proprietary methods. Administrative data are readily available to payers and thus are attractive sources of information. Unfortunately, many studies have shown that administrative data are payer and market specific, are often old and sometimes nonactionable, and may poorly reflect acute severity of illness, correct diagnosis, and clinical outcomes (2–5). Although considerable financial resources are spent on such reports, it is not clear how and to what extent individual consumers use this information (6).
Professional organizations including the American Medical Association, the Society of Thoracic Surgeons (STS), the American Heart Association, and the American College of Cardiology (ACC) have articulated key principles to guide public reporting initiatives, as have several public and private initiatives (7–9). Fundamental to these principles is the use of clinical data whenever possible and the return of timely, benchmarked reports to participating institutions to support their quality improvement programs. Since 1997, the ACC has developed a suite of registries within the National Cardiovascular Data Registry (NCDR) program that collect, audit, benchmark, and report clinical data and outcomes on specific cardiovascular procedures and diagnoses to participating institutions (10). This has occurred in partnership with the Society for Cardiovascular Angiography and Interventions for the CathPCI Registry, focused on percutaneous coronary intervention (PCI) patient care, and the Heart Rhythm Society for the ICD Registry, focused on implantable cardioverter-defibrillator (ICD) patient care. The timeliness and clinical detail of these data and reports are superior to administrative data. Facilities submit clinical data to the NCDR and receive quarterly reports of their own data compared with aggregated national data for quality benchmarking. Traditionally, NCDR data have been available only to participant facilities, consortia, or health plans. Several scientific publications from the NCDR have reported aggregate data to identify quality gaps at the national level and stimulate efforts to improve care (11). Now, the ACC, in partnership with the Society for Cardiovascular Angiography and Interventions and the Heart Rhythm Society, has developed a pathway for participant institutions to voluntarily report their NCDR hospital-level data to the public (1,12).
Rationale for Public Reporting
The most compelling justification for public reporting is the right of an individual to know about the care that he or she is likely to receive. With the current national emphasis on the quality, accountability, and cost-effectiveness of health care, the various stakeholders and consumers of health care are eager to obtain information about health care facilities and providers. This has created a “market” for public reporting that, at present, is not well coordinated, because different stakeholders have somewhat divergent goals and varying confidence in the utility of nonclinical data sources (Figure 1). Many public reports use data that are several years old, were not designed for clinical performance reporting, or are constructed using proprietary analytic methods that are difficult to reproduce or verify. This diverse reporting environment can confuse patients and purchasers, has the potential to misdirect focus away from the rights of the individual patient, and has led to divergent public rankings of the same facility in different reporting systems (13).
Hospital-level public reporting, in its various formats, is now familiar to most clinicians. Public reporting of individual provider data is becoming more prevalent (14). However, physician-level reporting has additional challenges, such as attributing process and outcome of care metrics to specific providers and addressing variability in individual metrics in smaller practice groups or for individuals.
Public reporting is based primarily on the belief that accessible, transparent, high-quality information will affect the decisions and behaviors of the various stakeholders, ultimately resulting in improved health care delivery and outcomes. However, use of this information by various segments of the population is variable, and its effect on patients’ decision-making is uncertain (15,16). Reporting efforts that seek to define the “best of the best” can motivate an unnecessary performance-reporting race and may not provide the information most patients are seeking. Patients’ quality concerns seem more focused on access to empathetic, interactive providers and the availability of local common services that meet an acceptable standard of care (17–20). As public reporting efforts continue to grow, the ACC and its partnering organizations are committed to a leadership role in quality of care reporting, consistent with their principles of public reporting (7,11).
Public Reporting Programs and Data Sources
Many programs, such as the Centers for Medicare & Medicaid Services effort reported on the Medicare website, utilize administrative data (billing claims) as surrogates for clinical information for some reported metrics because these data are readily available, are inexpensive, and cover broad populations (21). However, this approach has several shortcomings (2–5). First, invoices are financial instruments and are not principally designed to serve as indicators of clinical diagnoses, disease severity, or acuity of illness. Second, conditions and complications present before or during hospitalization have been difficult to verify, although this may now be mitigated by the new “present on admission” codes. Third, although claims data can identify tests that are ordered and billed, they do not include test results. Finally, clinical decisions or “appropriateness” are not documented. The Centers for Medicare & Medicaid Services now also reports total payments to physicians, drug prescribing behaviors and costs, and payments received by physicians from industry (22). Other programs, such as the Leapfrog Group, report survey data submitted voluntarily by hospitals regarding high-risk surgeries, maternity care, hospital-acquired infections, and compliance with safety measures (23). Numerous independent groups produce a variety of reports or promote websites where patients can report their experiences with hospitals or individual providers. The methodology behind many of these programs is typically proprietary and unexplained, and oversight, if any, is not characterized. Some organizations focus on cost-profiling procedures at hospitals and by individual physicians without attention to quality, which is ultimately most important to patients (24).
In comparison, relatively few programs use clinical data to develop a public report. One of the first was the Northern New England Cardiovascular Disease Study Group, which maintains registries for all patients receiving coronary artery bypass grafting (CABG), PCI, and heart valve replacement surgery at 7 New England hospitals and provides a public report of CABG results on their website (25). More recently, the Clinical Outcomes Assessment Program reported clinical data from the NCDR and STS collected from 35 hospitals in Washington State (26). In Massachusetts, public reports utilizing NCDR and STS clinical data at all nonfederal acute care hospitals performing CABG surgery and PCI have been posted since 2002 and 2003, respectively, and the New York State Department of Health has issued CABG reports since 1992 and PCI reports since 1995 (27,28). The STS has maintained a clinical database on cardiac surgical procedures since 1989 and, in 2011, began a voluntary public reporting program in collaboration with Consumer Reports, a program that has been well received (29,30).
The Patient Protection and Affordable Care Act of 2010 includes a framework for quality improvement that embraces public reporting of health care quality information. More recently, the Medicare Access and Children’s Health Insurance Program Reauthorization Act of 2015 (known as MACRA) incorporates a merit-based incentive payment system that will encourage alternative payment models compared with the traditional fee-for-service model. Payment adjustments will be based on 4 performance categories (clinical quality, meaningful use, resource use, and clinical practice improvement), creating an environment with an untested effect on access and care quality, but rich with public reporting opportunities.
Benefits and Concerns Related to Public Reporting
The merits of public reporting are debated. Advocates argue that public reporting enables patients to identify the best physicians and hospitals, while simultaneously giving clinicians and health care organizations incentives to improve quality (31). Some studies have shown associations between public reporting and higher quality of care (32,33). Opponents counter that the data used in some measures lack adequate clinical granularity to accurately reflect quality, or that outcomes reporting may encourage denial of care to the sickest patients, who might benefit most from treatment but are also at highest risk for poor outcomes (34–41). Risk adjustment is intended to correct for the inclusion of sicker patients (42,43), but in practice it is imperfect; variability in how high-risk features are documented in the medical record and then abstracted into registry data is common (44–46). For complex diseases, the same facility may be rated differently when reported and analyzed by various commercial risk models, even when using the same administrative data (13).
Although recognizing the challenges to developing accurate and meaningful reporting, the ACC and its partnering organizations believe that a thoughtful, measured public reporting program, which uses clinical data with scientifically open methodology, subject to iterative improvement and oversight by professional organizations, has benefits and hopefully can minimize the potential unintended consequences. Patients, payers, health care quality organizations, and the government all desire transparent and accurate reporting of the performance of cardiovascular programs. Clinicians and patients can benefit from access to this information as long as it is correct and provided in a fair and understandable format. The ACC believes it has a responsibility to move the profession toward acceptance of public reporting by using clinical data from the NCDR. Therefore, after careful study of the feasibility of public reporting using NCDR data, the ACC and its partnering organizations established the Public Reporting Advisory Group to oversee the implementation of the public reporting program and guide operational decisions necessary to achieve these goals. A summary of the key decisions made by the Public Reporting Advisory Group and the structure of the public reporting program are described in this paper.
Structure of the Public Reporting Program and the Initial Metrics Reported
Major operational decisions about the structure of the reporting program are summarized in Table 1. Hospital participation in public reporting of NCDR data is voluntary. Because metrics reported in the NCDR are collected only at the hospital level, there is no physician-level reporting. Data will be available to the public on the ACC’s CardioSmart website, permitting consumers to search by hospital name, address, zip code, or cardiac services provided. The initial performance measures reported are from the CathPCI and ICD Registries and have been endorsed by the National Quality Forum, a requirement for NCDR public reporting at this time (Tables 2 and 3). Facilities will have 30 days to preview information on their individual, private, NCDR web-enabled dashboard before deciding to release their information publicly. Review of the STS public reporting effort shows that not only top-tier programs choose to report their results publicly, but also some facilities in the lowest tier.
Minimum Sample Size for Reporting
A small minimum sample size encourages reporting from each registry by most facilities. However, low-volume facilities will inevitably show more variation, with wider confidence intervals around point estimates, making identification of truly superior (or inferior) hospitals difficult. For measures in the CathPCI Registry, the minimum number of records required to qualify for public reporting is 25 procedures annually. Analysis of national NCDR data indicated that this threshold would exclude approximately 3% of facilities, leaving 97% of facilities eligible for public reporting. For measures in the ICD Registry, the minimum number of records required is 11 procedures annually. A 25-case minimum was considered, but at that level approximately 30% of facilities would be disqualified from reporting because of low volume. Reducing the threshold to 11 cases annually allows more than 80% of facilities to qualify. Although this may seem low, the 11-case threshold is consistent with the requirements of the American College of Surgeons National Surgical Quality Improvement Program and with the requirements for reporting on Hospital Compare.
Statistical Analysis and Display of Reported Variables
To address some of the uncertainty due to sample size, a model-based approach was adopted rather than using the actual observed fraction of eligible patients who receive a therapy as the reported value. The model provides an estimate of the hospital’s probability of providing a therapy among patients identified as eligible for it. For example, if a recommended drug were given to 10 of 11 patients receiving an ICD at a facility, the computed (observed) performance score would be 91%. Because of the small sample size, this computed score could be an inaccurate assessment of the facility’s true performance, and the resulting confidence intervals would be wide. The model-based approach described in the Online Appendix accounts for both the “noise” associated with a small sample size and the similarities among patients treated within the same hospital. The model pools what is known about each individual hospital with what is known about all hospitals, shrinking unstable estimates toward the overall mean. Using this model, an individual hospital with a small sample size and a score above the mean of all hospitals is “pulled down” closer to the mean score, whereas one with a small sample size and a score below the mean is “pulled up” toward it. The amount a score is adjusted up or down depends on the number of cases the hospital contributes. This method predicts future performance better than the point estimate provided by observed rates. Although more challenging to explain to hospitals and the public, this approach is commonly used in other reporting efforts such as Hospital Compare and adheres to standards for public reporting of outcome measures (9,47). This is consistent with the ACC’s position that the motivation behind public reporting should be the promotion of best practices and quality improvement rather than the creation of unjustified apparent differentiation across facilities (7).
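The pooling and shrinkage described above can be illustrated with a simple empirical-Bayes sketch. This is not the NCDR’s actual hierarchical model; the hospitals, case counts, and `prior_strength` constant below are hypothetical.

```python
# Illustrative empirical-Bayes shrinkage of hospital performance rates toward
# the overall mean (a sketch, not the NCDR model). Each hospital's estimate is
# a weighted blend of its observed rate and the pooled rate across hospitals;
# the fewer cases a hospital contributes, the more it is pulled to the mean.

def shrunken_rates(counts, prior_strength=20.0):
    """counts: list of (patients_treated, patients_eligible) per hospital."""
    pooled = sum(s for s, _ in counts) / sum(n for _, n in counts)
    return [(s + prior_strength * pooled) / (n + prior_strength)
            for s, n in counts]

# Three hypothetical hospitals: a small one at 100%, a small one below the
# pooled mean, and a large one essentially at the mean.
counts = [(11, 11), (8, 11), (450, 500)]
estimates = shrunken_rates(counts)
```

With these numbers, the small hospital at 100% is pulled down to roughly 93%, the small hospital at 73% is pulled up to roughly 84%, and the large hospital barely moves from its observed 90%.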
The information provided in public reporting should be understandable and usable. The amount of information and the manner in which it is displayed determine whether consumers can actually process and use it in decision-making (48,49). Displays that help consumers quickly grasp the meaning of the data increase their motivation to use the information, whereas a bewildering display is quickly dismissed as too complicated. On the basis of recommendations from several sources and the previously cited best practices for public reporting, we adopted a 1- to 4-star rating scheme.
Once a facility’s performance score for a measure is estimated from the model, the score is converted to a star rating for public display. To avoid confusion, the same scheme for assigning stars to performance scores is used throughout the reporting program rather than a different scheme for each metric. Among the options considered for converting the modeled score to stars, absolute scoring was chosen. Its primary benefit is that performance scores are converted into star ratings at cut points deemed clinically relevant, yielding meaningful clinical differences between groups. A potential drawback is the possibility that no (or all) hospitals fall into a specific star tier, in which case the display would provide little discrimination between hospitals for the consumer. In addition, the thresholds involve subjectivity, as not all will agree on clinically meaningful threshold values. The cut points chosen for star assignments were: 1 star for a performance score <75%, 2 stars for scores ≥75% but <90%, 3 stars for scores ≥90% but <95%, and 4 stars for scores ≥95%.
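The cut points above translate directly into a small mapping function; this sketch assumes scores expressed as fractions between 0 and 1.

```python
def star_rating(score):
    """Map a modeled performance score (0.0 to 1.0) to a star tier using the
    program's absolute cut points: <75% = 1 star, 75% to <90% = 2 stars,
    90% to <95% = 3 stars, >=95% = 4 stars."""
    if score < 0.75:
        return 1
    if score < 0.90:
        return 2
    if score < 0.95:
        return 3
    return 4
```

Because the cut points are absolute rather than percentile-based, every hospital’s tier depends only on its own modeled score, not on how other hospitals performed.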
Display of Uncertainty
The hierarchical model-based approach used in reporting will adjust for some of the uncertainty in the performance scores related to sample size. Because the accuracy of the scoring model depends on the amount of data available, there will always be a degree of uncertainty in a facility’s performance score. This uncertainty is high when the sample size is small and decreases as the sample size increases. Uncertainty in a measurement is typically expressed by displaying confidence intervals, which show the range of values containing the actual score at a chosen level of probability. All of this information will be accessible to the reader through additional links available from the basic data display. A detailed discussion of the statistical methods is presented in the Online Appendix, and a mock-up of the proposed display of the metric performance page on CardioSmart is provided (Figure 2).
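As a generic illustration of how interval width depends on sample size (this is not the interval construction used by the hierarchical model), a Wilson score interval for a simple observed proportion can be computed as follows:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for an observed rate successes/n.
    A generic illustration, not the NCDR's model-based interval."""
    p = successes / n
    z2 = z * z
    denom = 1 + z2 / n
    center = (p + z2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n)) / denom
    return center - half, center + half

# The same observed rate at two sample sizes: 10/11 vs. 100/110. The smaller
# sample yields a much wider interval around the same 91% observed score.
small = wilson_ci(10, 11)
large = wilson_ci(100, 110)
```

The interval for 10 of 11 spans roughly 62% to 98%, while the interval for 100 of 110 is several times narrower, which is the uncertainty the display links are meant to convey.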
NCDR Data Accuracy
For any public reporting program to be successful and provide meaningful information to the public, the reported information must be accurate and timely. Administrative data typically lag 1 to 2 years behind in reporting and may fail to accurately characterize the target population of procedures, resulting in the evaluation of a heterogeneous cohort with misleading case numbers and mortality rates (50,51). Critical clinical variables necessary for adequate risk adjustment may be lacking, and comorbidities may be confused with complications in administrative data (52–54).
All incoming data from facilities participating in the NCDR are subject to checks for completeness and consistency (55). Completeness focuses on the proportion of missing data within fields, whereas consistency determines the extent to which logically related fields contain mutually compatible values. Accuracy is checked by random independent chart audits that assess the agreement between submitted registry data and the original charts at the hospitals submitting the data. The accuracy of data entered is high, although the number of audits performed annually is relatively small compared with the volume of data entered. Concerns persist that some variables, especially those used to characterize high-risk patients, are not fully captured (44,45). As such, the variables captured within the NCDR are routinely evaluated, with specific attention to variables reflective of elevated risk. Past NCDR risk models have demonstrated high performance among high-risk patient subsets (56).
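The two classes of checks described, completeness and consistency, can be sketched as follows; the record fields and the consistency rule are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch of registry data checks: completeness measures the
# fraction of records with a value present in a field; consistency flags
# records whose logically related fields disagree. Field names are invented.

def field_completeness(records, field):
    """Fraction of records in which `field` is populated."""
    present = sum(1 for r in records if r.get(field) not in (None, ""))
    return present / len(records)

def consistency_violations(records):
    """Illustrative rule: a discharge day must not precede the admit day."""
    return [r for r in records if r["discharge_day"] < r["admit_day"]]

records = [
    {"admit_day": 1, "discharge_day": 3, "ejection_fraction": 35},
    {"admit_day": 5, "discharge_day": 4, "ejection_fraction": None},
]
```

Here the second record fails both checks: its ejection fraction is missing (completeness 50% for that field) and its discharge day precedes admission (one consistency violation).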
The Future of NCDR Public Reporting
This first iteration of public reporting by the ACC follows the principles set forth in the NCDR public reporting program mission statement (Table 4). The NCDR acknowledges that this is only the initial step in our public reporting program. There are several important reasons for hospitals to participate. First, the public has a growing desire for this information; providing these NCDR data demonstrates a good faith effort to deliver high-quality clinical data to assist patients’ health care decisions. Second, value-driven purchasing, which will include public access to provider performance, will be dominant within several years. Failure to understand and use clinical data now to improve operations may leave facilities subject to public judgments based on administrative or proprietary-derived data and lagging in their adaptation to the changing health care environment (57). Finally, all major cardiovascular professional organizations support patient advocacy, serve members’ practice advancement, and promote quality education programs for their members, and many support clinical databases (8,10). These registries use standardized, timely, benchmarked reports to document best practices and outcomes and to help participants improve operations and services. These data are also used for member recertification and the documentation of competencies. This approach harmonizes with the value-purchasing paradigm that is rapidly approaching.
The ACC recognizes that reporting alone is not sufficient to drive improvements in care delivery and will continue to build Clinical Toolkits as part of the Quality Improvement for Institutions initiative made available at no cost to every NCDR participating facility (10). NCDR data are used to identify gaps in care as priority topics for toolkit development, and the toolkit resources are focused to align specifically with the patient care goals being measured through NCDR.
Future public reporting using NCDR data may include use of validated quality metrics, risk-adjusted clinical outcomes, and composite quality measures across NCDR registries. For some of these outcome measures, it will be necessary to link NCDR data to external databases to derive outcome measures beyond hospital discharge. To that end, the NCDR conducted a pilot program to report 30-day risk-standardized readmission after PCI. The pilot required linking patients undergoing PCI with data from Medicare on readmissions following discharge from the index PCI. Among participating CathPCI Registry facilities, 22% volunteered to have their PCI readmission data displayed, including facilities with higher than expected readmission rates. Enhanced search functions are also planned in future updates.
The medical community continues to have an understandable degree of anxiety over the unintended consequences of public reporting and the ability of the public and others to misuse or misinterpret the results. The ACC is firmly committed to developing a cardiovascular public reporting program using high-quality clinical data that fairly and accurately characterizes the care provided while delivering usable and understandable information to the public. Voluntary participation in public reporting has the potential to enhance a health care facility’s standing and linkage to the community.
The authors are grateful to the Massachusetts Data Analysis Center (Mass-DAC) staff (Mathew Cioffi, Treacy Silbaugh, Robert Wolf, Caroline Wood, and Katya Zelevinsky) from the Department of Health Care Policy at Harvard Medical School for data management and programming expertise.
The NCDR Public Reporting Advisory Group is a collaborative effort of the American College of Cardiology (ACC), the Society for Cardiovascular Angiography and Interventions (SCAI), and the Heart Rhythm Society (HRS). Except as noted, all positions are voluntary. Dr. Resnic is the CathPCI Registry Steering Committee liaison. Dr. Malenka is the ICD Research and Publications Committee Liaison. Dr. Rao is the SCAI Representative. Ms. Madden is the HRS Representative. Drs. Ring and Shelton represent the ACC Board of Governors. Ms. Ness is an ex officio member for the ACC Board of Trustees. Dr. Masoudi is the Senior Medical Officer of the NCDR, and Dr. Rumsfeld is the NCDR Chief Science Officer; both are members of the ACC Board of Trustees. Dr. Ring is on the medical advisory board for Boston Scientific; is a proctor of Medtronic; and is on the speakers bureau for Amgen. Dr. McKay is a consultant for OrthoNet LLC. Dr. Normand is a paid statistical consultant to the Public Reporting Advisory Group. Dr. Simanowith and Ms. Slattery are employees of the ACC.
- Abbreviations and Acronyms
- ACC = American College of Cardiology
- CABG = coronary artery bypass graft
- ICD = implantable cardioverter-defibrillator
- NCDR = National Cardiovascular Data Registry
- PCI = percutaneous coronary intervention
- STS = Society of Thoracic Surgeons
- © 2016 American College of Cardiology Foundation
References
- Dehmer G.J., Drozda J.P. Jr., Brindis R.G., et al.
- Shahian D.M., Silverstein T., Lovett A.F., Wolf R.E., Normand S.L.
- Hammill B.G., Curtis L.H., Fonarow G.C., et al.
- Lansky D. Public reporting of health care quality: principles for moving forward. Health Affairs Blog. Available at: http://healthaffairs.org/blog/2012/04/09/public-reporting-of-health-care-quality-principles-for-moving-forward/. Accessed September 12, 2015.
- Drozda J.P. Jr., Hagan E.P., Mirro M.J., et al.
- Krumholz H.M., Brindis R.G., Brush J.E., et al.
- American College of Cardiology. Quality improvement for institutions: data powering performance. Available at: http://cvquality.acc.org/NCDR-Home.aspx. Accessed September 12, 2015.
- American College of Cardiology. Quality improvement for institutions: published research. Available at: http://cvquality.acc.org/en/NCDR-Home/Research/Published-Research.aspx. Accessed September 12, 2015.
- Zoghbi W.A., Gillis A.M., Marshall J.J.
- Austin J.M., Jha A.K., Romano P.S., et al.
- Centers for Medicare & Medicaid Services. Physician compare. Available at: https://www.medicare.gov/physiciancompare/search.html. Accessed September 1, 2015.
- National Opinion Research Center at the University of Chicago. National survey examines perceptions of health care provider quality. Available at: http://www.norc.org/NewsEventsPublications/PressReleases/Pages/national-survey-examines-perceptions-of-health-care-provider-quality.aspx. Accessed September 12, 2015.
- The Associated Press-NORC Center for Public Affairs Research. Finding quality doctors: how Americans evaluate provider quality in the United States: research highlights. Available at: http://www.apnorc.org/projects/Pages/HTML%20Reports/finding-quality-doctors.aspx. Accessed September 12, 2015.
- Centers for Medicare & Medicaid Services. Hospital compare. Available at: https://www.medicare.gov/hospitalcompare/search.html. Accessed August 12, 2015.
- Centers for Medicare & Medicaid Services. Medicare provider utilization and payment data: physician and other supplier. Available at: https://www.cms.gov/research-statistics-data-and-systems/statistics-trends-and-reports/medicare-provider-charge-data/physician-and-other-supplier.html. Accessed September 7, 2014.
- The Leapfrog Group. Available at: http://www.leapfroggroup.org/. Accessed August 21, 2015.
- The Healthcare Bluebook. Available at: https://healthcarebluebook.com. Accessed July 26, 2015.
- Clinical Outcomes Assessment Program (COAP). Available at: http://www.COAP.org. Accessed August 29, 2015.
- Resnic F.S., Welt F.G.
- Resnic F.S., Normand S.L., Piemonte T.C., et al.
- Peterson E.D.
- Lilford R., Pronovost P.
- Sherwood M.W., Brennan J.M., Ho K.K., et al.
- Barringhaus K.G., Zelevinsky K., Lovett A., et al.
- Krumholz H.M., Wang Y., Mattera J.A., et al.
- Hibbard J, Sofaer S. Best practices in public reporting no. 1: how to effectively present health care performance data to consumers. Available at: http://archive.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/pubrptguide1/pubrptguide1.html. Accessed August 1, 2015.
- Hibbard J, Sofaer S. Best practices in public reporting no. 2: maximizing consumer understanding of public comparative quality reports: effective use of explanatory information. Available at: http://archive.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/pubrptguide2/pubrptguide2.html. Accessed August 1, 2015.
- Messenger J.C., Ho K.K., Young C.H., et al.
- Brennan J.M., Curtis J.P., Dai D., et al.
- Hibbard J.J.H., Stockard J., Tusler M.