Author Information
- Robert C. Hendel, MD, FACC, Co-Chair, Methodology Writing Group,
- Bruce D. Lindsay, MD, FACC, Co-Chair, Methodology Writing Group,
- Joseph M. Allen, MA, Methodology Writing Group Member,
- Ralph G. Brindis, MD, MPH, MACC, Methodology Writing Group Member,
- Manesh R. Patel, MD, FACC, Methodology Writing Group Member,
- Leah White, MPH, Methodology Writing Group Member,
- David E. Winchester, MD, FACC, Methodology Writing Group Member and
- Michael J. Wolk, MD, MACC, Methodology Writing Group Member
Appropriate Use Criteria Task Force
John U. Doherty, MD, FACC, Co-Chair
Gregory J. Dehmer, MD, MACC, Co-Chair
Nicole M. Bhave, MD, FACC
Stacie L. Daugherty, MD, MSPH, FACC
Larry S. Dean, MD, FACC
Milind Y. Desai, MBBS, FACC
Claire S. Duvernoy, MD, FACC∗
Linda D. Gillam, MD, FACC
Praveen Mehrotra, MD, FACC
Ritu Sachdeva, MBBS, FACC
David E. Winchester, MD, FACC
Table of Contents
1. Introduction
2. AUC Evolution and Controversies
2.1. Progression of Terminology
2.2. Interpretation and Application of AUC
2.3. Level of Expertise on Rating Panels
2.4. Comprehensiveness Versus Selective Indications
2.5. Consensus Versus Modified Delphi Method
2.6. Timeliness and Relevance of AUC Documents
Table 1—Central Illustration: Summary of Changes in 2018 AUC Methodology
3. Methodology
3.1. Appropriate Use
Table 2—Appropriate Use Categories
3.2. Appropriate Use Categories
4. AUC Organizational Structure
4.1. AUC Task Force
4.2. Writing Group
4.3. Review Panel
4.4. Rating Panel
5. Relationships With Industry and Other Entities
6. AUC Developmental Process
6.1. Scope and Design of AUC
6.2. Indication Construction
6.3. External Review of Clinical Indications
6.4. Evidentiary (Systematic Literature) Review
6.5. Concordance With CPGs
6.6. Rating Process
Table 3—Level of Evidence Categories
6.7. Rating Tabulation
7. AUC Revisions and Focused Updates
Table 4—Parameters of AUC Annual Review
8. Cost and Value Implications
9. Implementation and Evaluation
10. Conclusions
Author Relationships With Industry and Other Entities (Relevant)
1 Introduction
Rapid and extensive changes have occurred in the practice of cardiology, especially in the development and utilization of imaging, interventional, and electrophysiological procedures. Enhanced radionuclide imaging techniques; advances in echocardiography; the development of cardiac magnetic resonance and cardiac computed tomography techniques; as well as innovations such as drug-eluting stents, percutaneous valves, and cardiovascular implantable electronic devices have revolutionized how patients are diagnosed and treated. Although these developments have resulted in direct patient benefits, including improved survival and enhanced quality of life, they have been accompanied by increases in resource utilization and healthcare costs. The high growth rate for expenditures related to cardiovascular procedures has led payers to initiate utilization constraints to markedly reduce spending and reimbursement (1). Various payer initiatives, such as physician profiling, prior notification, and prior authorization, have led to costly administrative requirements (2). These programs are also, in part, driven by marked geographic variability in equipment and utilization of cardiovascular procedures, underscoring the need for further guidance regarding optimal patient selection for procedures (3,4). Professional efforts to better define quality have also highlighted the importance of matching procedures and patients (5).
In response to the imperative to improve the utilization of cardiovascular procedures in an efficient and contemporary fashion, the American College of Cardiology (ACC), along with other relevant organizations, developed Appropriate Use Criteria (AUC) for multiple procedures and testing modalities. The first AUC document was published in 2005 and focused on indications for radionuclide imaging (4). During the ensuing 12 years, 14 AUC documents have been published covering appropriateness in individual cardiac imaging procedures (radionuclide imaging, cardiac computed tomography, cardiac magnetic resonance imaging, echocardiography, and diagnostic catheterization). Recently, AUC documents have combined these diagnostic modalities into multimodality publications focused specifically on the diagnosis and evaluation of disease states, such as stable ischemic heart disease detection and risk assessment, chest pain evaluation in the emergency department, and cardiovascular imaging in heart failure. AUC documents have also covered transthoracic echocardiography in outpatient pediatric cardiology, peripheral vascular ultrasound and physiological testing, implantable cardioverter-defibrillators and cardiac resynchronization therapy, and coronary revascularization.
This growth in the AUC portfolio over the past 12 years has led to both transformation and maturation of many of the aspects of AUC methodology, which was initially defined in a 2005 publication (3). Since then, input from external stakeholders, including the payer community, state and national government regulators, and the Institute of Medicine, along with internal feedback from ACC’s Board of Governors, relevant professional societies, and the cardiovascular community, has substantially influenced AUC development. This feedback has helped to ensure that AUCs have a positive role in cardiovascular care delivery while minimizing negative unintended consequences. A 2013 update of the AUC methodology incorporated many of these recommendations (6).
The process of developing AUCs continues to mature as we deepen our understanding of how clinical practice, evidence, patient preferences, and clinician knowledge interact to personalize care while reducing unnecessary variability. In doing so, the Task Force has encountered several areas in which stakeholders have identified potential opportunities for continued improvement of AUC methods. These areas for improvement have necessitated further clarification or evolution of the published methodology. In this paper, the AUC Task Force addresses questions regarding
▪ Interpretation and application of the AUC;
▪ Expertise balance in the AUC rating process;
▪ Complexity of patient scenarios versus simple benchmarks for practice; and
▪ Approaches to updating AUC content.
After a discussion of these topics, this paper outlines the current AUC methodology, incorporating changes based on the aforementioned areas for improvement. In addition, the pending mandate to use clinical decision support for Medicare patients requires the ACC to formalize and explicitly define the development processes that were previously implicit, such as evidence review and panel composition. These topics are therefore now incorporated into this paper. Changes to AUC methodology provided in this document are summarized in Table 1.
2 AUC Evolution and Controversies
2.1 Progression of Terminology
Clinical judgment versus quality assessment. The first series of AUC documents strictly adhered to the term “appropriateness” as established by RAND methodology (3,7,8); however, some stakeholders interpreted this label as making an ethical judgment on the individual clinician’s actions rather than an assessment of the likelihood of benefit of the service for a population of patients. More recent documents have adopted the term “Appropriate Use Criteria” (9–16). This change in terminology was intended to demonstrate the function of the AUC in promoting informed risk-benefit decisions. Therefore, the current AUC should be viewed as an evaluation of the evidence base and rational use of cardiovascular technologies in patient populations, rather than as a judgment of those ordering and delivering such technologies.
Discrete categories versus continuum of care options. The specific terminology used to define appropriate care was chosen to focus on patient populations, case mix over time, and quality improvement, while also informing, but not dictating, care for individual patients. The terminology has been reformed as a result of substantial debate. The 3 original categories (“Inappropriate,” “Uncertain,” and “Appropriate”) were initially intended to recognize that benefits and risks to various patient populations exist on a continuum. However, the categorical nature of AUC results in distribution of individual patient cases into 3 distinct groups. These categories have often been viewed externally by various stakeholders as an “absolute” for individuals, despite efforts to define a general population of patients who might or might not benefit from a procedure over time.
The AUC Task Force updated the terminology in 2013 on the basis of feedback from numerous stakeholders (7); these terms are restated in Section 3.1., “Appropriate Use.” The terms and definitions were amended to more closely reflect clinical practice applications, including an expected distribution between each AUC category for every population, methods for documenting exceptions, and proper application to individual patients.
2.2 Interpretation and Application of AUC
Not addressing underuse. The AUC Task Force maintains that the term “Appropriate” should not be construed to suggest that all tests and procedures with this rating must be, or even should be, performed in all clinical scenarios. Services rated as appropriate should be considered reasonable but not necessarily required. Methods have been developed to gauge the necessity of services and are useful for measurement of potential underuse. To date, the AUC Task Force has not pursued these methods but has instead encouraged the development of quality metrics based on ACC guidelines to identify areas of underuse.
Clarification of “May Be Appropriate.” Recommendations that fall into the “May Be Appropriate” category should not be construed as having an unfavorable balance of benefit and risk. Considerable confusion surrounded the previously used AUC category “Uncertain,” which external stakeholders grouped with either “Appropriate” or “Inappropriate,” depending on their perspective. A test or procedure may be rated “May Be Appropriate” because of limited quality or quantity of evidence for specific patients, even though the available evidence may support use in some of these patients. For this reason, the AUC Task Force strongly recommends that individual coverage determinations not be made on the basis of a service being rated as “May Be Appropriate.” Rather, services in this category should be performed depending on individual clinical patient circumstances and patient and provider preferences, including shared decision making.
“Rarely Appropriate” does not equal “Always Inappropriate.” The term “Rarely Appropriate” was selected in the revised AUC terminology on the basis of substantial concern and misunderstanding about the past term “Inappropriate.” Misinterpretation led some to believe that all “Inappropriate” procedures should be avoided. One of the inherent limitations of AUC is their reliance on 3 to 4 simple characteristics to categorize individual patients, which typically ignores a multitude of subtle findings that might drive decision making for an individual patient. The AUC Task Force believes that the newer term “Rarely Appropriate” better reflects the complexity of patient care; physicians should be aware, however, that use of a “Rarely Appropriate” service should be justified by unique patient circumstances that are adequately documented. Caution is advised to avoid procedures in this category that pose potential patient harm if performed.
Individual patient coverage decisions. Unfortunately, AUC have also been implemented for purposes beyond guiding appropriate care of a population in general. Some utilization management companies and third-party payers have used AUC as the sole basis of their coverage determinations (17). Strict application of AUC to individual patients without appreciating the individual patient context (e.g., refusing to reimburse for a “Rarely Appropriate” service) is contrary to the spirit of the AUC. As the term connotes, a “Rarely Appropriate” test may be appropriate in a given clinical scenario. A more suitable application of AUC is to provide both an assessment of care decisions in aggregated patient populations and feedback to providers regarding how their individual care decisions match those from a larger population.
2.3 Level of Expertise on Rating Panels
The composition of rating panels has been challenged by some who believe that panels should be composed either exclusively or predominantly of specialists within the field under evaluation, the rationale being that such subspecialists have a unique and greater understanding of the field (18). However, the AUC Task Force continues to emphasize that AUC should be as evidence-based as possible and that all members of the rating panel should be able to determine the balance of clinical benefits and risks of a particular procedure or technology on the basis of the relevant literature. To aid the rating panel in their assessments, panel members are provided with evidence tables based on systematic literature reviews, along with published, peer-reviewed clinical practice guidelines (CPGs). Procedural clinicians with other areas of expertise also serve on the rating panel and provide input.
Nonexpert individuals often represent the community of practitioners ordering any given test or procedure and provide an important perspective. As such, the AUC Task Force believes that the use of a majority of nonspecialists within a rating panel permits a diversity of perspectives and enhances external credibility. The ACC AUC process has maintained a policy of <50% representation by experts who perform the procedure as part of their usual clinical work to reduce the potential risk of excessive bias during the panel rating. This approach provides a more objective evaluation and potentially greater acceptance from external parties.
2.4 Comprehensiveness Versus Selective Indications
It was initially felt that the clinical scenarios should be designed to represent the highest possible proportion of patients seen by cardiovascular professionals and to focus on common and real-world situations. Many scenarios are the subject of CPGs, but others may not meet the criteria for guidelines because of a dearth of high-quality studies on a specific topic. Over time, external feedback regarding the early AUC documents highlighted gaps in clinical scenarios or confusing indication definitions, leading to marked expansion and revision of the indications being evaluated. Subsequent applications of revised AUCs show that this expansion has markedly improved the utility of the documents (19).
2.5 Consensus Versus Modified Delphi Method
The AUC process has maintained a modified Delphi approach to assigning ratings and determining the final category assignment. This method is based on the premise that group judgments have more validity than individual judgments. It depends on a qualified panel, anonymity, structuring of information flow, and feedback to the participants. After each round of voting, a facilitator summarizes the results with an anonymous collation of reasons for the experts’ responses. The panelists are encouraged to reconsider their individual responses in light of the previous discussion. They then participate in another voting round. Median scores and measures of dispersion are used to select the final category for an indication.
Some stakeholders have questioned whether 2 rounds of ratings can sufficiently clarify the indications and produce internally consistent results across the growing lists of AUC indications. Although multiple rounds of ratings could be beneficial for these reasons, the AUC process also attempts to protect against a process whereby any 1 individual or interest can influence the final category determination. The AUC Task Force has introduced additional rating rounds and live voting to assist in maintaining this balance and the integrity of the modified Delphi process.
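The median-based category determination described above can be illustrated with a brief sketch. This example is illustrative only: the function name is hypothetical, Python is used purely for demonstration, and the cut points follow the commonly published AUC convention (median 7 to 9 = “Appropriate”; 4 to 6 = “May Be Appropriate”; 1 to 3 = “Rarely Appropriate”) rather than reproducing the Task Force’s full agreement and dispersion rules.

```python
import statistics

def auc_category(ratings):
    """Map a rating panel's 1-to-9 scores to a final AUC category.

    Uses the commonly published AUC cut points: median 7-9 is
    "Appropriate", 4-6 is "May Be Appropriate", 1-3 is
    "Rarely Appropriate". Dispersion rules are omitted for brevity.
    """
    if not all(1 <= r <= 9 for r in ratings):
        raise ValueError("ratings must be scores from 1 to 9")
    median = statistics.median(ratings)
    if median >= 7:
        return "Appropriate"
    if median >= 4:
        return "May Be Appropriate"
    return "Rarely Appropriate"

# Example: a hypothetical 15-member panel after the second rating round
panel = [7, 8, 7, 6, 8, 7, 9, 7, 8, 6, 7, 8, 7, 9, 7]
print(auc_category(panel))  # -> Appropriate
```

Because the panel has an odd number of members, the median is always the whole-number score of an actual panelist, which is the rationale given in Section 4.4 for requiring odd panel sizes.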
2.6 Timeliness and Relevance of AUC Documents
Timeliness of updates. One of the challenges for the ACC is determining how often AUC documents should be updated to retain clinical relevance—similar to the dilemma posed for CPGs. CPG updates may affect the timing of AUC revisions and updates. This is especially important because AUCs are often used to make clinical decisions, and third-party payers consider AUC documents when making coverage decisions, although AUCs should not be used to support or deny individual patient reimbursement. Developing such a document places a significant demand on task force members’ time and consumes considerable ACC resources, with the time from development to final approval being more than 1 year. In the balance, the ACC must maintain the relevance and timeliness of its AUC documents while recognizing the constraints placed on a volunteer organization.
Evolving definitions and assumptions. As indications often rely on specific definitions, risk models, and assumptions, they can become outdated as new approaches are published. For instance, the assessment of risk in asymptomatic individuals has been modified to reflect the concept of global risk, which may be derived from any of the major literature-based risk scores, including the ASCVD (Atherosclerotic Cardiovascular Disease) risk score, modified Framingham risk score, or the Reynolds score, to leverage more recent data and include items such as family history in the evaluation (20). Likewise, the assumptions regarding the structure and performance of the procedure laboratory, catheterization laboratory, or operating suite are now more explicit, including the expectation for accreditation, as this has been often mandated for payment.
3 Methodology
This section of the paper contains a comprehensive summary of the current ACC AUC methodology, incorporating recent changes that address previous deficiencies and controversies as well as all prior accepted approaches.
3.1 Appropriate Use
A consistent definition of appropriate use that includes consideration of risks and benefits is applied across technologies and procedures. Specific definitions of terms and surrounding assumptions are modified on the basis of the most clinically relevant aspects of the specific clinical topics. The basic definition is as follows:
An appropriate diagnostic or therapeutic procedure is one in which the expected clinical benefit exceeds the risks of the procedure by a sufficiently wide margin, such that the procedure is generally considered acceptable or reasonable care. For diagnostic imaging procedures, benefits include incremental information, which when combined with clinical judgment, augments efficient patient care. These benefits are weighed against the expected negative consequences (risks include the potential hazard of missed diagnoses, radiation, contrast, and/or unnecessary downstream procedures).
The risks and benefits correspond with the different classes of cardiovascular services. Benefits of therapeutic procedures, such as revascularization or cardiac defibrillator/cardiac resynchronization therapy, include survival and health outcomes (such as improved symptoms, functional status, and/or quality of life) and are weighed against the risks of the procedure and subsequent related care. Benefits of diagnostic procedures include enhanced risk estimation and beneficial alterations in the patient’s care plan, whereas risks include direct hazards posed by the test (radiation, complications) and undesired outcomes that may obfuscate the clinical situation or misdirect the plan of care (such as false positive or false negative test results).
3.2 Appropriate Use Categories
The current AUC category definitions are provided in Table 2.
4 AUC Organizational Structure
Nominations for the writing group, indication reviewers, and rating panel are solicited from a broad set of ACC members, collaborating organizations and societies, and those selected by the AUC Task Force. Relationships with industry (RWI) and potential procedural bias given clinical and professional expertise will be considered during the nomination and selection process. Further information about RWI can be found in Section 5, “Relationships With Industry.”
4.1 AUC Task Force
The AUC Task Force is composed of at least 7 ACC members proposed by the ACC Nominating Committee and subsequently approved by the Board of Trustees. Task Force members come from various cardiovascular disciplines and are charged with serving as the oversight body for the evaluation, development, and implementation of AUC. Topics for AUC are selected and then prioritized by the Task Force after careful review of current utilization patterns, stakeholder requirements, procedural volume and cost, and available evidence and feasibility. The AUC Task Force is also responsible for appointing individuals to the writing group and rating panel, as well as overseeing the timeline and entire process for AUC development, endorsement, and publication. The Task Force functions autonomously but reports directly to the Science and Quality Committee of the ACC. All RWI are disclosed annually by AUC Task Force members, as described in Section 5, “Relationships With Industry and Other Entities.”
4.2 Writing Group
Writing group members are selected by the Task Force from a list of suggested individuals proposed by the ACC and other stakeholders/organizations. The writing group includes members with significant professional expertise and should broadly represent multiple stakeholders. This group should be composed of members (usually 5 to 9) from multiple societies and diverse organizations, allowing for broad representation across disciplines. A substantial proportion of writing group members remain experts in the technique(s) under consideration, thus ensuring that indications are constructed in a way that maximizes clinical applicability and defines the limitations of the technologies or therapies under consideration.
A cardiology trainee (i.e., a fellow-in-training) may be included in the writing group. Finally, an AUC Task Force member will be appointed to each writing group to serve as a liaison to provide methodological and operational oversight.
The entire AUC Task Force also serves to review and approve the document’s scope and relevant clinical indications; review literature summaries; provide guidance on methodological issues; ensure harmonization of indications, definitions, and assumptions across AUC documents; and foster completion of the AUC documents in a time-efficient fashion.
4.3 Review Panel
Reviewers are selected by the Task Force from a list of individuals suggested by the ACC and other stakeholders/organizations. Indication reviewers are charged with providing feedback to the writing group on whether the indications are comprehensive and represent typical patients, and whether the reviewed document provides accurate definitions and assumptions, as well as acceptable evidence mapping. These reviewers constitute the only “external” review before voting on the appropriate use of a specific technology.
4.4 Rating Panel
The rating panel is responsible for the rating of each clinical scenario and comprises 7 to 17 members (a typical panel consists of 13 to 17 members). All rating panels must be composed of an odd number of individuals so that the final median score reflects the whole number score of an actual rating panel member. The rating panel will, at a minimum, comprise the following members:
1. A practicing physician(s) with expertise in the clinical topic being reviewed;
2. A practicing physician(s) with expertise in a closely related discipline;
3. A primary care physician(s);
4. An expert in statistical analysis; and
5. An expert in clinical trial design.
It is anticipated that rating panel members may serve more than 1 function, such as a primary care physician who also has expertise in clinical trial design or statistical analysis. Additionally, the Task Force may incorporate a public-sector member and/or a health services payer representative to serve on the rating panel. Finally, a fellow-in-training may be included to help support and facilitate the rating panel work.
The AUC Task Force has always attempted to maintain a balance on the rating panels (previously referred to as the technical panels) between specialists using the technology and other professionals who represent referring clinicians for a test or procedure, including general cardiologists, outcome specialists, and/or primary care physicians who care for germane patient populations. Specialists whose key area of interest is the primary focus of the specific set of AUC should be a minority of rating panel members. A review of the professional backgrounds of potential rating panel members is performed before formalizing their appointment to the panel. This information is used to ensure that the Task Force, which determines the composition of the rating panel, is able to consider an accurate description of the individuals' expertise, interests, and relationships before selecting such an individual to serve on the panel.
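The structural constraints on rating panel composition described above and in Section 5 (7 to 17 members, an odd panel size so the median matches a member’s score, a minority of procedure-performing specialists, and fewer than 50% of members with relevant RWI) can be expressed as a simple consistency check. This sketch is purely illustrative; the function and field names are hypothetical and do not represent any actual ACC tooling.

```python
def validate_panel(n_members, n_specialists, n_with_rwi):
    """Return a list of violated composition rules for a proposed
    rating panel; an empty list means all checks pass.

    Rules reflect the constraints described in the text: 7-17 members,
    odd size, <50% procedure-performing specialists, <50% with RWI.
    """
    problems = []
    if not 7 <= n_members <= 17:
        problems.append("panel must have 7 to 17 members")
    if n_members % 2 == 0:
        problems.append("panel size must be odd so the median is a member's score")
    if n_specialists * 2 >= n_members:
        problems.append("procedure-performing specialists must be a minority (<50%)")
    if n_with_rwi * 2 >= n_members:
        problems.append("members with relevant RWI must be fewer than 50%")
    return problems

print(validate_panel(15, 6, 5))   # -> [] (meets all constraints)
print(validate_panel(12, 6, 7))   # -> three violations flagged
```

A panel of 15 with 6 specialists and 5 members holding relevant RWI satisfies every rule, whereas an even-sized panel with specialists or RWI holders at 50% or more fails the corresponding checks.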
5 Relationships With Industry and Other Entities
The ACC and the AUC Task Force continue to focus considerable attention on avoiding real or perceived RWI and relationships with other entities that might affect the rating of a test/procedure (21). The ACC maintains a database that tracks all relevant relationships for all ACC members and persons who participate in ACC activities, including the development of AUC. All RWI publicly accompany the publication of all clinical policy documents, including AUC.
AUC writing groups must be chaired by a person with no relevant RWI. Although members of the writing groups play an important role in the development of the final published document for a given set of AUC, they do not have any role in the AUC rating process and therefore have limited impact on how the documents will guide clinical care. Accordingly, RWI restrictions are not applied to writing group members other than the chair. However, to avoid the potential for bias in the actual indication rating, fewer than 50% of rating panel members may have relevant RWI. Additionally, the moderator of the rating panel meeting must not have any relevant RWI.
6 AUC Developmental Process
6.1 Scope and Design of AUC
Central to the design of the AUC is defining the overall scope of the document. This is a necessary step to ensure that important and common indications are adequately covered in the document and, at the same time, avoid having the document become so inclusive that it becomes unwieldy. The initial AUC documents provided ratings evaluating a single procedure in several common scenarios. As there are often multiple imaging/procedural options available for given indications, the AUC documents assessing individual modalities are now incorporated into multimodality documents when feasible. The goal of these multimodality documents is determination of the range of modalities that may or may not be reasonable for a specific indication. Such documents are not intended to define the single best test or procedure for each indication or assert the superiority of one over the other. It is hoped that such a multimodality approach will be more helpful to the clinician than producing separate AUC documents for individual procedures.
6.2 Indication Construction
Indication building for AUC is done by the writing group in 3 general phases, as outlined in the following text.
Scope. First, the initial scope of the document developed by the Task Force is reviewed and further modified on the basis of the expertise of the writing group. This is a collaborative effort combining the work of the Task Force and writing group.
Assumptions and definitions. Second, the writing group begins to construct a set of relevant assumptions about the particular topic being evaluated. These assumptions are often aimed at the competency to perform procedures, definitions of terms to be used in the indications, and background assumptions around clinical care settings. For example, in the AUC for coronary artery revascularization (12,13), assumptions include a definition of what degree of coronary artery narrowing is considered “significant.”
Indication development. Finally, the writing group constructs the specific indications or clinical scenarios with an emphasis on consistency, clarity, and utility, and provides the foundation for meaningful evaluation. This process may start by listing all the relevant variables affecting decision making and providing a matrix for determining clinically reasonable scenarios. Subsequent review of the indications by the AUC Task Force and external peer reviewers is performed to further clarify the clinical scenarios.
In developing the indications, there is significant effort to develop consistent wording throughout the document, especially across different modalities/procedures. For imaging, indications are grouped under common headings that appear in all documents. This structured approach includes common definitions along with categories of risk assessment, symptomatology, prior testing, previous revascularization, and special clinical circumstances.
In addition to striving for consistency between AUC documents, the Task Force utilizes ACC/American Heart Association (AHA) CPGs for many definitions and assumptions, specifically, for the identification of key evidence-based recommendations. As these CPGs are updated, the writing groups are instructed to update AUC assumptions, indications, and definitions accordingly. New guidelines, such as the 2014 ACC/AHA Guideline on Perioperative Cardiovascular Evaluation and Management of Patients Undergoing Noncardiac Surgery (22), are then continuously incorporated into the AUC as they are written, often resulting in changes to the wording of clinical indications.
6.3 External Review of Clinical Indications
As noted in the previous text, the clinical indications are reviewed before their presentation to the rating panel. This review process includes a detailed evaluation by representatives from key participating professional organizations, often exceeding 25 reviewers for the draft set of indications for a technology. The external reviewers are selected from nominees proposed by relevant collaborating societies. Nominees may also serve as contributors to the AUC document development process and may represent specific expertise, such as health services research. This process is crucial to indication development because revisions cannot be made following final rating panel voting without revoting, as this would violate the basic tenets of the modified Delphi methodology.
6.4 Evidentiary (Systematic Literature) Review
A comprehensive literature review is an essential part of the AUC development process. Whenever possible, AUC scenario adjudications should be consistent with published guidelines and should reference the guideline recommendation and Level of Evidence. Exceptions include changes in practice standards based on peer-reviewed literature that have not been updated in CPGs and specific scenarios that were not addressed by the guidelines. Although randomized multicenter clinical trials often provide the best data, their inclusion criteria are typically narrow and may not match the characteristics of some patients seen in daily practice. Sometimes clinical trials that match a common AUC scenario have not been performed. There may be ethical reasons why a randomized trial cannot be conducted or, in other instances, the study’s focus may be too narrow for it to be funded by government or industry grants. The optimal management of patients excluded from large trials may require a consensus based on more limited data. In such cases, AUC adjudications use valid observational studies or qualified retrospective analyses.
Evidentiary review based on systematic analysis of peer-reviewed publications that guide clinical judgment is an accepted approach (23,24). This approach takes into consideration the benefit, negative consequences, and cost of a procedure or treatment, as well as a clear understanding of how it would affect clinical care. AUC documents meet these criteria when their indications are well supported by the literature. In the past, AUC documents have focused on indications based on benefits and risks supported by the literature but have also implicitly considered the cost of care. This approach will continue, as fiscal considerations are of increasing importance and cost containment efforts demand optimal utilization of diagnostic studies and therapeutic interventions.
When applicable, AUC documents should refer to the Level of Evidence employed by CPGs. When scenarios are not addressed by the CPGs, peer-reviewed studies will be reviewed for quality. These studies will be judged on the basis of the ACC/AHA CPG Level of Evidence categories listed in Table 3 (25). All relevant evidence citations and Level of Evidence grades will be placed in a table for the use of rating panelists while they are completing the AUC rating process.
6.5 Concordance With CPGs
CPGs and AUCs have substantial areas of overlap. Therefore, consistency between them is crucial to avoid confusion among clinicians and payers. Concordance of CPGs and AUCs is a key directive provided to the writing group members at the outset of developing any AUC. During the development process, AUC indications are carefully mapped to CPGs and other publications (14) that may affect clinical practice. Additionally, oversight on the internal consistency of ACC/AHA CPGs is provided by the Clinical Policy Approval Committee under the supervision of the Science and Quality Committee.
6.6 Rating Process
In the first round of indication ratings, each rating panel member submits numerical ratings independently. The individual ratings are collated and analyzed to determine the median value and dispersion of the ratings. This is followed by a mandatory face-to-face meeting of the rating panel. In addition to the panel members, the AUC Task Force has standardized specific roles for several other individuals during this meeting. A moderator who has not been involved in indication construction establishes the goals and procedural rules and facilitates the meeting. This individual does not rate indications and typically has no direct involvement with the technology or procedure under review. The moderator may serve as an alternate panel member if unforeseen circumstances prevent a panelist from completing the rating process. During the face-to-face meeting, a writing group liaison member is also present to answer questions specific to the indications and to assist in any modification or clarification of indications recommended by the rating panel. Lastly, a member of the AUC Task Force is in attendance to address methodological issues as they pertain to the AUC.
During the rating panel meeting, a standardized presentation is given, providing an overview of the AUC development process and a review of the assumptions and definitions, along with an outline of the indication tables and key clinical parameters used in the document. Each panelist votes anonymously, and a tabulation of all votes is presented. All indications are discussed in a “round robin” style to ensure that all panel members have the opportunity to lead and participate in the discussion. Attention is paid to indications with widely divergent ratings to ensure that there is a uniform understanding of the clinical scenario. The role of the moderator is particularly important. On the one hand, the moderator should call on the expertise of panel members to inform others where their ratings diverge from those of other thought leaders. On the other hand, the moderator should not allow members to advocate too strongly and inject bias. Following the face-to-face meeting, the panelists independently rescore all indications.
Once the ratings are completed, each indication is analyzed to meet statistical criteria, resulting in placement in a discrete category: Appropriate Care, May Be Appropriate Care, or Rarely Appropriate Care. Each rating round is reviewed for the following before being deemed final:
1. Wide dispersion of scores;
2. Potential for further indication rewording to better clarify the clinical scenario; and
3. Misalignment of the indication score with evidence and/or CPGs.
Under most circumstances, 2 rating sessions are all that will be required, but on occasion, additional partial or complete rating sessions may be necessary to further refine indications for the reasons listed in the previous text. No more than 4 rating cycles will be permitted, with the median score for the final rating session defining the final indication rating.
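The tabulation described above can be sketched in a few lines. The 1–9 rating scale and its partition (7–9 Appropriate, 4–6 May Be Appropriate, 1–3 Rarely Appropriate) follow the convention used in prior AUC documents; the boundary handling shown here is an illustrative assumption, not a statement of this document’s full statistical criteria.

```python
from statistics import median

# Illustrative sketch only: the 1-9 scale and category cut points follow
# the usual AUC convention (7-9 Appropriate, 4-6 May Be Appropriate,
# 1-3 Rarely Appropriate); exact boundary handling is an assumption.
def final_category(ratings):
    """Map a rating panel's final-round scores to a use category."""
    m = median(ratings)
    if m >= 7:
        return "Appropriate Care", m
    if m >= 4:
        return "May Be Appropriate Care", m
    return "Rarely Appropriate Care", m

# A hypothetical 15-member panel's final-round scores for one indication
scores = [7, 8, 8, 7, 9, 6, 7, 8, 7, 7, 8, 9, 7, 6, 8]
category, final_score = final_category(scores)  # median 7 -> Appropriate Care
```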
To make the process of data collection and analysis more efficient, online rating tools are used during various stages of the rating process. Live online rating sessions are currently being used in rating panel meetings. The live online rating process collects indication ratings, notes to the writing group, and personal notes for the panelists to use during the second round of ratings. This allows rating panel members to capture their new preliminary ratings immediately after the discussion. When second ratings are collected via online survey, the preliminary votes from the live rating panel meeting are shared with each rating panelist, as are any notes from the discussions that occurred during the panel meeting. Any additional rounds of ratings are performed through a live online survey during an interactive educational conference.
6.7 Rating Tabulation
The final scores are reported in discrete categories—Appropriate Care, May Be Appropriate Care, and Rarely Appropriate Care—as well as with their numerical median rating (anonymized individual scores will be available in an online appendix to the AUC document).
In addition to the final indication score, all indications are assessed for the level of agreement among the panel by applying the definitions for agreement and disagreement from the BIOMED Concerted Action on Appropriateness, as previously described (3,6,8). This method examines the distribution of the ratings and identifies when most ratings are grouped near the median (agreement) or clustered at opposite ends of the rating scale (disagreement). Through this methodology, all indications with disagreement are assigned to the “May Be Appropriate” category even if the median score would place them elsewhere.
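A BIOMED-style disagreement override can be sketched as follows. The one-third threshold used here is an illustrative assumption; the published BIOMED definitions, given in the cited references (3,6,8), adjust the cutoff for panel size.

```python
from statistics import median

def has_disagreement(ratings, frac=1/3):
    """Flag ratings clustered at both extremes of the 1-9 scale.

    The one-third threshold is an illustrative assumption; the BIOMED
    definitions vary the cutoff with panel size."""
    n = len(ratings)
    low = sum(1 for r in ratings if r <= 3)   # lowest tertile (1-3)
    high = sum(1 for r in ratings if r >= 7)  # highest tertile (7-9)
    return low >= frac * n and high >= frac * n

def tabulate(ratings):
    """Disagreement forces May Be Appropriate regardless of the median."""
    if has_disagreement(ratings):
        return "May Be Appropriate Care"
    m = median(ratings)
    if m >= 7:
        return "Appropriate Care"
    if m >= 4:
        return "May Be Appropriate Care"
    return "Rarely Appropriate Care"

# A polarized panel: the median of 7 alone would read "Appropriate", but
# disagreement assigns the indication to the May Be Appropriate category.
split_panel = [1, 1, 2, 2, 3, 7, 7, 8, 8, 9, 9, 8]
```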
7 AUC Revisions and Focused Updates
The original writing group will be charged with reviewing the AUC document annually to determine whether it should be updated and will report its recommendations to the AUC Task Force. As with CPGs, new evidence that affects the relevance of an AUC document should be incorporated in a timely manner. In fact, an update or revision of a CPG should likely precipitate a partial or complete revision of an existing AUC document (Table 4). ACC policy requires that members of clinical document committees remain available to participate in revisions for 5 years after the initial publication. After this time frame, custody of AUC documents reverts to the AUC Task Force, which considers whether the group should be asked to continue to serve or whether new members should be appointed.
The recommendation for a focused update will be reported to the AUC Task Force, which will make the final determination about the need for one. The intent of a focused update is to address specific sections of the AUC document that must be revised to maintain relevance, as opposed to a complete revision of the entire document. The writing group members of the original document will be reconvened to complete the focused update of the AUC.
As with development of the original document, when a major revision is deemed necessary, the AUC Task Force should authorize a de novo document that includes experts in the field and physicians with diverse practice backgrounds to avoid potential bias. The document’s conclusions must be based on a comprehensive review of the literature. The new writing group may include both past participants and new members at the discretion of the AUC Task Force.
8 Cost and Value Implications
With persistent concerns about rising healthcare costs, both governmental and commercial payers are moving from reimbursement strategies based on volume of service to strategies based on the value of services delivered (i.e., “volume to value”). With the passage of MACRA (Medicare Access and CHIP Reauthorization Act) (26) in 2015, payer implementation strategies have been focused on “bending the cost curve” using several strategies, including:
1. Alternative payment models;
2. Bundled payment models, like those used for comprehensive joint replacement, which are now being piloted for acute myocardial infarction and coronary artery bypass surgery; and
3. Disease management models focusing on, for example, diabetic care or, potentially, coronary artery disease.
AUCs provide infrastructure, based on clinical data and judgment, to assess the value of evaluation and treatment for reimbursement models (27). AUCs have central roles in accountable care organizations, alternative payment models, disease management programs, and bundled payment models, as AUC implementation helps minimize the overuse of tests and procedures, facilitating high-value healthcare delivery. For example, SMARTCare, a Center for Medicare & Medicaid Innovation demonstration project run by the national ACC in concert with the Wisconsin and Florida ACC Chapters, is a disease management program for stable coronary artery disease that incorporates imaging and revascularization AUCs with the goal of decreasing inappropriate use, improving clinical outcomes, and decreasing healthcare delivery costs (28,29). The ACC and the AHA have collaborated to discuss the potential for implementation of the cost/value equation in cardiovascular CPGs and performance measures (30). The intersection between AUC and this cost/value equation includes a focus on:
1. Scarcity and opportunity costs;
2. Efficiency, cost-benefit analysis, and cost-effectiveness;
3. Initial and subsequent costs;
4. Patient-centered outcomes and quality-adjusted life-years; and
5. The use of cost-effectiveness analysis in healthcare decision making.
9 Implementation and Evaluation
After the writing process is completed and collaborating organizations have approved the final manuscript, AUCs are published in leading journals and made available through the web sites of the ACC and additional organizations. These manuscripts include references to all key evidence used in the development of the AUC. The publication of the manuscripts, however, is not the end of the process. As noted previously, the AUC undergo frequent review and updates as new evidence becomes available. Study of the adoption and implementation of AUC into clinical practice is summarized in the following text based on the RE-AIM (Reach, Efficacy, Adoption, Implementation, Maintenance) implementation framework model (31).
“Reach” and “adoption” refer to the individual and institutional acceptance of a practice, respectively. AUC’s reach and adoption have been studied predominantly using surveys of individual providers and clinical practices. A survey of physicians and advanced practice providers (in general medicine and cardiology) found that over one-third were unfamiliar with AUC and only 12.5% reported using AUC regularly in clinical practice (32). A survey of nuclear cardiology laboratories conducted 6 years after the first publication of AUC for cardiac radionuclide imaging estimated that approximately one-half of laboratories used AUC in some fashion (33).
Other evidence of institutional adoption includes the collection of appropriate use data through registries and quality improvement activities. The ACC-sponsored National Cardiovascular Data Registries now collect or plan to collect data regarding appropriate use for a variety of procedures (34). The ImageGuide registry developed by the American Society of Nuclear Cardiology is another example of appropriate use data being collected by a registry. Finally, AUCs are being used to assess appropriateness as a major parameter in imaging laboratory accreditation (e.g., by the Intersocietal Accreditation Commission) and will likely be a component in other accreditation efforts in the future (35).
Efficacy of AUC has been studied more widely. Many early investigations of AUC in clinical practice were predominantly retrospective in nature (36–38). These papers demonstrated the feasibility of applying AUC to clinical environments. Later studies established that AUC identified low-value care when ratings were linked to test results, downstream patient management, and clinical outcomes (39–41). The evaluation of invasive procedures with regard to appropriate use has also been reported (42,43). Using AUC data, temporal changes in the performance of tests and procedures may be tracked (44–46). Shifts in procedural volume have been described and appear to be correlated with the publication of AUC; however, causality is difficult to demonstrate (47,48).
As an example of implementation within the RE-AIM framework, data on the efficacy of AUC can be used to inform and refine the AUC process itself. Over time, the AUC have become more inclusive, now providing guidance for the vast majority of clinical scenarios. To facilitate reproducibility, AUC writing groups have been encouraged to develop algorithms, flowcharts, and simplified charts to summarize indications and appropriate use classifications. As the AUC has evolved, it has been demonstrated that only a small proportion of real-world clinical scenarios cannot be adequately rated using the AUC (19).
Using AUC in clinical practice for decision making is supported by reliable and reproducible data illustrating that AUCs identify low-value care. For example, clinical decisions may be informed by AUC when clinical registries are used to gather data about AUC at the point of care. A recent analysis of National Cardiovascular Data Registry data from the Cath/PCI Registry demonstrated that since the publication of AUC, a significant decrease in nonacute PCIs has been noted, with procedures deemed “Inappropriate” being reduced by about 50% (45). Other methods to implement AUC include education, audit and feedback, and decision support tools. Education has been applied with, at best, mixed effects. Although didactic presentations alone appear insufficient to improve appropriate use, substantial improvement may be achieved when education is combined with audit and direct feedback for providers (49). Decision support tools appear to offer the most promise for reducing the provision of rarely appropriate services. These products assist ordering clinicians at the point of care and, ideally, are integrated within the electronic health record. Examples include best practice order sets, ordering menus based on clinical indication rather than test/service, and graphical or text-based menus/flowcharts.
10 Conclusions
The process of AUC development and implementation continues to evolve on the basis of professional, societal, and regulatory needs. The ACC and its AUC Task Force have continued to respond to concerns and queries from the clinician and payer community so as to produce a fair, evidence-based, and practical means of guiding procedural utilization.
From the initial methodology derived from appropriateness documents involving the RAND/UCLA construct with a modified Delphi approach, the ACC has incorporated feedback from the community of medical professionals and from commercial and federal payers to improve the document development methodology. These modifications have continued to strengthen the clinical relevance of these documents, reflecting the needs of contemporary practice patterns and the developing evidence base within cardiovascular medicine. The focus of the AUC is to encourage optimal patient care via professional stewardship of technology utilization within cardiovascular medicine. The effort aims to join with all cardiovascular practitioners and stakeholders in fostering high-quality care through optimal clinical decision making, and to work toward patterns of care that promote appropriate utilization while minimizing use that lacks sufficient value.
The AUC process and documents have been welcomed by many in the primary care and cardiovascular community, including physicians, patients, and policymakers, and have been successfully incorporated into processes of clinical care, including education, accreditation, and quality improvement programs. Efforts to promote AUC implementation continue, and it is becoming clear that providers can improve their performance provided they receive clear guidance and feedback regarding their individual practice patterns. The AUCs are now having an impact on the performance of tests and procedures in specific patient populations by providing a mechanism to substantially reduce waste from unnecessary tests and procedures.
AUCs are intended as guiding documents; the final decision to proceed with testing or a procedure remains at the bedside, where decisions cannot be universally policy-based and must instead be made in the context of a discussion about treatment options and patient goals.
The refinements to AUC methodology presented in this document reflect the ACC’s continued commitment to adapting and responding to the ever-evolving needs of cardiovascular practice. Over time, the College and AUC Task Force will continue to focus on reflecting the evolution of contemporary practice patterns and accruing scientific evidence, while remaining steadfast in their aim of ensuring patient-centered, professional stewardship of technology application within cardiovascular medicine.
Appendix Author Relationships With Industry and Other Entities (Relevant)—ACC Appropriate Use Criteria Methodology: 2018 Update
∗ Member’s term on the Appropriate Use Criteria Task Force ended during this writing effort.
This document was approved by the American College of Cardiology Clinical Policy Approval Committee in January 2018.
The American College of Cardiology requests that this document be cited as follows: Hendel RC, Lindsay BD, Allen JM, Brindis RG, Patel MR, White L, Winchester DE, Wolk MJ. ACC Appropriate Use Criteria methodology: 2018 update. J Am Coll Cardiol 2018;71:935–48.
Authors’ listing of relevant relationships with industry is disclosed in Appendix 1 of this document.
Copies: This document is available on the World Wide Web site of the American College of Cardiology (www.acc.org). For copies of this document, please contact Elsevier Inc. Reprint Department via fax (212-633-3820) or e-mail ( ).
Permissions: Multiple copies, modification, alteration, enhancement, and/or distribution of this document are not permitted without the express permission of the American College of Cardiology. Requests may be completed online via the Elsevier site (http://www.elsevier.com/about/policies/author-agreement/obtaining-permission).
References
- Wolk M.J., Peterson E., Brindis R., et al.
- Patel M.R., Spertus J.A., Brindis R.G., et al.
- Brindis R.G., Douglas P.S., Hendel R.C., et al.
- Patel M.R., Wolk M.J., Allen J.M., et al.
- Hendel R.C., Patel M.R., Allen J.M., et al.
- Fitch K, Bernstein SJ, Aguilar MD, et al. The RAND/UCLA Appropriateness Method User's Manual. Santa Monica, CA: RAND Corporation, 2001. Available at: http://www.rand.org/pubs/monograph_reports/MR1269.html. Accessed February 1, 2017.
- Hendel R.C., Berman D.S., Di Carli M.F., et al.
- Douglas P.S., Garcia M.J., Haines D.E., et al.
- Taylor A.J., Cerqueira M., Hodgson J.M., et al.
- Patel M.R., Calhoon J.H., Dehmer G.J., et al.
- Patel M.R., Calhoon J.H., Dehmer G.J., et al.
- Campbell R.M., Douglas P.S., Eidem B.W., et al.
- Bonow R.O., Brown A.S., Gillam L.D., et al.
- Doherty J.U., Kort S., Mehran R., et al.
- Feldman D.N., Naidu S.S., Duffy P.L.
- Jones D., Guttmann O., Wright P., et al.
- American College of Cardiology. Relationships with industry and other entities policy. Available at: http://www.acc.org/guidelines/about-guidelines-and-clinical-documents/relationships-with-industry-policy. Accessed December 12, 2016.
- Fleisher L.A., Fleischmann K.E., Auerbach A.D., et al.
- Department of Health and Human Services, Center for Medicare & Medicaid Services. Medicare program; revisions to payment policies under the physician fee schedule and other revisions to Part B for CY 2016; final rule. Available at: https://www.gpo.gov/fdsys/pkg/FR-2015-11-16/pdf/2015-28005.pdf. Accessed February 20, 2017.
- Department of Health and Human Services, Center for Medicare & Medicaid Services. Medicare program; revisions to payment policies under the physician fee schedule and other revisions to Part B for CY 2017; Medicare Advantage bid pricing data release; Medicare Advantage and Part D medical loss ratio data release; Medicare Advantage provider network requirements; expansion of Medicare Diabetes Prevention Program Model; Medicare Shared Savings Program requirements. Available at: https://www.gpo.gov/fdsys/pkg/FR-2016-11-15/pdf/2016-26668.pdf. Accessed February 21, 2017.
- Halperin J.L., Levine G.N., Al-Khatib S.M., et al.
- Centers for Medicare & Medicaid Services. MACRA delivery system reform, Medicare payment reform. Available at: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html. Accessed December 16, 2016.
- Centers for Medicare and Medicaid Services. Appropriate Use Criteria Program: priority clinical areas. Available at: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Appropriate-Use-Criteria-Program/PCA.html. Accessed December 16, 2016.
- Centers for Medicare and Medicaid Services. Health Care Innovation Awards round two: project profile. Available at: https://innovation.cms.gov/initiatives/Participant/Health-Care-Innovation-Awards-Round-Two/American-College-Of-Cardiology-Foundation.html. Accessed December 16, 2016.
- SMARTCare: Implementation of ACC State Chapter initiatives supported by a major grant from the Center for Medicare and Medicaid Innovation. Available at: http://www.acc.org/about-acc/leadership/features/bog/2016/04/0415. Accessed December 16, 2016.
- Anderson J.L., Heidenreich P.A., Barnett P.G., et al.
- Kline K.P., Plumb J., Nguyen L., et al.
- American College of Cardiology. Appropriate use criteria. Available at: https://cvquality.acc.org/NCDR-Home/about-ncdr/benefits-of-participating. Accessed March 7, 2017.
- Intersocietal Accreditation Commission. IAC Standards and Guidelines for Nuclear/PET Accreditation. Available at: http://www.intersocietal.org/nuclear/seeking/nuclear_standards.htm. Accessed February 21, 2017.
- Gibbons R.J.
- Hendel R.C., Cerqueira M., Douglas P.S., et al.
- Cortigiani L., Bigi R., Bovenzi F., et al.
- Winchester D.E., Chauffe R.J., Meral R., et al.
- Bradley S.M., Bohn C.M., Malenka D.J., et al.
- Ko D.T., Guo H., Wijeysundera H.C., et al.
- Ladapo J.A., Blecker S., O'Donnell M., et al.
- Fonseca R., Negishi K., Otahal P., et al.
- Elgendy I.Y., Mahmoud A., Shuster J.J., et al.
- Desai N.R., Parzynski C.S., Krumholz H.M., et al.
- Arbel Y., Qiu F., Bennell M.C., et al.
- Chaudhuri D., Montgomery A., Gulenchyn K., et al.