This is a chapter from "Methods Guide for Effectiveness and Comparative Effectiveness Reviews."
This report has also been published in edited form: Whitlock EP, Lopez SA, Chang S, et al. AHRQ Series Paper 3: Identifying, selecting, and refining topics for comparative effectiveness systematic reviews: AHRQ and the Effective Health Care Program. J Clin Epidemiol 2010;63:491-501.
Comparative Effectiveness Reviews are systematic reviews of existing research on the effectiveness, comparative effectiveness, and harms of different health care interventions. They provide syntheses of relevant evidence to inform real-world health care decisions for patients, providers, and policymakers. Strong methodologic approaches to systematic review improve the transparency, consistency, and scientific rigor of these reports. Through a collaborative effort within the Effective Health Care (EHC) Program, the Agency for Healthcare Research and Quality (AHRQ), the EHC Program Scientific Resource Center, and the AHRQ Evidence-based Practice Centers have developed a Methods Guide for Comparative Effectiveness Reviews. This Guide presents issues key to the development of Comparative Effectiveness Reviews and describes recommended approaches for addressing difficult, frequently encountered methodological issues.
The Methods Guide for Comparative Effectiveness Reviews is a living document and will be updated as further empirical evidence develops and our understanding of better methods improves. Comments and suggestions on the Methods Guide for Comparative Effectiveness Reviews and the Effective Health Care Program can be made at https://effectivehealthcare.ahrq.gov/.
None of the authors has a financial interest in any of the products discussed in this document.
Suggested citation: Whitlock EP, Lopez SA, Chang S, et al. Identifying, selecting, and refining topics. In: Agency for Healthcare Research and Quality. Methods Guide for Comparative Effectiveness Reviews [posted April 2009]. Rockville, MD. Available at:
https://effectivehealthcare.ahrq.gov/products/cer-methods-guide/overview/.
Authors
Evelyn P. Whitlock, M.D., M.P.H.a,b
Sarah A. Lopez, B.A.c
Stephanie Chang, M.D., M.P.H.d
Mark Helfand, M.D., M.S., M.P.H.a,e,f
aOregon Evidence-based Practice Center, Portland, OR
bKaiser Permanente Center for Health Research, Portland, OR
cOregon Health & Science University, Portland, OR
dAgency for Healthcare Research and Quality, Rockville, MD
eDepartment of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR
fHospital and Specialty Medicine, Veterans Affairs Medical Center, Portland, OR
The findings and conclusions in this document are those of the authors, who are responsible for its contents; the findings and conclusions do not necessarily represent the views of AHRQ or the Veterans Health Administration. Therefore, no statement in this report should be construed as an official position of these entities, the U.S. Department of Health and Human Services, or the U.S. Department of Veterans Affairs.
Key Points
- AHRQ's Effective Health Care (EHC) Program seeks to:
  - Align its research topic selection with the overall goals of the program.
  - Impartially and consistently apply predefined criteria to potential topics.
  - Involve stakeholders to identify high-priority topics.
  - Be transparent and accountable.
  - Continually evaluate and improve processes.
- A topic prioritization group representing stakeholder and scientific perspectives evaluates topic nominations for:
  - Appropriateness (fit within the EHC Program).
  - Importance.
  - Potential for duplication of existing research.
  - Feasibility (adequate type and volume of research for a new comparative effectiveness systematic review).
  - Potential value and impact of a comparative effectiveness systematic review.
- As the EHC Program develops, ongoing challenges include:
  - Ensuring the program addresses truly unmet needs for synthesized research, since national and international efforts in this arena are uncoordinated.
  - Engaging a range of stakeholders in program decisions while also achieving efficiency and timeliness.
Introduction
Globally, people are struggling with the reality of limited resources to address the breadth of health and health care needs. Evidence has been recognized as the "new anchor for medical decisions,"1 and many consider systematic reviews to be the best source of information for making clinical and health policy decisions.2 These research products rigorously summarize existing research studies so that health and health care decisions by practitioners, policymakers, and patients are more evidence based. Yet, dollars for research—whether for systematic reviews, trials, or observational studies—are constrained, and are likely to be constrained in the future. Effective prioritization is clearly necessary in order to identify the most important topics for synthesized research investment that may help the U.S. health care system realize powerful and meaningful improvements in health status.
This paper discusses the identification, selection, and refinement of topics for comparative effectiveness systematic reviews within the Effective Health Care (EHC) Program of the Agency for Healthcare Research and Quality (AHRQ), which has been described in more detail elsewhere.3 In 2003, the U.S. Congress authorized AHRQ's Effective Health Care Program to conduct and support research on the outcomes, comparative clinical effectiveness, and appropriateness of pharmaceuticals, devices, and health care services. This program utilizes the AHRQ Evidence-based Practice Center (EPC) Program, with 14 designated centers throughout North America that conduct comparative effectiveness systematic reviews, among other research products of the program. AHRQ has designated a Scientific Resource Center (SRC), currently housed at the Oregon EPC, to support the EHC Program as a whole. The SRC has specific responsibilities, including assisting AHRQ with all aspects of research topic development (Figure 1), providing scientific and technical support for systematic reviews and outcomes research, and collaborating with EHC stakeholder and program partners.
It is not a simple process to select and develop good topics for research. Researchers' success depends in large part on their ability to identify meaningful questions, while funding agencies continually seek to maximize the return on their investment by funding research on important, answerable questions relevant to significant portions of priority populations. Some have criticized how well funders have actually achieved these results.4 However, there is little guidance for successfully developing a research program that generates the type of evidence necessary to improve the public's health.
Guiding Principles for Identifying and Selecting Topics
In order to derive guiding principles for selecting important comparative effectiveness systematic review topics, we considered what others have done when trying to select priority topics for any health-care-related activity. Over the last 18 years, the Institute of Medicine (IOM) and selected others have explored priority-setting models and approaches.5-10 Across a diverse set of international health- and health-care-related activities—including the development of guidelines by professional societies; clinical service and quality improvement priorities within health care organizations; and national health service guidance for health technologies, clinical practice, and public health—experts have tried to define clear-cut processes and criteria.9,11-13 Although the majority of this existing work has not focused on specific priority setting for comparative effectiveness systematic reviews, the lessons learned from these efforts are relevant. These experts have found there is no obviously superior approach to setting priorities and little objective analysis to compare the relative strengths and shortcomings of various approaches.10,14
However, across these activities, the EHC Program has found five consistent themes for selecting the highest priority topics (Table 1). The first of these is to clearly identify the overall goals/strategic purpose of the activity in order to align the goals for priority setting with the strategic purpose of the sponsoring program. In the instance of the EHC Program, since no single entity can undertake activities to address all health or health care research needs, priority-setting decisions must flow from the overall mission and strategic purposes of the program.
The second principle is to clearly define and apply criteria for prioritization among potential program activities. Although a relatively consistent set of criteria has been utilized across health-related priority-setting activities in the United States, United Kingdom, and Canada (Table 2), specific criteria will vary with the overall goals and the purpose of any given activity. For example, to determine the national and regional estimates of health care utilization and expenditures, the Medical Expenditure Panel Survey (MEPS) prioritized data collected by considering the prevalence of medical conditions and also how accurately households could report on data related to these.9 Similarly, to identify priority conditions for quality improvement research, the Veterans Health Administration's Quality Enhancement Research Initiative (QUERI) focused on prevalent diseases, but further prioritized prevalent diseases with evidence for both best practices and practice variation that could be improved to enhance quality.9 Thus, for comparative effectiveness systematic review prioritization, additional criteria promulgated by the National Institute for Health and Clinical Excellence (NICE) have been considered when selecting topics for evidence-based guidance. These criteria have pointed out the importance of taking into account whether proposed topics are subject to influence by the program.13 Additional NICE criteria consider whether new evidence-based products could be produced in a timely manner and the risk of inappropriate treatment in the absence of evidence-based guidance.13 This could also be considered as the opportunity cost associated with inaction.5,13 The process of decisionmaking in health-related priority-setting activities is complex, is context dependent, and involves social processes; therefore, priority-setting processes should be guided by ethical principles, including careful attention to conflicts of interest.14 A good priority-setting process that is fair and publicly accountable within a system that is capable of scrutiny, feedback, evaluation, and improvement is viewed as the best approach to gaining desirable outcomes.14
The third principle for priority setting addresses the need to involve stakeholders in the identification and/or prioritization process. Engaging stakeholders as key informants provides credibility and avoids prioritizing topics that have no relevance to real-world issues. Organizations engaged in health-care-related priority setting indicate that stakeholders must be made familiar with and understand the criteria by which topics will be prioritized.11 A recent report from the IOM on identifying highly effective evidence-based clinical services calls attention to the fact that different audiences have different needs from systematic reviews.10 Health care payers may be most interested in the comparative effectiveness of a treatment or intervention. Regulatory agencies may be interested in questions of safety and effectiveness. Clinicians and patients may be particularly interested in the applicability of research to their specific populations. The priorities for research topics and the questions these topics should answer clearly vary by audience.
Fourth is the need for transparency. Because priority setting is actually an allocation of limited resources among many desirable but competing programs or people,15 it is highly political and can be controversial. Some have asserted that priority setting in health care represents one of the most significant international health care policy questions of the 21st Century.14 Battista and Hodge state that documentation of the process leading to a particular topic being selected (e.g., for a clinical practice guideline) should be explicit and made available to stakeholders.5 The documentation should include the rationale that relates specific priority-setting decisions to priority-setting criteria, the evidence used when making these decisions, and any programmatic constraints that had a bearing on the process.11 Transparency requires not only that documentation be kept, but also that program decisions and their rationales be actively communicated to stakeholders.
Fifth is the need for any prioritization approach to undertake process evaluation and improvement measures. Since priority setting at present is inherently a subjective process based on ideals (e.g., fairness) and decisions are made by considering clusters of factors rather than simple trade-offs,14 there is a great need for ongoing process evaluation and improvement. As Battista and Hodge point out, process documentation forms the basis for process evaluation and improvement.5
These general themes provide a good framework for selecting topics for comparative effectiveness systematic reviews. However, more specific additional criteria for clinical and comparative effectiveness research were recently articulated in a 2008 IOM report.10 This report calls on us to consider how well potential comparative effectiveness research topics reflect the clinical questions of patients and clinicians and whether selected topics truly represent a potentially large impact on the clinical or other outcomes that matter most to patients. The IOM also emphasizes that topics for comparative effectiveness systematic reviews should be identified and prioritized using a system that aims to be "open, transparent, efficient, and timely," with sufficient input from key end users.10
Processes for Identifying and Selecting Systematic Reviews
As illustrated in Figure 2, the current EHC Program processes are designed to allow the consistent, broadly focused development of a portfolio of relevant comparative effectiveness systematic reviews. These processes emphasize engaging stakeholders, particularly during topic identification but also throughout research development and dissemination within the EHC Program. This focus on stakeholders is more intense now than it was in the initial years of the EHC Program. New and existing publicity avenues are being used to encourage nominations and engage in discussions with internal and external stakeholders interested in health care decisionmaking.
Although the EHC Program's initial mechanisms for topic identification included all of those recently cited by the IOM10—such as an open, ongoing process for public engagement; topic solicitations; internal processes (e.g., engaging Federal agencies, such as the Centers for Medicare & Medicaid Services); and mandates—these approaches did not always produce products that met the needs of stakeholders. Nominations were often received through the Web site, but some of these nominations were insufficiently documented for consideration by the program. In addition, initial approaches did not always identify important topics that had not previously been systematically reviewed. Even when new, important systematic review topics were identified through topic nominations, these were not always developed into concise topics ideally suited for decisionmakers.
Thus, the EHC Program is currently implementing a revised system that has two important changes. First, the initial topic identification process involves more direct, focused conversations with stakeholders that represent the broad-based constituencies of the Program (Table 3). Stakeholders continue to be involved in other aspects of the program also, as described below. This direct interaction helps the EHC Program to better identify the populations, interventions, comparators, outcomes, timing, and settings of interest to the stakeholder, and to understand the current practice or health policy context underlying the need for synthesized research. A similar approach has been successfully undertaken by others.16 Second, more explicit attempts are being made to reduce potential duplication through consulting experts and the literature to ensure that nominated topics have not already been adequately systematically reviewed. Unlike the case of primary research, where replication of existing research can be desirable, conducting duplicate systematic reviews is not clearly advantageous when existing reviews are current and of high quality.
All fully articulated nominations are supported by issue briefs that provide data and contextual details addressing the EHC Program prioritization criteria (Table 4). Topic briefs are circulated before and presented during monthly or more frequent meetings of a topic prioritization group that represents stakeholder perspectives, scientific perspectives, and the programmatic authority vested in AHRQ. The topic prioritization group first considers objective information on the appropriateness of a topic and its fit within the mandate and priority conditions of the EHC Program. The priority conditions (Table 5) were determined through an open and transparent process and approved by the Secretary of Health and Human Services. The topic is then evaluated for its importance to the U.S. population and health care system. The available research basis on which a topic would build, including consideration of research activities already undertaken or underway by others, frames considerations of both the feasibility and desirability of a new systematic review for a nominated topic. Based on these objective data, the topic prioritization group engages in the more subjective discussions of the potential and relative value of commissioning a new systematic review for nominated topics. The group can request that final decisions regarding a topic nomination be deferred until further investigation is completed. Such investigations may involve outreach to nominators or other stakeholders, or further background research to determine answers to questions raised during presentation of the topic brief. At the end of the final topic prioritization discussion, the topic prioritization group can recommend that topics be sent for further refinement as a comparative effectiveness systematic review, be eliminated as outside the purview of the program, or be tabled due to other factors that affect their immediate priority. These recommendations are not binding, but are highly weighted in AHRQ's final decision as to which research topics are selected for comparative effectiveness systematic reviews.
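To make the hierarchical ordering of these criteria concrete, the following minimal sketch (in Python) walks a hypothetical topic brief through the same sequence of questions. The field names, the triage function, and the returned recommendations are illustrative assumptions, not the EHC Program's actual instruments or decision rules.

```python
from dataclasses import dataclass


@dataclass
class TopicBrief:
    """Hypothetical summary of the objective information in a topic brief."""
    fits_program_mandate: bool          # appropriateness: within the EHC mandate and priority conditions
    important_to_us_health: bool        # importance to the U.S. population and health care system
    existing_high_quality_review: bool  # duplication: already covered by a current, high-quality review
    sufficient_primary_research: bool   # feasibility: adequate type and volume of research for a new review


def triage_topic(brief: TopicBrief) -> str:
    """Apply the prioritization criteria in the same order for every topic (illustrative only).

    The real process ends with a more subjective discussion of potential value,
    and final selection decisions rest with AHRQ.
    """
    if not brief.fits_program_mandate:
        return "eliminate: outside the purview of the program"
    if not brief.important_to_us_health:
        return "table: limited importance at this time"
    if brief.existing_high_quality_review:
        return "table: would duplicate an existing systematic review"
    if not brief.sufficient_primary_research:
        return "table: insufficient research base for a new review"
    return "discuss potential value: candidate for refinement as a new review"


# Example: a nominated topic with no current high-quality review and an adequate research base.
print(triage_topic(TopicBrief(True, True, False, True)))
```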
Principles and Processes for Refining Selected Topics
Once topics are selected for comparative effectiveness systematic review, they are further focused into research questions. This process is designed to ensure that the research review results in a product that meets the needs of stakeholders. Key questions should reflect the uncertainty that decisionmakers, patients, clinicians, and others may have about the topic. Key questions guide the entire systematic review process, from the formulation of comprehensive search strategies and the selection of admissible evidence to the types of data abstracted, synthesized, and reported in the final effectiveness report. Developing clear, unambiguous, and precise key questions is an early and essential step in the development of a meaningful and relevant systematic review.
For a fully formulated comparative effectiveness systematic review topic, key questions in their final form concretely specify the patient populations, interventions, comparators, outcome measures of interest, timing, and settings (PICOTS) to be addressed in the review.17 Although the elements of the PICOTS construct are outlined in a general form at the topic identification phase, further focus and refinement of these parameters are generally required for a clear and transparent systematic review process (Tables 6 and 7). The processes to fully develop key questions are designed to carry forward the overall principles of the EHC Program of being relevant and timely, objective and scientifically rigorous, and transparent, with public participation.3
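As a purely illustrative aid, the sketch below (in Python) represents a fully formulated key question as a structured PICOTS record; the class and field names are hypothetical and are not drawn from the EHC Program's documentation or tooling. The example values loosely echo the diabetes scenario used later in Table 7.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class KeyQuestionPICOTS:
    """Hypothetical structured record of the PICOTS elements of one key question."""
    population: str           # condition, disease severity/stage, comorbidities, demographics
    interventions: List[str]  # interventions of interest (dosage, frequency, administration)
    comparators: List[str]    # placebo, usual care, or active controls
    outcomes: List[str]       # health outcomes that matter to patients
    timing: str               # duration of followup
    setting: str              # primary, specialty, inpatient; co-interventions
    context: str = ""         # policy or practice context framing the question

    def is_fully_specified(self) -> bool:
        """True only when every PICOTS element has been filled in."""
        return all([self.population, self.interventions, self.comparators,
                    self.outcomes, self.timing, self.setting])


kq = KeyQuestionPICOTS(
    population="adults with type 2 diabetes",
    interventions=["newer antidiabetic drug classes"],
    comparators=["sulfonylureas"],
    outcomes=["macrovascular complications", "weight change", "quality of life"],
    timing="long-term followup",
    setting="primary care",
)
print(kq.is_fully_specified())  # True
```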
The EHC Program's current approach to key question development is largely based on past experiences from AHRQ's Evidence-based Practice Center (EPC) Program and from other experts in systematic review. Since the inception of the EPC Program in 1997, AHRQ has emphasized the importance of input from key stakeholder informants, technical experts, and patients to elucidate the important concerns and clinical logic or reasoning underlying potential questions for systematic reviews.18 A perfunctory set of questions or an incomplete problem formulation that outlines the general comparisons but does not specify the circumstances that are of most interest to decisionmakers clearly reduces the usability of the resulting review.17-21 Formulating questions that address dilemmas in real-world situations, coupled with an understanding of the context around these dilemmas, prevents the production of irrelevant systematic reviews that can result from key questions that focus only on interests pertinent to researchers without much (if any) public input.2
The EHC Program has extended the original EPC concept of involving key stakeholder informants by developing additional mechanisms for public input. Key informants representing key stakeholder groups may be consulted as part of the topic selection process or, once selected, as part of the topic refinement process. The EHC Program also convenes a group of key stakeholder informants (including patients) and technical experts to provide additional input to the EPC in finalizing key questions for the research review. The SRC, AHRQ, and the EPC conducting the research review work together with this group to refine the key questions for a given topic. Obtaining input from stakeholders on patients' preferences is essential to identifying pertinent clinical concerns that even expert health professionals may overlook.22
Incorporating a broad range of perspectives contributes to the objectivity and scientific rigor of a review by assisting EPC researchers in understanding the health care context, as well as clarifying the parameters of greatest interest when planning the research review (Table 6). These parameters are the basis for formulating good key questions and include focused determination of the most relevant populations, interventions, comparators, outcomes, timing, and setting (PICOTS). In focusing on outcomes that matter most to patients, key questions need to identify the overarching, long-range goals of interventions. It is insufficient for key questions to focus only on what is assumed to be true or what is presently studied in the literature; they must include the populations, comparisons, and outcomes that are important to patients, providers, and policymakers using health information in their decisionmaking.
Furthermore, beliefs about the advantages or disadvantages of various alternative treatments are an important target for exploration. Many beliefs about the advantages and disadvantages of a treatment are based on direct evidence about health outcomes from long-term comparative trials. However, some beliefs about comparative effectiveness are based on clinical theories that invoke understanding of the pathophysiology of a disease, assumptions about its course, or expectations about the health benefits associated with improvements in a surrogate measure of outcome. Often, experts and stakeholders can bring attention to the issues that underlie uncertainty about the comparative effectiveness of alternative tests or therapies.
Stakeholders and other technical experts also provide important insight to direct the search for evidence that is most relevant to current practice. First, they can clarify specific populations/subpopulations or interventions of greatest clinical or policy interest. Second, interviewing those with knowledge of current clinical practices can identify areas in which studies differ in ways that may reduce their applicability.
Consistent with the principle of transparency and public participation, the EHC Program solicits public comments on proposed key questions before finalizing the scope of a new systematic review. These public comments are reviewed by AHRQ, the SRC, and the EPC, and all parties agree on changes to be made to the existing key questions to reflect this public input. Final key questions that reflect public input, as well as key stakeholder and expert input, are posted on the AHRQ EHC Web site after a review begins.
Through the processes outlined for topic identification, selection, and refinement, the EHC Program attempts to develop a considerable number of important topics for comparative effectiveness systematic reviews consistent with the principles outlined above. Each topic must have appropriately focused key questions to adequately frame the systematic review while also faithfully incorporating public feedback and perspectives. The EHC processes have been developed to reduce the amount of bias that individual investigators working in isolation could potentially introduce into a topic for systematic review. However, given the complexities of the process, those involved must keep foremost in their minds the overall goal for EHC topic development: producing critically important research that improves health and health care decisionmaking by all audiences and, ultimately, the health of the public.
Challenges
Because of issues of timeliness and cost, the EHC Program cannot engage all types of stakeholders at each step for every topic. Therefore, one of the main challenges the Program faces as it moves forward is to ensure that the most important perspectives are engaged. The goal is to continue to develop a system that fairly represents the range of interests of all stakeholders across all aspects of the program (Figure 2), yet results in timely and clear reports that are useful to decisionmakers and other audiences. The process for topic identification and refinement is complicated by the large range of potential stakeholder perspectives for any given topic, by the wide-reaching clinical breadth of potential topics for the EHC Program, and by very short timeframes that are inherent in a program seeking to be publicly responsive and accountable. This tension between maintaining the relevance and rigor of research while being responsive to questions in a timely manner is an ongoing challenge.
A related challenge is gaining sufficient detail from nominators and stakeholders to allow topics to be adequately defined in order to be prioritized. The Web-based nomination system (https://effectivehealthcare.ahrq.gov/) was revised recently, including definition of a minimum set of information that is necessary to understand a topic nomination sufficiently to develop it for explicit prioritization activities. This minimum set of information includes the populations, interventions, comparators, and outcomes of interest to the nominator, as well as the policy and/or clinical context. If any of these components is not clear in the nomination, the Program must have the ability to contact the nominator for more information. Since many Web-based nominations occur anonymously and since resource constraints prevent AHRQ from contacting every nominator to clarify all unclear topics, some good nominations may be missed simply because they are unclear.
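A minimal sketch of such a completeness check is shown below (in Python), assuming hypothetical field names for the nomination form; it simply flags which of the minimum elements are missing so that any follow-up with the nominator could be targeted.

```python
# Illustrative check that a Web-based nomination carries the minimum information described
# above; the field names are assumptions, not the actual nomination form fields.
REQUIRED_FIELDS = ["populations", "interventions", "comparators", "outcomes", "context"]


def missing_fields(nomination: dict) -> list:
    """Return the required elements that a nomination leaves blank or omits."""
    return [f for f in REQUIRED_FIELDS if not str(nomination.get(f, "")).strip()]


nomination = {
    "populations": "adults with chronic low back pain",
    "interventions": "spinal manipulation",
    "comparators": "",
    "outcomes": "pain, function",
    "context": "",
}
print(missing_fields(nomination))  # ['comparators', 'context'] -> contact the nominator if possible
```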
Another challenging area is the relatively subjective nature of decisionmaking around topic prioritization and the sometimes highly political ramifications of these decisions. When one ventures into the realm of relative value or worth, considerations become less objective and more subject to bias. To address this challenge, the EHC Program has structured the topic prioritization process so that the same program criteria are considered for every potential topic in the same hierarchical order. Objective evidence is considered and used as a basis for the more subjective aspects of the prioritization process. However, only process evaluation will allow determination of whether this approach helps in fairly selecting topics for research among viable and valuable candidates. Further experience in making this process and its results more transparent will undoubtedly raise unforeseen challenges as AHRQ seeks to balance the range of perspectives that are likely to be expressed, and to do so while minimizing conflicts of interest.
Prioritization of research is a necessity from both a practical and a societal standpoint. There must be a commitment to target scarce research dollars and efforts to those areas where the impact will be greatest and where there is a gap in needed research. Given the high level of interest in evidence-based policy and practice and the volume of uncoordinated effort internationally, the EHC Program is working to more closely track the systematic review and policy-related activities of other programs, Federal agencies, and researchers. Enhanced coordination with others involved in setting topic priorities or in conducting analogous research is intended to reduce the opportunities for duplication. Such efforts would be greatly assisted by international registries of planned, in-process, and completed comparative effectiveness and other systematic reviews.
Setting research priorities is still not a precise science. However, attempting to standardize and evaluate a structured process of setting research priorities for comparative effectiveness systematic reviews will further the goal of linking research to the actual needs of health care decisionmakers. It is necessary to find innovative and effective ways to increase the participation of health care decisionmakers in priority setting and the research process in order to bring a real-world perspective and findings that are increasingly relevant to the needs of decisionmakers.
References
1 Eddy DM. Evidence-based medicine: a unified approach. Health Aff (Millwood) 2005;24:9-17.
2 Laupacis A, Straus S. Systematic reviews: time to address clinical and policy relevance as well as methodological rigor. Ann Intern Med 2007;147:273-74.
3 Slutsky J, Atkins D, Chang S, et al. Comparing medical interventions: AHRQ and the effective health-care program. J Clin Epidemiol 2008 Sep 30. [Epub ahead of print]
4 Gross CP, Anderson GF, Powe NR. The relation between funding by the National Institutes of Health and the burden of disease. N Engl J Med 1999;340:1881-87.
5 Battista RN, Hodge MJ. Setting priorities and selecting topics for clinical practice guidelines. CMAJ 1995;153:1233-37.
6 Institute of Medicine. National priorities for the assessment of clinical conditions and medical technologies: report of a pilot study. Washington: The National Academy Press; 1990.
7 Institute of Medicine. Setting priorities for health technology assessment: a model process. Washington: The National Academy Press; 1992.
8 Institute of Medicine. Setting priorities for clinical practice guidelines. Washington: The National Academy Press; 1995.
9 Institute of Medicine. Priority areas for national action: transforming health care quality. Washington: The National Academy Press; 2003.
10 Institute of Medicine. Knowing what works in health care: a roadmap for the nation. Washington: The National Academies Press; 2008.
11 Gibson JL, Martin DK, Singer PA. Setting priorities in health care organizations: criteria, processes, and parameters of success. BMC Health Serv Res 2004;4:25.
12 Oxman AD, Schunemann HJ, Fretheim A. Improving the use of research evidence in guideline development: 2. Priority setting. Health Res Policy Syst 2006;4:14.
13 National Institute for Health and Clinical Excellence. Guide to the topic selection process—interim process manual. London; November 15, 2006.
14 Martin D, Singer P. A strategy to improve priority setting in health care institutions. Health Care Anal 2003;11:59-68.
15 McKneally MF, Dickens BM, Meslin EM, et al. Bioethics for clinicians: 13. Resource allocation. CMAJ 1997;157:163-67.
16 Drug Effectiveness Review Project. Process. Available at: https://www.ohsu.edu/evidence-based-practice-center/derp-reports. Accessed September 4, 2007.
17 Counsell C. Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med 1997;127:380-87.
18 Woolf SH, DiGuiseppi CG, Atkins D, et al. Developing evidence-based clinical practice guidelines: lessons learned by the US Preventive Services Task Force. Annu Rev Public Health 1996;17:511-38.
19 Atkins D, Fink K, Slutsky J. Better information for better health care: the Evidence-based Practice Center program and the Agency for Healthcare Research and Quality. Ann Intern Med 2005;142:1035-41.
20 Bravata DM, McDonald KM, Shojania KG, et al. Challenges in systematic reviews: synthesis of topics related to the delivery, organization, and financing of health care. Ann Intern Med 2005;142:1056-65.
21 Matchar DB, Westermann-Clark EV, McCrory DC, et al. Dissemination of Evidence-based Practice Center reports. Ann Intern Med 2005;142:1120-25.
22 Santaguida PL, Helfand M, Raina P. Challenges in systematic reviews that evaluate drug efficacy or effectiveness. Ann Intern Med 2005;142:1066-72.
Tables
Table 1. Principles for priority setting in health-related programs and their application to comparative effectiveness systematic review topic selection

Principles for priority-setting in health-related programs | Applied principles for comparative effectiveness systematic review topic selection | Guidelines and processes used during comparative effectiveness systematic review topic selection |
---|---|---|
Align priority setting with the overall strategic purpose of the program | | |
Apply clear and consistent criteria for prioritization of potential program activities | | |
Involve stakeholders | | |
Conduct program prioritization activities with adequate transparency to allow public accountability | | |
Engage in ongoing self-evaluation/process improvement | | |

1 108th Congress. Medicare Prescription Drug, Improvement, and Modernization Act of 2003. Public Law 108-173. Section 1013.
2 Institute of Medicine. Knowing what works in health care: a roadmap for the nation. Washington: The National Academies Press; 2008.
3 Martin D, Singer P. A strategy to improve priority setting in health care institutions. Health Care Anal 2003;11:59-68.
Table 2. Criteria commonly used in health-related priority-setting activities

Criterion | Definition |
---|---|
Disease burden | |
Public or provider interest | |
Controversy | |
Variation in care | |
Cost | |
Sufficient evidence | |
New evidence | |
Potential impact | |

1 Institute of Medicine. Knowing what works in health care: a roadmap for the nation. Washington: The National Academies Press; 2008.
2 Battista RN, Hodge MJ. Setting priorities and selecting topics for clinical practice guidelines. CMAJ 1995;153:1233-7.
3 National Institute for Health and Clinical Excellence. Guide to the topic selection process—interim process manual. London; November 15, 2006.
Table 3. Stakeholder categories for the Effective Health Care Program
- Clinicians
- Consumers/patients, including consumer/patient organizations
- Employers and business groups
- Federal and State partners
- Health care industry representatives
- Payers, health plans, policymakers
- Researchers
Table 4. Effective Health Care Program criteria for prioritizing topic nominations

Criterion | Considerations |
---|---|
Appropriateness | Fit within the mandate and priority conditions of the EHC Program |
Importance | Importance to the U.S. population and health care system |
Desirability of new research/duplication | Potential for redundancy (i.e., whether a proposed topic is already covered by an available or soon-to-be available high-quality systematic review by AHRQ or others) |
Feasibility | Effectively utilizes existing research and knowledge, considering the type and volume of research available for a new comparative effectiveness systematic review |
Potential value | Potential for significant health impact; potential risk from inaction; addresses inequities and vulnerable populations (including issues for patient subgroups) |
Table 5. Priority conditions for the Effective Health Care Program
- Arthritis and nontraumatic joint disorders.
- Cancer.
- Cardiovascular disease, including stroke and hypertension.
- Dementia, including Alzheimer's Disease.
- Depression and other mental health disorders.
- Developmental delays, attention-deficit hyperactivity disorder, and autism.
- Diabetes mellitus.
- Functional limitations and disability.
- Infectious diseases, including HIV/AIDS.
- Obesity.
- Peptic ulcer disease and dyspepsia.
- Pregnancy, including preterm birth.
- Pulmonary disease/asthma.
- Substance abuse.
Table 6. PICOTS parameters for both topic nominations and key questions
PICOTS Parameters:1
- Population
  - Condition(s), disease severity and stage, comorbidities, patient demographics.
- Intervention
  - Dosage, frequency, and method of administration.
- Comparator
  - Placebo, usual care, or active control.
- Outcome
  - Health outcomes: morbidity, mortality, quality of life.
- Timing
  - Duration of followup.
- Setting
  - Primary, specialty, inpatient; co-interventions.
Policy or Practice Context
What are the current issues in health policy or clinical practice that define and frame the important questions to be answered?
1 Counsell C. Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med 1997;127:380-7.
Table 7. Issues that technical expert groups address during topic development
1. Focusing research questions for systematic review
- Who are the populations and clinical subgroups of interest?
- Why might clinical variation exist, especially if evidence-based guidelines are readily available?
- What specific patient characteristics may affect outcomes?
- Which interventions should be compared (leading to an understanding of why)?
- What is the potential impact of intervention on patients?
- What are the therapeutic aims of treatment?
- Which outcomes (intended and unintended effects) are relevant, including timing?
2. Clarifying clinical theories and beliefs underlying practice variation
"…[E]very review, just like every intervention, is based on a theory…Systematic reviews gather evidence to assess whether the expected effect of an intervention does indeed occur." (Cochrane Manual)1
Understanding the clinical logic underlying claims about comparative effectiveness is an important goal of topic development. Interviews with technical experts aim to answer questions such as:
- Why do proponents of one or another treatment believe it is better?
- When and for whom?
- What characteristics of the alternative treatments are likely to drive choices?
The following examples illustrate how beliefs are linked to clinical theories:
- Belief
  - Newer antidiabetic drugs are likely to be better for glycemic control of diabetes than are sulfonylureas.
- Theory
  - Sulfonylureas have been disappointing, and their use has not brought about a meaningful reduction in the risk of macrovascular complications. They may, in fact, be implicated in progression of diabetes, and they make it difficult to lose weight. Newer classes of drugs may result in better long-term outcomes because they have a better metabolic profile.
- Context
  - Proponents of the new drugs do not base their claim of superiority on evidence about short-term glycemic control. The belief that the new drug will have an advantage is based on the understanding of how diabetes progresses; how the new drug works; and evidence from short-term efficacy trials about effects on lipid levels, weight gain, and other metabolic markers.
- Belief
  - A new long-acting opioid drug for relief of pain is likely to play an important role in chronic pain treatment.
- Theory
  - Because of tolerance and individual differences in response, chronic pain patients may have more consistent and prolonged symptom relief when several long-acting opioid medications are used in rotation.
- Context
  - The belief that the new drug has an advantage is based on the fact that it has a long half-life, rather than on how the likelihood and degree of pain relief and the frequency and severity of side effects compare with alternatives. The review may want to focus on evidence about how this drug performs as a part of an opioid rotation regimen rather than as the sole or initial treatment for chronic pain.
1 Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions 4.2.6 [updated September 2006]. The Cochrane Library. Chichester, UK: John Wiley & Sons, Ltd; 2006.
Figures
Figure 1. Research topic development within the Effective Health Care Program (not shown).
Figure 2. Effective Health Care Program processes for identifying, selecting, and refining topics for comparative effectiveness systematic reviews (not shown).