Empirical evaluation of a methodology for determining when a comparative drug effectiveness review has become out-of-date

Authors
Peterson K, Chan B, McDonagh M
Abstract
Background: An important concern for producers of systematic reviews is the lack of a gold standard for determining when a systematic review has become out-of-date. Recently, the University of Ottawa Evidence-based Practice Center (EPC) evaluated characteristics of signals for updating based on a retrospectively assembled cohort of 100 quantitative systematic reviews. However, little other research has empirically evaluated data collected prospectively during real-life implementation of a surveillance methodology for determining when a systematic review has become out-of-date.

Objectives: To provide empirical data addressing the following questions about time to outdatedness of comparative drug effectiveness reviews and the factors affecting survival times:
1. How do annual literature growth rate parameters differ across various clinical pharmacotherapy areas?
2. How do different clinical areas compare in ‘mean time to outdatedness’?
3. How do various triggers for updating compare in their impact on ‘mean time to outdatedness’?
4. What is the average annual screening burden for new citations per review when a standardized evidence surveillance methodology is used?
5. How often do full updates lead to significant modifications of previous conclusions?

Methods: By September 2008, the Oregon EPC will have completed evidence surveillance for 44 comparative drug effectiveness reviews for the Drug Effectiveness Review Project (DERP). DERP participating organizations will have reviewed all new evidence findings and determined which reviews have become out-of-date; these reviews will then undergo full updates. Data being collected include the characteristics of the clinical pharmacotherapy area, context, the nature of new evidence, and timing. Depending on the data type, multiple statistical approaches will be used to analyze judgments of outdatedness.

Results: Our surveillance methodology consists of (1) electronic literature searches for randomized controlled trials published since the previous searches, and (2) identification of newly approved drugs, new indications, and new safety alerts. Complete details of our evidence surveillance methodology and key findings from our analyses will be presented at the Colloquium.

Conclusions: There is a need for more research on updating systematic reviews. Our findings will help address existing knowledge gaps in the field of updating systematic reviews.
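
The first surveillance component described in the Results (date-limited searches for newly published randomized controlled trials) can be approximated with NCBI's public E-utilities. The Python sketch below is an illustration only, not the DERP search strategy: the topic terms, date window, result limit, and the choice of the Entrez record-added date field are all assumptions.

# Illustrative only: count new RCT citations indexed in PubMed since the
# previous search, via the NCBI E-utilities esearch endpoint.
import json
import urllib.parse
import urllib.request

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def new_rct_citations(topic_terms, last_search_date, current_date):
    """Return (count, PMIDs) for RCTs on a topic added since the last search."""
    term = f"({topic_terms}) AND randomized controlled trial[pt]"
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "datetype": "edat",           # date the record was added to Entrez
        "mindate": last_search_date,  # e.g. "2007/09/01"
        "maxdate": current_date,
        "retmax": 500,
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{ESEARCH_URL}?{params}") as response:
        result = json.load(response)["esearchresult"]
    return int(result["count"]), result["idlist"]

# Placeholder topic and dates, not an actual DERP review or search window.
count, pmids = new_rct_citations("proton pump inhibitors", "2007/09/01", "2008/09/01")
print(f"{count} new RCT citations to screen for this review")

The returned count per review, summed over a year, corresponds to the annual screening burden asked about in objective 4.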
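
The ‘mean time to outdatedness’ questions in the Objectives imply a survival analysis in which reviews not yet judged out-of-date are treated as censored observations. As a rough illustration of that idea, and not the statistical approach planned by the authors, the following sketch computes a Kaplan-Meier product-limit curve; the durations, censoring flags, and choice of estimator are invented for illustration.

# Illustrative only: Kaplan-Meier estimate of the proportion of reviews
# still up-to-date at a given number of months. Values are not DERP data.

def kaplan_meier(durations_months, became_outdated):
    """Return (time, estimated proportion still up-to-date) pairs."""
    # Sort by time; at tied times, process outdatedness events before censorings.
    records = sorted(zip(durations_months, became_outdated),
                     key=lambda r: (r[0], not r[1]))
    at_risk = len(records)
    survival = 1.0
    curve = []
    for time, outdated in records:
        if outdated:                      # review judged out-of-date at this time
            survival *= (at_risk - 1) / at_risk
            curve.append((time, survival))
        at_risk -= 1                      # review leaves the risk set either way
    return curve

# Invented example: months until judged out-of-date; False = still current (censored).
durations = [14, 20, 20, 27, 33, 41, 48]
outdated  = [True, True, False, True, True, False, True]

for months, proportion in kaplan_meier(durations, outdated):
    print(f"{months:>2} months: estimated proportion still up-to-date = {proportion:.2f}")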