Abstract
Background: Randomized controlled trials (RCTs) are usually considered the standard for determining the effectiveness of medical treatments. In some areas of health care, however, most evidence for the effectiveness of clinical or policy interventions rests on studies that are not of randomized design.
Objectives: To examine the inclusion, rationale, quality assessment, and impact of nonrandomized studies in Evidence-based Practice Center (EPC) reports.
Methods: EPC reports are systematic reviews of clinically relevant topics sponsored by the U.S. Government. We identified reports that were released between the first report (1999) and September 2004, examined at least one question of the efficacy or effectiveness of a clinical intervention, and included evidence from study designs other than RCTs.
Results: Of 107 reports, 49 fulfilled the inclusion criteria; of these, 44 included RCTs as well as other study designs. Among the reports including nonrandomized studies, we observed inconsistency in the following areas: 1) study design terminology; 2) the rationale for including nonrandomized studies; 3) quality assessment instruments; and 4) whether and how quality assessment of studies was incorporated into data presentation and conclusions. We noted several reasons why nonrandomized studies were incorporated into EPC reports: 1) difficulty conducting RCTs to address a research question; 2) examination of long-term outcomes; 3) exploration of the applicability of RCT findings; 4) provision of information and outcomes relevant to healthcare consumers; and 5) identification of research gaps to guide further research.
Conclusions: For reviews in which nonrandomized studies are considered for inclusion, we make the following recommendations. Potential sources of bias for the specific review question should be considered, along with whether those biases can be minimized by well-conducted nonrandomized studies. An explicit rationale should be provided for the inclusion or exclusion of specific nonrandomized study designs. The quality of individual studies should be assessed using a validated instrument, and the effect of study quality on the conclusions should be discussed. The potential impact of including various study designs on the conclusions of the review should also be discussed.