Fitting a square peg in a round hole: Developing a quality assessment tool for non-randomized studies in public health

Authors
Thomas H, Micucci S, O'Brien M
Abstract
Background: In the field of public health, where randomized controlled trials (RCTs) may be difficult to implement, inappropriate, unethical or of uncertain advantage, the effectiveness of interventions is often measured by other designs. Some evidence suggests that non-randomized studies (NRS) over-estimate intervention effectiveness. Other evidence suggests that it is the quality of the studies, not the design per se, that accounts for the difference in effect sizes. The challenge of assessing effectiveness in public health interventions is compounded by the variation in interventions directed at a particular outcome (e.g. smoking cessation includes individual, group and community-wide strategies as well as policy and legislative advocacy). In addition, multiple outcomes, often including "proxy" measures, are frequently used in public health intervention evaluation. Given the state of the research to date, accurately calculating effect sizes and performing meta-analyses is difficult in some situations and conceptually unsound in others. Given that NRS are sometimes the only designs available to public health, the objectives of this study are to establish the difference in treatment effects attributable to study design, and to develop an appropriate quality assessment tool for all studies measuring public health interventions.

Methods: Drawing on the literature and the tools developed by two groups that have conducted over 20 reviews related to public health practice, a list of relevant quality assessment components will be devised. Using the data from the studies included in those reviews, the quality assessment components that influence effect estimates will be identified.

Results: An empirically based quality assessment tool for rating NRS in systematic reviews in public health will be made available to others for further testing.

Conclusions: Systematic reviews of the effectiveness of public health interventions are sorely needed to guide policy-makers and program decision-makers. When RCTs are not available, it is imperative to use the available empirical evidence to assist in making these decisions. This tool will provide a standardized method for assessing the relevant studies.