Abstract: Study bias is the difference between what a study is actually estimating and what the study is meant (or thought) to be estimating. Of course, what the study is meant to be estimating might not be what is needed, or useful, clinically, but that is another matter. If we knew exactly how large the bias was in a particular study, we could simply subtract it from the biased estimate and thus obtain an unbiased estimate. The problem with bias is therefore not its size, but the amount of uncertainty about its size. All studies are subject to bias. For well designed and executed randomised trials the potential bias is small (unless the trial is being used to estimate an effect in a different population and/or under different conditions). In trials with poorly concealed randomisation, however, there appears to be a tendency for bias in favour of experimental treatments, and the size of the bias varies considerably from trial to trial. In analytical observational studies, such as case-control studies, and in surveys, the scope for bias is considerable, as is presumably the variation from study to study. Further empirical research into study bias is desperately needed. However, we argue that even now systematic reviewers should attempt, as best they can, using 'intelligent guesswork' where necessary, to incorporate potential bias explicitly into their reviews. To ignore it, certainly the easiest option in the absence of data on how study biases are distributed, is in effect to assume that there is no potential for bias. Since the effect of uncertainty about study bias is to reduce, sometimes dramatically, the information conveyed by a study, the results of reviews that ignore bias will be over-precise as well as, obviously, potentially biased. Conclusions that are robust to different assumptions about bias should be more reliable. Including potential bias in reviews will, in most circumstances, also tend to reduce apparent heterogeneity between studies, since some of the apparent heterogeneity will be due to (differing) biases. We will present a worked example based on the recent Breast Screening 'controversy'.
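
As a rough sketch of what 'explicitly incorporating potential bias' might look like (the notation below is ours, for illustration only, and is not necessarily the formulation used in the worked example): suppose $\hat{\theta}_i$ is the estimate from study $i$ with sampling variance $v_i$, and the bias in study $i$ is judged, by intelligent guesswork or by empirical evidence from similar studies, to be approximately $\beta_i \sim N(b_i, \sigma_i^2)$. Then the bias-adjusted estimate and its variance are

\[
  \hat{\theta}_i^{\mathrm{adj}} = \hat{\theta}_i - b_i,
  \qquad
  \operatorname{Var}\!\bigl(\hat{\theta}_i^{\mathrm{adj}}\bigr) = v_i + \sigma_i^2 .
\]

Under this sketch the study's inverse-variance weight falls from $1/v_i$ to $1/(v_i + \sigma_i^2)$, so uncertainty about the bias reduces the information the study contributes even when the best guess of the bias is zero ($b_i = 0$, $\sigma_i^2 > 0$); and because part of the spread between studies is attributed to their differing assumed biases $b_i$, the apparent heterogeneity of the adjusted estimates will usually be smaller.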