Evidence of design-related bias among studies validating clinical prediction rules: a meta-epidemiological study

Authors
Ban J¹
¹University of Oxford, UK and Providence Health and Services, USA
Abstract
Background: Proper validation is needed before the performance of a clinical prediction rule can be trusted. Validating a clinical prediction rule with inadequate methodology may result in biased estimates of its predictive performance.

Objectives: This study aims to examine the association between design deficiencies in validation studies of clinical prediction rules and estimates of predictive performance.

Methods: MEDLINE, EMBASE, the Cochrane Library and the Medion database were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from all validation studies included in these systematic reviews that allowed a diagnostic 2×2 table to be constructed. A meta-analytic approach was used to evaluate the influence of design deficiencies. First, within each meta-analysis, meta-regressions were conducted for selected design features. Then, the natural logarithms of the relative diagnostic odds ratios (RDORs) from these meta-regressions were meta-analyzed to estimate summary RDORs.
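This two-stage procedure lends itself to a compact illustration. Below is a minimal Python sketch of the analysis as described, not the study's actual code: it assumes a 0.5 continuity correction for zero cells, inverse-variance weighted meta-regression of log DOR on a binary design-feature indicator, and DerSimonian-Laird random-effects pooling of the resulting ln(RDOR) estimates; the `meta_analyses` data are hypothetical.

```python
import numpy as np

def log_dor(tp, fp, fn, tn, cc=0.5):
    """Log diagnostic odds ratio and its variance from 2x2 cell counts.
    A continuity correction (assumed 0.5) guards against zero cells."""
    tp, fp, fn, tn = (np.asarray(v, dtype=float) + cc for v in (tp, fp, fn, tn))
    ldor = np.log((tp * tn) / (fp * fn))
    var = 1.0 / tp + 1.0 / fp + 1.0 / fn + 1.0 / tn
    return ldor, var

def meta_regression(ldor, var, feature):
    """Inverse-variance weighted regression of log DOR on a 0/1 design
    feature within one meta-analysis; the slope estimates ln(RDOR)."""
    X = np.column_stack([np.ones_like(ldor), feature])
    W = np.diag(1.0 / var)
    cov = np.linalg.inv(X.T @ W @ X)   # requires both feature levels present
    beta = cov @ (X.T @ W @ ldor)
    return beta[1], cov[1, 1]          # ln(RDOR) and its variance

def dersimonian_laird(y, v):
    """Random-effects pooling of per-meta-analysis ln(RDOR) estimates;
    returns the summary RDOR with a 95% confidence interval."""
    w = 1.0 / v
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-meta-analysis variance
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(mu), np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se)

# Hypothetical data: two meta-analyses, each with studies given as
# (tp, fp, fn, tn, feature), where feature = 1 if the deficiency is present.
meta_analyses = [
    [(90, 10, 15, 85, 1), (80, 20, 25, 75, 0), (85, 15, 20, 80, 0), (95, 5, 10, 90, 1)],
    [(40, 12, 18, 30, 0), (55, 8, 12, 45, 1), (50, 10, 15, 40, 0), (60, 6, 9, 50, 1)],
]

estimates, variances = [], []
for ma in meta_analyses:
    tp, fp, fn, tn, feat = map(np.array, zip(*ma))
    ldor, var = log_dor(tp, fp, fn, tn)
    b, vb = meta_regression(ldor, var, feat)
    estimates.append(b)
    variances.append(vb)

rdor, lo, hi = dersimonian_laird(np.array(estimates), np.array(variances))
print(f"summary RDOR = {rdor:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```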

Results: A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews and 31 meta-analyses. Validation studies using a case-control design produced the largest summary RDOR, 2.2 (95% confidence interval: 1.2–4.3), in multivariable analysis assuming random effects across meta-analyses (Fig. 1). The summary RDOR of studies using differential verification was 2.0 (95% confidence interval: 1.2–3.1), and the summary RDOR of studies with an inadequate sample size was 1.9 (95% confidence interval: 1.2–3.1). Narrow validation, that is, validation conducted in settings or with patients similar to those of the derivation study, produced a summary RDOR of 1.8, but the 95% confidence interval (0.8–4.4) crossed 1.
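For orientation (a standard reading of the RDOR in meta-epidemiological work, not spelled out above): the RDOR compares the diagnostic odds ratio of studies with a given design deficiency to that of studies without it, RDOR = DOR(deficient) / DOR(adequate), so a summary RDOR of 2.2 for case-control designs means those validation studies reported, on average, a 2.2-fold higher diagnostic odds ratio than studies free of that deficiency.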

Conclusions: Case-control designs, differential verification and inadequate sample sizes are associated with overestimation of predictive performance in validation studies. The results of validation studies should be interpreted with caution when such design deficiencies are detected.