Study design features affect estimates of sensitivity and specificity, but effects may vary

Authors
Rutjes A, Smidt N, Di Nisio M, Lijmer J, Mol B, Van Rijn J, Bossuyt P, Reitsma J
Abstract
Objectives: To examine the influence of study design features on estimates of sensitivity, specificity, and the diagnostic odds ratio in a series of meta-analyses.

Methods: A meta-epidemiologic approach was used, covering 49 meta-analyses with 705 primary studies. A bivariate multivariable regression model estimated the relative change in sensitivity and specificity between studies with a specific design feature and studies of the same test without it. The design features evaluated were type of design (case-control versus cohort), timing of data collection (prospective versus retrospective), patient selection (consecutive or random sample versus non-consecutive inclusion), interpretation of test results (double-blinded versus single- or non-blinded), and verification procedure (complete versus partial verification, and verification by the preferred reference standard versus verification with different reference standards).

Results: Studies using differential verification reported significantly higher estimates of specificity (relative estimate 1.4 [95% CI: 1.0 to 1.9]) and of the diagnostic odds ratio (1.8 [95% CI: 1.0 to 3.1]) than studies using a single reference standard to verify test results. The associations between the other design features and estimates of diagnostic accuracy were not statistically significant and varied substantially between meta-analyses.

Conclusions: Design features can affect estimates of sensitivity, specificity, and the diagnostic odds ratio, but the direction and magnitude of the biasing effects vary between meta-analyses and are difficult to predict. As a result, the average effect of a design feature becomes diluted in any exploration of heterogeneity through regression modelling in a meta-epidemiologic approach.
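
As background for the accuracy measures compared across studies, the sketch below shows how sensitivity, specificity, and the diagnostic odds ratio (DOR) are computed from a single study's 2x2 table. The counts are invented for illustration only; they do not come from the studies in this review.

```python
# Sensitivity, specificity, and diagnostic odds ratio (DOR) from a 2x2 table.
# Counts are invented for illustration only.
tp, fp, fn, tn = 90, 20, 10, 80  # true/false positives, false negatives, true negatives

sensitivity = tp / (tp + fn)   # proportion of diseased correctly detected
specificity = tn / (tn + fp)   # proportion of non-diseased correctly ruled out
dor = (tp * tn) / (fp * fn)    # odds of a positive test in diseased vs. non-diseased

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.80
print(f"DOR         = {dor:.1f}")          # 36.0
```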
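The relative estimates in the Results (e.g., a relative DOR of 1.8 for differential verification) can be illustrated with a much simpler univariate analogue of the authors' model: a weighted least-squares regression of per-study log(DOR) on a design-feature indicator, where exponentiating the feature coefficient yields the relative DOR. This is a minimal sketch with simulated data, not the bivariate multivariable model the authors actually fitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-study data: log(DOR), its within-study variance, and a
# design-feature flag (1 = differential verification, 0 = single standard).
n = 30
feature = rng.integers(0, 2, size=n)
true_rel_log_dor = np.log(1.8)  # assumed true relative DOR for the simulation
log_dor = 2.5 + true_rel_log_dor * feature + rng.normal(0.0, 0.4, size=n)
var = np.full(n, 0.16)          # assumed within-study variances

# Weighted least squares: regress log(DOR) on the design-feature indicator,
# weighting each study by the inverse of its variance.
X = np.column_stack([np.ones(n), feature])
W = np.diag(1.0 / var)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_dor)

rel_dor = np.exp(beta[1])       # relative DOR: studies with vs. without the feature
print(f"estimated relative DOR: {rel_dor:.2f}")
```

A relative DOR above 1 means studies with the design feature report higher diagnostic odds ratios; the review's point is that for most features this ratio varied so much between meta-analyses that its average washed out.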