By how much does publication bias affect the results of systematic reviews of diagnostic test accuracy?

Tags: Oral
Deeks J, Macaskill P, Irwig L

Background: Little is known about the extent and mechanisms of selective reporting of studies of diagnostic test accuracy, or about the impact that selective reporting has on the results of systematic reviews of diagnostic accuracy.

Objectives: To evaluate the impact of publication bias arising through four different selection mechanisms on the results of meta-analyses of diagnostic test accuracy.

Methods: Studies of a diagnostic test of moderate accuracy (average DOR=38) with varying threshold and heterogeneity in accuracy were created by simulation. The probability of publication for each study was computed according to four different selection mechanisms, with publication probabilities decreasing with (a) low sensitivity, (b) low specificity, (c) low values of Youden's index and (d) low diagnostic odds ratios. Meta-analyses were created by selecting studies according to their probability of publication. The strength of the publication bias mechanism was adjusted to create meta-analyses in which (i) 10%, (ii) 25% and (iii) 50% of studies were censored. Study results were combined first by independent pooling of sensitivity and specificity, and second by combining diagnostic odds ratios using the Moses method. Values obtained from meta-analyses in which all studies were published were used for comparison.
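The simulation design described above can be sketched in Python. This is a minimal illustration, not the authors' exact implementation: it assumes a symmetric logit ROC model for generating studies, illustrative parameter values (study count, sample sizes, heterogeneity), and a simplified deterministic version of censoring mechanism (d) in which the lowest-DOR studies are dropped outright rather than censored probabilistically.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_studies(n_studies=30, mean_lndor=np.log(38), sd_lndor=0.3, n_per_arm=100):
    """Simulate 2x2 diagnostic accuracy studies with varying threshold.

    Each study draws a true log diagnostic odds ratio (heterogeneity in
    accuracy) and a threshold parameter that trades sensitivity against
    specificity along the study's ROC curve.
    """
    lndor = rng.normal(mean_lndor, sd_lndor, n_studies)
    theta = rng.normal(0.0, 0.5, n_studies)        # threshold variation
    # logit(sensitivity) and logit(false positive rate) on a symmetric ROC line
    logit_sens = lndor / 2 + theta
    logit_fpr = -lndor / 2 + theta
    sens = 1 / (1 + np.exp(-logit_sens))
    fpr = 1 / (1 + np.exp(-logit_fpr))
    tp = rng.binomial(n_per_arm, sens)             # true positives in diseased arm
    fp = rng.binomial(n_per_arm, fpr)              # false positives in healthy arm
    return tp, n_per_arm - tp, fp, n_per_arm - fp  # tp, fn, fp, tn

def censor(stat, frac):
    """Drop `frac` of studies, preferentially removing low values of `stat`."""
    order = np.argsort(stat)                       # lowest accuracy first
    n_drop = int(round(frac * len(stat)))
    keep = np.ones(len(stat), dtype=bool)
    keep[order[:n_drop]] = False
    return keep

tp, fn, fp, tn = simulate_studies()
sens = tp / (tp + fn)
spec = tn / (tn + fp)
# observed DOR per study, with the usual 0.5 continuity correction
dor = ((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5))

keep = censor(dor, 0.5)                            # mechanism (d), 50% censoring
print("mean DOR, all studies:    ", dor.mean())
print("mean DOR, after censoring:", dor[keep].mean())
```

Swapping `dor` for `sens`, `spec`, or `sens + spec - 1` (Youden's index) in the `censor` call gives the analogues of mechanisms (a), (b) and (c), and the `frac` argument controls the 10%/25%/50% censoring strengths.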

Results: When studies were combined using the Moses method, selective publication introduced through all four selection mechanisms had little practical impact on the results of the systematic review. In the worst scenario, when 50% of studies were censored using mechanism (d), the average diagnostic odds ratio was 53 rather than 37, which corresponds to a small increase in the sensitivity=specificity point on the ROC curve from 86% to 88%. When studies were combined by independent pooling of sensitivities and specificities, bias was introduced by censoring mechanisms (a) and (b), which act unequally on sensitivity and specificity. Censoring 50% of studies according to sensitivity increased average sensitivity from 87% to 94% and decreased average specificity from 86% to 75%.

Conclusions: The impact of publication bias depends on both the mechanism by which the bias acts and the method of meta-analysis. In a typical scenario where the threshold varies between studies, pooling diagnostic odds ratios reduces the impact of publication bias to a level where it may be negligible. Independent pooling of sensitivities and specificities is likely to be more susceptible to publication bias.

Acknowledgements: This work was undertaken by the STEP programme based at the University of Sydney funded by NHMRC grant no 211205.