Discrepancy in relative test performance due to modelling strategy in comparative diagnostic meta-analysis: A case study

Authors
Takwoingi Y1, Abba K2, Garner P2, Deeks J1
1University of Birmingham, UK
2University of Liverpool, UK
Abstract
Background: Hierarchical models for diagnostic test accuracy meta-analysis include study-specific random effects to account for between-study heterogeneity. In a comparative meta-analysis, equality of variance parameters can be assumed for different tests whilst allowing other model parameters to depend on each test.

Objectives: To demonstrate the discrepancy in summary accuracy measures that arises when the degree of heterogeneity differs between studies of different tests but equal variances are assumed across tests.

Methods: A Cochrane review of rapid diagnostic tests (RDTs) for Plasmodium falciparum malaria included an assessment of the relative accuracy of different types of tests. Type 1 and Type 4 tests were compared by including covariate parameters in the hierarchical summary ROC (HSROC) model so that each test type could have a different threshold, accuracy and summary ROC shape. The effect of test type on the variability of the accuracy and threshold random effects was also investigated, and separate variance parameters were included where warranted by likelihood ratio (LR) tests. Summary sensitivities and specificities were derived from the models. Analyses were performed using SAS Proc NLMIXED.
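The two calculations described above can be sketched as follows. This is an illustrative sketch, not the SAS code used in the review: the function names are hypothetical, the summary-point formulas follow the standard Rutter-Gatsonis HSROC parameterisation (accuracy Lambda, threshold Theta, shape beta), and the LR test assumes model 2 adds exactly two variance parameters, for which the naive chi-squared(2 df) survival function has the closed form exp(-x/2).

```python
import math

def invlogit(x):
    # Inverse logit (expit) transform.
    return 1.0 / (1.0 + math.exp(-x))

def hsroc_summary_point(accuracy, threshold, shape):
    """Summary sensitivity and specificity at the HSROC summary operating
    point, under the Rutter-Gatsonis parameterisation:
        logit(sens)     = (Theta + Lambda/2) * exp(-beta/2)
        logit(1 - spec) = (Theta - Lambda/2) * exp(+beta/2)
    """
    sens = invlogit((threshold + accuracy / 2.0) * math.exp(-shape / 2.0))
    spec = invlogit(-(threshold - accuracy / 2.0) * math.exp(shape / 2.0))
    return sens, spec

def lr_test_two_extra_params(loglik_equal, loglik_separate):
    """LR test of equal versus separate variance parameters, where the
    separate-variance model has two extra parameters. Uses the naive
    chi-squared(2 df) reference distribution, whose survival function
    is exp(-stat/2). (Boundary effects on variance parameters, which
    make this p-value conservative, are ignored in this sketch.)"""
    stat = 2.0 * (loglik_separate - loglik_equal)
    p_value = math.exp(-stat / 2.0)
    return stat, p_value
```

For example, with a symmetric summary ROC (beta = 0), accuracy Lambda = 2 and threshold Theta = 0, both summary sensitivity and specificity equal invlogit(1), about 0.73; and a log-likelihood gain of 3 on two extra parameters gives an LR statistic of 6 and p of about 0.05.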

Results: Sixty-five study cohorts evaluated Type 1 tests and showed greater heterogeneity than the 16 cohorts that assessed Type 4 tests. Between models, there was little or no difference in specificity for either test type (Table 1). The summary sensitivity of Type 4 tests derived from the model assuming equal variances (model 1) did not reflect the data, and differed by 5.5% from that of the model with separate variances (model 2). The difference in sensitivity between Type 1 and Type 4 tests was statistically significant in model 1 (p < 0.001) but not in model 2 (p = 0.20).

Conclusions: Modelling assumptions can affect the conclusions of a comparative meta-analysis. In this case study, the more complex and more appropriate assumptions led to more conservative estimates of the difference between tests. The effect of test type on all model parameters, including the variances, should be investigated whenever feasible.