Direct versus indirect comparisons in systematic reviews of test accuracy studies: an IPD case study in ovarian reserve testing

Authors
Wang J¹, Bossuyt PM¹, Geskus R¹, Leeflang MM¹
¹Academic Medical Center, University of Amsterdam, The Netherlands
Abstract
Background: Comparative systematic reviews of diagnostic test accuracy compare the relative accuracy of two or more tests. Direct comparisons, in which all tests are evaluated in the same study, or even in the same patients, are considered the most valid and are regarded as the reference approach. Indirect comparisons are more prone to bias than direct comparisons, but excluding them may lead to a loss of precision in the summary estimates.

Objectives: To investigate how the results of indirect comparisons differ from those of direct comparisons in meta-analysis, and to develop appropriate methods of adjusting indirect comparisons to improve their comparability.

Methods: A dataset from an Individual Patient Data (IPD) meta-analysis on the accuracy of Anti-Müllerian Hormone (AMH), Antral Follicle Count (AFC) and Follicle Stimulating Hormone (FSH) in relation to ovarian response was used in this case study. Test accuracy was measured by the area under the ROC curve (AUC) and compared for each pair of tests under direct and indirect comparisons. Inconsistency was defined as a statistically significant difference in comparative results between the direct and indirect evidence.
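
As an illustration of the distinction between the two approaches (a minimal sketch, not the authors' analysis code), the Python snippet below contrasts a direct, within-patient comparison of two tests' AUCs with an indirect comparison in which each test is evaluated in a different population; the only difference is whether the bootstrap resampling is paired. The variable names and the use of scikit-learn are assumptions made for illustration.

    # Minimal sketch, assuming Python with NumPy and scikit-learn, of comparing
    # the AUCs of two tests directly (paired: same women) versus indirectly
    # (unpaired: different study populations). Not the authors' implementation.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def auc_difference(y_a, score_a, y_b, score_b, paired, n_boot=2000):
        """Point estimate and bootstrap 95% CI for AUC(test A) - AUC(test B).

        paired=True  -> direct comparison: both tests measured in the same women,
                        so the same resampled indices are applied to both tests.
        paired=False -> indirect comparison: each test comes from its own
                        population and is resampled independently.
        """
        diffs = []
        for _ in range(n_boot):
            if paired:
                idx = rng.integers(0, len(y_a), len(y_a))
                d = (roc_auc_score(y_a[idx], score_a[idx])
                     - roc_auc_score(y_b[idx], score_b[idx]))
            else:
                ia = rng.integers(0, len(y_a), len(y_a))
                ib = rng.integers(0, len(y_b), len(y_b))
                d = (roc_auc_score(y_a[ia], score_a[ia])
                     - roc_auc_score(y_b[ib], score_b[ib]))
            diffs.append(d)
        point = roc_auc_score(y_a, score_a) - roc_auc_score(y_b, score_b)
        lo, hi = np.percentile(diffs, [2.5, 97.5])
        return point, (lo, hi)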

Results: Thirty-two studies were included, with IPD from 4762 women undergoing IVF. When comparing AUCs, the difference between AFC and FSH was significant in the direct comparison (0.0948, p < 0.001) but not in the indirect comparison (0.0678, p = 0.09), whereas the difference between AFC and AMH was significant in the indirect comparison (−0.0830, p < 0.001) but not in the direct comparison (−0.0176, p = 0.29). Adjusting for indirectness by accounting for covariate effects could improve comparability, but these differences persisted after covariate adjustment.
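
One simple way to formalise the inconsistency check defined in the Methods (a statistically significant difference between the direct and indirect comparative results) is a z-test on the discrepancy between the two estimated AUC differences. The sketch below is illustrative only and assumes approximately normal estimates with independent standard errors; it is not necessarily the procedure used in this study.

    # Illustrative sketch, assuming approximately normal and independent
    # estimates, of testing whether direct and indirect AUC differences
    # disagree more than chance alone would allow.
    from math import sqrt
    from scipy.stats import norm

    def inconsistency_test(diff_direct, se_direct, diff_indirect, se_indirect):
        """Two-sided z-test on the discrepancy (direct - indirect) between
        the two estimates of the same AUC difference."""
        discrepancy = diff_direct - diff_indirect
        se = sqrt(se_direct**2 + se_indirect**2)
        z = discrepancy / se
        p_value = 2 * norm.sf(abs(z))
        return discrepancy, z, p_value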

Conclusions: Comparative results of test accuracy obtained through indirect comparisons are not always consistent with those obtained through direct comparisons. There is no straightforward way to make indirect comparisons more comparable. Evidence from indirect comparisons should be assessed carefully and should only be combined with direct comparisons after adequate assessment of consistency and with appropriate adjustment.