Abstract
Background: In studies of the diagnostic accuracy of ordinal tests, results are sometimes reported only for cut-off thresholds that generate desired results in a given study (e.g. high combined sensitivity and specificity). When results are combined in meta-analyses, such selective cut-off reporting may bias accuracy estimates. One way to overcome this bias is individual participant data meta-analysis (IPDMA). Another is to use published results but model the missing cut-off data using statistical techniques.
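To illustrate the selective-reporting mechanism, the following is a minimal, hypothetical Python simulation; the score distributions, study sizes, and Youden-based selection rule are all assumptions made for illustration, not features of the dataset analyzed in this study. Each simulated study "publishes" only its best-performing cut-off, and pooled sensitivity from those published results is compared with pooling every cut-off from every study, as IPDMA would.

```python
# Hypothetical simulation of the selective cut-off reporting mechanism.
# All distributions, study sizes, and the selection rule below are
# illustrative assumptions, not features of the real dataset.
import numpy as np

rng = np.random.default_rng(0)
CUTOFFS = list(range(5, 16))  # candidate cut-offs on a 0-27 ordinal scale

def simulate_study(n_cases=40, n_controls=300):
    """Draw ordinal scores; cases score higher than controls on average."""
    cases = np.clip(np.round(rng.normal(14, 5, n_cases)), 0, 27)
    controls = np.clip(np.round(rng.normal(5, 4, n_controls)), 0, 27)
    return cases, controls

def accuracy(cases, controls, c):
    """Sensitivity and specificity at cut-off c (positive if score >= c)."""
    return (cases >= c).mean(), (controls < c).mean()

full = {c: [] for c in CUTOFFS}       # every cut-off, as in IPDMA
selective = {c: [] for c in CUTOFFS}  # only each study's "published" cut-off

for _ in range(500):
    cases, controls = simulate_study()
    results = {c: accuracy(cases, controls, c) for c in CUTOFFS}
    for c, (se, _sp) in results.items():
        full[c].append(se)
    # A study "publishes" only the cut-off maximizing Youden's J = Se + Sp - 1.
    best = max(CUTOFFS, key=lambda c: results[c][0] + results[c][1] - 1)
    selective[best].append(results[best][0])

for c in CUTOFFS:
    if selective[c]:
        print(f"cut-off {c:2d}: all-studies Se = {np.mean(full[c]):.2f}, "
              f"selectively reported Se = {np.mean(selective[c]):.2f} "
              f"(n = {len(selective[c])})")
```

Because the "published" cut-off is chosen on the basis of each study's own sampled accuracy, pooled estimates restricted to reported cut-offs condition on favorable results and can drift from the all-data estimates.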
Objectives: To compare IPDMA of data from all studies and cut-offs with three approaches that estimate accuracy from published data in the context of missing cut-off data: conventional meta-analysis using a bivariate random-effects model, and modeling of the missing cut-off data using the multiple cut-off models developed by Steinhauser and colleagues and by Jones and colleagues.
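For context, the conventional bivariate random-effects approach referred to above is typically written in the form of Reitsma and colleagues; the formulation sketched here is the standard one, not necessarily the exact parameterization used in this analysis. For a given cut-off, the logit-transformed sensitivity and specificity of each study $i$ are modeled as draws from a bivariate normal distribution:

$$
\begin{pmatrix} \operatorname{logit}(\mathrm{Se}_i) \\ \operatorname{logit}(\mathrm{Sp}_i) \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} \mu_{\mathrm{Se}} \\ \mu_{\mathrm{Sp}} \end{pmatrix},
\begin{pmatrix} \sigma_{\mathrm{Se}}^{2} & \rho\,\sigma_{\mathrm{Se}}\sigma_{\mathrm{Sp}} \\ \rho\,\sigma_{\mathrm{Se}}\sigma_{\mathrm{Sp}} & \sigma_{\mathrm{Sp}}^{2} \end{pmatrix}
\right),
$$

where $\mu_{\mathrm{Se}}$ and $\mu_{\mathrm{Sp}}$ are the pooled logit sensitivity and specificity and $\rho$ captures their between-study correlation. Because this model is fitted separately at each cut-off, it can draw only on the studies that published results for that cut-off, which is where selective cut-off reporting introduces bias; the multiple cut-off models instead borrow information across cut-offs within studies.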
Methods: We analyzed data collected for an IPDMA of the accuracy of the Patient Health Questionnaire-9 (PHQ-9) depression screening tool. We compared sensitivity and specificity estimates from conventional meta-analysis of published results, the two modeling approaches, and IPDMA. The modeling approaches were applied to the published dataset blind to the IPDMA results.
Results: We analyzed 15,020 participants (1,972 cases) from 45 studies. All methods produced similar specificity estimates. Compared to IPDMA, conventional bivariate meta-analysis underestimated sensitivity for cut-offs < 10 and overestimated sensitivity for cut-offs > 10 (mean absolute difference: 6%). For both modeling approaches, sensitivity was slightly underestimated for all cut-offs (mean underestimation: 2%).
Conclusions: IPDMAs are the gold standard for evidence synthesis but are labor-intensive. In the context of missing cut-off data, applying modeling approaches to published data is more efficient than IPDMA and yields accuracy estimates that resemble IPDMA results more closely than conventional meta-analysis without modeling does. However, in our case study the modeling approaches slightly underestimated sensitivity, and working from published data precludes assessing accuracy in participant subgroups.
Patient or healthcare consumer involvement: There was no consumer involvement in this project.