Article type: Abstract
Background: Recent work has clarified the GRADE (Grading of Recommendations Assessment, Development and Evaluation) definition of certainty of evidence and its application to interventions (1). Clarification of how these concepts apply to certainty ratings of diagnostic test accuracy is needed, particularly given the frequent lack of direct evidence on the effect of tests on important patient outcomes.
Objectives: To define and clarify possible approaches to judging certainty of evidence for diagnostic test accuracy within a systematic review, health technology assessment, or clinical practice guideline when only test accuracy results are available.
Methods: After initial brainstorming, the investigators iteratively refined and clarified the approaches using input from workshops and discussions at GRADE Working Group meetings.
Results: We propose applying the same approaches for rating the certainty of evidence for diagnostic test accuracy results as those previously described for intervention effects (Table 1). We identified the key challenges of applying these approaches to evidence of test accuracy: rating the certainty of evidence when no direct comparison is available, considering the downstream consequences of test results (for example, the impact of false-positive results on important patient outcomes), and setting a clinically meaningful threshold in the contextualised setting. We illustrate how these challenges can be addressed using real-life systematic reviews and will show examples at the Summit.
Conclusions: Applying the GRADE certainty of evidence concepts to evidence of test accuracy will provide a useful framework for assessing, presenting, or making decisions based on the certainty of evidence for diagnostic test accuracy.
Reference
1. Hultcrantz et al. GRADE ratings of certainty of evidence: clarifying the conceptual framework. Under consideration by JCE.