Abstract
Background:
Evidence quality appraisal tools are designed to guide evidence-based practitioners and systematic reviewers in the process of identifying limitations in a research study. However, many tools address a mix of constructs in addition to risk of bias or internal validity, such as quality of reporting, external validity, and imprecision.
Objective:
The objectives of this study are to map existing quality appraisal tools to study designs and conceptual quality domains and to evaluate their scoring approaches.
Methods:
We performed a systematic search to identify quality appraisal tools across all disciplines in human health. Tools designed specifically to evaluate reporting quality were excluded. Potentially eligible tools were screened independently and in duplicate. We categorized tools according to the conceptual quality domains they addressed and recorded their scoring methods.
Results:
The review included 124 tools published from 1998 to 2023. A flow diagram (Figure 1) provides additional details on the screening and selection process. Forty-six percent of the tools were accessed through peer-reviewed journal articles. Table 1 provides the frequency of tools developed for each specific study design. Ninety-four percent of tools addressed concepts other than risk of bias or study limitations, with 71% including at least one item written in a way that assessed reporting quality. Other domains frequently evaluated were the appropriateness of statistical analysis, indirectness/external validity, imprecision or adequacy of sample size, and ethical considerations, including conflict of interest and funding sources. Table 2 provides the distribution of tools across domains other than risk of bias. Twenty-one percent of tools used a numerical scoring system.
Conclusion:
Currently available study quality assessment tools are not explicit about the conceptual domains addressed by their items or signaling questions and usually address multiple domains in addition to risk of bias. Many tools use numerical scoring systems, which can be misleading. Limitations of the existing tools make the process of rating the certainty of evidence more difficult. Clear guidance about which tools to use and which to avoid when assessing risk of bias is needed.