Abstract
Background: Studies of prevalence provide essential information for estimating the burden of mental health conditions, which can inform research and policymaking. The Coronavirus Disease 2019 (COVID-19) pandemic has generated a large volume of literature on the prevalence of various conditions, including those related to mental health. Biases affect how certain we can be about the available evidence, and assessing the risk of bias (RoB) is an essential step when conducting a systematic review; however, no standard tool for assessing RoB in prevalence studies exists.
Objectives: For the purposes of a living systematic review on the prevalence of mental health disorders during the COVID-19 pandemic, we developed a RoB tool for evaluating prevalence studies in mental health (RoB-PrevMH) and tested its interrater reliability.
Methods: We reviewed existing RoB tools for prevalence studies published up to September 2020 and developed a tool for prevalence studies in mental health. We tested the reliability of assessments made by different users of RoB-PrevMH in 83 studies drawn from two systematic reviews of prevalence studies in mental health. We assessed interrater agreement by calculating the proportion of agreement and the kappa statistic for each item.
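As a minimal illustration of the agreement statistics named above, the sketch below computes the proportion of agreement and a weighted Cohen's kappa for two raters. The ratings, coding scheme, and number of studies are invented for demonstration and are not data from the review.

```python
# Minimal sketch of the interrater-agreement statistics described above.
# The ratings are hypothetical stand-ins for paired RoB-PrevMH judgments.
from sklearn.metrics import cohen_kappa_score

# Hypothetical RoB judgments from two raters for one tool item,
# coded 0 = low, 1 = unclear, 2 = high risk of bias.
rater_a = [0, 0, 1, 2, 1, 0, 2, 1, 0, 0]
rater_b = [0, 1, 1, 2, 1, 0, 2, 0, 0, 0]

# Proportion of agreement: share of studies on which both raters agree.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Weighted kappa: chance-corrected agreement; linear weights penalize
# low-vs-high disagreements more heavily than low-vs-unclear ones.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

print(f"proportion of agreement: {agreement:.2f}")
print(f"weighted kappa: {kappa:.2f}")
```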
Results: RoB-PrevMH consists of three items that address selection bias and information bias. Introductory and signaling questions guide the application of the tool to the review question. The interrater agreement for the three items was 83%, 90%, and 93%. The weighted kappa values were 0.63 (95% CI 0.54 to 0.73), 0.71 (95% CI 0.67 to 0.85), and 0.32 (95% CI –0.04 to 0.63), respectively.
Conclusions: RoB-PrevMH can determine whether selection bias or information bias is present in studies measuring the prevalence of mental health disorders. By excluding questions about reporting, the tool focuses on bias itself rather than on reporting quality. The tool’s validity, reliability, and applicability should be assessed in future projects.