Reliability and feasibility of the Health Evidence™ quality assessment tool for systematic reviews on the effectiveness of public health interventions

Authors
Read K1, Belita E2, Sully E1, Dang N1, Dobbins M1
1National Collaborating Centre for Methods and Tools (NCCMT)
2McMaster University
Abstract
Background: Health Evidence™ aims to make it easier for decision-makers to use evidence in their programs and policies. We provide access to over 6,000 quality-appraised systematic reviews on the effectiveness of public health interventions. Each review is rated independently by two reviewers using the Health Evidence™ quality assessment tool and dictionary, and consensus is achieved through discussion. The tool includes 10 questions to help assess the methodological quality of public health-relevant reviews. The development of this tool has been previously published.

Objectives: To assess the inter-rater reliability and feasibility of the Health Evidence™ quality assessment tool for systematic reviews on the effectiveness of public health interventions.

Methods: Three reviewers independently assessed a sample of 60 systematic reviews of public health interventions from the Health Evidence™ registry. All systematic reviews 1) were relevant to public health or health promotion practice, 2) examined the effectiveness of an intervention, 3) included raw data on outcomes, and 4) described a search strategy. Reviewers had different levels of experience with critical appraisal generally, and with previous use of the Health Evidence™ tool specifically (novice, intermediate, expert). Reliability between the three raters was assessed with the intraclass correlation coefficient (ICC), using a two-way random-effects model with absolute agreement. The average measure was used to report the results, with an ICC of >0.75 classified as good reliability. The time to complete the assessment form was also tracked to assess the feasibility of using this tool in practice.
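The reliability statistic described above, a two-way random-effects ICC with absolute agreement on average measures (often written ICC(2,k) or ICC(A,k)), can be sketched from its ANOVA mean squares. This is a minimal illustration only; the ratings matrix below is hypothetical example data, not the study's appraisal scores.

```python
# Sketch of ICC(2,k): two-way random-effects model, absolute agreement,
# average measures, computed from two-way ANOVA mean squares.
import numpy as np

def icc2k(ratings: np.ndarray) -> float:
    """ratings: n_subjects x k_raters matrix of quality scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-review means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Mean squares for rows (subjects), columns (raters), and residual error
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    # Absolute agreement, average of k raters
    return (msr - mse) / (msr + (msc - mse) / n)

# Hypothetical data: 6 reviews scored 0-10 by 3 raters
scores = np.array([[9, 8, 9], [5, 5, 6], [7, 7, 8],
                   [3, 4, 3], [10, 9, 10], [6, 6, 7]])
print(round(icc2k(scores), 3))
```

In practice the same estimate (with confidence intervals) is available from standard statistical packages; the hand-rolled version above just makes the model explicit.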

Results: Reviewers conducted quality appraisals on 20 articles each month over June, July, and August 2019. After each set, the team met to resolve conflicts prior to completing the next month’s set. Overall agreement between all three raters showed good to excellent reliability (ICC = 0.898; CI 0.843-0.936) based on average measures. In general, time to complete a single quality assessment was under 15 minutes, indicating that the tool is also feasible to apply.

Conclusions: The results of this study suggest that the Health Evidence™ quality assessment tool is reliable for assessing the methodological quality of systematic reviews on the effectiveness of public health interventions. Next steps will be to compare a selection of reviews appraised with this tool against other comparable critical appraisal tools in the field to identify similarities and differences.