Key issues and countermeasures of the evidence quality grading for public health decision-making: a Delphi survey

Authors
Li X1, Liang S, Yang C, Yang K
1Centre for Evidence-Based Social Science/Center for Health Technology Assessment, School of Public Health, Lanzhou University, Lanzhou, China
Abstract
Background: Evidence quality grading tools transform large volumes of data into usable evidence and are central to scientific decision-making. However, existing evidence quality grading tools have many problems when applied in the public health field and cannot meet the needs of public health decision-makers.
Objective: To identify the key issues, and corresponding countermeasures, in applying evidence quality grading methods to public health decision-making.
Methods: Starting from an initial pool of 14 items developed in the team's preliminary research, a Delphi survey was conducted, with 2 rounds of consultation among 17 experts. Key issues were selected by calculating the expert authority coefficient, the coordination coefficient of expert opinions, the mean score of each item, the coefficient of variation, and the full-score ratio. An interview outline was then developed, and semistructured interviews were conducted to propose countermeasures for the key issues.
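For readers unfamiliar with the consensus statistics named above, the following is an illustrative sketch (with hypothetical expert scores, not the study's data) of how two of them, the coefficient of variation for an item and Kendall's coefficient of concordance W for the coordination of expert opinions, are typically computed:

```python
import statistics

def coefficient_of_variation(scores):
    """CV of one item = standard deviation / mean of the experts' scores."""
    return statistics.stdev(scores) / statistics.mean(scores)

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for m experts ranking n items.

    ratings: list of m lists, each giving one expert's ranks (1..n) for the n items.
    W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
    of the items' rank sums from their mean.
    """
    m = len(ratings)
    n = len(ratings[0])
    rank_sums = [sum(expert[i] for expert in ratings) for i in range(n)]
    mean_rank_sum = sum(rank_sums) / n
    s = sum((r - mean_rank_sum) ** 2 for r in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical scores from 4 experts on one item (5-point scale)
print(round(coefficient_of_variation([5, 4, 5, 4]), 3))  # prints 0.128
# Hypothetical ranks from 3 experts over 4 items
print(round(kendalls_w([[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]]), 3))  # prints 0.778
```

A low coefficient of variation and a W closer to 1 (tested for significance, as reported in the Results) indicate stronger expert agreement on an item.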
Results and conclusion: Across the 2 rounds of consultation, the indicators, including the expert authority coefficient, were statistically significant (P < 0.05). The study identified 2 key issues for grading the quality of evidence for public health decision-making: "observational studies lacked grading, and the starting evidence levels for different types of observational studies were low and identical, making it difficult to reflect the true quality of evidence in this area," and "complex intervention studies were frequently downgraded due to heterogeneity, indirectness, and study limitations, making it difficult to reflect the true quality of evidence in this area." Five Delphi experts participated in semistructured interviews, which yielded 2 countermeasures: refining the starting level of evidence to distinguish observational studies with different designs, and developing and implementing additional upgrading criteria.