The development and testing of the Scientific, Transparent and Applicable Rankings tool (STAR) for clinical practice guidelines

Authors
Yang N1, Liu Y2, Sun Y2, Ren M2, Chen Y3
1Evidence-Based Medicine Center, School of Basic Medical Sciences, Lanzhou University
2School of Public Health, Lanzhou University
3WHO Collaborating Centre for Guideline Implementation and Knowledge Translation
Abstract
Background: Clinical practice guidelines are a critical tool for guiding physicians in clinical practice. Guidelines have been evaluated from different perspectives using various tools. However, the existing evaluation tools have several limitations. First, these tools do not address some key elements of guideline quality, such as guideline applicability, transparency of the development processes and methods, and prospective registration. Second, some of the evaluation tools have not been adequately assessed for reliability and validity. Third, most evaluation tools have a limited scope, such as methodological quality, reporting quality, or implementation. Thus, a comprehensive evaluation of a guideline is time-consuming because it requires the use of multiple tools encompassing different dimensions, and items may overlap across different evaluation tools. Fourth, interpreting the results of multiple tools in combination and comparing them across different guidelines is challenging.
Objectives: To overcome these limitations and to improve the quality of Chinese guidelines, we formed a working group to develop a unified, comprehensive, and practical evaluation tool named STAR.
Methods and Results: A scoping review was conducted to formulate the initial items related to the three dimensions of scientificity, transparency, and applicability; two rounds of a Delphi expert survey resulted in a total of 39 items grouped into 11 domains. Based on a hierarchical analysis of the results of the importance survey, each domain and each item were assigned weights reflecting their relative importance. Finally, consensus on the final STAR rating tool was reached at an expert consensus meeting. The tool was tested and found to be reliable, valid, and easy to use.
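The hierarchical analysis used to weight domains and items is consistent with an analytic-hierarchy-process (AHP) style calculation, in which pairwise importance judgments are converted into normalized weights. The sketch below illustrates the general geometric-mean approximation for deriving such weights; the 3x3 matrix, its values, and the mapping to the three STAR dimensions are purely illustrative assumptions, not the working group's actual survey data.

```python
from math import prod


def ahp_weights(matrix):
    """Approximate priority weights from a pairwise-comparison matrix
    using the geometric-mean (logarithmic least squares) method.

    matrix[i][j] holds how much more important criterion i is than j
    (reciprocal entries: matrix[j][i] == 1 / matrix[i][j]).
    """
    n = len(matrix)
    # Geometric mean of each row summarizes that criterion's dominance.
    row_gm = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(row_gm)
    # Normalize so the weights sum to 1.
    return [g / total for g in row_gm]


# Hypothetical comparison of the three STAR dimensions
# (scientificity, transparency, applicability); values are
# illustrative only.
pairwise = [
    [1.0, 2.0, 3.0],
    [0.5, 1.0, 2.0],
    [1.0 / 3.0, 0.5, 1.0],
]
weights = ahp_weights(pairwise)
```

In a full AHP workflow one would also compute a consistency ratio to check that the expert judgments are not self-contradictory before accepting the weights.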
Conclusions: STAR may be the first tool that uses a scientific approach to assign different weights to the domains and items of guideline evaluation. It is applicable to registered guidelines that have voluntarily applied to the research center for ranking and provided relevant supporting materials.