Abstract
Background: Many evidence-based clinical practice guidelines fail to translate into improvements in daily health care practice and patient health outcomes. However, benchmarking, comparison between guidelines, institutions, and regions, and the design of targeted interventions are usually not feasible because no generalized tool exists to assess the success of guideline implementation.
Objective: To develop a generalized tool to evaluate the success of guideline implementation.
Methods: Four process stages were employed: project launch, in which a core team and an expert advisory group were established; evidence review, a systematic review of existing tools and items for evaluating the success of guideline implementation, from which a list of candidate items was summarized according to the Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) framework; an expert meeting, in which experts in clinical practice and evidence-based medicine commented and advised on the framework and items; and modified Delphi exercises, comprising 2 online Delphi surveys to reach agreement (>70%) on items. These stages culminated in a generalized tool to evaluate the success of guideline implementation.
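The >70% agreement rule used in the Delphi rounds can be sketched as a simple tally check. This is a minimal illustration, not the study's analysis code; the item names and vote counts are hypothetical, and it assumes "agreement" means the share of responding experts endorsing an item.

```python
# Minimal sketch of the modified Delphi agreement rule: an item is
# retained when more than 70% of responding experts endorse it.
# Item names and tallies below are hypothetical, for illustration only.

AGREEMENT_THRESHOLD = 0.70

def reaches_agreement(votes_for: int, total_votes: int,
                      threshold: float = AGREEMENT_THRESHOLD) -> bool:
    """True when the share of endorsing experts exceeds the threshold."""
    return total_votes > 0 and votes_for / total_votes > threshold

# Hypothetical first-round tallies for 13 participating experts.
round_one = {"item_reach_1": 12, "item_adopt_2": 9, "item_impl_3": 8}
retained = {item: n for item, n in round_one.items()
            if reaches_agreement(n, 13)}
# Only item_reach_1 (12/13 ≈ 92%) clears the >70% threshold here.
```

Items failing the threshold would be revised or dropped and re-rated in the next round, as in the two-round process described above.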
Results: Through the evidence review, we drafted 23 candidate items from 208 publications. A group of 17 experts commented and advised on the candidate items, retaining 21 items and proposing 2 new items. Thirteen experts participated in the first-round Delphi survey, reaching agreement on 20 items and removing 3. Eleven experts attended the second-round survey and agreed to include 20 items after revision. The Guideline Implementation Effect Evaluation Tool contains 5 dimensions and 20 items: Reach (3 items), Adoption (4 items), Implementation (2 items), Effectiveness (5 items), and Maintenance (3 items), with 2 items of overall evaluation and 1 item on barriers and facilitators of guideline implementation.
Conclusions: A tool for evaluating the success of guideline implementation from both clinician and patient perspectives was established, providing a generalized, comprehensive instrument for evaluation, benchmarking, and cross-comparison across guidelines, facilities, and regions, and for informing the design of targeted interventions to improve guideline implementation.
The public and/or consumers were not involved in the study.