Abstract
Background: Information on implementation is required in systematic effectiveness reviews to facilitate the translation and uptake of evidence. To capture whether and how implementation is assessed in reviews, a checklist for implementation (Ch-IMP) was developed and piloted on a cohort of systematic reviews of provider-based prevention and treatment interventions targeting children and youth.
Objectives: This presentation reports on the inter-rater reliability and feasibility of the Ch-IMP and outlines reasons for discrepant ratings.
Methods: Checklist domains were informed by a framework for program theory. Items in four domains (environment, process evaluation, action model and change model) were generated from a literature review. The checklist was pilot-tested on 27 reviews targeting children and youth. Two raters independently extracted information on 47 items, including fidelity, dose and reach. Inter-rater reliability was evaluated using percentage agreement and kappa coefficients. Reasons for discrepant ratings were content-analysed.
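The abstract does not specify how the kappa coefficients were computed or which software was used; for readers unfamiliar with the statistic, the sketch below shows Cohen's kappa for two raters, which corrects raw percentage agreement for agreement expected by chance. The rating data are hypothetical and purely illustrative.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no ratings for one checklist item across ten reviews.
rater_1 = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_2 = ["yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.57 (p_o = 0.8, p_e = 0.54)
```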
Results: Kappa coefficients ranged from 0.37 to 1.00 and were not influenced by one-sided bias. Most kappa values were classified as excellent (n = 20) or good (n = 17), with a few items categorised as fair (n = 7) or poor (n = 1). Prevalence-adjusted kappa coefficients indicated good or excellent agreement for all but one item. Four areas contributed to scoring discrepancies: 1) clarity or sufficiency of the information provided in the review; 2) information missed in the review; 3) issues encountered with the tool; and 4) issues encountered at the review level. Use of the tool demands a time investment, so adjustments are required to improve its feasibility for wider use.
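The abstract does not name the specific prevalence adjustment used; one common choice for binary items is the prevalence- and bias-adjusted kappa (PABAK), which fixes chance agreement at 0.5 so that a skewed yes/no prevalence cannot depress the coefficient. A minimal sketch under that assumption:

```python
def pabak(rater_a, rater_b):
    """Prevalence- and bias-adjusted kappa for binary ratings.

    With chance agreement fixed at 0.5, the formula reduces to
    PABAK = 2 * p_o - 1, where p_o is observed agreement.
    """
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    return 2 * p_o - 1

# Reusing the hypothetical ratings from the sketch above (p_o = 0.8).
rater_1 = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_2 = ["yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]
print(round(pabak(rater_1, rater_2), 2))  # 0.6
```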
Conclusions: Results suggest that the Ch-IMP is a promising checklist for assessing whether reviews of provider-based programs targeting children and youth consider the impact of implementation variables. Used by authors and editors, the checklist may improve the quality of systematic reviews. Furthermore, the checklist shows promise as a pedagogical tool to facilitate the extraction and reporting of implementation characteristics.