Article type
Abstract
Background: The Critical Appraisal Tool for Experimental Interventions (CAT-EI) was developed by the American Physical Therapy Association. This risk of bias tool was designed with both appraisers and educators in mind. The tool comprises 3 sections: background information on the article, 12 questions on study design rigor, and 8 questions applied to up to 5 different outcome measures. This design helps users see why outcome measures within a single article may receive different levels of evidence due to differences in rigor or study application.
Objectives: To introduce the CAT-EI as a user-friendly risk of bias tool for experimental studies, present its reliability data, and describe strategies for its use in teaching critical appraisal.
Methods: Content validation and reliability were established with 46 guideline developers and physical therapy clinicians, whose CAT-EI answers were compared with an answer key. Revisions to the tool included placing the scoring definitions before the appraisal template. A second round of reliability testing was conducted after revision.
Results: For content validation, 73% of participants rated the questions as Clear and 14% as Somewhat Clear; 77% felt the questions were Appropriate to Include and 10% were Neutral. After the tool was restructured, ICCs for the first article were ICC(3,k) = .84 (CI = .767–.902) for individual items and .98 (CI = .973–.990) for the average of all items. The second article had an ICC(3,k) = .84 (CI = .778–.906) for individual items and .98 (CI = .979–.992) for the average of all items, indicating that the changes made in Phase 2 improved inter-rater reliability. Survey comments noted that the ability to separately appraise up to 5 outcome measures, each receiving its own level of evidence, was helpful. To date, 7 published, 1 in-press, and 3 in-progress clinical practice guidelines have used the tool successfully to appraise intervention studies.
Conclusions: Content validity and strong reliability were established for the CAT-EI, including among clinicians without significant training in critical appraisal or systematic review methods. The tool's layout supports teaching article dissection and appraisal, with differential analysis of multiple outcome measures and their impact on assigning levels of evidence. Guideline developers have used it successfully with many clinicians who volunteer as study appraisers.