Developing the CAT HPPR: a critical appraisal tool to assess the quality of systematic, rapid, and scoping reviews investigating interventions in health promotion and prevention

Authors
Heise TL1, Seidler A2, Girbig M2, Freiberg A2, Alayli A3, Fischer M3, Haß W3, Zeeb H1
1Leibniz Institute for Prevention Research and Epidemiology—BIPS, Bremen
2Institute and Policlinic of Occupational and Social Medicine, Faculty of Medicine, Technische Universität Dresden, Dresden
3Federal Centre for Health Education—BZgA, Cologne
Abstract
Background:
For over three decades, researchers have developed critical appraisal tools (CATs) for assessing the scientific quality of research overviews. Most established CATs for reviews in evidence-based medicine and evidence-based public health (EBPH) focus on systematic reviews (SRs) of experimental interventions or exposures (e.g. AMSTAR 2, the healthevidence.org CAT). For rapid evidence synthesis and the exploration of research fields, however, EBPH-oriented organisations often seek access to and fund rapid reviews (RRs) or scoping reviews (ScRs), a subclass of reviews that may differ from common intervention SRs in their research questions (e.g. ScRs: population, concept, context), applied methods (e.g. RRs: single screening), or included data sources (e.g. ScRs: project reports). To date, no CAT is available to assess the quality of SRs, RRs, and ScRs using a unified approach.

Objectives:
The primary goal was to develop a pragmatic CAT for assessing the scientific and reporting quality of SRs, RRs, and ScRs. The work was initiated by the German Statutory Health Insurance Alliance for Health (“GKV-Bündnis für Gesundheit”) and is aligned with the establishment of a review database project with a focus on health promotion and prevention in different settings.

Methods:
The development process of the Critical Appraisal Tool for Health Promotion and Prevention Reviews (CAT HPPR) included: (i) definition of important review formats and complementary approaches, (ii) identification of relevant CATs, (iii) prioritisation, selection and adaptation of quality criteria using a consensus approach, (iv) development of the rating system and guidance documents, (v) piloting/optimisation of the CAT with experts in the field, and (vi) approval of the final CAT.

Results:
We used a pragmatic search approach and established reporting guidelines/standards (n=4) as well as guidance documents (n=16) to develop working definitions for SRs, RRs, ScRs, and other review types (e.g. those defined by statistical methods or included data sources). We identified 13 relevant CATs, predominantly for SRs, and extracted 46 items. Following consensus discussions, 15 individual criteria were included in our CAT and tailored to the review types of interest. The CAT was piloted on 14 different reviews that were eligible for inclusion in the database.

Conclusions:
The newly developed CAT HPPR follows a unique, unified approach to assessing a set of heterogeneous review formats. Feedback from external experts indicated general feasibility of and satisfaction with the tool. Current limitations may arise from the lack of formal validity testing due to time constraints (3 months) in the tool development process. Besides applying the tool to larger review sets, formal validation could be a focal point for future research projects.

Patient or healthcare consumer involvement:
No patients or healthcare consumers were directly involved in the tool development. External reviewers (including potential end-users) provided feedback that led to optimisation of the overall tool.