The mixed methods appraisal tool for assessing studies with diverse designs: example from a systematic mixed studies review on the key processes and outcomes of participatory research with health organizations

Authors
Pluye P1, Bush P1, Macaulay A1, Loignon C2, Haggerty J1, Granikov V1, Repchinsky C3, Parry S4, Brown B5, Bartlett G1, Wright M6, Pelletier J7
1McGill University, Canada
2Université de Sherbrooke, Canada
3Canadian Pharmacists Association, Canada
4YMCA Quebec, Canada
5Saint Mary’s Hospital, Canada
6International Collaboration for Participatory Health Research, Germany
7Université de Montréal, Canada
Abstract
Background: Participatory Research with Health Organizations (PRO) is conducted with organization members, blending research and action to improve organizational practice. No systematic review of the PRO literature exists, so there is a need to identify evidence on the benefits and pitfalls of PRO, which will entail reviewing all types of evidence. Problem: Appraising the methodological quality of evidence derived from diverse study designs remains challenging. The Mixed Methods Appraisal Tool (MMAT) is the only critical appraisal tool that assesses the most common types of study designs, including mixed methods; it has been tested for efficiency and reliability.

Objective: To illustrate how the MMAT can help overcome the challenges associated with appraising the quality of studies with diverse designs. Design: Participatory systematic mixed studies review on the key processes and outcomes of PRO. Type of studies: Qualitative, quantitative, and mixed methods PRO studies. Eligibility criteria: Participatory health research; health organizations; English and French. Critical appraisal: Two independent reviewers will appraise included studies using the MMAT. Data extraction and analysis: The MMAT checklist includes 2 screening questions and 19 questions corresponding to 5 types of studies: qualitative research, randomized controlled trials, non-randomized studies, quantitative descriptive studies, and mixed methods studies. For each included study, reviewers will code all MMAT items. When disagreements between reviewers cannot be easily resolved, a third party will arbitrate. For each item, pre-discussion inter-reviewer reliability will be estimated (kappa statistic).
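The per-item reliability estimate mentioned above can be computed with Cohen's kappa, which corrects the raw agreement rate between two reviewers for agreement expected by chance. A minimal sketch, assuming the two reviewers' pre-discussion codes for one MMAT item are stored as parallel lists (the reviewer names and example codes below are hypothetical, not taken from the review):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters coding the same set of items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: proportion of items both raters coded identically.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement under independence, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical pre-discussion codes for one MMAT item across six studies.
reviewer_1 = ["yes", "yes", "no", "cant_tell", "yes", "no"]
reviewer_2 = ["yes", "no", "no", "cant_tell", "yes", "yes"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # prints 0.45
```

In practice a statistics package (e.g. scikit-learn's `cohen_kappa_score`) would give the same figure; the hand-rolled version is shown only to make the chance-correction step explicit.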

Results: We will present the current version of the MMAT (checklist and tutorial) and results of the critical appraisal of studies included in our systematic review.

Conclusion: The results will illustrate the MMAT's utility in overcoming the stated challenges, inform subsequent research exploring its ease of use, and support the development of a user-friendly online manual. Ultimately, we anticipate that our partners will disseminate our synthesis results and implement them to improve organizational practices.