Abstract
Background: Systematic reviews (SRs) are being published at an accelerated rate. Decision-makers are often faced with the challenge of comparing and choosing between multiple SRs on the same topic.
Objectives: We surveyed individuals in the healthcare field to understand what criteria they use to compare and select one or more SRs from among multiple SRs on the same topic.
Methods: We developed a survey with 21 open and closed questions. We disseminated it through social media and professional networks.
Results: Of the 684 respondents, 25% were health practitioners, 9% were policymakers, 39% were researchers, and 13% were students. Policymakers, practitioners, and researchers frequently sought out SRs (98.1%) as a source of evidence to inform decision-making. They frequently (97.7%) found more than one SR on a given topic of interest to them. Half (50%) struggled to choose the most valid and trustworthy SR from among multiple SRs. These difficulties related to lack of time (55.2%), insufficient skills/experience in quality appraisal (27.7%), and difficulty comparing different SRs (54.3%). Respondents compared SRs on the basis of relevance to their question of interest, methodological quality, and recency of the SR search date.
Conclusions: The exponential growth in the number of SRs leads to duplication of research on similar questions and compounds the problem of identifying which evidence is of the highest quality for decision-makers. Failure to critically appraise and choose the highest quality SRs means that practice and policy decisions may not reflect the best evidence, the implementation of better intervention strategies is delayed, and patients may unduly suffer.
Patient, public, and/or healthcare consumer involvement: Patients participated in and responded to the survey.