Abstract
Background: Systematic reviews (SRs) employ high methodological standards to summarize primary research, and offer the most reliable and valid support for health policy decision making and guideline development. SRs frequently take longer than a year to complete and, consequently, often do not meet the needs of those who must make decisions rapidly. Rapid reviews (RRs) are knowledge syntheses that abbreviate certain methodological aspects of SRs to produce information faster; they offer a pragmatic alternative to SRs. However, RRs may produce less reliable results than SRs. Incomplete or inaccurate information from RRs could increase the risk of incorrect or inferior decisions and recommendations that affect patients, practice, and policy.
Objective: To determine the degree of risk of getting a wrong answer that guideline developers and decision makers are willing to accept in exchange for faster evidence synthesis.
Methods: We designed and pilot-tested an online survey that asks participants to assign a value to the maximum risk of getting a misleading answer (wrong or inaccurate) that they are willing to accept in exchange for a rapid evidence synthesis. We will use a non-random, purposive sample of decision makers, contacted through email. All responses will be anonymous. We will administer the survey in two stages:
1. contacting individual decision makers who use evidence syntheses, identified through our professional networks and associations, and sending them a link to the survey; and
2. circulating a broad notice to targeted email distribution lists to enhance recruitment.
Survey enrollment is expected to run from April to July 2016, with reminder notifications sent at 2, 4, and 6 weeks.
Results: We will present our results at the Colloquium. Findings will provide insight into decision makers' attitudes towards the potentially lower reliability of results from RRs. We will use the results to establish a non-inferiority margin for an upcoming methods project that aims to test whether different abbreviated search strategies are non-inferior to comprehensive, systematic literature searches.