Abstract
Background: Systematic reviews (SRs) are the cornerstone of evidence-based medicine and guide the development of guidelines, policy decisions, and clinical decision making. However, a high-quality SR requires substantial resources and time. Methods to improve the efficiency of conducting SRs are needed.
Objectives: To evaluate the effectiveness of using dual monitors (i.e., two screens per computer) on speed (a measure of efficiency) and on inter-rater agreement between the two reviewers (a measure of a possible adverse effect of increased speed).
Methods: Between 2009 and 2013, a cohort of reviewers was assessed before and after adopting dual monitors and compared with a control group that did not use dual monitors. The outcomes of interest were time spent on abstract screening, full-text screening, and data extraction, and inter-rater agreement measured by Cohen's kappa. We fitted difference-in-differences linear regression models, adjusting for the number of studies eligible at each step, the number of data extraction questions, the rate of complicated questions, and each reviewer's SR experience and content expertise.
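For reference, a standard difference-in-differences specification consistent with the analysis described above is sketched here; the variable coding is our illustration, as the abstract does not report the exact model form:

$$Y_{ij} = \beta_0 + \beta_1\,\mathrm{Post}_{j} + \beta_2\,\mathrm{Dual}_{i} + \beta_3\,(\mathrm{Post}_{j} \times \mathrm{Dual}_{i}) + \boldsymbol{\gamma}^{\top}\mathbf{X}_{ij} + \varepsilon_{ij}$$

where $Y_{ij}$ is the time reviewer $i$ spent on task $j$, $\mathrm{Dual}_i$ indicates membership in the dual-monitor cohort, $\mathrm{Post}_j$ indicates the period after dual monitors were introduced, $\mathbf{X}_{ij}$ collects the adjustment covariates (eligible studies, extraction questions, rate of complicated questions, SR experience, content expertise), and the interaction coefficient $\beta_3$ is the difference-in-differences estimate of the dual-monitor effect. Cohen's kappa summarizes agreement as $\kappa = (p_o - p_e)/(1 - p_e)$, where $p_o$ is the observed agreement between the two reviewers and $p_e$ is the agreement expected by chance.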
Results: A total of 57 reviewers and 59 SRs were included in the analysis. Compared with the control group, we found a significant reduction in time spent on data extraction (median time difference = −22.23 minutes, 95% CI: −42.98 to −1.48, p = 0.04). No significant change was found in time spent on abstract screening or full-text screening, or in inter-rater agreement (Table 1).
Conclusions: Using dual monitors in SRs reduced the time spent on data extraction without affecting inter-rater agreement.