Abstract
Background: Information and communication technologies offer opportunities to improve the efficiency and user experience of systematic review, but require evaluation for benefits and harms prior to widespread use. Tools to support crowdsourcing of review tasks could significantly improve the efficiency of review development and the ability of authors to keep reviews up to date.
Objectives: The objective of this study was to assess the sensitivity, specificity, efficiency and user experience of citation screening by medical students using alternative screening technologies, compared with the decisions of experienced Cochrane review authors.
Methods: In this pilot randomized trial, forty medical students were randomized 1:1:1:1 to screen 500 titles and abstracts using either paper printouts; EndNote reference management software; ‘Regroup’, a web-based software prototype; or ‘Screen2Go’, an iPhone application. Search results from a single Cochrane systematic review were used, and the screening decisions of an experienced pair of authors were defined as the reference standard. Screening decisions within each study arm were compared with the reference standard to generate estimates of sensitivity and specificity. In the primary analysis, the performance of each screening modality was compared with the reference standard based on independent screening by a single reviewer. Secondary analyses assessed screening decisions derived from multiple independent reviewers without consensus decision making. Screening efficiency was assessed using automated and self-reported data on time to completion of screening. In-depth interviews with a random subgroup of trial participants were used to explore user experience.
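As a concrete illustration of the primary analysis, the following Python sketch computes sensitivity and specificity for a single screener's include/exclude decisions against the reference standard. This is not the study's analysis code; the function name and the example data are hypothetical, chosen only to show how the two estimates are derived from the four cells of a 2x2 comparison.

```python
# Minimal sketch (hypothetical data): compare one screener's decisions
# (True = include) with the reference standard set by the experienced
# author pair, over the same ordered list of citations.

def screening_accuracy(screener, reference):
    """Return (sensitivity, specificity) for one screener."""
    tp = sum(s and r for s, r in zip(screener, reference))          # included correctly
    fn = sum(not s and r for s, r in zip(screener, reference))      # missed eligible citations
    tn = sum(not s and not r for s, r in zip(screener, reference))  # excluded correctly
    fp = sum(s and not r for s, r in zip(screener, reference))      # included ineligible citations
    sensitivity = tp / (tp + fn)  # proportion of truly eligible citations retained
    specificity = tn / (tn + fp)  # proportion of ineligible citations excluded
    return sensitivity, specificity

# Example: 10 citations; the reference standard includes citations 0, 3 and 7.
reference = [True, False, False, True, False, False, False, True, False, False]
screener  = [True, False, True, True, False, False, False, False, False, False]
print(screening_accuracy(screener, reference))  # (0.667, 0.857), rounded
```

In citation screening, sensitivity is usually the estimate of greatest interest, since a missed eligible study cannot be recovered at later review stages, whereas a falsely included citation is simply excluded at full-text assessment.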
Results: Final results of the pilot randomized trial will be presented, including primary and secondary analyses of sensitivity and specificity, assessments of screening efficiency and results of the qualitative investigation of user experience.
Conclusions: Evaluation of screening technologies is an important step towards crowdsourcing contributions to systematic reviews.