Many hands make light work—or do they? Results of two pilot studies looking at the effects of crowdsourcing

Authors
Noel-Storr A1, Struthers C2, Cullum S3, McShane R1, Creavin S4, Davis D5, Huckvale K6
1Cochrane Dementia and Cognitive Improvement Group
2Cochrane Training
3University of Bristol
4North Bristol NHS Trust
5Cambridge University
6Imperial College London
Abstract
Background: The production and maintenance of Cochrane systematic reviews is no small undertaking. It is a process made up of many steps, and author teams frequently stall or lose momentum. These two studies sought to assess whether it is feasible to recruit individuals to perform a task essential to Cochrane Reviews and CENTRAL in a way that allows the task to be completed successfully, in a timely manner, and without compromising the methodological rigour required of Cochrane Reviews.

Methods: Two studies were conducted, each looking at the use of ‘a crowd’ to complete a task vital to the maintenance of Cochrane Reviews and CENTRAL. In the first study, Trial Blazers, participants were given 200 citations to screen for potential inclusion in CENTRAL; the gold standard was whether the citations they selected were indeed reports of randomised or controlled trials (and therefore suitable for inclusion in CENTRAL). In the second study, a different cohort of participants was given 250 citations to screen for potential inclusion in a Cochrane diagnostic test accuracy (DTA) review; the gold standard was the set of studies selected for inclusion in the review by the expert author team. Outcomes were: efficacy (were randomised trials and diagnostic test accuracy studies correctly identified, and were citations that were neither correctly discarded?); participant motivation to take part; participant perceptions of the task in terms of difficulty and the skills required; and the ease of performing the task using the technology provided (a mobile screening tool in one study, traditional PC-based bibliographic software in the other).
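As a brief clarification of the efficacy outcome, the measures reported in the Results correspond to the standard definitions of sensitivity and specificity applied to participants' screening decisions, with the gold standard described above (CENTRAL eligibility or the expert author team's inclusion decisions) as the reference:

$$
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}
$$

where TP denotes eligible citations correctly retained, FN eligible citations wrongly discarded, TN ineligible citations correctly discarded, and FP ineligible citations wrongly retained.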

Results: Sensitivity and specificity were very good in both studies for those who completed the task. However, drop-out was high, raising questions about participant incentives and motivation to perform the task, the importance of providing accessible support and guidance, and the need for a smooth, intuitive user pathway when mobile technologies are used to perform the task.

Conclusions: Non-traditional contributors can be recruited and can successfully perform this task, which is vital both for Cochrane Reviews and for CENTRAL. However, crowdsourcing is not an easy option. High drop-out is to be expected, and the chance of success on a large scale will rely heavily on reliable technology, accessible guidance and, ultimately, an excellent understanding of user incentives and motivations.