The opinions of guideline developers towards automation in health research: a thematic analysis using Rogers’ Diffusion of Innovation

Article type
Authors
Arno A1, Thomas J1, Elliott J2, Wallace B3
1Institute of Education, University College London
2School of Public Health and Preventive Medicine, Monash University
3College of Computer and Information Science, Northeastern University
Abstract
Background: there is an evolving discussion about the use of advanced technology to aid the completion of systematic reviews, including machine learning and crowdsourcing. A key aim of producing high-quality health evidence is to use it to develop guidelines. Guideline developers are therefore key gatekeepers in the acceptance and use of evidence produced using machine learning and crowdsourcing, yet there has been no research to date on their attitudes towards these methods.

Objectives: the objective of this study was to describe and analyse the attitudes of guideline developers towards the use of machine learning and crowdsourcing in evidence synthesis for health guidelines, through the lens of Rogers' Diffusion of Innovation framework. This well-established theory posits five dimensions that affect the adoption of novel technologies: Relative advantage, Compatibility, Complexity, Trialability, and Observability.

Methods: we recruited and interviewed individuals who were currently working, or had previously worked, in guideline development. After transcription, we analysed the data using a multiphase approach combining deductive and grounded methods. First, we coded transcripts deductively, using Rogers' Diffusion of Innovation dimensions as the top-level themes. In a second phase, we used a grounded approach to identify contributing sub-themes within each of these themes.

Results: participants were consistently most concerned with the theme of Compatibility (the extent to which an innovation is in line with current values and practices). Respondents were also concerned with Relative advantage and Observability, which were discussed to roughly equal extents. Regarding Observability in particular, participants expressed a desire for transparency in the methodology of automation software. Participants showed little interest in Complexity and Trialability, which were discussed infrequently. These results were reasonably consistent across all participants.

Conclusions: if machine learning and other automation technologies are to be used in systematic reviews and guideline development, it will be important to maximize the transparency of these methods to address the concerns of these key stakeholders. It will also be crucial to ensure that new technologies are in line with the current values of research practice.