Article type
Year
Abstract
Background: Several tools are available to support the various steps of the systematic review process; however, their psychometric properties vary considerably.
Objectives: The objective of this presentation is to identify the reliability and validity of the available tools, their limitations, and any recommendations to further improve the use of these tools.
Methods: A scoping review was conducted following the JBI scoping review methodology to map the published literature on the challenges of, and solutions for, conducting evidence synthesis.
Results:
A total of 47 publications were included in the review. The current scoping review identified that LitSuggest, Rayyan, Abstrackr, BIBOT, R software, RobotAnalyst, DistillerSR, ExaCT, and NetMetaXL have potential for automating steps of the systematic review process. However, they are not without limitations. The review also identified studies that employed algorithms that have not yet been developed into user-friendly tools. Some of these algorithms showed high validity and reliability, but their use is conditional on user knowledge of computer science and algorithms.
Conclusions: Abstract screening tools have reached maturity, whereas data extraction remains an active area of development. Developing methods to semi-automate the different steps of evidence synthesis via machine learning remains an important research direction. It is also important to move from the research prototypes currently available to professionally maintained platforms.
Patient, public, and/or healthcare consumer involvement: None to declare.