Abstract
Background:
Rapid reviews often use methods not considered the ‘gold standard’ in order to expedite evidence reviews. Increasingly, systematic review software, such as DistillerSR, and online reviewing tools offer automated approaches to systematic review tasks traditionally conducted by human reviewers. The intent of automation is to speed up routine tasks (e.g., title and abstract screening), thereby freeing up human capacity for the tasks that require it. Automation can also improve the quality of reviews in other ways. In practice, however, reviewers can face practical and philosophical hurdles to using machines for systematic review tasks. We therefore ask: can machines help us as members of a rapid review team?
Objectives:
• Explore how machine learning can help with rapid reviews
• Describe a case study of using machine learning in a rapid review environment
Methods:
Using Google Scholar, we searched for recent abstracts and published papers on the use of DistillerSR’s artificial intelligence (AI) capabilities in systematic reviews. We focused on papers published since 2019 so that the findings would apply as closely as possible to the current version of the software.
For the case study, we selected a rapid review conducted using our standard methods (i.e., a single reviewer with quality assurance from a senior researcher) and assessed how we could use DistillerSR’s AI in the rapid review process. We also explored the potential benefits and limitations of our approach.
Results:
DistillerSR has been evaluated as an automated screening tool across a range of publication types, including randomized controlled trials. However, one evaluation suggested that there are limitations to replacing human screening with automated screening alone. Based on the published evaluations, we decided that, in our context, DistillerSR would be a useful second screener to quickly exclude references at the title and abstract screening stage. We also explored how DistillerSR could be used to ‘check’ the decisions of the human reviewer at the full-text stage. We will present our experience of using DistillerSR’s AI tool, including the practical challenges we faced; a sketch of the general idea follows.
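To make the ‘second screener’ idea concrete, the sketch below shows one way a machine learner could double-check a single human reviewer’s title and abstract decisions. This is not DistillerSR’s actual algorithm or API (which is proprietary); it assumes a simple TF-IDF plus logistic regression classifier (scikit-learn) trained on the human reviewer’s earlier decisions, with disagreements flagged for senior-researcher quality assurance. All reference texts, labels, and thresholds are hypothetical.

```python
# Illustrative sketch only: a generic machine "second screener" for
# title/abstract screening. Not DistillerSR's method; assumptions
# (classifier, threshold, example data) are the author's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled decisions from the human reviewer:
# (title + abstract text, include? 1 = yes, 0 = no)
screened = [
    ("RCT of drug X for condition Y: a randomized controlled trial", 1),
    ("Editorial: reflections on clinical practice", 0),
    ("Systematic review of intervention Z outcomes", 1),
    ("Conference announcement and call for papers", 0),
    ("Cohort study of exposure A and outcome B", 1),
    ("Letter to the editor regarding a previous article", 0),
]
texts = [t for t, _ in screened]
labels = [y for _, y in screened]

# Train the machine second screener on the human decisions so far.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Compare the machine's call with the human reviewer's call on new
# references; disagreements are flagged for the senior researcher.
new_refs = [
    ("Randomized trial of drug X in adults", 1),  # human: include
    ("Opinion piece on review methodology", 1),   # human: include
]
for text, human_call in new_refs:
    prob_include = model.predict_proba([text])[0, 1]
    machine_call = int(prob_include >= 0.5)      # hypothetical cut-off
    if machine_call != human_call:
        print(f"FLAG for senior review (p={prob_include:.2f}): {text!r}")
```

In practice, a tool like DistillerSR manages this workflow internally; the point of the sketch is simply to show where a machine ‘second opinion’ can slot into a single-reviewer rapid review.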
Conclusions:
Automated screening processes are increasingly promoted as an effective and efficient way to improve decision accuracy and reduce review time. While we found that machine screening can usefully provide another level of certainty about reviewer decisions, we also encountered practical challenges. These included understanding the strengths and limitations of machine selection and applying the processes in real life. We also faced challenges around reviewer confidence in the results. Are machines our friends? We think so, but we would like to get to know them better.
Patient or healthcare consumer involvement:
Not relevant for this submission