Abstract
Background: Artificial intelligence (AI) can improve the efficiency of time-consuming tasks such as title/abstract screening, but many groups that conduct systematic reviews have found it difficult to implement. DistillerSR provides review authors with a cloud-based tool to screen, assess, and abstract studies, and offers three types of AI: Re-Ranking, AI Screening, and Classifiers. Re-Ranking and AI Screening are easy to implement, requiring no coding knowledge and no training on large datasets. Re-Ranking aims to “bubble up” the most relevant results during screening so that potentially relevant studies are identified faster, while AI Screening can act as a second reviewer and reduce the title/abstract screening burden on humans. But are these tools actually saving researchers time?
Objectives: To measure how much time was saved, or could have been saved, by using DistillerSR Re-Ranking and AI Screening for title/abstract screening.
Methods: We examined all scoping, rapid, and systematic reviews conducted by our organization since the introduction of the DistillerSR AI tools, regardless of whether those tools were used. We calculated the total time humans spent on title/abstract screening and the average time spent per record. We employed DistillerSR’s Re-Ranking Simulation to determine the predicted proportion of abstracts that needed to be screened to identify all abstracts selected for inclusion (i.e., the included study plateau). Finally, we estimated the human time saved with AI Screening by multiplying the number of records screened with AI by the average time spent per record by human screeners.
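As a rough illustration of the two calculations described above, the sketch below shows the arithmetic in Python. All function names and numbers are hypothetical and do not appear in the source; only the logic (proportion of a ranked list needed to capture all includes, and records delegated to AI multiplied by average human time per record) comes from the Methods.

```python
# Hypothetical sketch of the Methods' two estimates; names and
# numbers are illustrative, not taken from the study.

def plateau_proportion(ranked_is_include: list[bool]) -> float:
    """Proportion of a relevance-ranked list that must be screened
    before every included record has been seen (the 'included study
    plateau' predicted by the Re-Ranking Simulation)."""
    last_include = max(i for i, inc in enumerate(ranked_is_include) if inc)
    return (last_include + 1) / len(ranked_is_include)

def estimated_minutes_saved(records_screened_by_ai: int,
                            total_human_minutes: float,
                            records_screened_by_humans: int) -> float:
    """Human time saved by AI Screening: records delegated to AI
    multiplied by the average human minutes per screened record."""
    minutes_per_record = total_human_minutes / records_screened_by_humans
    return records_screened_by_ai * minutes_per_record

# Made-up example: all includes fall in the first 600 of 3,000 ranked
# records, and 2,000 records are delegated to AI after humans spent
# 50 hours screening 3,000 records.
ranked = [i < 600 for i in range(3000)]
print(f"plateau at {plateau_proportion(ranked):.0%} of records")
print(f"~{estimated_minutes_saved(2000, 50 * 60, 3000) / 60:.1f} h saved")
```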
Results: We found that these tools, used together during title/abstract screening, can save substantial review time, with additional savings possible when combined with human judgment about when the included study plateau has likely been reached.
Conclusions: Easy-to-implement AI tools like those offered through DistillerSR can substantially reduce the time researchers spend screening, particularly in rapid review environments. The time saved can be redirected to the analysis and write-up stages of a review.
Patient, public and/or healthcare consumer involvement: This study has indirect relevance to patients. Improved efficiency in identifying relevant studies can free researcher time for more robust analysis and write-up.