The use of a machine learning tool to screen sources in a rapid scoping review

Authors
Cormac G1, Grossman M1, Langman E2, Macdonald M2, Moody E2, Pham B3, Tricco A4, Weeks L2
1University of Waterloo, Waterloo, Canada
2Aligning Health Needs and Evidence for Transformative Change: A JBI Centre of Excellence; School of Nursing, Faculty of Health, Dalhousie University, Nova Scotia, Canada
3University of Toronto, Toronto, Canada
4Unity Health Toronto, Toronto, Canada
Abstract
Background: The volume of systematic reviews has grown exponentially, catalyzed by the COVID-19 pandemic. The demand for the best available evidence as soon as possible created the need for rapid reviews. Such reviews necessitate modification of standard systematic and scoping review steps and, in the context of this review, the use of a machine learning tool, Computer Assisted Learning (CAL), to accommodate screening of a high volume of sources of evidence.
Objective: To describe and discuss the use of CAL for title and abstract screening in a rapid scoping review of recent public initiatives to improve long-term care coverage, quality, financial protection, and financial sustainability for those aged 60 years and older.
Methods: JBI scoping review methods informed this rapid review. MEDLINE, CINAHL (Cumulative Index to Nursing and Allied Health Literature), Embase, EconLit, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform, ProQuest Dissertations and Theses, and resources within Canada’s Drug and Health Technology Agency Grey Matters database were searched for sources in any language published between 2017 and 2022. Search results were uploaded to CAL for title and abstract screening against inclusion and exclusion criteria by 3 independent reviewers, using a simplified version of the search strategy as the seed text. All included records were exported to Covidence for a second round of title and abstract screening by 2 reviewers, followed by full-text screening and data extraction.
Results: The searches produced 71,981 titles and abstracts, from which 20,252 duplicates were removed. CAL marked 47,661 records as ineligible, and human reviewers marked a further 3,179 as ineligible. Of the 889 records screened by reviewers, 809 were excluded; 80 full-text reviews were completed, and 24 research reports met the inclusion criteria.
Conclusion: The use of a machine learning tool produced a dataset of relevant studies within a manageable screening time (6 weeks). Rapid reviews are essential to the timely provision of evidence for clinicians, decision-makers, and policymakers, and to optimal patient care; CAL can be a key facilitator of their conduct.