How should we handle predatory journals in evidence synthesis?

Authors
Barker T1, Pollock D1, Stone J2, Klugar M3, Scott A4, Stern C2, Wiechula R5, Shamseer L6, Aromataris E2, Ross-White A7, Munn Z1
1Health Evidence Synthesis, Recommendations and Impact, School of Public Health, The University of Adelaide
2JBI, Faculty of Health and Medical Sciences, The University of Adelaide
3Czech National Centre for Evidence-Based Healthcare and Knowledge Translation (Cochrane Czech Republic, Czech EBHC: JBI Centre of Excellence, Masaryk University GRADE Centre)
4Institute for Evidence-Based Healthcare, Bond University
5Adelaide Nursing School, Faculty of Health and Medical Sciences
6Knowledge Translation Program, Li Ka Shing Knowledge Institute, Unity Health Toronto
7Queen's University Library and Queen's Collaboration for Health Care Quality (QcHcQ): JBI Centre of Excellence, Queen's University
Abstract
Background:
Synthesizers of evidence are increasingly likely to encounter studies published in predatory journals during the evidence synthesis process. Predatory journals, and the studies published within them, have caused significant disruption across the scientific landscape and present unique concerns within academia. These concerns stem from their lack of transparent editorial oversight, which may increase the risk that the studies they publish are erroneous or fraudulent.

Objectives:
As part of a broader aim to develop methodological guidance on the use of studies published in predatory journals in evidence syntheses, the objective of this research was to explore the attitudes, opinions, and experiences of evidence synthesis experts regarding predatory journals.

Methods:
A global, descriptive, survey-based cross-sectional study was carried out between April 1, 2021, and June 1, 2021. The survey was sent to prominent bodies in the field of evidence-based healthcare and systematic reviews, including JBI, Cochrane, the Guidelines International Network (GIN), the Campbell Collaboration, and GRADE. The study used a self-administered questionnaire that was coded and disseminated online through SurveyMonkey for data collection from the target participants. Because the target audience comprised individuals with experience in evidence synthesis, survey logic was used to exclude responses from respondents who lacked this experience.

Results:
Two hundred and sixty-four evidence synthesis experts responded to the survey. Most respondents agreed with the definition of a predatory journal (86%); however, 19% indicated that this definition was difficult to apply in practice. Many respondents believed that studies published in predatory journals remain eligible for inclusion in an evidence synthesis, but only after the study had been judged to be ‘high quality’ (39%) or its results had been validated (13%). Only 32% of respondents had previously used a checklist or tool to identify a predatory journal.

Conclusions:
The results of this project indicate a need for consensus-based guidance on the inclusion of studies published in predatory journals in evidence synthesis projects. While critical appraisal of these studies is an expected quality control measure, evidence synthesis authors are urged to consider additional safeguards in their future evidence synthesis projects.