Project TAKE 5—transforming the antecedent ‘know-how’ for evidence synthesis: 5 factors to consider when conducting systematic reviews!

Authors
Jadotte Y1, Holly C1, Salmond S1
1University of Medicine and Dentistry of New Jersey
Abstract
Background: The need to increase the number of high-quality systematic reviews published annually is clear. Federally funded evidence-based practice centers in the US, and worldwide bodies that promote and coordinate evidence synthesis activities such as the Cochrane Collaboration and the Joanna Briggs Institute, struggle to increase the production of high-quality systematic reviews. While research has already shown that conducting systematic reviews in a supervised environment, as defined and guided by these and similar organizations, increases the quality of reviews compared with those published outside of this setting, the factors that determine the efficiency of methods for conducting systematic reviews are understudied.

Objectives: This project seeks to elucidate the factors that influence the efficiency of systematic review production methods and, more specifically, to identify those that limit or increase the rate at which high-quality systematic reviews are conducted and published.

Methods: This study uses a comparative, mixed-methods assessment of the systematic review-related activities of one US-based center for evidence synthesis. The center consists of two physically distinct sites, each located in a different academic institution. Known quality-related factors (e.g., training of reviewers, editorial support for ongoing systematic reviews) are identical across sites.

Results: The systematic review production rate appears to be heavily influenced by institutional factors such as research expectations and policies related to the publication of systematic reviews, faculty perceptions, graduate student involvement, and the type of mentorship provided.

Conclusions: This pilot study suggests five factors that could be modulated to increase the systematic review production rate of evidence synthesis centers. More comparative effectiveness research is needed to identify additional factors and to test their impact on institutional evidence synthesis capacity.