Background: Data extraction forms link systematic reviews with the primary research and provide the foundation for appraising, analysing, summarizing and interpreting the body of evidence. This makes their development, pilot testing and application a crucial part of the systematic review process. Several studies have shown that data extraction errors are frequent in systematic reviews, especially regarding outcome data. Despite this, data extraction methods receive relatively little attention in the literature.
Objectives: We sought to review the guidance that is available to systematic reviewers for the development, pilot testing and application of data extraction sheets.
Methods: We reviewed four types of sources: 1) methodological handbooks of major systematic review organisations (SROs); 2) textbooks on conducting systematic reviews; 3) methods documents from health technology assessment (HTA) agencies; and 4) published journal articles on the use of data extraction sheets in systematic reviews. Documents were retrieved in February 2019. We extracted recommendations on the development, pilot testing and application of extraction forms. Items were chosen based on iterative reading of relevant guidance until saturation was reached, as well as on personal experience in conducting systematic reviews. One author extracted the data and a second author checked it for accuracy. We summarized our findings descriptively.
Results: We analysed 4 SRO handbooks, 11 textbooks and 6 HTA documents. Database searches for journal articles are currently being conducted. Preliminary results show that the most common recommendations on form development are that reviewers should plan in advance which data to extract; develop or adapt an extraction form tailored to their review question; provide instructions on its use; and make sure to link multiple reports of the same study. While piloting the sheet is often recommended, little information is provided on how this should be done. Regarding the data extraction process, the most frequent recommendation is that data should be extracted by two reviewers (mostly independently) and that procedures to deal with disagreements should be in place. Few sources made recommendations on the expertise of the reviewers involved, their training, or reliability assessments.
Conclusions: Overall, our preliminary results suggest a lack of comprehensiveness and consistency in the recommendations of many of the reviewed documents. This may be particularly problematic for less experienced reviewers. Limitations of our method include the scoping nature of the review and the fact that we did not analyse internal documents of HTA agencies.
Patient or healthcare consumer involvement: Because this is a descriptive methodological analysis, we did not involve patients or healthcare consumers.