Extraction error or interpretation: A case study of event data extraction in systematic reviews from three high-impact, high-quality peer-reviewed sources

Authors
Carroll C¹, Scope A¹, Kaltenthaler E¹
¹University of Sheffield, UK
Abstract
Background: Data extraction is a crucial but under-researched element of the systematic review process. Extraction errors may occur in any variable extracted for a review, but outcome data appear to generate the most errors.

Objectives: To assess differences in the event data extracted and analysed for three key outcomes in three systematic reviews on the same topic, published within 18 months of one another in the following sources: The Cochrane Library (CL); the British Medical Journal (BMJ); and the authors' own Health Technology Assessment review.

Methods: We compared event data from reviews assessing the effectiveness of total hip replacement versus hemiarthroplasty for the treatment of displaced intracapsular hip fracture. For the trials common to all three reviews, we compared the event data extracted and analysed for the three outcomes reported by all of them: dislocation rates, one-year mortality rates and revision rates. Differences and possible bias were investigated.

Results: Across the three outcomes, extraction errors accounted for between 8% and 25% of the event data extracted and analysed in the BMJ review and between 0% and 17% in the CL review. A further 8-33% of differences in the BMJ review and 8-17% in the CL review might be explained by issues of interpretation, e.g. applying an apparent 'best case' scenario analysis without justifying or explaining that choice. These discrepancies produced small differences in the meta-analysed relative risks, but none was statistically significant.
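For context, the standard formulas (a general sketch, not the reviews' own analysis) show how a miscounted event propagates to the pooled result. The per-trial relative risk is

\[ \mathrm{RR}_i = \frac{a_i / n_{1i}}{c_i / n_{2i}}, \]

where \(a_i\) and \(c_i\) are the extracted event counts and \(n_{1i}\), \(n_{2i}\) the arm sizes in trial \(i\). A fixed-effect pooled estimate such as the Mantel-Haenszel relative risk,

\[ \mathrm{RR}_{\mathrm{MH}} = \frac{\sum_i a_i n_{2i} / N_i}{\sum_i c_i n_{1i} / N_i}, \qquad N_i = n_{1i} + n_{2i}, \]

is a weighted combination of the trial-level risks, so any error in an extracted event count shifts the pooled relative risk directly.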

Conclusions: A systematic review offers a robust method of evidence review and synthesis only if its required processes are fully implemented. The presence of errors in key outcome data suggests that full and proper implementation is not always achieved, even with apparent 'double data extraction' in high-quality sources of systematic reviews. Reviewers should also make every effort to clarify or explain their choices of data.