Abstract
Objectives: Data abstractors’ level of experience may affect the accuracy of data abstracted for systematic reviews. Using data from a randomized cross-over trial that compared different data abstraction approaches, we examined the association between abstractors’ level of experience and the accuracy of data abstraction.
Methods: In the trial, we classified individuals as 'more experienced' if they had authored at least three published systematic reviews, and as 'less experienced' otherwise. The current analysis used data from one approach tested in the trial, in which more and less experienced abstractors abstracted data independently. Each abstractor abstracted data related to study design, baseline characteristics, and outcomes/results from six articles describing clinical trials into an online data system. To evaluate accuracy, we determined 'errors' by comparing abstracted data with an 'answer key' generated by two investigators (TL and IJS). We considered two types of errors: 1) incorrect abstractions, that is, abstracted items that differed from the 'answer key' (any difference was considered an error); and 2) omissions, that is, items missed by the abstractors. To estimate the proportion of errors by level of experience, we used a two-level binomial generalized linear mixed model, adjusting for design factors.
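For readers interested in how such a two-level binomial mixed model might be fit in practice, the sketch below uses Python's statsmodels, which offers a Bayesian approximation to a binomial mixed GLM (a stand-in for the frequentist model described above, since statsmodels has no frequentist binomial GLMM). The data file, the column names (error, experience, item_type, article, abstractor), and the choice of design-factor covariates are illustrative assumptions, not the study's actual code or variables.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Illustrative data frame: one row per abstracted item.
# Column names are assumptions for this sketch, not the study's variables.
#   error      : 1 if the item was abstracted incorrectly or omitted, else 0
#   experience : "more" or "less" experienced abstractor
#   item_type  : "design", "baseline", or "outcome"
#   article    : identifier of the trial report (design factor)
#   abstractor : identifier of the abstractor (second level / random intercept)
data = pd.read_csv("abstraction_errors.csv")  # hypothetical file

# Two-level binomial (logistic) mixed model: fixed effects for experience and
# the design factors, plus a random intercept for each abstractor.
model = BinomialBayesMixedGLM.from_formula(
    "error ~ C(experience) + C(item_type) + C(article)",
    vc_formulas={"abstractor": "0 + C(abstractor)"},
    data=data,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

Exponentiating the fixed-effect coefficient for experience would give an adjusted odds ratio comparable in spirit to those reported in the Results.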
Results: Across all types of data, the error proportions were lower for more experienced abstractors (19%) than for less experienced abstractors (21%). Most errors pertaining to outcomes/results were due to omissions (45%) rather than incorrect abstractions (5%). Compared with less experienced abstractors, and after adjusting for design factors, more experienced abstractors had lower odds of errors in items related to outcomes/results (adjusted odds ratio (OR) 0.53, 95% confidence interval (CI) 0.34 to 0.82) and in items related to study design (adjusted OR 0.83, 95% CI 0.64 to 1.09), but higher odds of errors in items related to baseline characteristics (adjusted OR 1.42, 95% CI 0.97 to 2.06) (Table 1).
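As a small arithmetic illustration of how an adjusted OR and its 95% CI relate to the underlying model coefficient, the sketch below back-calculates the interval from a log odds ratio and an assumed standard error. The standard error value is not reported in the abstract; it is chosen here only so that the interval approximately reproduces the reported 0.34 to 0.82.

```python
import math

# Hypothetical values for illustration only: the log odds ratio corresponding
# to the reported adjusted OR of 0.53 for outcomes/results items, and an
# assumed standard error (not taken from the study).
log_or = math.log(0.53)
se = 0.22  # assumed

or_point = math.exp(log_or)
ci_lower = math.exp(log_or - 1.96 * se)
ci_upper = math.exp(log_or + 1.96 * se)
print(f"Adjusted OR {or_point:.2f}, 95% CI {ci_lower:.2f} to {ci_upper:.2f}")
# -> Adjusted OR 0.53, 95% CI 0.34 to 0.82
```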
Conclusions: Experience with data abstraction does matter. Evidence suggests that more experienced abstractors may abstract complex data items more accurately than less experienced abstractors. This indicates the need for better training on such aspects of data abstraction.
Patient or healthcare consumer involvement: This study suggests the importance of training and educating patients or healthcare consumers who are involved in systematic reviews.