Abstract
Introduction/Objective: Systematic reviews aim to summarize the results of original research in order to draw conclusions about the efficacy of treatments. Many systematic reviews include an assessment of the methodological quality of the individual trials, because variation in quality may affect the overall conclusions. We evaluated the impact of trial design attributes, such as a proper randomization procedure and blinding, on effect sizes, using the individual items provided by different criteria lists.
Methods: A data set of 44 trials on the efficacy of conservative interventions in patients with an acute lateral ankle sprain was used. Quality assessment was performed using a combined criteria list containing all items from the Maastricht list, the Jadad list, and the Delphi list. The quality scores of the studies were determined independently by two of the authors (AFV, AFL), followed by a consensus meeting. To investigate the impact of design attributes, trial results were aggregated by subgroup, after which pooled effect sizes were compared.
Results: Quality scores varied from rather low to reasonably good. Comparison of the three criteria lists showed a moderate to good correlation. Only 26 of the 44 studies allowed calculation of effect sizes for one or more outcome measures. When the randomization procedure was unknown, or when blinding was not reported, the pooled effect sizes were lower than in studies with a concealed or appropriate randomization procedure, or in which blinding was reported.
Discussion: Proper randomization and blinding had an impact on effect sizes. There was no difference between the three criteria lists on this point. The direction of the influence of the design attributes was contrary to that found in other empirical research.