Abstract
Objective:
To provide the reviewer with a background to the problem of outcome selection bias and how it might lead to misleading conclusions, to demonstrate how a reviewer might identify such bias in their review, and to present techniques for assessing the robustness of the meta-analysis to such bias.
Summary:
Within-study selective reporting bias has been defined as the selection, on the basis of the results, of a subset of the analyses undertaken for inclusion in a study publication. Sources of such bias will be described. The workshop will focus on outcome selection bias, and the effect of within-study selective reporting of outcomes will be demonstrated.
Direct empirical evidence for the existence of outcome selection bias is accumulating. In a meta-analysis it is often the case that a total of k eligible studies is identified but only n report the data of interest. The reviewer needs to examine the remaining (k - n) studies to establish whether the outcome of interest has been collected but not reported. Ideally this involves contact with the original trialists, which may result in missing data being made available or may confirm that the outcome data were not recorded. However, it is likely that in a subset of these studies, m (< k - n) say, no such information is forthcoming.
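As a minimal sketch of this bookkeeping, the snippet below classifies a hypothetical set of study records into the k, n, (k - n), and m groups described above. All study identifiers, field names, and values are invented for illustration; a real review would draw these from its own data-extraction forms.

```python
# Illustrative sketch only: hypothetical bookkeeping for the k / n / m counts.
# Each eligible study records whether the outcome of interest was reported and,
# after contacting the trialists, whether any data or confirmation of
# non-collection was obtained.
studies = [
    {"id": "Trial A", "outcome_reported": True,  "info_from_authors": None},
    {"id": "Trial B", "outcome_reported": False, "info_from_authors": "data supplied"},
    {"id": "Trial C", "outcome_reported": False, "info_from_authors": "outcome not collected"},
    {"id": "Trial D", "outcome_reported": False, "info_from_authors": None},
]

k = len(studies)                                                   # all eligible studies
n = sum(s["outcome_reported"] for s in studies)                    # studies reporting the outcome
followed_up = [s for s in studies if not s["outcome_reported"]]    # the k - n studies to examine
m = sum(s["info_from_authors"] is None for s in followed_up)       # no information forthcoming

print(f"k = {k}, n = {n}, k - n = {k - n}, m = {m}")
# With this toy data: k = 4, n = 1, k - n = 3, m = 1
```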
It is important to assess the level of suspicion that selective non-reporting has occurred in these m studies. Methods for identifying within-study selective reporting, both in an individual study and across a meta-analysis, will be described and illustrated using examples.
If the level of suspicion is high, a useful first stage is to undertake a sensitivity analysis to assess the robustness of the meta-analysis to selective reporting. Two methods for such an analysis will be illustrated and compared using examples.
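The abstract does not name the two methods, so the sketch below shows only one generic bound-type sensitivity analysis, under stated assumptions: the reported studies are pooled with a fixed-effect inverse-variance model, and the analysis is repeated after imputing a null and then a deliberately unfavourable effect for the m suspect studies. All effect sizes and variances below are invented for illustration.

```python
# Minimal sketch of a bound-type sensitivity analysis (not necessarily either of
# the two methods covered in the workshop): re-pool the meta-analysis after
# imputing extreme but plausible effects for the m studies suspected of
# selective non-reporting, and see whether the conclusion changes.
import math

def pooled_fixed_effect(effects, variances):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1.0 / v for v in variances]
    est = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Hypothetical log odds ratios and variances from the n studies reporting the outcome.
reported_effects = [-0.40, -0.25, -0.55]
reported_vars = [0.04, 0.06, 0.05]

base_est, base_se = pooled_fixed_effect(reported_effects, reported_vars)
print(f"reported studies only: {base_est:.2f} (SE {base_se:.2f})")

# Suppose m = 2 studies collected the outcome but gave no information.
# Impute a null effect (0.0) or an effect favouring control (+0.40), with a
# plausible variance, and re-pool to gauge robustness.
m_vars = [0.08, 0.08]
for label, imputed in [("null imputation", 0.0), ("unfavourable imputation", 0.40)]:
    est, se = pooled_fixed_effect(reported_effects + [imputed] * len(m_vars),
                                  reported_vars + m_vars)
    print(f"{label}: {est:.2f} (SE {se:.2f})")
```

If the pooled estimate stays on the same side of no effect even under the most unfavourable imputation, the conclusion can be regarded as reasonably robust to the suspected non-reporting; if it does not, the review's conclusions should be tempered accordingly.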
Participants will be encouraged to undertake such assessments for the examples provided and to discuss the issues arising in their own reviews.