Abstract
Background: bias in the publication and reporting of research findings is a major threat to evidence synthesis. While publication and related biases have been well documented in clinical research (which concerns health issues related to individual patients), little is known about the occurrence and extent of these biases in health services and delivery research (HSDR), which concerns how health services are best organised and delivered.
Objectives: to collect empirical evidence on publication and related bias in HSDR, current practice in detecting/mitigating these biases in HSDR systematic reviews, and stakeholders’ perceptions and experiences.
Methods: we undertook a multi-method study funded by the UK National Institute for Health Research consisting of:
1) a systematic review of relevant literature;
2) a survey of systematic reviews on substantive HSDR topics (n = 200);
3) case studies of statistical methods for detecting publication bias;
4) follow-up of cohorts of HSDR studies (total n = 300);
5) key informant interviews with HSDR stakeholders and a focus group discussion (total n = 32).
Results: our systematic review identified only four studies investigating publication bias in HSDR, three of which focused on health informatics research. All found some evidence of publication bias, but all had methodological weaknesses. Three systematic reviews of substantive HSDR topics compared findings from published literature with those from grey/unpublished literature and found that effect estimates sometimes, but not always, differed significantly between the two. One study provided an example from global health in which the volume, quality, geographical coverage and timeliness of evidence differed between published and grey literature. Our survey of HSDR systematic reviews found a low prevalence of consideration/assessment of publication bias (43%) and outcome reporting bias (17%). Case studies highlighted major limitations of current statistical methods for detecting publication bias in the presence of heterogeneity. We found no association between the statistical significance of findings and publication status in the four cohorts of HSDR studies (total n = 300) that we followed up. Key informant interviews uncovered diverse perceptions of these biases among stakeholders and identified features of HSDR that might have contributed to, or mitigated, their occurrence and impact.
Conclusions: publication and outcome reporting biases can and do occur in HSDR. Our findings suggest that the diversity of methodological approaches and the heterogeneous nature of HSDR pose particular challenges for detecting and preventing these biases, but that these same features might also, to some extent, mitigate their occurrence and impact.
Patient or healthcare consumer involvement: the project was informed by two patient and public advisors from its inception. Patients and the public also participated in a focus group discussion. They all highlighted the importance of raising awareness of these biases and of taking measures to minimise their occurrence.