Abstract
Background: Evaluating the risk of bias in included studies is a key component of systematic review methods. Highlighting potential methodological limitations contributes to the overall confidence in the evidence and allows evidence users to consider the impact of bias on the research findings. A plethora of tools have been developed to enable researchers to systematically assess risk of bias.
As the volume of systematic reviews increases, so does the likelihood that individual trials will be included in more than one systematic review on a topic. Individual trials may then be assessed for risk of bias multiple times and using multiple tools. We were interested to understand how consistent risk of bias assessments were across reviews.
Method: We extracted and tabulated the risk of bias assessment for each relevant study (n=68) from the systematic reviews (n=24) included in an umbrella review of multidisciplinary occupational health interventions aiming to improve return to work outcomes.
Results: Of the 68 primary studies, 47 were included in more than one systematic review (range 2 to 9), and for 38 of these an overall risk of bias score for the study was included within the review. Only five of the 38 studies (13%) were given the same overall risk of bias score across the reviews in which they were included. Some studies (n=5) were assessed as being at high, moderate and low risk of bias in different reviews.
Discussion: There are many possible reasons for inconsistency between risk of bias assessments across studies, e.g. the skills and experience of the review team, differences in interpretation between review teams, different tools yielding different overall ratings, and over-reliance on tools without consideration of context.
Lack of consistency in risk of bias assessments of the same study in different reviews has implications for research integrity, open science and ensuring accessible evidence for all, as well as our confidence in systematic review findings and in overviews of reviews.
Relevance and importance to patients: A better understanding of the reasons for the inconsistency in risk of bias assessments may result in more robust evidence production.