Abstract
Background: Decisions in medical practice are increasingly expected to be based on scientific evidence from systematic reviews and meta-analyses. Such reviews have usually concentrated exclusively on randomized trials; indeed, some investigators hold that non-randomized (observational) studies should be excluded from all reviews because their methodological quality is more difficult to assess. However, in many areas of health care few randomized controlled trials exist, and meta-analyses of observational studies may be important for health policy decisions, particularly when a randomized study is extremely difficult to conduct. Numerous tools have been proposed for assessing the quality of evidence from observational studies and for reducing potential bias.
Objectives: To evaluate two tools for assessing quality and susceptibility to bias in observational studies.
Methods: Two authors independently applied two instruments (the Downs and Black checklist and the Newcastle-Ottawa Scale, NOS) to three cohort studies from a systematic review by a breast cancer group. Interobserver reproducibility was analyzed using kappa statistics.
Results: The average kappa value was 0.30 (95% CI 0.12–0.46) for the Downs and Black instrument and 0.39 (95% CI 0.01–0.79) for the NOS.
Conclusions: Interobserver reliability was not demonstrated in our study. The tools seem useful for analyzing risk of bias; however, a larger sample is needed to demonstrate interobserver reproducibility.
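The kappa statistic used in the Methods quantifies agreement between two raters beyond what chance alone would produce. As a minimal sketch of Cohen's kappa, the function and item-level ratings below are hypothetical illustrations, not data from the study:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement expected from each
    rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the two raters' marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical checklist items scored by two reviewers (1 = criterion met).
a = [1, 1, 1, 0, 0, 0]
b = [1, 1, 0, 0, 0, 1]
print(round(cohen_kappa(a, b), 3))  # 0.333
```

Values near 0 (as in the Results) indicate agreement little better than chance, which is why wide confidence intervals from only three studies make reliability hard to establish.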