Abstract
Background: Citations by other researchers are important in the dissemination of research findings.
Objectives: 1. To investigate whether papers reporting a statistically significant primary outcome are cited more often than papers with non-significant findings. 2. To assess whether statistical reporting and statistical errors in the analysis of the primary outcome are associated with the number of citations received.
Methods: The source of data for this study was original research articles in four psychiatric journals. The nature of the main finding (statistically significant or non-significant), the statistical methodology and the quality of reporting were reviewed for each article and compared with the number of citations received within eight years of publication, obtained from the Web of Science database.
Results: The total number of articles reviewed that reported original findings based on statistical analysis was 448, of which 369 used tests of statistical significance and 287 (77.8%) reported p < 0.05. The median number of citations for papers reporting 'significant' and 'non-significant' results was 33 vs. 16, respectively. After adjustment for journal, study design, reporting quality, whether the outcome confirmed previous findings, and study size, the citation rate ratio for papers reporting 'p < 0.05' on the primary outcome was 1.63 (95% CI 1.32-2.02, p < 0.001). Unclear or inadequate reporting was not associated with citation counts. Extended description of statistical procedures had a positive effect on the number of citations received. Inappropriate statistical analysis did not affect the number of citations received.
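The adjusted citation rate ratio reported above would come from a regression model of citation counts; the abstract does not give the raw citation totals. A minimal unadjusted sketch of how a rate ratio and its Wald-type 95% CI can be computed from group totals is shown below. The group sizes (287 and 82 papers) follow from the abstract; the citation totals are hypothetical placeholders, so the resulting ratio is illustrative only and does not reproduce the study's adjusted estimate of 1.63.

```python
import math

# Hypothetical illustration: citation totals below are NOT from the study.
# Group sizes follow from the abstract (287 significant, 369 - 287 = 82 not).
cit_sig, n_sig = 9471, 287   # total citations, papers reporting p < 0.05
cit_ns, n_ns = 1394, 82      # total citations, papers with non-significant results

rate_sig = cit_sig / n_sig   # mean citations per paper, significant group
rate_ns = cit_ns / n_ns      # mean citations per paper, non-significant group
rate_ratio = rate_sig / rate_ns

# Wald-type 95% CI on the log scale, treating each total as a Poisson count
se_log = math.sqrt(1 / cit_sig + 1 / cit_ns)
ci_lo = math.exp(math.log(rate_ratio) - 1.96 * se_log)
ci_hi = math.exp(math.log(rate_ratio) + 1.96 * se_log)
```

The study's own estimate additionally adjusts for journal, design, reporting quality, confirmation of previous findings, and study size, which would typically be done with a Poisson or negative binomial regression rather than this two-group calculation.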
Conclusions: Authors cite studies based on their p-value rather than intrinsic scientific merit. This practice skews the research evidence. The journal in which a study is published appears to be as important as the statistical reporting quality in ensuring dissemination of published medical science.