Journal and conference abstracts of diagnostic accuracy studies: sufficiently informative?

Authors
Korevaar D1, Cohen J1, de Ronde M1, Virgili G2, Dickersin K3, Hooft L4, Bossuyt P1
1Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Center, University of Amsterdam, The Netherlands
2Department of Translational Surgery and Medicine, Eye Clinic, University of Florence, Italy
3Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, USA
4Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht University, The Netherlands
Abstract
Background: Journal and conference abstracts are crucial for the identification and initial appraisal of studies. Conference abstracts are also a source of unpublished research that could potentially be included in systematic reviews.
Objectives: We assessed the informativeness of journal and conference abstracts of diagnostic accuracy studies.
Methods: Journal abstracts were identified by searching PubMed for reports of studies published in 12 higher-impact journals in 2012, using a previously validated search filter for diagnostic accuracy studies. Conference abstracts were identified by searching the online abstract repository of the 2010 annual meeting of the Association for Research in Vision and Ophthalmology (ARVO). Abstracts were included if they reported, or announced the intention to calculate, a measure of diagnostic accuracy (e.g. sensitivity, specificity, predictive values). Two reviewers independently evaluated the content of each abstract using a list of 21 items, selected on the basis of published guidance for adequate reporting and for study quality assessment.
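For readers less familiar with these accuracy measures, the standard definitions follow from the 2×2 table cross-classifying index test results against the reference standard (a reference sketch added here for clarity, not part of the original abstract):

\[ \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP} \]
\[ \text{PPV} = \frac{TP}{TP + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN} \]

where TP, FP, FN, and TN denote true positives, false positives, false negatives, and true negatives, respectively.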
Results: We included 103 journal abstracts and 126 conference abstracts. The mean number of reported items was 10.1 of 21 (SD 2.2; range 6–15) in journal abstracts and 8.9 (SD 2.1; range 4–17) in conference abstracts. Reporting of crucial items is presented in the Table. Journal and conference abstracts were comparably informative. Several elements essential for assessing the validity and applicability of study findings were rarely reported: inclusion criteria, study setting, patient sampling, reference standard, masking, the data needed to construct the 2×2 table, and confidence intervals around accuracy estimates. Reporting was better for other crucial elements: study design, index test under evaluation, number of participants, and disease prevalence.
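As an illustration of two of the rarely reported items, the sketch below shows how accuracy estimates and Wilson 95% confidence intervals follow directly from the four cells of the 2×2 table; the counts are hypothetical placeholders, not data from the reviewed abstracts.

from math import sqrt

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# Hypothetical 2x2 table: index test results vs. reference standard
tp, fp, fn, tn = 90, 10, 15, 85

sensitivity = tp / (tp + fn)  # 90 / 105 ≈ 0.857
specificity = tn / (tn + fp)  # 85 / 95  ≈ 0.895

print(f"Sensitivity {sensitivity:.3f}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"Specificity {specificity:.3f}, 95% CI {wilson_ci(tn, tn + fp)}")

Reporting the four cell counts allows readers to recompute any accuracy measure; reporting only point estimates, as most included abstracts did, does not.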
Conclusions: Many journal and conference abstracts of diagnostic accuracy studies fail to report essential information about the study, making it difficult to assess the risk of bias and the applicability of the findings to specific clinical settings. Incomplete abstracts also impede the identification of studies that meet the criteria for inclusion in systematic reviews.