Abstract
Background: Retrospective databases, that is, databases of data collected for purposes other than research, have become a useful data source to inform post-marketing drug evaluation. Complete and transparent reporting is necessary to assess the reliability of study findings, and clear reporting in titles and abstracts is a key step in knowledge synthesis. However, the reporting quality of titles and abstracts in retrospective database studies remains unclear.
Objectives: A cross-sectional study was conducted to investigate the reporting quality of titles and abstracts in retrospective database studies evaluating drug effects.
Methods: We searched PubMed to identify all retrospective database studies published in 2018. Studies evaluating drug effects (both effectiveness and safety) were considered eligible. We randomly selected 150 of the eligible studies and assessed the quality of reporting in their titles and abstracts using the RECORD-PE checklist and newly developed items based on expert consensus. Reporting quality was compared between high-impact journals (the top five: NEJM, Lancet, JAMA, BMJ, and JAMA Internal Medicine) and lower-impact journals, as well as between journals that endorsed RECORD and those that did not.
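The abstract states only that the 150 studies were drawn at random. As a minimal sketch, assuming a simple random sample without replacement and hypothetical record identifiers (not the actual PMIDs from the study), the selection step could look like this:

import random

# Hypothetical identifiers for the 298 eligible reports (placeholders only).
eligible = [f"record_{i:03d}" for i in range(1, 299)]

random.seed(2018)  # fixed seed for reproducibility (an assumption, not stated in the abstract)
sample = random.sample(eligible, k=150)  # simple random sample, no replacement

print(len(sample))  # 150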
Results: A total of 298 reports ultimately proved eligible, from which 150 were randomly selected. Fifteen of the 150 articles (10%) were published in the top five journals, and 27 (18%) were published in journals that endorsed the RECORD checklist. Regarding objectives, only 24.7% (37/150) specified a predefined hypothesis. The study design was clearly described in the title or abstract of 67.3% (101/150). Study settings were insufficiently reported: 28.0% (42/150) did not report the region, 15.3% (23/150) did not report the time frame of patient selection, and 65.3% (98/150) did not report the follow-up time. With respect to data sources, 30.7% (46/150) did not report the database used, while 69.3% (104/150) reported its name; of these, only 54.8% (57/104) specified the type of data source. Statistical models were not reported in 30.0% (45/150) of studies. In reporting results, 42.0% (63/150) reported both absolute and relative risks. Of the 37 articles with prespecified hypotheses, only 62.2% (23/37) drew conclusions consistent with those hypotheses. Journals that endorsed the RECORD checklist or were among the top five had higher proportions reporting study design (top five: 100% vs 64.4%, P = 0.005; RECORD-endorsed: 88.9% vs 63.4%, P = 0.010) and follow-up period (top five: 80.0% vs 29.6%, P < 0.001; RECORD-endorsed: 59.3% vs 29.3%, P = 0.003).
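The abstract does not name the test behind these P values. As an illustration only, a Fisher's exact test (a plausible choice given the zero cell in the top-five comparison) on counts back-calculated from the reported percentages (87/135 ≈ 64.4%, 12/15 = 80.0%, 40/135 ≈ 29.6%) yields results close to those reported:

from scipy.stats import fisher_exact

# Study design reporting, top five journals vs. the rest:
# 15/15 (100%) vs. ~87/135 (64.4%); non-top-five counts are back-calculated
# from the reported percentages, so P values are approximate.
_, p_design = fisher_exact([[15, 0], [87, 48]])

# Follow-up period reporting: 12/15 (80.0%) vs. ~40/135 (29.6%).
_, p_followup = fisher_exact([[12, 3], [40, 95]])

print(f"study design: P = {p_design:.3f}")    # close to the reported P = 0.005
print(f"follow-up:    P = {p_followup:.4f}")  # consistent with the reported P < 0.001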
Conclusions: The reporting of retrospective database studies assessing drug effects was often insufficient. We recommend that researchers adopt the RECORD-PE checklist when reporting retrospective database studies focusing on drug effects.
Patient or healthcare consumer involvement: None.