Quality assessment of included studies in systematic reviews/meta-analyses based on individual patient data: a cross-sectional analysis

Authors
Wang Q1, Song XY2, Shi YH3, Gao YT2, He BZ3, Dong JW3, Wei D1, Chen YL1, Yang KH1
1Evidence-Based Medicine Center of Lanzhou University, Key Laboratory of Evidence-Based Medicine and Knowledge Translation of Gansu Province, China
2School of Basic Medical Sciences of Lanzhou University, China
3The Second Clinical Medical College of Lanzhou University, China
Abstract
Background: One of the key characteristics of a systematic review is an assessment of the validity of the findings of the included studies, for example through assessment of risk of bias. Recently, systematic reviews/meta-analyses (SR/MAs) of individual patient data (IPD) have been increasing gradually. If SR/MAs are conducted based on IPD from poorly designed trials, the resulting IPD MAs may be deficient; IPD MAs should therefore assess the quality of the original studies. It remains unclear how many SR/MAs of IPD conduct such a quality assessment.
Objectives: To explore quality assessment of included studies in SR/MAs based on IPD.
Methods: We searched PubMed for SR/MAs of IPD and conducted a cross-sectional analysis of quality assessments in the IPD MAs identified for 2014. Two researchers performed the searching, screening and data extraction independently, and disagreements were resolved by discussion.
Results: We identified 114 IPD MAs published in 2014 in PubMed. They appeared in 68 journals; the top three were BMJ, Lancet and American Heart Journal. These IPD MAs focused mainly on cardiovascular disease and cancers. Twenty-six per cent (30/114) of the IPD MAs assessed the quality of included studies: 18 (16%) reported the quality assessment tool used (12 used the Cochrane risk-of-bias tool, two the Newcastle-Ottawa Quality Assessment Scale, one the Critical Appraisal Skills Programme (CASP) checklist, one the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool, one the WHO-UMC Causality Assessment System and one a Delphi score), and 15 (13%) reported the overall quality of the included studies.
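As a sanity check, the reported percentages follow from the stated counts out of 114 identified IPD MAs. A minimal sketch (counts are taken from the abstract; rounding to the nearest whole per cent is an assumption):

```python
# Verify the proportions reported in the Results section.
# Counts come from the abstract; nearest-whole-per-cent rounding is assumed.
total = 114  # IPD MAs identified in PubMed for 2014

counts = {
    "assessed quality of included studies": 30,  # reported as 26%
    "reported the quality assessment tool": 18,  # reported as 16%
    "reported overall quality of studies": 15,   # reported as 13%
}

for label, n in counts.items():
    pct = round(100 * n / total)
    print(f"{n}/{total} {label}: {pct}%")
```

Each computed percentage matches the figure given in the abstract.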
Conclusions: Fewer than one-third of IPD MAs assessed the quality of the included original studies, and the tools used to assess quality varied. We believe authors of IPD MAs should assess the risk of bias of included studies, both to meet the definition of a systematic review/meta-analysis and to provide transparent conclusions.