Abstract
Background: Quality assessment of included studies is a crucial step in any systematic review (SR). Review and synthesis of prediction modelling studies is a relatively new and evolving area, and a tool facilitating quality assessment of prognostic and diagnostic prediction modelling studies is needed.

Objectives: To introduce PROBAST (prediction study risk of bias assessment tool), a tool for assessing the risk of bias (RoB) and applicability of prediction modelling studies in an SR.

Methods: A Delphi process involving 42 experts in the field of prediction research was used until agreement on the content of the final tool was reached. Existing initiatives in the field of prediction research, such as the REMARK (reporting recommendations for tumour marker prognostic studies) and TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guidelines, formed part of the evidence base for the tool's development. The scope of PROBAST was determined with consideration of existing tools such as QUIPS (quality in prognostic studies) and QUADAS (quality assessment of diagnostic accuracy studies).

Results: After 6 rounds of the Delphi procedure, a final tool was developed that uses a domain-based structure supported by signalling questions, similar to QUADAS-2. PROBAST assesses the RoB and applicability of prediction modelling studies. RoB refers to the likelihood that a prediction model leads to distorted predictive performance for its intended use and targeted individuals; predictive performance is typically evaluated using calibration, discrimination, and (re)classification. Applicability refers to the extent to which the prediction model from the primary study matches the SR question, for example in terms of the population or outcomes of interest. PROBAST comprises 5 domains (participant selection, outcome, predictors, sample size and flow, and analysis) and 22 signalling questions grouped within these domains.

Conclusions: PROBAST can be used to assess the quality of prediction modelling studies included in an SR. The presentation will give an overview of the development process and introduce the final tool.