Abstract
Background:
Quality assessment of included studies is a crucial step in any systematic review. The review and synthesis of prediction modelling studies is a relatively new and evolving area, and a tool facilitating quality assessment of prognostic and diagnostic prediction modelling studies is needed.
Objectives:
To develop PROBAST, a tool for assessing the risk of bias and applicability of prediction modelling studies.
Methods:
A Delphi process, involving 42 experts in the field of prediction research, was used until agreement was reached on the content of the final tool. Existing initiatives in the field of prediction research, such as the REMARK (Reporting Recommendations for Tumor Marker Prognostic Studies) guidelines and the TRIPOD prediction model reporting guidelines, formed part of the evidence base for the tool's development. The scope of PROBAST was determined with consideration of existing tools, such as QUIPS and QUADAS.
Results:
After six rounds of the Delphi procedure, a final tool was developed. Like QUADAS-2, which assesses the risk of bias and applicability of diagnostic accuracy studies, PROBAST uses a domain-based structure supported by signalling questions. PROBAST assesses the risk of bias and applicability of prediction modelling studies. It consists of five domains (participant selection, outcome, predictors, sample size and flow, and analysis) and 27 signalling questions grouped within these domains. Risk of bias addresses the extent to which reported estimates of the predictive performance/accuracy of the prediction model (e.g. discrimination, calibration and (re)classification estimates) are potentially biased. Applicability refers to the extent to which the reported prediction model and the population used to measure model performance match the review question and the intended use of the model.
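The domain-based structure described above can be illustrated with a minimal sketch of how a reviewer might record per-domain judgments and derive an overall rating. The domain names are taken from this abstract; the three-level judgment scale and the aggregation rule below are illustrative assumptions for the sketch, not part of the tool itself.

```python
# Hypothetical sketch of a PROBAST-style domain-based assessment record.
# Domain names come from the abstract; the judgment scale ('low', 'high',
# 'unclear') and the aggregation rule are assumptions for illustration.

DOMAINS = [
    "participant selection",
    "outcome",
    "predictors",
    "sample size and flow",
    "analysis",
]

def overall_risk_of_bias(judgments):
    """Aggregate per-domain judgments into an overall rating.

    Assumed rule: any 'high' domain makes the overall rating 'high';
    all domains 'low' gives 'low'; anything else is 'unclear'.
    """
    missing = [d for d in DOMAINS if d not in judgments]
    if missing:
        raise ValueError(f"missing judgments for: {missing}")
    values = [judgments[d] for d in DOMAINS]
    if "high" in values:
        return "high"
    if all(v == "low" for v in values):
        return "low"
    return "unclear"

# Example assessment of a single (hypothetical) prediction modelling study.
example = {
    "participant selection": "low",
    "outcome": "low",
    "predictors": "unclear",
    "sample size and flow": "low",
    "analysis": "low",
}
print(overall_risk_of_bias(example))  # unclear
```

A "worst domain wins" aggregation like this mirrors the convention used by other domain-based tools such as QUADAS-2, but the actual PROBAST guidance should be consulted for the tool's own rules.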
Conclusions:
PROBAST can be used for the quality assessment of prediction modelling studies. The presentation will give an overview of the development process and the current version of the tool (including the domains and signalling questions it addresses), as well as insight into the underlying discussions.