Abstract
Background: Quality assessment of included studies is a crucial step in any systematic review (SR). Review and synthesis of prediction modelling studies is a relatively new and evolving area, and a tool that facilitates quality assessment of prognostic and diagnostic prediction modelling studies is needed.
Objectives: To introduce PROBAST, a tool for assessing the risk of bias and applicability of prediction modelling studies included in an SR.
Methods: A Delphi process involving 42 experts in the field of prediction research was used until agreement was reached on the content of the final tool. Existing initiatives in the field of prediction research, such as the REMARK and TRIPOD reporting guidelines, formed part of the evidence base for the tool's development. The scope of PROBAST was determined with consideration of existing tools such as QUIPS and QUADAS-2 (a quality assessment tool for diagnostic accuracy studies).
Results: After six rounds of the Delphi procedure, a final tool was developed that uses a domain-based structure supported by signalling questions, similar to QUADAS-2. PROBAST assesses the risk of bias and applicability of prediction modelling studies. Bias occurs when shortcomings in the study design, conduct or analysis lead to systematically distorted estimates of predictive performance, or to a model that is inadequate for addressing the research question. Potential sources of bias in a prediction model study can be identified by comparing it with a hypothetical, methodologically robust study. PROBAST comprises five domains (participant selection, outcome, predictors, sample size and flow, and analysis) and 23 signalling questions grouped within these domains. Applicability refers to the extent to which the prediction model matches the systematic review question, for example in terms of the population, predictors or outcomes of interest; PROBAST therefore also includes a component for assessing the applicability of each model to the review question.
Conclusions: PROBAST can be used to assess the quality of prediction modelling studies included in an SR. The presentation will give an overview of the development process and introduce the final tool.