Meta-analysis methods for better validation of prediction models

Authors
Steyerberg E1
1Erasmus MC, The Netherlands
Abstract
Background: Prediction models are increasingly developed for diagnostic and prognostic endpoints, and are increasingly based on individual patient data from multiple studies. Validation of predictions from such models is important, but many researchers use inefficient designs and analyses.
Objectives: We aimed to evaluate the role of meta-analytic methods in the validation of prediction models.
Methods: We considered 15 studies of patients with traumatic brain injury (n = 11,026), where we predicted 6-month mortality. Prediction models were constructed in each of the 15 studies, and in the pooled data set. Various approaches to internal and external validation were explored.
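As a minimal illustration of the internal-external validation design described above (fit the model on all studies but one, then validate on the omitted study), the sketch below uses simulated multi-study data; the study sizes, predictors, and effect sizes are hypothetical stand-ins, not the traumatic brain injury data from this paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated multi-study data (hypothetical; stands in for the real studies).
# Each study gets its own baseline-risk shift to mimic between-study heterogeneity.
def make_study(n, shift):
    X = rng.normal(size=(n, 3))
    logit = X @ np.array([1.0, -0.8, 0.5]) + shift
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
    return X, y

studies = [make_study(400, shift) for shift in rng.normal(0.0, 0.5, size=5)]

# Internal-external (leave-one-study-out) cross-validation:
# develop the model on all studies except one, validate on the omitted study.
c_stats = []
for i in range(len(studies)):
    X_train = np.vstack([s[0] for j, s in enumerate(studies) if j != i])
    y_train = np.concatenate([s[1] for j, s in enumerate(studies) if j != i])
    X_val, y_val = studies[i]
    model = LogisticRegression().fit(X_train, y_train)
    p = model.predict_proba(X_val)[:, 1]
    c_stats.append(roc_auc_score(y_val, p))  # study-specific c statistic
```

Each element of `c_stats` is a fully external estimate of discrimination for one study, which is what makes this design more informative than a single random split of the pooled data.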
Results: Logistic regression models included 10 predictors, with good overall discriminatory ability (median apparent c statistic across the 15 studies: 0.81). For internal validation per study, we confirmed that a random split-sample approach was inefficient and unstable, in contrast to bootstrap resampling. In the pooled data set, an internal-external validation procedure provided insight into the substantial between-study variability in discriminatory ability (c statistic), as well as variability in calibration (study-specific intercept and calibration slope). This variability was well quantified by random-effects meta-analysis and I2 (I-squared) statistics (> 50%). The predictive distribution of predictions for patients in future studies could hence be estimated; it was much wider than that based on a fixed-effect analysis of the pooled data set.
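The random-effects summary described above can be sketched with the DerSimonian-Laird estimator, applied here to study-specific c statistics. The values below are illustrative, not results from this paper; the prediction interval for a new study widens the confidence interval by the between-study variance tau^2, which is why it is wider than a fixed-effect interval.

```python
import math

# Hypothetical study-specific c statistics and standard errors (illustrative only).
c_stats = [0.78, 0.82, 0.76, 0.85, 0.80, 0.74, 0.83]
ses     = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.03]

def dersimonian_laird(effects, ses):
    """Random-effects meta-analysis via the DerSimonian-Laird estimator.

    Returns the pooled estimate, its standard error, the between-study
    variance tau^2, and the I^2 statistic (% of variability beyond chance).
    """
    w = [1.0 / se**2 for se in ses]                    # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    w_star = [1.0 / (se**2 + tau2) for se in ses]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    return pooled, se_pooled, tau2, i2

pooled, se_pooled, tau2, i2 = dersimonian_laird(c_stats, ses)

# 95% prediction interval for the c statistic in a future study:
# adds tau^2 to the pooled variance, so it is wider than the confidence interval.
half = 1.96 * math.sqrt(se_pooled**2 + tau2)
pred_interval = (pooled - half, pooled + half)
```

An I^2 above 50%, as reported in the abstract, indicates that more than half of the observed variability in performance across studies reflects genuine heterogeneity rather than sampling error.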
Conclusions: Meta-analytic methods are very useful for gaining insight into the validity of prediction models. Quantification of heterogeneity in predicted risks, rather than mere confirmation of performance, should become the focus of validation studies.