A framework for evaluating and distinguishing the validity and generalizability of prediction models

Authors
Debray TPA1, Koffijberg H1, Vergouwe Y2, Nieboer D2, Steyerberg EW2, Moons KGM1
1Julius Center for Health Sciences and Primary Care, The Netherlands
2Erasmus Medical Center Rotterdam, The Netherlands
Abstract
Background: It is widely acknowledged that newly developed diagnostic or prognostic prediction models should be validated in individuals who are different (i.e. not included in the sample from which the model was developed) but related (i.e. with similar characteristics or case mix). However, explicit criteria for ‘different but related’ are lacking, which hampers the structured design and interpretation of model validation studies.

Objectives: Building on previous recommendations, we describe a framework of methodological steps for analyzing and interpreting the results of prediction model validation studies, to enhance inferences about the model’s generalizability across populations, clinical practices and settings.

Methods: We propose three consecutive steps: quantifying case mix differences between the derivation and validation sample, assessing the model’s performance in the validation sample, and interpreting its generalizability. Alternative options may be considered within each individual step without affecting the overall structure of the framework. We illustrate this approach using empirical example studies.
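
As an illustration, the sketch below shows one way the three steps could be operationalized, assuming case mix differences are quantified with a “membership model” (a logistic regression distinguishing derivation from validation individuals, summarized by its c-statistic) and model performance is assessed with the c-statistic, calibration slope and observed/expected ratio. All data, variable and function names are synthetic and hypothetical, not taken from the example studies.

```python
# Hypothetical, self-contained sketch; all names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_dev = rng.normal(0.0, 1.0, size=(500, 3))          # derivation sample covariates
X_val = rng.normal(0.3, 1.0, size=(300, 3))          # validation sample with shifted case mix
true_beta = np.array([0.8, -0.5, 0.4])
y_dev = rng.binomial(1, 1 / (1 + np.exp(-X_dev @ true_beta)))
y_val = rng.binomial(1, 1 / (1 + np.exp(-X_val @ true_beta)))
model = LogisticRegression(C=1e6).fit(X_dev, y_dev)  # the "previously developed" model

# Step 1: quantify case mix differences with a membership model. A c-statistic
# near 0.5 indicates strongly related samples; values near 1.0 indicate a very
# different case mix.
X_all = np.vstack([X_dev, X_val])
origin = np.concatenate([np.zeros(len(X_dev)), np.ones(len(X_val))])
membership = LogisticRegression(C=1e6).fit(X_all, origin)
case_mix_c = roc_auc_score(origin, membership.predict_proba(X_all)[:, 1])

# Step 2: assess the model's performance in the validation sample.
p_val = model.predict_proba(X_val)[:, 1]             # predicted risks
c_val = roc_auc_score(y_val, p_val)                  # discrimination
lp = np.log(p_val / (1 - p_val))                     # linear predictor (logit of risk)
recal = LogisticRegression(C=1e6).fit(lp.reshape(-1, 1), y_val)
cal_slope = recal.coef_[0, 0]                        # slope < 1 suggests overfitting
oe_ratio = y_val.mean() / p_val.mean()               # observed/expected events

# Step 3: interpret generalizability by combining steps 1 and 2: good performance
# despite a clearly different case mix supports transferability, whereas good
# performance in a near-identical sample mainly demonstrates reproducibility.
print(f"membership c={case_mix_c:.2f}, validation c={c_val:.2f}, "
      f"calibration slope={cal_slope:.2f}, O/E={oe_ratio:.2f}")
```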

Conclusions: The proposed framework for assessing the generalizability of prediction models enhances the interpretability and transparency of validation studies, and its division into straightforward steps facilitates implementation.