Use of evidence in economic decision models: an appraisal of health technology assessments in the UK since 1997

Authors
Cooper N, Coyle D, Abrams K, Mugford M, Sutton A
Abstract
Background: The evidence required for an economic evaluation can rarely be extracted from a single data source; for this reason, decision-analytical models are often developed to synthesise data from multiple sources so that the cost-effectiveness of alternative clinical strategies or interventions can be assessed. Such models are increasingly being developed as part of health technology assessments (HTAs) with the objective of providing information to allow scarce health care resources to be allocated efficiently. As with any model, the results obtained are only as reliable as the model's poorest data input.

Objectives: To review the sources and quality of evidence used in the development of economic decision models in HTAs in the UK.

Methods: All economic decision models developed as part of the NHS Research and Development HTA Programme between 1997 and 2003 inclusive were reviewed. The quality of evidence was assessed using a hierarchy of data sources developed for economic analyses.

Results: Economic decision models are parameterised using diverse sources of evidence (e.g. RCTs, observational studies, expert opinion). Evidence on the main clinical effect was usually identified and quality-assessed as part of the companion systematic review/meta-analysis of the HTA and was therefore reported in a transparent and reproducible way. For the other model inputs (i.e. adverse events, baseline clinical data, resource use, and utilities), the search strategies for identifying relevant evidence were rarely made explicit, and in a number of reports the sources of specific evidence were unclear due to poor reporting.

Conclusions: This research highlights the range of different sources of evidence used to populate a single economic decision model. More work is required to investigate the most appropriate sources of data for different model inputs; for example, data from large observational studies are often considered more appropriate than RCTs for estimating adverse events, owing to the rarity of such events. A more formal and replicable approach to identifying model inputs and assessing their quality is required to dispel the 'black box' perception of decision models and to reduce scepticism about model outputs among clinicians and decision makers.