What we don't know about evidence and medical practice: Issues raised by Project ImpACT and lessons from the impact literature

Authors
Robinson K, Marks H, Goodman S
Abstract
Background: The purpose of RCTs and evidence syntheses is to improve medical practice, yet the impact of trials on practice remains poorly studied and poorly understood.

Methods: Building on an earlier survey of impact studies [1], we summarize and evaluate the methodologies used in existing impact research. This work evolved from the methodological issues raised by Project ImpACT, an effort to select the most important RCTs performed in the 20th century.

Results: The standard methodology uses time-trend data to compare drug or technology use before and after RCTs [2]. Such studies provide no information on whether changes in physician behavior reflected awareness of RCT results (positive or negative), disagreement about the implications of trials, or reservations about their application to particular clinical populations [3]. Investigators have used an idiosyncratic mix of factors to explain the influence (or lack thereof) of RCTs on medical practice. Clinical specialty appears to predict practice patterns consistent with RCTs: specialists are more likely than general practitioners to adopt newly validated treatments and to relinquish inefficacious or harmful ones [4,5]. Other reported factors influencing the adoption or relinquishment of practices include technical and economic features of the technology [6], the level of drug-company marketing [2], and broad participation of community physicians in trial conduct [7]. Psychosocial factors, such as physician work satisfaction, may also play a role [8].

Conclusions: A survey of research on the impact of trials on medical practice suggests that it is time to go back to basics: to develop more systematic models of how physicians obtain and use information, and of how practice environments and training influence therapeutic choice. Absent studies that link information and beliefs to behavior, it is difficult to know how to improve compliance with evidence-based guidelines: is the problem one of improving information flows, of designing more practice-oriented trials, or something else [9]?

References
1. Fineberg HV. Effects of clinical evaluation on the diffusion of medical technology. In: Institute of Medicine. Assessing Medical Technology. Washington, DC: National Academy Press; 1985. p. 176-210.
2. Sleight P. The influence of mortality trials on the evolution of clinical practice. Cardiology. 1994;84:413-419.
3. Fineberg HV, Gabe R, Sosman M. The acquisition and application of new medical knowledge by anesthesiologists: three recent examples. Anesthesiology. 1978;48:430-436.
4. Friedman L, Wenger N, Knatterud G. Impact of the Coronary Drug Project findings on clinical practice. Controlled Clinical Trials. 1983;4:513-522.
5. Go AS, Rao RK, Dauterman KW, Massie BM. A systematic review of the effects of physician specialty on the treatment of coronary disease and heart failure in the United States. Am J Med. 2000;108(3):259-261.
6. Parer J. Obstetric technologies: what determines clinical acceptance or rejection of results of randomized controlled trials? Am J Obstet Gynecol. 2003;188:1622-1628.
7. Tognoni G, Franzos MG, Garattini S, Maggioni A. The case of GISSI in changing the attitudes and practice of Italian cardiologists. Stat Med. 1990;9:17-27.
8. Melville A, Mapes R. Anatomy of a disaster: the case of practolol. In: Mapes R, editor. Prescribing Practice and Drug Usage. London: Croom Helm; 1980. p. 121-144.
9. Liberati A. The relationship between clinical trials and clinical practice: the risks of underestimating its complexity. Stat Med. 1994;13:1485-1489.