Abstract
Background: In almost every system to grade epidemiological studies according to their level of evidence, randomised studies or meta-analyses of randomised studies receive the highest classification. Although the use of such hierarchies may help to separate the wheat from the chaff, it has also led to misconception and abuse. Since various research methods have their own particular advantages and disadvantages, the popular belief that only randomised studies produce results applicable to clinical practice and that observational studies can always be misleading does a disservice to patient care, clinical investigation and the education of health care professionals. Using randomised studies in diagnostic research certainly changes an essential characteristic of this type of clinical research. It turns research on diagnostic accuracy into intervention research. The nature of the diagnostic question and the resulting object of research should determine the appropriate study design.
Comment: In our view, a test's effect on patient outcome can be inferred and indeed considered as quantified 1) if the test is meant to include or exclude a disease for which an established reference is available, 2) if a cross-sectional accuracy study has shown the test's ability to adequately detect the presence or absence of that disease based on the reference, and finally 3) if other (randomised) therapeutic studies have provided evidence on the efficacy of the optimal management of this disease. In such instances diagnostic research does not require an additional randomised comparison between two or more test-treatment strategies (one with and one without the test under study) to establish the test's effect on patient outcome.
Conclusions: Various research methods have their particular advantages and disadvantages, and the popular belief that only randomised studies produce results applicable to clinical practice with confidence and that observational studies may always be misleading does a disservice to patient care, clinical investigation and the education of health care professionals. In many instances, randomised studies in diagnostic research are not necessary, and cross-sectional accuracy studies are fully acceptable for validly estimating the value of a diagnostic test in improving patient care.