Abstract
Background: Within a guideline production programme on the appropriate use of Positron Emission Tomography (PET), a new methodology, combining the GRADE approach for grading the quality of evidence and strength of recommendations with the RAND/UCLA Appropriateness Method, was developed and tested.
Objectives: To evaluate whether, and to what extent, the information provided to the experts of multidisciplinary panels determined their decisions on the appropriateness of the diagnostic test.
Methods: Thirty-seven experts, convened in two multidisciplinary panels with the mandate to define the appropriate use of PET for 34 clinical indications in four types of cancer, were provided with a voting form for each clinical question containing a range of information (Table 1). Panellists individually rated the importance of outcomes and the level of appropriateness. Statistical analyses included: correlation of each variable (sensitivity and specificity, favouring PET or the comparator; level of evidence (LoE); pre-test probability; importance scores for the four patient-important clinical outcomes) with the appropriateness ratings of all panellists for all clinical questions; multiple linear regression modelling the relationship between the variables and the appropriateness rating for all panellists; and multiple logistic regression modelling the relationship between the variables and the appropriateness rating for each panel.
Preliminary Results: Correlation coefficients of all variables with the appropriateness rating are reported in Table 2. Linear regression analysis showed that the ten variables accounted for 63.4% of the variability in the appropriateness ratings; among them, level of evidence explained 36.5% of the variability. The remaining analyses will be completed in due course and final results will be presented.
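As a purely illustrative sketch of how analyses of this kind could be run (not the authors' actual code: the data file, column names, panel identifier, and the rating threshold used for the logistic model are assumptions), the Python fragment below computes the pairwise correlations of each explanatory variable with the appropriateness rating, fits a multiple linear regression across all panellists, and fits a multiple logistic regression for each panel:

```python
# Hypothetical sketch of the described analyses (data file and column names are assumptions).
import pandas as pd
import statsmodels.api as sm

# One row per panellist x clinical question.
df = pd.read_csv("panel_ratings.csv")

# The ten explanatory variables described in the Methods (names assumed).
predictors = [
    "sens_pet", "spec_pet", "sens_comparator", "spec_comparator",  # test accuracy
    "level_of_evidence", "pretest_probability",
    "outcome1_importance", "outcome2_importance",
    "outcome3_importance", "outcome4_importance",
]

# Correlation of each variable with the appropriateness rating (cf. Table 2).
correlations = df[predictors].corrwith(df["appropriateness_rating"])
print(correlations)

# Multiple linear regression over all panellists: joint share of explained variability.
X = sm.add_constant(df[predictors])
ols_fit = sm.OLS(df["appropriateness_rating"], X).fit()
print(ols_fit.rsquared)
print(ols_fit.summary())

# Multiple logistic regression for each panel on a dichotomised rating
# (the cut-off of 7 for "appropriate" is an assumption).
for panel, sub in df.groupby("panel"):
    y = (sub["appropriateness_rating"] >= 7).astype(int)
    logit_fit = sm.Logit(y, sm.add_constant(sub[predictors])).fit(disp=False)
    print(panel, logit_fit.params)
```

In such a set-up, ols_fit.rsquared would correspond to the overall share of explained variability (63.4% in the preliminary results), while attributing a share to a single variable such as LoE (36.5%) would require, for example, comparing nested models.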
Conclusions: LoE seems to play a major role in explaining the appropriateness rating. When the analysis is complete we expect to: better quantify the relationship of all chosen explanatory variables with the appropriateness rating (dependent variable), establish to what extent panellists used the information provided to make their judgements, and suggest explanations for any unexpected correlations.