Differences in assessment of certainty of evidence by methodologists versus content experts

Authors
Djulbegovic B1, Razavi M2, Hozo I3, Guyatt G4
1City of Hope
2City
3Indiana University
4McMaster University
Abstract
Background: Clinical practice guideline (CPG) panel members are selected for their particular expertise (methodological, content, clinical experience, lived experience). Because most panel members are already familiar with the topic and the supporting evidence that will eventually be systematically analyzed and presented, they may hold relevant prior opinions. In addition, disagreement exists as to whether CPGs formulated with major input from methodologists are more trustworthy than those issued solely by content experts. If differences between expert and methodologist judgments exist, the most likely mechanism lies in the assessment of certainty of evidence. How existing knowledge and expertise affect the assessment of certainty, however, remains unknown.

Objectives: To evaluate differences in the assessment of evidence certainty as a function of panel members' roles (chairs/methodologists versus content experts) before and after CPG panels met to develop practice recommendations.

Methods: Panels convened by the American Society of Hematology (ASH; 10 panels), the American College of Rheumatology (ACR; 2 panels), the UK National Institute for Health and Care Excellence (NICE), and an international panel developing a CPG on red meat (RM) intake participated. ASH, ACR, and NICE developed their recommendations after face-to-face meeting discussions, while the RM panel used virtual meetings to finalize its practice recommendations. Sixty-seven participants (14 chairs/methodologists and 53 content experts) provided evidence certainty ratings before and after the meetings. Collectively, they issued 612 certainty judgments (110 by chairs/methodologists and 502 by content experts), each addressing a PICO (patient, intervention, comparison, outcome) question. Judgments of patients' representatives were excluded. Certainty was coded as a continuous variable on a scale of 1 to 4 (1 = very low, 2 = low, 3 = moderate, 4 = high). We used a two-level, hierarchical, mixed-effect, multivariable, linear regression analysis to account for clustering of observations both within panels and within individual members. The dependent variable was the post-meeting certainty judgment; the independent variables were the pre-meeting certainty judgment and the type of expertise (methodological versus content).
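One plausible formalization of the described two-level model is sketched below. It is not taken from the abstract itself: the variable names are illustrative, and the interaction term is an assumption motivated by the conditional effect reported in the Results.

$$\text{Post}_{ijk} = \beta_0 + \beta_1\,\text{Pre}_{ijk} + \beta_2\,\text{Content}_{ij} + \beta_3\,(\text{Pre}_{ijk}\times\text{Content}_{ij}) + u_j + v_{ij} + \varepsilon_{ijk},$$

where $j$ indexes panels, $i$ indexes members within panels, and $k$ indexes PICO questions; $u_j \sim N(0,\sigma_u^2)$ and $v_{ij} \sim N(0,\sigma_v^2)$ are the panel-level and member-within-panel random intercepts, and $\varepsilon_{ijk} \sim N(0,\sigma_\varepsilon^2)$ is the residual error.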

Results: The Figure shows the raw data on judgments of evidence certainty. Both pre-meeting certainty judgments and the type of expertise significantly affected the assessment of post-meeting certainty of evidence. On average, compared with methodologists, content experts tended to increase their assessment of post-meeting evidence certainty by 1.3 (95% confidence interval 0.34 to 3.7; P < 0.0001) when pre-meeting evidence certainty was judged to be high.

Conclusions: Content experts tend to appraise evidence certainty at a higher level than methodologists do. These findings have important implications for both CPG developers and systematic reviewers.

Patient or healthcare consumer involvement: Patient representatives should be involved in resolving disagreements between methodologists and content experts.