Abstract
Background: In the addiction field the quality of reporting of primary studies is known to be poor, and various methods are used to assess their quality; it can therefore be difficult for reviewers to find and interpret the relevant information in the studies.
Objective: To explore interobserver reliability in the assessment of the methodological quality of a sample of RCTs and CCTs.
Methods: A random sample of 49 RCTs and CCTs was selected from the 300 studies included in the systematic reviews of the CDAG. Three reviewers of the Cochrane Collaboration independently assessed their methodological quality, using the classical criteria suggested by the Cochrane Handbook:
Selection bias:
A) Adequate allocation concealment
B) Unclear allocation concealment
C) Inadequate allocation concealment
Performance bias:
A) Double blind
B) Single blind
C) Unclear
D) No blinding
E) Not applicable
Attrition bias:
A) Loss to follow up completely recorded
B) Loss to follow up incompletely recorded
C) Unclear or not done
D) Not applicable
Detection bias:
A) Blind to treatment allocation at outcome assessment
B) Not blind to treatment allocation at outcome assessment
C) Unclear
D) Not relevant
We calculated the percentage of studies with total agreement between the three reviewers across all four parameters and, for each parameter, the percentage of studies with agreement and the kappa (K) statistic.
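The abstract does not state which variant of the kappa statistic was used for three reviewers (pairwise Cohen's kappa or Fleiss' kappa). The sketch below is an illustration only: it computes the two reported quantities (percentage of studies on which all reviewers agree, and a chance-corrected kappa), assuming Fleiss' kappa for three raters; the example ratings are hypothetical, not the study data.

```python
from collections import Counter

def percent_agreement(ratings):
    """Share of studies on which all reviewers chose the same category."""
    agree = sum(1 for study in ratings if len(set(study)) == 1)
    return agree / len(ratings)

def fleiss_kappa(ratings, categories):
    """Fleiss' kappa for a fixed number of raters assigning one category per study.

    `ratings` holds one entry per study, each entry listing the category
    chosen by each reviewer.
    """
    n_studies = len(ratings)
    n_raters = len(ratings[0])

    # How many reviewers chose each category for each study.
    counts = [[Counter(study)[c] for c in categories] for study in ratings]

    # Observed per-study agreement P_i, then its mean P-bar.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_studies

    # Chance agreement P_e from the marginal category proportions.
    p_j = [sum(row[j] for row in counts) / (n_studies * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical ratings for allocation concealment (A/B/C); not the study data.
ratings = [
    ["A", "A", "A"],
    ["B", "B", "A"],
    ["B", "B", "B"],
    ["C", "B", "B"],
    ["A", "A", "B"],
]
print(f"Agreement: {percent_agreement(ratings):.0%}")
print(f"Kappa: {fleiss_kappa(ratings, ['A', 'B', 'C']):.2f}")
```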
Results:
Total agreement: 2.04%
Selection bias: 75%, K = 0.47
Performance bias: 63%, K = 0.58
Attrition bias: 14%, K = 0.10
Detection bias: 41%, K = 0.33
Conclusion: Our results are very discouraging: total agreement was close to zero. Agreement for selection and performance bias was acceptable, while it was very poor for attrition and detection bias. As expected, agreement decreased as the number of possible answers increased. We suggest that: a) methodological quality should always be assessed by two reviewers independently; b) very detailed instructions on how to score each parameter should be prepared by the editorial groups; c) checklists should be as simple as possible, including only the most relevant items; d) the number of possible answers for each quality item should be kept to the minimum.