Rating the methodological quality of randomised controlled trials

Authors: Sindhu F
Abstract
Introduction: A systematic review accumulates results from a number of independent studies, which raises the issue of potential variability in the methodological quality of the studies to be combined. Although reviewers' use of some form of quality rating has increased over the last ten years, debate continues over the importance and validity of quality assessment. Rating the methodological quality of a study can be an inherently subjective and complex process. This complexity is reflected in the absence of any validated scale or tool designed for this purpose in the literature. Further, it is often unclear how existing quality assessment tools were developed. Those tools that were designed for use in systematic reviews appear to have been developed for purposes ranging from editorial policy to enhancing evidence-based practice, rather than specifically for rating the methodological quality of a study.

Objective: This paper will concentrate on the complexity of assessing the methodological quality of randomised controlled trials (RCTs) to be included in a systematic review. The development of a scale to specifically rate the methodological quality of RCTs is reported.

Methods: The Delphi technique, using several rounds of questions, was used to seek consensus on the criteria important in rating the quality of an RCT. The authors of a random sub-sample of 23 (20%) RCTs indexed in Medline during 1992 were asked to participate in the Delphi survey. Twelve responded, of whom eight (67%) agreed to participate. The initial responses were accumulated and iteratively refined: the list of criteria was posted to the panel for further comment until consensus was reached. The reliability and validity of the tool were tested by rating a random sample of five (10%) of the 49 studies to be included in a meta-analysis.

Results: The developed Quality of Study rating tool consisted of 53 items in 15 dimensions. The intra-class correlation coefficient of reliability was found to be high at 0.93. In terms of validity, face, content and construct validity appeared to be upheld; however, criterion validity was low in comparison with the tool developed by Chalmers et al.
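For readers unfamiliar with the reliability statistic reported above, the following is a minimal illustrative sketch (not taken from the paper) of a one-way random-effects intra-class correlation coefficient, ICC(1,1), of the kind commonly used to summarise inter-rater agreement on quality scores. The `icc_oneway` function and the ratings data are hypothetical and assumed here purely for illustration.

```python
# Illustrative sketch, assuming a one-way random-effects ICC(1,1);
# the function name and the ratings below are hypothetical examples,
# not data or code from the paper.
def icc_oneway(scores):
    """scores: one row per rated study, each row holding k raters' scores."""
    n = len(scores)             # number of studies rated
    k = len(scores[0])          # raters per study
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    # Between-studies mean square
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-study (residual) mean square
    ms_within = sum((x - m) ** 2
                    for row, m in zip(scores, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical quality scores for five studies from two raters
ratings = [[41, 43], [35, 36], [48, 47], [30, 33], [44, 45]]
print(round(icc_oneway(ratings), 2))  # -> 0.96
```

An ICC near 1, as in the paper's reported 0.93, indicates that differences between studies dominate disagreement between raters.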

Discussion: No definitive assessment tool for rating the quality of a study currently exists. The Quality of Study tool developed here appears to be detailed yet succinct, comprehensive, and easy to use, but it requires further testing. The difficulties in assessing the quality of RCTs, the potential benefits and limitations of the developed Quality of Study tool, and the rationale for adopting a Delphi survey approach are discussed more fully in the paper.