Abstract
Background: The establishment of the Cochrane Qualitative Methods Group signals enthusiasm amongst reviewers for including qualitative studies in Cochrane reviews. Qualitative studies can complement trials by increasing our understanding of the experiences of those receiving and implementing interventions, and of the contexts in which interventions are delivered. However, there is much greater knowledge about how to include trials in systematic reviews than about how to include qualitative studies. One essential aspect of this is how to assess their quality. There are useful systematic overviews of tools for assessing trial quality, but none for assessing the quality of qualitative studies.
Objectives: 1) To search systematically for tools to assess the quality of qualitative studies via their written report; 2) To describe, compare and evaluate tools; and 3) To identify gaps for further methodological work.
Methods: Systematic searches were undertaken and reports were assessed for inclusion to identify those that provide a structured way to assess the quality of a qualitative study via its written report ("tools"). Each tool was coded according to a standardised strategy covering tool authors' discipline or field of study; conceptual underpinnings of the tool; aspects of quality covered; and potential for the tool to be used in a systematic review.
Results: Thirty-one tools were identified. Preliminary analyses revealed that the majority were designed to assess healthcare research (N=21) and were developed for "qualitative" research not further specified (N=24). The number of items per tool ranged from six to 81. Whilst most tools provide guidance for assessing a study against each item, only three provide a threshold or a way of weighting studies according to quality. There was scant evidence to help reviewers choose between tools on the basis of ease of use; only seven tools had been tried out on a set of qualitative studies. Further analyses will examine the common quality dimensions covered by the tools.
Conclusions: This review provides a resource for reviewers on existing tools for assessing the quality of qualitative studies. It also identifies new questions for future methodological research. An important conceptual issue raised is that existing tools tend to treat qualitative studies as homogeneous, with no differentiation between the different questions that qualitative research can answer or the different methods it employs. This sits uncomfortably with the principle that systematic reviews should be question-driven, whereby studies are assessed according to their ability to reliably answer a particular question. Future tool development and methodological work should therefore address issues such as: clarifying the types of questions to which qualitative studies might contribute; whether all items listed within tools are equally important for different categories of questions; whether some tools are more reliable or easier to use than others; and whether the use of different tools leads to differences in review findings.
Acknowledgements: Angela Harden holds a senior research scientist award in evidence synthesis funded by the Department of Health (England). The views expressed are those of the author and not necessarily those of the Department of Health.