Article type: Abstract
Background. A hallmark of a high-quality evidence synthesis is the examination of the methods of included studies and whether those methods introduce risk of bias or reflect poor study quality. An evidence synthesist’s role is to examine these issues and interpret synthesized findings in light of the quality of the included evidence. Accordingly, researchers who conduct evidence syntheses incorporate an assessment of study quality and/or risk of bias into their reviews, and standardized tools exist to collect this information. Yet these tools focus on more traditional empirical designs, such as randomized clinical trials or quasi-experimental designs. Currently, there are no guidelines or tools that allow evidence synthesists to systematically synthesize and assess the rigor of scale development evidence. In psychosocial research, the scale used to collect data is the fundamental source of data for analysis, yet review authors rarely examine the rigor of its development.
Objectives. This presentation provides the rationale for a tool that assesses the rigor of scale development in a particular domain, describes how our team developed such a tool during our own review, and details how the tool can be used.
Methods. To develop the tool, we reviewed the scale development guidelines published in 2018 by Boateng and colleagues, reviewed the literature cited in that manuscript, and consulted an expert in survey validation. To increase accessibility, we built the tool as an electronic REDCap project: it is freely sharable and can be used simultaneously by multiple users online.
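Because the tool is shared as a REDCap instrument, its questions travel as rows of a REDCap data dictionary CSV. The sketch below illustrates what one such row might look like, abridged to the leading columns for readability; the variable name, field label, and choices are hypothetical examples, not the tool's actual content.

```python
# Illustrative sketch of one REDCap data dictionary row, the CSV format
# through which a REDCap instrument can be shared between projects.
# The item shown is hypothetical, not an actual question from the tool.
import csv

# Leading columns of the standard REDCap data dictionary (abridged).
COLUMNS = [
    "Variable / Field Name", "Form Name", "Section Header",
    "Field Type", "Field Label",
    "Choices, Calculations, OR Slider Labels",
]

row = {
    "Variable / Field Name": "content_validity_experts",  # hypothetical
    "Form Name": "scale_development_quality",
    "Section Header": "Content Validity",
    "Field Type": "radio",
    "Field Label": "Did the authors consult content experts to judge item relevance?",
    "Choices, Calculations, OR Slider Labels": "1, Yes | 0, No | 9, Unclear",
}

with open("data_dictionary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerow(row)
```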
Results. Two coders piloted the tool on studies eligible for our addiction recovery capital review and refined it through additional meetings and discussions with the overall review team and with an external statistical consultant. After these refinements, the coders reached 97% coding agreement. The tool comprises 76 questions across 9 quality domains: Domain Identification and Item Development; Content Validity; Pretesting Questions; Survey Administration/Sample Size; Item Reduction; Factor Extraction; Dimensionality Tests; Reliability Tests; and Validity Tests.
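For context, a minimal sketch of how the agreement figure can be computed follows, assuming simple percent agreement across the tool's questions; the coder responses shown are illustrative placeholders, not our study data.

```python
# Minimal sketch: percent agreement between two coders across the
# tool's 76 questions. Responses below are placeholders, not real data.

def percent_agreement(coder_a, coder_b):
    """Share of items on which both coders assigned the same code."""
    assert len(coder_a) == len(coder_b), "Coders must rate the same items."
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Illustrative example: two coders disagree on 2 of 76 items.
coder_a = ["yes"] * 76
coder_b = ["yes"] * 74 + ["no", "unclear"]
print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # -> 97%
```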
Conclusions. This free and easy-to-use tool allows review authors to systematically code and examine the quality of scale development, supporting more robust evidence synthesis across disciplines in which primary outcomes are assessed via surveys.