Abstract
Background: Poor recruitment to randomised controlled trials is common and can result in underpowered studies that do not satisfactorily answer their research questions. Recruitment interventions aim to improve recruitment, yet the evidence to support the choice of such interventions is weak. Non-randomised evaluations of recruitment interventions have traditionally been excluded from systematic reviews because of poor methodological quality, but non-randomised evaluations are far more common than randomised ones.
Methods: We searched the Cochrane Methodology Register, MEDLINE, EMBASE, CINAHL and PsycINFO for non-randomised studies that included a comparison of two or more recruitment interventions. Two reviewers assessed all studies for inclusion and extracted data on the host study, recruitment methods, embedded study design, participant characteristics and setting. The primary outcome was the number of individuals or centres recruited; the secondary outcome was cost per recruit. The Cochrane risk-of-bias tool for non-randomised studies was used to assess the methodological quality of included studies. Where possible, data were pooled and the resulting evidence assessed using GRADE.
Results: We screened 9642 abstracts, of which 248 full-text articles were assessed and 107 studies were eligible for inclusion. The majority of included studies omitted important details about their interventions, largely focusing on the mode of delivery rather than the content of the intervention itself. Despite the volume of included studies, poor reporting severely limited their utility and prevented pooling.
Interventions centred on methods from the advertising world: newspaper notices, radio and television commercials, and brochures and flyers distributed within the community. This low-quality body of work provides evidence neither for nor against the use of these common approaches.
Conclusions: The synthesised evidence from the world's most frequently used design for evaluating trial recruitment strategies has little or no value to those planning recruitment. Some individual studies do add value, however. Clear guidance is needed to ensure that these studies are done well, or not at all.