Abstract
Background: For many years we have heard about new technologies that promise to make systematic reviews more efficient, and these tools are now reaching a level of maturity at which they are viable for use in live reviews. They cover the full range of systematic review tasks, from study identification to risk of bias assessment and synthesis, and draw on a range of technologies, from machine learning and natural language processing through to crowdsourcing and the use of large datasets. Some new technologies even promise to find studies automatically for review updates and ‘living’ reviews.
But which tools are safe to use, and when?
Objectives: This workshop aims to:
1) demystify some of the technologies that underpin these new tools, enabling participants to understand their strengths, limitations and potential;
2) provide hands-on experience of a range of tools that are available online;
3) facilitate discussion on the appropriate use of these new technologies for particular review tasks.
The workshop does not aim to give preferential treatment to any specific proprietary tool. Rather, it aims to provide participants with an overview of the available tools and technologies, together with an understanding about how these might fit best in their own review processes.
Learning outcomes: Participants should be able to:
1) understand the strengths, limitations and opportunities of new tools and technologies;
2) use a range of tools in their own reviews; and
3) take away a list of links and resources to follow up after the workshop.
Description: The workshop will be organised iteratively around participants' hands-on use of new technologies. For each of the following review tasks, the relevant technologies will be introduced in a brief presentation, followed by hands-on testing and then discussion among participants about the appropriate use of each tool.
1) Study identification: the use of study type classifiers (e.g. randomized controlled trial, diagnostic test accuracy, systematic review, economic evaluation), crowdsourcing, and active learning/priority screening for citation screening (see the brief sketch after this list)
2) New technologies for automated risk of bias assessment and data extraction
3) Automating study identification for review updates and living systematic reviews
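To make the active learning/priority screening approach in item 1 concrete, the following is a minimal sketch of how such a ranking loop can work, assuming title-and-abstract text for each candidate citation and a small set of records already screened by hand. The function name, data structures and parameters are illustrative assumptions only and do not correspond to any particular tool demonstrated in the workshop.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

def next_priority_batch(texts, labels, batch_size=50):
    """Rank unscreened citations by predicted relevance and return the top batch.

    texts  -- list of title+abstract strings, one per candidate citation
    labels -- dict mapping already-screened indices to 1 (include) or 0 (exclude);
              assumed to contain at least one include and one exclude decision
    """
    # Represent each citation as a TF-IDF vector of its title and abstract.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(texts)

    # Train a simple classifier on the screening decisions made so far.
    screened = sorted(labels)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[screened], [labels[i] for i in screened])

    # Score every unscreened record by its predicted probability of inclusion
    # and return the highest-scoring ones for the reviewer to screen next.
    unscreened = [i for i in range(len(texts)) if i not in labels]
    scores = model.predict_proba(X[unscreened])[:, 1]
    ranked = np.argsort(-scores)
    return [unscreened[j] for j in ranked[:batch_size]]

In practice, a screening tool would run a loop like this after each screened batch: the model is retrained on all decisions made so far and the remaining records are re-ranked, so relevant studies tend to surface early in the screening process.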
Participants should bring an internet-connected laptop or tablet so that they can try the tools for themselves.
The facilitators have been actively involved in the development, evaluation and implementation of automation technologies in systematic reviews, and have published widely on the use and evaluation of these approaches.