Article type
Abstract
Background
The identification and evaluation of conflicts of interest (COIs) in the development of practice guidelines are crucial for ensuring their transparency and quality. Traditionally, COIs are self-reported by guideline panel members and then assessed by the guideline committee, a process that can be both time-consuming and susceptible to bias. Recently, large language models, with their sophisticated capabilities in natural language understanding, information retrieval, and analysis, have offered a promising and more efficient approach to examining COIs in guideline development.
Objectives
To use large language models to identify, analyze, and evaluate COIs in guidelines.
Methods
We used ChatGPT-4 to identify, analyze, and assess potential COIs within guidelines. The study comprised 3 steps. First, we searched World Health Organization guidelines published between January 2019 and December 2023 and collected their COI declarations. Second, we used ChatGPT-4 to analyze the collected declarations, classifying each into 1 of 3 categories: confirmed COI, potential COI, or no COI. Third, we compared the machine-generated assessments with the authors’ own COI declarations to evaluate the consistency between AI-driven and human evaluations.
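The classification step could, in principle, be automated through the OpenAI API. The following is a minimal sketch under stated assumptions: the model identifier, prompt wording, and example declaration are illustrative and are not the prompt or pipeline used in this study; only the three category labels come from the Methods.

```python
# Minimal sketch (not the study's actual pipeline): classify a single COI
# declaration into one of the three categories described in the Methods.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["confirmed COI", "potential COI", "no COI"]

def classify_coi(declaration: str) -> str:
    """Ask the model to assign exactly one of the three COI categories."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier for illustration
        temperature=0,  # deterministic output for reproducible categorization
        messages=[
            {
                "role": "system",
                "content": (
                    "You assess conflict of interest (COI) declarations from "
                    "guideline panel members. Reply with exactly one label: "
                    + ", ".join(CATEGORIES) + "."
                ),
            },
            {"role": "user", "content": declaration},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Hypothetical declaration, for illustration only.
    example = (
        "Dr. A reports receiving consulting fees from a pharmaceutical "
        "company whose product is addressed by the guideline."
    )
    print(classify_coi(example))  # expected output: one of the three labels
```

In such a setup, the model's label for each declaration would then be compared against the committee's human assessment to measure agreement, as described in the third step.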
Results
Findings are under analysis and will be presented at the upcoming summit.
Conclusions
ChatGPT-4 has the potential to identify and evaluate potential COIs among researchers, thereby streamlining the guideline development process, improving efficiency, and minimizing disclosure bias. We expect that large language models such as ChatGPT-4 will be used to explore COIs and thereby improve the efficiency, transparency, objectivity, and overall quality of practice guidelines, supporting more reliable and informed decision-making.