This review explores the current landscape of artificial intelligence (AI)-assisted semi-automation tools used in systematic reviews and guideline development. With the exponential growth of medical literature, these tools have emerged to improve efficiency and reduce the workload involved in evidence synthesis. Platforms such as Covidence, EPPI-Reviewer, DistillerSR, and Laser AI exemplify how machine learning and, more recently, large language models (LLMs) are being integrated into key stages of the systematic review process—ranging from literature screening to data extraction. Evidence suggests that these tools can save considerable time, with some achieving average reductions of over 180 hours per review. However, challenges remain in transparency, reproducibility, and validation of AI performance. In response, international initiatives such as the Responsible AI in Evidence Synthesis (RAISE) project and the Guideline International Network (GIN) have proposed frameworks to ensure the ethical, trustworthy, and effective use of AI in health research. These include principles like transparency, accountability, preplanning, and continuous evaluation. This review highlights both the opportunities and limitations of adopting AI in evidence synthesis and underscores the importance of human oversight and rigorous validation to ensure that such tools enhance, rather than compromise, the integrity of systematic reviews and guideline development.
Background
In the case of clinical practice guidelines (CPGs), the prospective registration of protocols has been proposed several times. However, CPG protocol registration is not yet an established practice. The objective of this study was to summarize the experience of a CPG protocol registration program in Korea.
Methods
This study was performed in the following order: 1) formation of a methodological expert group; 2) development of a CPG protocol template; 3) CPG protocol preparation and expert review; 4) exploration of guideline developers' knowledge of and attitudes toward the CPG protocol.
Results
The final version of the CPG protocol template consists of four parts (planning, development, finalization, and timetable). Protocols for 18 cancers were submitted by 14 medical societies. Conflicts of interest (n = 14, 77.8%), the guideline development group (GDG; n = 9, 50%), the scope of the CPG (n = 9, 50%), and key questions (n = 8, 44.4%) were the under-reported areas in the submitted protocols. The GDG was the most frequently misreported area of the protocol (n = 13, 72.7%). CPG developers generally agreed on the advantages of protocol registration but responded that it was difficult to understand the concepts in the protocol and to complete it with appropriate content. The areas in which CPG developers reported difficulty were the recommendation grade (n = 9, 75.0%), GDG composition (n = 7, 58.3%), and determining key questions (n = 7, 58.3%).
Conclusions
A CPG protocol registration program was planned and piloted in Korea and shown to be feasible. The developed CPGs should be evaluated later to determine whether protocol registration affects CPG quality, using indices such as the transparency and clarity of the CPG.