J Clin Med. 2026 Mar 14;15(6):2215. [Epub ahead of print]
Peer review is the cornerstone of scholarly publishing and, in medicine, the ultimate guarantor of the reliability of the clinical evidence that informs guidelines, therapeutic strategies, and patient care. However, the current peer review system is increasingly strained by bias, abuse, and reviewer overload. Favoritism toward prominent authors, editorial "nepotism," coercive citation practices, superficial evaluations, and even documented cases of idea theft from confidential manuscripts undermine the trustworthiness of the scientific literature upon which clinical decisions depend. In this paper, we argue that artificial intelligence (AI) and large language models (LLMs) offer a transformative opportunity to strengthen the integrity and efficiency of medical peer review. AI-driven tools can perform rapid consistency checks, detect statistical errors and plagiarism, and enforce compliance with ethical and methodological standards across thousands of manuscripts. Early implementations of AI-guided review platforms, plagiarism detectors, and citation-anomaly algorithms demonstrate that machine assistance can make reviews more thorough, objective, and reproducible. At the same time, we acknowledge the limitations of AI, including hallucinations, a lack of human judgment, and risks to confidentiality if misused. To address these concerns, we propose a hybrid model in which AI handles routine screening and technical tasks under strict safeguards, while human experts retain final responsibility for scientific evaluation. This human-AI partnership may represent an essential step toward improving the quality, fairness, and reliability of the clinical evidence base.
Keywords: AI-assisted peer review; artificial intelligence; large language models; peer review; publication ethics; research integrity; reviewer bias