Cureus. 2025 May;17(5): e84098
Artificial intelligence (AI), having survived two major AI winters (1974-1980 and 1987-2000), is now growing at an exponential rate. This rapid advancement, particularly in its application to medical science and literature, has significantly transformed how research is conducted. Large language models can produce highly realistic text, enabling diverse tasks with broad applications; in other words, their responses resemble human answers to human questions. However, their malicious use poses serious challenges to scientific integrity and the research literature, and when outputs influence human life, the ethical compass matters more than the benefits. This article provides a comprehensive narrative review of AI, in particular the emergence of large language models and their impact on healthcare scientific research, with a focus on the challenges they pose to ethics and scientific integrity. In addition, it discusses the evolving guidelines from various international organizations on authorship, transparency, and the responsible use of AI. Databases such as PubMed, Cochrane, Scopus, and Google Scholar were searched to provide a comprehensive review of the published literature on the emergence of AI in the healthcare research setting, along with its positive and negative impacts on research ethics. We also performed a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis of AI in research publications and evaluated the ethical challenges it poses. Chatbots are AI-based conversational large language models that are proving to be of significant importance in healthcare education, practice, and research; however, caution must be exercised to prevent their misuse for fabrication. Organizations such as the Committee on Publication Ethics, the World Association of Medical Editors, the Journal of the American Medical Association, and the International Committee of Medical Journal Editors state that chatbots do not qualify as co-authors and that only responsible and ethical use of AI is permitted. Academics must exercise caution at the individual level when using these tools and be transparent in disclosing their use. The advent of Google revolutionized scientific research, and AI-assisted chatbots represent the next leap forward. Hence, it is crucial to use these tools with caution, accountability, and transparency. Through this narrative review, we aim to guide researchers in understanding new guidelines and approaches to research ethics in this fast-evolving era of AI.
Keywords: artificial intelligence; chatbots; large language models (LLMs); publishing ethics; SWOT analysis