bims-skolko Biomed News
on Scholarly communication
Issue of 2025-01-05
seventeen papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2025 Jan;637(8044): 34
      
    Keywords:  Publishing; Research data; Research management; Scientific community
    DOI:  https://doi.org/10.1038/d41586-024-04230-3
  2. Learn Publ. 2025 Jan;38(1): e1635
      This paper aims to enhance the understanding of the role of special issues in the evolving landscape of academic publishing, offering insights for publishers, editors, guest editors, and researchers, including how new technologies influence transparency in publishing processes, open access models, and metrics for success. Based upon original analysis, the paper also discusses the importance of special issues and opportunities to support diversity, equity, and inclusivity in special issue publishing programs. The goal is to contribute to the discussion of maintaining research integrity through special issues, acknowledging their significance in scholarly communication, while offering suggestions for the future.
    DOI:  https://doi.org/10.1002/leap.1635
  3. Clin Dermatol. 2024 Dec 27. pii: S0738-081X(24)00285-2. [Epub ahead of print]
      The rise of predatory journals has created a pressing ethical dilemma in academic publishing, exploiting researchers' urgency to publish while prioritizing profits over quality. These journals, characterized by deceptive practices and inadequate peer review, often undermine scientific integrity and disproportionately affect early-career academicians and those from underfunded institutions. While open-access publishing aims to democratize knowledge, its reliance on high article processing charges (APCs) poses accessibility challenges, particularly in resource-limited settings. This issue extends beyond predatory journals, as even reputable journals often impose substantial APCs, creating a broader crisis of inequitable access to publishing research findings. The implications of these exploitative practices are far-reaching, potentially compromising patient care (via publication of inferior papers in predatory journals), fostering researcher burnout, and hindering global collaboration. Addressing this requires systemic reform, including increased transparency, reduced costs, expanded funding, and promoting community-led publishing platforms. Ethical publishing practices must prioritize inclusivity and the dissemination of knowledge to preserve the integrity and accessibility of academic research.
    Keywords:  Ethics; academics; article processing charges; open access; predatory journals; profit; publication fees
    DOI:  https://doi.org/10.1016/j.clindermatol.2024.12.019
  4. Account Res. 2025 Jan 01. 1-20
       BACKGROUND: Researchers are increasingly accessing scientific articles through unauthorized websites like Sci-Hub. Sci-Hub contains retracted articles, including some that are not labelled as retracted, which poses a potential threat to academic research.
    METHODS: This study analyses the extent to which retracted articles are available on Sci-Hub, focusing in particular on unlabeled retracted articles (URA), which may inadvertently be used in subsequent research, thus propagating flawed findings. The authors identified 16,925 English-language research articles retracted between 2003 and 2022 and indexed in the Web of Science and Scopus databases. These articles were cross-checked against Sci-Hub to ascertain whether they were appropriately labelled as retracted.
    RESULTS: The investigation revealed that 84.83% of the retracted articles available on Sci-Hub do not have any indication of their retracted status. These URA could potentially be reused by researchers, unaware of their retracted status. The availability of URA in the field of health sciences is particularly high, which indicates a significant risk of their unintended use and further citation in future research.
    CONCLUSIONS: This study underscores the crucial need for stringent implementation of regulatory measures on retraction suggested by the Committee on Publication Ethics (COPE) or newly published National Information Standards Organization (NISO) recommendations.
    Keywords:  Academic publishing; article retraction; post-retraction citations; research integrity; scholarly communication
    DOI:  https://doi.org/10.1080/08989621.2024.2446558
  5. Res Ethics. 2023 Oct;19(4): 449-465
      In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals, such as Science, have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would encourage undisclosed use of LLMs. Furthermore, LLMs can be useful in writing, reviewing, and editing text, and can promote equity in science. Others have argued that LLMs should be mentioned in the acknowledgments since they do not meet all the authorship criteria. We argue that naming LLMs as authors or mentioning them in the acknowledgments are both inappropriate forms of recognition because LLMs do not have free will and therefore cannot be held morally or legally responsible for what they do. Tools in general, and software in particular, are usually cited in the text and listed in the references. We provide suggestions to improve APA Style for referencing ChatGPT so that it specifically indicates the contributor who used the LLM (because interactions are stored on personal user accounts), the version and model used (because the same version could use different language models and generate dissimilar responses, e.g., ChatGPT May 12 Version, GPT-3.5 or GPT-4), and the time of usage (because LLMs evolve fast and generate dissimilar responses over time).
We recommend that researchers who use LLMs: (1) disclose their use in the introduction or methods section to transparently describe details such as the prompts used and note which parts of the text are affected, (2) use in-text citations and references (to recognize the applications used and improve findability and indexing), and (3) record and submit their relevant interactions with LLMs as supplementary material or appendices.
    Keywords:  ChatGPT; Publication ethics; artificial intelligence; authorship; large language models; transparency; writing
    DOI:  https://doi.org/10.1177/17470161231180449
  6. J Am Assoc Nurse Pract. 2025 Jan 01. 37(1): 1-3
       ABSTRACT: Peer review is a time-honored cornerstone of publishing. Peer reviewers, often blinded to the author, provide feedback to clarify the manuscript and validate its key messages and science. Authors may accept the feedback outright and make revisions, or they may view the feedback as intrusive and ill-mannered. In either case, authors must revise the document promptly and consider each comment on its merits. Tips for handling feedback are reviewed.
    DOI:  https://doi.org/10.1097/JXX.0000000000001078
  7. Naunyn Schmiedebergs Arch Pharmacol. 2025 Jan 03.
      Scientific integrity has been increasingly challenged by scientific misconduct and paper mills, resulting in an increase in retractions. Naunyn-Schmiedeberg's Archives of Pharmacology has been significantly impacted by fraudulent submissions, resulting in numerous retractions. By analyzing retraction notes and utilizing a post-publication surveillance strategy, this editorial discusses how this journal continues to deal with problematic publications, uncovers image-related and physiology-related integrity issues, and responds to fraudulent activity. By adopting innovative methods to detect integrity issues and transparently communicating our concerns, we aim to increase awareness among scientists and scientific journals.
    Keywords:  Image issues; Paper mills; Scientific fraud; Scientific integrity
    DOI:  https://doi.org/10.1007/s00210-024-03697-1
  8. Sci Rep. 2024 Dec 30. 14(1): 31672
      With breakthroughs in Natural Language Processing and Artificial Intelligence (AI), the usage of Large Language Models (LLMs) in academic research has increased tremendously. Models such as Generative Pre-trained Transformer (GPT) are used by researchers in literature review, abstract screening, and manuscript drafting. However, these models also present the attendant challenge of providing ethically questionable scientific information. Our study provides a snapshot of global researchers' perceptions of current trends and the future impact of LLMs in research. Using a cross-sectional design, we surveyed 226 medical and paramedical researchers from 59 countries across 65 specialties, trained in the Global Clinical Scholars' Research Training certificate program of Harvard Medical School between 2020 and 2024. The majority (57.5%) of participants practiced in an academic setting, with a median of 7 (interquartile range 2-18) PubMed-indexed published articles. 198 respondents (87.6%) were aware of LLMs, and those who were aware had a higher number of publications (p < 0.001). 18.7% of aware respondents (n = 37) had previously used LLMs in publications, especially for grammatical errors and formatting (64.9%); however, a plurality (40.5%) did not acknowledge this use in their papers. 50.8% of aware respondents (n = 95) predicted an overall positive future impact of LLMs, while 32.6% were unsure of its scope. 52% of aware respondents (n = 102) believed that LLMs would have a major impact in areas such as grammatical errors and formatting (66.3%), revision and editing (57.2%), writing (57.2%), and literature review (54.2%). 58.1% of aware respondents opined that journals should allow the use of AI in research, and 78.3% believed that regulations should be put in place to avoid its abuse.
Given researchers' perceptions of LLMs and the significant association between LLM awareness and the number of published works, we emphasize the importance of developing comprehensive guidelines and an ethical framework to govern the use of AI in academic research and address the current challenges.
    Keywords:  Academic writing; Artificial intelligence; Biomedical research; Large language models
    DOI:  https://doi.org/10.1038/s41598-024-81370-6
  9. Am J Ophthalmol. 2024 Dec 30. pii: S0002-9394(24)00588-9. [Epub ahead of print]
       PURPOSE: The integration of generative artificial intelligence (GAI) into scientific research and academic writing has generated considerable controversy. Currently, standards for using GAI in academic medicine remain undefined. This study aims to conduct a comprehensive analysis of the guidance provided for authors regarding the use of GAI in ophthalmology scientific journals.
    DESIGN: Cross-sectional bibliometric analysis.
    PARTICIPANTS: A total of 140 ophthalmology journals listed in the Scimago Journal & Country Rankings, regardless of language or origin.
    METHODS: We systematically searched and screened the 140 ophthalmology journals' websites on October 19 to 20, 2024, and conducted updates on November 19 to 20, 2024.
    MAIN OUTCOME MEASURES: The content of GAI guidelines from the websites of the 140 ophthalmology journals.
    RESULTS: Of the 140 journals, 96 (69%) provide explicit guidelines for authors regarding the use of GAI. Among these, nearly all journals agree on three key points: 1) 94 journals (98%) have established specific guidelines prohibiting GAI from being listed as an author. 2) 94 journals (98%) emphasize that human authors are responsible for the outputs generated by GAI tools. 3) All 96 journals require authors to disclose any use of GAI. Additionally, 20 journals (21%) specify that their guidelines pertain solely to the writing process with GAI. Furthermore, 92 journals (66%) have developed guidelines concerning GAI-generated images, with 63 journals (68%) permitting their use and 29 (32%) prohibiting them. Among those that prohibit GAI images, 27 journals (93%) allow their use under specific conditions.
    CONCLUSIONS: Although there is considerable ethical consensus among ophthalmology journals regarding the use of GAI, notable variations exist in terms of permissible use and disclosure practices. Establishing standardized guidelines is essential to safeguard the originality and integrity of scientific research. Researchers must uphold high standards of academic ethics and integrity when utilizing GAI.
    Keywords:  ChatGPT; academic ethics; author guidelines; generative artificial intelligence
    DOI:  https://doi.org/10.1016/j.ajo.2024.12.021
  10. Comput Inform Nurs. 2024 Dec 31.
      All disciplines, including nursing, may be experiencing significant changes with the advent of free, publicly available generative artificial intelligence tools. Recent research has shown the difficulty of distinguishing artificial intelligence-generated text from content written by humans, thereby increasing the probability that unverified information is shared in scholarly works. The purpose of this study was to determine the extent of generative artificial intelligence usage in published nursing articles. The Dimensions database was used to collect articles with at least one appearance of words and phrases associated with generative artificial intelligence. These articles were then searched for words or phrases known to be disproportionately associated with large language model-based generative artificial intelligence. Several nouns, verbs, adverbs, and phrases showed remarkable increases in appearance starting in 2023, suggesting use of generative artificial intelligence. Nurses, authors, reviewers, and editors will likely encounter generative artificial intelligence in their work. Although these sophisticated and emerging tools are promising, we must continue to work toward developing ways to verify the accuracy of their content, develop policies that insist on transparent use, and safeguard consumers of the evidence they generate.
    DOI:  https://doi.org/10.1097/CIN.0000000000001237
  11. Cureus. 2024 Dec;16(12): e76452
      Writing manuscripts is an integral part of the research journey. Despite the availability of various guidelines to inform study reporting and manuscript preparation requirements by peer-reviewed medical journals, developing manuscripts that effectively communicate study findings or new knowledge requires a range of communication skills that evolve with successes and failures. In this manuscript, I feature some personal learnings and acquired habits in manuscript development and publication planning from my 15-year experience as a scholar, including insights on authorship matters, journal selection, manuscript type choices, medical writing of various data-driven and non-data-driven manuscript types, and handling revisions.
    Keywords:  medical journal; medical writing; publication; research; scientific communication
    DOI:  https://doi.org/10.7759/cureus.76452
  12. Tunis Med. 2024 Dec 05. 102(12): 988-994
       INTRODUCTION: The cover letter is a critical component of medical journal submissions, often influencing acceptance decisions. However, authors frequently underestimate its importance. This narrative review aimed to provide guidance for authors on writing effective and succinct cover letters.
    METHODS: We conducted a narrative review of literature on the recommended structure and content for drafting a cover letter.
    RESULTS: An effective and succinct cover letter should include the names of the editor in chief and journal, submission details, ethical statements, authors' agreement, and contact information. Additional elements such as declarations of conflicts of interest, funding sources, and permissions may also be necessary. The cover letter should emphasize the manuscript's uniqueness without merely duplicating the abstract.
    CONCLUSION: Cover letters remain pivotal for manuscript acceptance and must adhere to specific guidelines.
    Keywords:  Cover Letter; Editorial Process; Manuscripts as Topic; Medical Writing; Peer Review; Research
    DOI:  https://doi.org/10.62438/tunismed.v102i12.5438
  13. J Korean Med Sci. 2024 Dec 30. 39(50): e338
      An editorial article is a type of scholarly communication providing expert views and critical analysis of issues. It may reflect the view of the author(s) or of the organization/journal on a certain topic. An editorial may also comment on a published paper. Editorials are expected to be objective, evidence-based, and informative, focusing attention on recent developments and matters of current societal/disciplinary concern. This format allows for timely dissemination of expert insight and facilitates ongoing scholarly discourse. The structure of editorials varies: critical, explanatory, and commendatory types serve varied purposes. Authors of editorials should follow certain principles of academic writing. The aim should be stated in an introductory paragraph. Thereafter, a constructive and balanced critique of the index article and/or a detailed yet concise analysis of the subject matter should be provided. The concluding paragraph should include brief take-home messages. Critical arguments should be supported by relevant references. A declaration of any potential conflicts of interest is essential to maintain objectivity and fairness. The current article aims to provide a primer, along with a checklist, on writing editorials.
    Keywords:  Academic Writing; Editorial; Editorial Comment; Medical Writing; Recommendations; Writing
    DOI:  https://doi.org/10.3346/jkms.2024.39.e338
  14. Biol Cybern. 2024 Dec 30. 119(1): 3
      The theoretical neurosciences research community produces many models, of different natures, to capture activities or functions of the brain. Some of these models are presented as "realistic" models, often because variables and parameters have biophysical units, but not always. In this opinion article, I explain why this term can be misleading and I propose some elements that can be useful to characterize a model.
    DOI:  https://doi.org/10.1007/s00422-024-00999-8