bims-skolko Biomed News
on Scholarly communication
Issue of 2025-12-28
thirty-six papers selected by
Thomas Krichel, Open Library Society



  1. Ann Plast Surg. 2025 Dec 24.
       PURPOSE: Open access publishing models are common in plastic surgery. We aim to quantify the financial investment required to support open access publishing for plastic surgery students during both medical school and residency training.
    METHODS: Peer-reviewed PubMed journal articles from plastic and reconstructive surgery-related journals published by current PGY-2 through PGY-6 integrated plastic surgery residents were divided into publications during medical school and publications during residency. Article-processing charges (APCs) for the analyzed articles were collected online. Subgroup analyses by institutional NIH funding were conducted.
    RESULTS: A total of 2904 unique publications published by 606 PGY-2 through PGY-6 integrated plastic surgery residents during medical school and 1109 publications from 245 PGY-5 and PGY-6 residents during residency were extracted. For medical school publications, each individual had a median (interquartile range [IQR]) of 4 (2-7) publications; 20.4% of their publications had a mandatory APC, with a mean (SD) APC of $2140 (727) per project. The percentage of publications with an APC increased over time (correlation = 0.09). For residency publications, each resident had a median (IQR) of 3 (2-7) publications; 23.6% of each resident's publications required an APC, with a mean (SD) APC of $2140 (765) per project. Publications affiliated with a top-25 NIH-funded medical institution had a lower rate of open access publishing with an APC (17.8% vs 22.9%) but a higher average impact factor (1.86 vs 2.03).
    CONCLUSIONS: Students publishing in plastic surgery journals require financial investment for open access fees. Institutions should ensure that they have adequate resources to support trainee publishing.
    Keywords:  article processing fees; open access publishing; student research funding
    DOI:  https://doi.org/10.1097/SAP.0000000000004612
  2. Front Health Serv. 2025 ;5 1686682
      Article processing charges (APCs) pose a material barrier to the dissemination of health research from low-income countries, where recent funding cuts compound limited domestic financing and fragile health systems. Despite carrying a disproportionate share of the global disease burden, these settings contribute under one percent of global research publications. This Perspective piece explores how APCs and funding cuts intersect to shape research output, summarises mitigation efforts and gaps, and proposes practical options for more equitable access to scholarly publishing. APCs are reported to shape venue choice for researchers in low-income countries, while reduced external funding leaves fewer upstream resources to absorb costs. Country examples point to institutional and capacity pressures. Early-career researchers often face disproportionate obstacles, including slower progression and reduced competitiveness. Waiver policies and regional initiatives such as AJOL, SciELO South Africa and AfricArXiv offer partial relief, yet inconsistencies in eligibility, awareness and implementation persist, with ethical implications. A rights- and equity-oriented response would include tiered APC models, automatic waivers linked to country income classification, ring-fenced support for health research in low-income settings, greater investment in and independent evaluation of diamond open access platforms, and focused research on the effects of funding cuts on APCs and dissemination in low-income contexts.
    Keywords:  article processing charges (APCs); diamond open access models; global knowledge inequality; health research equity; low-income countries (LICs); open-access publishing; research funding cuts
    DOI:  https://doi.org/10.3389/frhs.2025.1686682
  3. Cureus. 2025 Nov;17(11): e97223
      The publish or perish culture has become a defining feature of modern academia, where career advancement often depends on publication quantity rather than scientific depth. This environment exerts intense psychological and ethical pressure on early-career researchers, contributing to stress, burnout, and declining research quality. Studies have linked publication pressure to increased depressive symptoms, sleep disturbances, and even research misconduct, underscoring the systemic nature of the problem. This editorial discusses how such pressures threaten innovation and scientific integrity while proposing reforms in evaluation metrics, mentorship, and mental health support to restore balance and sustainability within academic research.
    Keywords:  academic fatigue; medical education; mental health; publish or perish; research culture
    DOI:  https://doi.org/10.7759/cureus.97223
  4. Pharmacoecon Open. 2025 Dec 26.
       OBJECTIVES: Retractions of scientific articles have increased over time, across different health fields, disease indications, and study designs. However, it is currently unknown whether, and to what extent, retractions have impacted the literature of health economic evaluations. This research aimed to identify retracted health economic evaluations, describe the characteristics of such studies, and analyse reasons for retraction.
    METHODS: We conducted a systematic review of health economic evaluations published in peer-reviewed journals and subsequently retracted, identified using MEDLINE, Embase, and the Retraction Watch database from inception to May 2025. Retraction notices were examined and publication details, including reasons for retraction, were extracted.
    RESULTS: We identified 17 retracted economic evaluations published from 2006 to 2024. Studies evaluated a range of interventions in a broad array of disease indications. Retracted economic evaluations were published in 17 unique journals, 11 of which were top-tier outlets. Errors were the most common reason for retraction (11/17 studies, 64%), including errors in inputs, analysis, and/or model structure. Evidence of misconduct (plagiarism, duplicate publication, and peer-review manipulation) was found in 4/17 studies (24%). Retractions were rapid, with 12/17 studies (71%) retracted within the same year. Mean citation count was 4.8; the highest was 33. Despite retraction, studies were included in subsequent evidence reviews (6/17 studies, 35%) and one was used to inform clinical practice recommendations (1/17 studies, 6%).
    CONCLUSION: Retracted health economic evaluations were identified across a variety of disease indications, interventions, and journal sources, with errors or scientific misconduct as the reasons for retraction. In some cases, papers continued to accrue citations despite retraction.
    TRIAL REGISTRATION: OSF https://osf.io/8c6jb/?view_only=0ed118b9830e45058e56fc333b93a62f .
    DOI:  https://doi.org/10.1007/s41669-025-00624-9
  5. Global Health. 2025 Dec 23. 21(1): 72
      Academic publishing is one of several forces that shape what is recognized as global health knowledge. The peer review process is meant to ensure rigor and quality, yet it can reproduce political and structural inequalities, especially when research challenges dominant narratives. For researchers from marginalized and colonized communities, these dynamics determine whether their language, identity, and lived realities are permitted in scholarly spaces. When political, historical, and socio-legal context is minimized or replaced with state-sanctioned labels, the result is not neutrality but the silencing of essential truths that directly shape health and mental health. This Comment examines how editorial and peer review practices operate as gatekeeping mechanisms that privilege dominant geopolitical narratives and marginalize Indigenous and decolonial perspectives. Drawing on a recent case where a peer-reviewed article, recommended for publication, faced subsequent editorial demands to replace politically accurate terminology referring to Palestinians, we show how language policing functions as epistemic control. These are not isolated incidents: global publishing norms pressure scholars toward state-sanctioned labels and "neutral" frames, sidelining colonial and political determinants of health. In global health, that pressure produces an evidence base that overlooks the sociopolitical conditions (occupation, systemic violence, legal segregation, displacement) that shape exposure, access, care pathways, and outcomes, including mental health. It produces an appearance of neutrality that is methodologically incomplete and ethically fragile, with downstream consequences for research agendas, funding priorities, program design, and accountability. Confronting the politics of knowledge production in global health requires structural change, not just diversity statements. Safeguarding researchers' right to represent their communities in their own terms and embedding sociopolitical realities into analysis are essential. Without these changes, global health will continue to reproduce the inequalities it seeks to reduce, failing to generate knowledge that is genuinely global, representative, and just.
    Keywords:  Academic publishing; Decolonial perspectives; Editorial gatekeeping; Knowledge production; Peer review
    DOI:  https://doi.org/10.1186/s12992-025-01173-w
  6. Am J Nurs. 2026 Jan 01. 126(1): 61-63
       ABSTRACT: Predatory conferences employ deceptive tactics to attract attendees for the purpose of making money from large registration and publication fees. Protecting oneself and others from predatory conferences requires vigilance and collective action. This article aims to inform people who may not be aware of the growing incidence and increasing savviness of predatory conferences and provides recommendations and resources to help nurses protect themselves.
    Keywords:  nurse researchers; pay-to-play model; predatory conference; predatory journal
    DOI:  https://doi.org/10.1097/AJN.0000000000000221
  7. J Plast Reconstr Aesthet Surg. 2025 Dec 16. pii: S1748-6815(25)00738-7. [Epub ahead of print]113 582-583
      
    DOI:  https://doi.org/10.1016/j.bjps.2025.12.012
  8. Account Res. 2025 Dec 25. 2607681
      Retractions issued for misconduct offer a unique window into how questionable research is rhetorically constructed and made to appear credible. This study investigates how engaging with retracted articles can serve as a pedagogical tool for reviewer training, with particular attention to the rhetorical mechanisms through which unreliability is performed. Twenty STEM doctoral researchers analyzed self-selected retracted papers using guided critical-reading questions to identify problematic rhetorical features. Across the analyses, five recurring issues emerged: intertextual falsification, methodological opacity, rhetorical inconsistency, rhetorical overstatement, and terminological distortion. The findings indicate that this approach has the potential to raise doctoral students' rhetorical sensitivity by enabling them to detect subtle markers of unreliability and to adopt a more evaluative rhetorical stance toward scholarly texts. Retracted articles thus can provide an authentic pedagogical resource for developing reviewer rhetorical sensitivity within doctoral education.
    Keywords:  Retractions; Reviewer training; doctoral education; rhetorical features; scholarly publishing
    DOI:  https://doi.org/10.1080/08989621.2025.2607681
  9. J Korean Med Sci. 2025 Dec 22. 40(49): e342
      Choosing the right statistical tests is essential for reliable results, but errors, such as picking the wrong test or misinterpreting data, can easily lead to incorrect conclusions. Research integrity means presenting research that is honest, clear, and statistically sound. By identifying statistical errors, artificial intelligence (AI) systems such as Statcheck and GRIM-Test increase the reliability of research and assist reviewers (a minimal sketch of the GRIM check follows this entry). AI helps non-experts analyze data, but it can be unpredictable for experts dealing with complex analyses. Still, its ease of use and growing capabilities show promise. Recent studies show that AI is increasingly helpful in research, assisting in spotting errors in methodology, citations, and statistical analyses. Tools such as LLMs, Black Spatula, YesNoError, and GRIM-Test improve accuracy, but they need good data and human checks. AI has moderate accuracy overall and performs better in controlled settings; Statcheck and GRIM-Test are especially good at spotting statistical errors. As more studies are retracted, AI offers helpful, albeit imperfect, support. It can speed up peer review and reduce reviewer workload, but it still has limits, such as bias and a lack of expert judgment. AI also brings risks, including misread results, ethical issues, and privacy concerns, so editors must make the final decisions. Using AI safely and effectively requires large, well-labeled datasets, cross-disciplinary teamwork, and secure systems. Human oversight remains necessary to review research processes and ensure their reliability; humans must make the final decision and use AI responsibly.
    Keywords:  Artificial Intelligence; Publications; Scientific Misconduct; Statistics
    DOI:  https://doi.org/10.3346/jkms.2025.40.e342
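    Editor's sketch: the GRIM test named above rests on a granularity argument: a mean of n integer-valued responses must equal some integer total divided by n, so for a given sample size many reported means are arithmetically impossible. A minimal Python sketch of that check (the function name and example values are illustrative, not taken from the article):

        import math

        def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
            # A mean of n integer scores must be an integer total divided by n.
            # Test the two integer totals nearest reported_mean * n and check
            # whether either rounds back to the reported mean.
            target = round(reported_mean, decimals)
            candidates = (math.floor(reported_mean * n), math.ceil(reported_mean * n))
            return any(round(t / n, decimals) == target for t in candidates)

        print(grim_consistent(2.47, 17))  # True: 42/17 = 2.4706... rounds to 2.47
        print(grim_consistent(3.27, 10))  # False: 10 integers only yield means ending in one decimal

    Statcheck applies an analogous recomputation idea to reported test statistics and p-values; both are consistency checks, not proof of error or misconduct.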
  10. JAAD Int. 2026 Feb;24 242-243
      
    Keywords:  ChatGPT; academic writing; artificial intelligence; ethics; large language models; non-native English authors
    DOI:  https://doi.org/10.1016/j.jdin.2025.10.017
  11. Postgrad Med J. 2025 Dec 23. pii: qgaf215. [Epub ahead of print]
      The rapid integration of generative artificial intelligence (AI) is transforming scientific writing and publishing, creating both unprecedented opportunities and critical ethical challenges. This article investigates how the use of AI tools affects research integrity, authorship accountability, and peer review processes in scientific publishing. Methodologically, the review synthesizes literature on current AI policies, detection tools, and empirical surveys of author and reviewer practices. Three key hypotheses are proposed for future empirical testing: (H1) mandatory AI disclosure improves the detection of fabricated content; (H2) AI-assisted language refinement enhances manuscript clarity without compromising originality; and (H3) undisclosed AI use by reviewers diminishes the depth of critique. The main findings indicate dominant reliance on descriptive studies, highlighting the need for hypothesis-driven, cross-disciplinary research frameworks and greater transparency to ensure that AI adoption fortifies the trustworthiness of scholarly communication.
    Keywords:  artificial intelligence; confidentiality; peer review; plagiarism; publishing ethics; research; research integrity
    DOI:  https://doi.org/10.1093/postmj/qgaf215
  12. Perspect Med Educ. 2025 ;14(1): 1003-1012
     Introduction: Generative AI is a powerful resource for health professions education (HPE) researchers publishing their work. However, questions remain about its use, and guidance about disclosure is inconsistent. This study explores journal editors' experiences and expectations of AI-use disclosure, to help journals clarify their expectations and to help authors satisfy them.
    Methods: In this descriptive qualitative study, editors were interviewed between January 6, 2025, and May 7, 2025 using Zoom. Eligible participants were identified through journal webpages and snowball sampling. A purposive sampling strategy prioritized HPE journals and included a limited sample of general medical journals to explore transferability. Data collection and thematic analysis proceeded iteratively.
    Results: Eighteen participants (9 chief editors and 9 associate/deputy editors) were interviewed; fourteen worked at HPE journals and four at general medical journals. The analysis revealed 4 themes: 1) the basics of disclosure, comprising content expectations and process knowledge; 2) the necessity threshold, concerning which circumstances require disclosure; 3) the sufficiency threshold, concerning how much detail to include; and 4) the factors blurring these thresholds, which included the speed of change, the co-construction of standards, and the uneasy fit of some scientific principles with the AI-use context.
    Conclusions: While editors shared basic disclosure expectations, these were complicated by blurred thresholds of sufficiency and necessity that may exacerbate uncertainty in the scholarly community. By attending to these thresholds and the factors blurring them, and by working to articulate shared disclosure standards, HPE journals can help authors safely navigate the shifting norms of AI-use disclosure.
    DOI:  https://doi.org/10.5334/pme.2326
  13. Nursing. 2026 Jan 01. 56(1): 43-48
     ABSTRACT: Although generative artificial intelligence (AI) chatbots are promising tools for scholarly writing, caution is needed regarding hallucinations (incorrect or misleading output). Responsible research design, clear ethical guidelines, and an emphasis on authentic scholarly work are critical to ensuring integrity and accurate dissemination. This article examines examples that highlight concerns about accuracy and the necessity of verifying output.
    Keywords:  AI; artificial intelligence; chatbot; generative AI; hallucination; scholarly writing
    DOI:  https://doi.org/10.1097/NSG.0000000000000279
  14. J Clin Neurosci. 2025 Dec 24. pii: S0967-5868(25)00792-1. [Epub ahead of print]144 111819
       INTRODUCTION: The use of Artificial Intelligence (AI) has grown dramatically in recent years. In addition to its use for data analysis, its applications have extended to manuscript writing. In this article, we analyze the policies within top neurosurgical journals surrounding AI use for manuscript writing, its implementation, and whether disclosure of this practice affects article citation metrics.
    METHODS: Neurosurgical journals with h-indices ≥ 100 and with "spin*" or "neurosurg*" (including translations in other languages) and no other medical subspecialty within their title were included (n = 9). Each journal's policy surrounding AI use in manuscript writing was assessed for whether disclosure was mandated, and if so, requirements for the disclosure. A search was performed using each journal's respective database to find articles that disclosed AI use. Data extracted from each article included: article acceptance and online publication date, type of article, section containing the AI disclosure, total citations, AI program used, and the stated purpose of AI use. A cohort of non-AI-assisted articles was created to assess whether AI disclosure impacts the total number of citations received after publication.
    RESULTS: All nine journals mandated disclosure; however, they varied in the content required of the disclosure, where in the manuscript it must appear, the limitations placed on AI use, and whether a template for disclosing AI use was provided. A total of 67 publications were included in this review. The journal with the greatest number of articles was World Neurosurgery (n = 41, 61%), and the journal with the greatest percentage of articles disclosing AI use since January 1, 2022, was Neurosurgical Focus (0.68%). Despite the low prevalence across all journals assessed, the rate of growth of articles written with AI has steadily increased. No significant difference was found in total citations between articles that disclosed AI use and a cohort of similar articles that did not (W = 85.5, p = 0.69562; a minimal sketch of this kind of rank-sum comparison follows this entry).
    CONCLUSIONS: The number of articles declaring AI use was lower than expected. However, such articles have been growing exponentially. Policies surrounding AI use and its implementation varied across journals. We therefore provide recommendations to promote similarity in guidelines between journals, as this will lessen confusion among authors and promote transparency within the medical research community.
    Keywords:  Artificial Intelligence; Editorial Policies; Publication Trends; Publishing Ethics; Scientific Writing
    DOI:  https://doi.org/10.1016/j.jocn.2025.111819
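    Editor's sketch: the W statistic above indicates a Wilcoxon rank-sum comparison of citation counts between the disclosing articles and their matched controls. A minimal Python version of that kind of test, via scipy's equivalent Mann-Whitney U implementation and purely hypothetical citation counts (not the study's data):

        from scipy.stats import mannwhitneyu

        # Hypothetical citation counts; not the study's data.
        ai_disclosed = [3, 0, 5, 2, 1, 7, 0, 4]
        matched_controls = [4, 1, 6, 2, 0, 5, 3, 2]

        # Two-sided test of whether one group tends to receive more citations.
        stat, p = mannwhitneyu(ai_disclosed, matched_controls, alternative="two-sided")
        print(f"U = {stat}, p = {p:.3f}")

    A rank-based test suits citation counts, which are typically small, skewed, and far from normally distributed.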
  15. JAAD Int. 2026 Feb;24 244-245
      
    Keywords:  ChatGPT; ethics; fairness; manuscript editing; medical education; non-native English writers
    DOI:  https://doi.org/10.1016/j.jdin.2025.11.005
  16. Eur J Appl Physiol. 2025 Dec 24.
      The integration of Large Language Models (LLMs) into scientific writing presents significant opportunities for scholars but also risks, including misinformation and plagiarism. A body of literature is taking shape to verify the capability of LLMs to execute the complex tasks inherent to academic publishing. In this context, this study was driven by the need to critically assess an LLM's out-of-the-box performance in generating evidence-synthesis reviews. To this end, the signature topic of the authors' group, cross-education of voluntary force, was chosen as a model. We prompted a popular LLM (Gemini 2.5 Pro, Deep Research enabled) to generate a scoping review on the neural mechanisms underpinning cross-education. The resulting unedited manuscript was submitted for formal peer review to four leading subject-matter experts. Their qualitative feedback on the manuscript's structure, content, and integrity was collated and analyzed. The peer reviewers identified critical failures at fundamental stages of the review process. The LLM failed to: (1) identify specific research questions; (2) adhere to established methodological frameworks; (3) implement trustworthy search strategies; and (4) objectively synthesize data. Importantly, the Results section was deemed interpretative rather than descriptive. Referencing was agreed to be the worst issue: references were inaccurate, biased toward open-access sources (84%), and contained instances of plagiarism. The LLM also failed to hierarchize evidence, presenting minor or underexplored findings as established. Overall, the LLM generated a non-systematic, poorly structured, and unreliable narrative review. These findings suggest that the selected LLM is incapable of autonomously performing scientific synthesis and requires substantial human supervision to correct the observed issues.
    Keywords:  Evidence synthesis; Generative AI; Neurophysiology; Peer review; Plagiarism; Scholarly Publishing
    DOI:  https://doi.org/10.1007/s00421-025-06100-w
  17. Elife. 2025 Dec 23. pii: RP108748. [Epub ahead of print]14
      Peer reviewers sometimes comment that their own journal articles should be cited by the journal article under review. Comments concerning relevant articles can be justified, but comments can also be unrelated coercive citations. Here, we used a matched observational study design to explore how citations influence the peer review process. We used a sample of more than 37,000 peer reviews from four journals that use open peer review and make all article versions available. We find that reviewers who were cited in versions after version 1 were more likely to make a favourable recommendation (odds ratio = 1.61; adjusted 99.4% CI: 1.16-2.23), whereas being cited in the first version did not improve their recommendation (odds ratio = 0.84; adjusted 99.4% CI: 0.69-1.03). For all versions of the articles, the reviewers who commented that their own articles should be cited were less likely to recommend approval compared to the reviewers who did not, with the strongest association after the first version (odds ratio = 0.15; adjusted 99.4% CI: 0.08-0.30). Reviewers who included a citation to their own articles were much more likely to approve a revised article that cited their articles compared to a revised article that did not (odds ratio = 3.5; 95% CI: 2.0-6.1). Some reviewers' recommendations depend on whether they are cited or want to be cited. Reviewer citation requests can turn peer review into a transaction rather than an objective critique of the article.
    Keywords:  citations; medicine; meta-research; none; peer review; research misconduct
    DOI:  https://doi.org/10.7554/eLife.108748
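    Editor's sketch: the odds ratios above contrast approval odds between reviewers who were cited and those who were not. A minimal Python illustration of the underlying arithmetic, with a Wald-style 95% confidence interval and hypothetical counts (the study's adjusted 99.4% CIs also account for multiplicity and matching, which this sketch does not):

        import math

        # Hypothetical 2x2 table, not the study's data:
        # rows = reviewer cited in the revised version (yes/no),
        # columns = recommendation (approve / not approve).
        a, b = 120, 40    # cited: approve, not approve
        c, d = 300, 160   # not cited: approve, not approve

        odds_ratio = (a / b) / (c / d)                # = 1.60 for these counts
        se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
        z = 1.96                                      # 95% normal quantile
        lower = math.exp(math.log(odds_ratio) - z * se_log_or)
        upper = math.exp(math.log(odds_ratio) + z * se_log_or)
        print(f"OR = {odds_ratio:.2f}, 95% CI: {lower:.2f}-{upper:.2f}")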
  18. Curr Protoc. 2025 Dec;5(12): e70283
      Scientific progress relies on the generation, validation, and reuse of research data, yet standard practices and cultural, legal, and technological challenges have long limited data sharing. In the 21st century, growing volumes of data, higher transparency requirements, and concerns about reproducibility have pushed research data management to the forefront. This manuscript brings together three perspectives to provide an extensive overview of data sharing: theoretical foundations, ethical and normative frameworks, and practical implementation. First, it discusses the way research data differs across fields and formats, the distinction between primary and secondary data, and how metadata helps ensure data can be reused. It emphasizes how open data fosters transparency, reproducibility, accountability, and innovation, while also acknowledging that research data has historically been viewed as private intellectual property. Second, it explores the emergence of principles and ethical standards designed to enhance data quality and promote responsible use. Documentation standards, data management plans, and sharing of code and workflows have helped the FAIR (Findability, Accessibility, Interoperability, and Reusability) principles become a cornerstone for data sharing. Regulatory frameworks, such as the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA), as well as mechanisms such as de-identification and Data Trusts, address legal and ethical issues, including privacy protection, licensing, and data governance. Finally, the third major topic discusses how these principles are implemented through infrastructure, incentives, and new technologies. It addresses the significance of cultural change and recognition systems, the impact of policies by journals and funders, and the role of repositories in preservation and interoperability. It also emphasizes the emergence of novel trends, such as artificial intelligence-driven metadata generation, blockchain-based provenance, executable workflows, and privacy-preserving computation, all of which are redefining the concept of responsible and scalable data sharing. By connecting conceptual, ethical, and practical dimensions, the manuscript outlines both current challenges and realistic pathways toward transparent, collaborative, and future-oriented research. © 2025 Wiley Periodicals LLC.
    Keywords:  FAIR principles; data repositories; open science; research data management
    DOI:  https://doi.org/10.1002/cpz1.70283
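    Editor's sketch: the abstract stresses that metadata is what makes data findable and reusable. As a concrete, hypothetical illustration (the field names loosely echo common repository schemas such as DataCite; nothing here is from the article), a minimal machine-readable record might look like:

        import json

        # Minimal, illustrative dataset metadata record; all values hypothetical.
        record = {
            "identifier": "https://doi.org/10.5555/example",  # Findable: persistent ID
            "title": "Example physiological time-series dataset",
            "creators": ["Doe, Jane"],
            "license": "CC-BY-4.0",                           # Reusable: explicit licence
            "format": "text/csv",                             # Interoperable: open format
            "access_url": "https://repository.example/datasets/1234",  # Accessible
            "variables": ["subject_id", "timestamp", "heart_rate_bpm"],
        }
        print(json.dumps(record, indent=2))

    A record like this is what a data management plan commits to producing and what repositories index for reuse.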
  19. Front Robot AI. 2025 ;12 1695169
      
    Keywords:  brain-computer interface; dataset; functional near-infrared spectroscopy; motor imagery; open access
    DOI:  https://doi.org/10.3389/frobt.2025.1695169
  20. J Dent. 2025 Dec 23. pii: S0300-5712(25)00755-9. [Epub ahead of print] 106312
       OBJECTIVE: Information regarding potential conflict of interest (COI) and funding is essential for informed interpretation of research findings. The aim of the present cross-sectional investigation was to evaluate the reporting of COI and funding in articles published in orthodontic journals.
    MATERIAL AND METHODS: Titles of articles published in 2023 in 14 orthodontic journals, selected from the Scopus orthodontic journal database, were documented. Characteristics of each article satisfying the selection criteria, including COI and funding statements, were recorded.
    RESULTS: A total of 876 articles satisfied the selection criteria. The median (interquartile range) number of authors per article was 5.0 (3.0, 6.0). Clinical studies (n=253; 28.9%) were the most common article type published. Articles related to newer technologies and appliances were commonplace. Almost 90% (n=784; 89.5%) of articles contained a COI statement. Twenty-five (2.9%) articles stated that there was a COI and provided details. Sixteen (1.8%) articles were determined to have a financial COI. A funding statement was made in 571 (65.2%) articles. Of these, 257 (29.3%) declared that no funding was received, 179 (20.4%) were apparently funded by not-for-profit organisations and 17 (1.9%) by for-profit organisations. The source of funding in 114 (13.0%) articles was unclear. Fisher's exact test indicated that where details of a COI were provided, the odds of funding details being provided increased (odds ratio: 14.13; 95% CI 9.3, 21.3; P<0.01; see the sketch after this entry).
    CONCLUSIONS: COI and funding information appears to be underreported in orthodontic journals. This may indicate a need for orthodontic journals to improve their COI disclosure processes.
    Keywords:  Conflict of interest; Dental Ethics; Funding; Orthodontics; Publishing
    DOI:  https://doi.org/10.1016/j.jdent.2025.106312
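    Editor's sketch: the Fisher's exact test above operates on a 2x2 table of COI-detail status against funding-detail status. A minimal Python version with hypothetical counts chosen only so the sample odds ratio lands near the reported 14.13 (these are not the study's data):

        from scipy.stats import fisher_exact

        # Hypothetical 2x2 table, not the study's data:
        # rows = COI details provided (yes / no),
        # columns = funding details provided (yes / no).
        table = [[20, 5],
                 [160, 565]]

        odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
        # Sample OR for this table: (20*565)/(5*160) = 14.125
        print(f"OR = {odds_ratio:.3f}, p = {p_value:.3g}")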
  21. Nurse Educ Pract. 2025 Dec 19. pii: S1471-5953(25)00431-7. [Epub ahead of print] 104674
      
    DOI:  https://doi.org/10.1016/j.nepr.2025.104674
  22. Trends Pharmacol Sci. 2025 Dec 20. pii: S0165-6147(25)00281-0. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.tips.2025.12.001