bims-skolko Biomed News
on Scholarly communication
Issue of 2025–11–23
thirty papers selected by
Thomas Krichel, Open Library Society



  1. J Evol Biol. 2025 Nov 17. pii: voaf143. [Epub ahead of print]
      The current economics of scientific publishing reveal a profound imbalance: academia pays prices far exceeding the actual costs of publication. Rather than supporting research, much of this expenditure sustains the profits of a few dominant commercial publishers. Transitioning to responsible publishing is a collective challenge that requires raising awareness among scientists about the problem and the solutions available. We present DAFNEE, a database of academia-friendly journals in ecology, evolutionary biology and archaeology (https://dafnee.isem-evolution.fr/). DAFNEE includes information on over 600 journals (co)run by academic or non-profit institutions, with the aim of keeping publishing funds within the academic community. The database details these journals' business models, article processing charges, citation rates and partnerships. We show that DAFNEE journals compare favourably to non-DAFNEE ones in terms of editorial and financial policy, while offering similar citation rates. Finally, we offer several recommendations aimed at encouraging authors, reviewers, and evaluators to adopt more responsible publishing practices.
    Keywords:  Article processing charges; Diamond Open Access; Responsible publishing; Scientific publishing
    DOI:  https://doi.org/10.1093/jeb/voaf143
  2. Adv Med Educ Pract. 2025;16: 2103-2114
       Purpose: The proliferation of predatory open-access journals poses a significant threat to scientific integrity, especially among early-career researchers unfamiliar with deceptive publishing practices. This study aimed to assess dental interns' awareness of predatory journals in Riyadh and to identify factors associated with awareness, including research interest and familiarity with journal evaluation tools.
    Methods: An analytical cross-sectional survey was conducted among 155 dental interns across six institutions in Riyadh. A self-developed, psychometrically validated electronic questionnaire assessed participants' demographic profiles, research experience, and awareness of predatory journals. Statistical analyses included Cronbach's alpha, exploratory factor analysis, chi-square tests, t-tests, and logistic regression.
    Results: Only 47.7% of interns recognized the term "predatory journal", and 74.8% were unfamiliar with Beall's List. Awareness was significantly associated with research interest (OR = 7.18, p < 0.001) and prior invitations from predatory journals (OR = 8.82). Demographic factors such as gender, marital status, and university affiliation showed no significant associations with awareness. Awareness was moderately correlated with prior publication activity (r = 0.38, p < 0.001) but was not significantly predicted by publication count in regression models.
    Conclusion: The findings reveal moderate yet inconsistent awareness of predatory publishing practices among dental interns in Riyadh. While many exhibit strong research interest, gaps remain in ethical journal evaluation and identification. Targeted educational initiatives, including institutional workshops and curriculum integration, are essential to foster ethical publishing literacy among emerging dental professionals.
    Keywords:  dental interns; open access publishing; predatory journals; research ethics
    DOI:  https://doi.org/10.2147/AMEP.S548141
  3. J Korean Med Sci. 2025 Nov 17. 40(44): e280
       BACKGROUND: Artificial intelligence (AI) has promoted progress across various fields, and the number of AI-related papers has risen in recent years. This study examines retracted AI-related publications by analyzing trends, journals, and reasons for retraction.
    METHODS: This descriptive cross-sectional study thoroughly investigated retracted AI-related papers listed in PubMed. The data extraction comprised bibliographic data, reasons for retraction, citation metrics, journal indexing status, and Altmetric Attention Scores (AASs). Retraction notices were classified according to particular reasons. Descriptive statistics were employed to evaluate retraction trends, geographic distribution, and citation impact.
    RESULTS: A total of 764 retracted AI-related papers were examined, with the most retractions occurring in 2023 (n = 667). China had the highest number (n = 551), followed by India (n = 40) and Bangladesh (n = 23). Journals focusing on mathematical and computational biology, neurosciences, and healthcare sciences had the most retractions. The most common retraction reasons were peer review issues (n = 716) and data concerns (n = 714), followed by irrelevant citations (n = 571) and unethical AI use (n = 238). The median time to retraction was 510 days (18-4,200). The median citation and AAS scores were (0-167) and 0 (0-191).
    CONCLUSION: The high number of retractions from China highlights the need for higher research standards. Deficits in peer review and data issues emerged as the main reasons for retraction, underscoring persistent challenges in maintaining research integrity and quality assurance. For scientific literature integrity, academic institutions, publishers, and researchers should stress transparency, ethics, and rigorous post-publication inspection.
    Keywords:  Artificial Intelligence; Deep Learning; Machine Learning; Retraction Notice; Retraction of Publication; Scientific Misconduct
    DOI:  https://doi.org/10.3346/jkms.2025.40.e280
  4. PLoS One. 2025;20(11): e0335059
       BACKGROUND: Gender disparities in scientific authorship are well documented, yet little is known about gender representation among authors of retracted publications.
    METHODS: We analyzed 878 retracted publications from 131 high-impact medical journals across nine clinical disciplines (anesthesiology, dermatology, general internal medicine, gynecology/obstetrics, neurology, oncology, pediatrics, psychiatry, and radiology). Gender was inferred using Gender API for all, first, and last authors. Two analytic samples were constructed based on prediction confidence thresholds (≥60% and ≥70%). We examined gender distribution across authorship positions, number of retractions per author, and disciplinary representation. Wilcoxon rank-sum and chi-squared tests were used to assess group differences. Gender proportions were compared with publication benchmarks from 2008-2017, restricting retraction data to the same period for comparability.
    RESULTS: Among 4,136 authors, 3,909 had full first names, and gender could be assigned to 3,865 (98.9%). In the sample with prediction confidence ≥60% (n = 3,743), 863 (23.1%) were identified as women. They accounted for 16.5% (123/747) of first and 12.7% (87/687) of last authors. They had significantly fewer retractions per author and were less likely to have >5 retractions (all authors: 3 women [8.1%] vs 34 men [91.9%], p < 0.001). Across most disciplines, their representation was below publication benchmarks. Dermatology (retractions = 80.0%, publications = 48.9-51.8%) and radiology (retractions = 40.0%, publications = 31.0-36.8%) were exceptions among first authors, while pediatrics (retractions = 50.0%, publications = 37.0-42.6%) was an exception among last authors, though all based on small numbers.
    CONCLUSIONS: Women are markedly underrepresented among authors of retracted publications, particularly in cases involving multiple retractions. Further research is needed to clarify underlying mechanisms.
    DOI:  https://doi.org/10.1371/journal.pone.0335059
  5. Nature. 2025 Nov 19.
      
    Keywords:  Authorship; Medical research; Publishing
    DOI:  https://doi.org/10.1038/d41586-025-03796-w
  6. Med Sci (Paris). 2025 Oct;41(10): 770-774
      Since the early 2000s, retractions of articles in biomedical research have increased exponentially, revealing growing structural tensions within the global scientific system. This article offers a critical synthesis of retractions occurring between 2000 and 2025, based on bibliometric data and socio-institutional analyses. It highlights differentiated geographical dynamics, the institutionalization of fraud, increased editorial responsibility, and typical profiles of retracted authors: often male, hyper-productive, and poorly supervised. By distinguishing between honest error and misconduct, the analysis shows that the majority of retractions are linked to serious violations (fraud, plagiarism, manipulation). Although scientific self-correction mechanisms have been strengthened, they remain imperfect, particularly considering the continued citation of retracted articles. This study underscores the urgent need to rethink academic evaluation criteria, to strengthen a culture of scientific integrity, and to establish more rigorous editorial governance. Retractions, far from being mere anomalies, emerge as systemic indicators calling for a profound reform of scientific publishing practices.
    DOI:  https://doi.org/10.1051/medsci/2025162
  7. Res Integr Peer Rev. 2025 Nov 21. 10(1): 25
       BACKGROUND AND AIM: The International Committee of Medical Journal Editors (ICMJE) defines a potential conflict of interest (COI) as a situation where professional judgment could be influenced by secondary interests. Competing interests can introduce bias into the peer-review process, making it essential for all participants to declare any potential COIs. While authors are currently required to disclose their COIs, editors and editorial board members are not held to the same standard. This study aimed to evaluate the extent to which editors and editorial board members of ethics journals report their potential competing interests.
    METHODS: From October 23 to November 1, 2024, 82 ethics journals selected based on their impact factors were assessed, focusing on the disclosure of potential COIs by editors and editorial board members. Journal websites were examined to determine how editors and board members disclose potential COIs. Additionally, publisher websites were assessed for policies guiding these individuals in reporting COIs during peer review.
    RESULTS: Only 2% of the journals disclosed potential COIs for their editors, and 13% provided biographical information about editorial members. None of the journals employed a structured reporting approach, such as the ICMJE disclosure form, despite most claiming adherence to ICMJE and COPE guidelines. There was considerable variability in how journals and publishers guided their editors and board members in reporting their own COIs.
    CONCLUSION: The findings indicate that disclosures of potential COIs by editors and editorial board members in leading ethics journals are often inconsistent and insufficient. Increasing transparency in this area could lead to a fairer and more trustworthy peer-review process.
    DOI:  https://doi.org/10.1186/s41073-025-00181-z
  8. BMC Nurs. 2025 Nov 20. 24(1): 1419
       BACKGROUND: The rapid integration of large language models (LLMs) into scholarly publishing has created an urgent need for clear standards. This study aims to comprehensively analyze the editorial stances of leading nursing publications regarding the use of LLMs in manuscript preparation and peer assessment.
    METHODS: We conducted a cross-sectional analysis of the top 50 nursing publications according to their journal impact factor. Each publication's website was systematically evaluated for directives concerning LLM use in authorship, content generation, image creation, and peer assessment. Journal metrics were also extracted to assess any correlation with policy adoption.
    RESULTS: Of the 50 publications, 35 (70%) had explicit LLM-related directives. Policies broadly agreed in permitting the use of LLMs for content generation (97%) while prohibiting LLM authorship (94%). However, a significant divergence was found regarding AI-generated images, with 52% of publications prohibiting their use. Guidance on LLM use in peer assessment was also inconsistent, with 49% of publications prohibiting it. Policy adoption varied significantly by publisher (ranging from 20% to 100%). No statistical association was found between policy existence and journal impact metrics (p > 0.05).
    CONCLUSIONS: Leading nursing publications exhibit a fractured landscape on LLM use. While foundational agreement exists on authorship and content generation, critical areas like image creation and peer assessment lack consistent standards. This ambiguity underscores the need for a more unified, transparent framework to guide ethical and responsible LLM integration in nursing scholarship.
    Keywords:  Authorship; Editorial policies; Large language models; Nursing journals; Peer review
    DOI:  https://doi.org/10.1186/s12912-025-04102-9
  9. Adv Simul (Lond). 2025 Nov 22.
       BACKGROUND: The increasing use of artificial intelligence (AI) by scholars presents a pressing challenge to healthcare publishing. While legitimate use can potentially accelerate scholarship, unethical approaches also exist, leading to factually inaccurate and biased text that may degrade scholarship. Numerous online AI detection tools exist that provide a percentage score of AI use. These can assist authors and editors in navigating this landscape. In this study, we compared the scores from three AI detection tools (ZeroGPT, PhraslyAI, and Grammarly AI Detector) across five plausible conditions of AI use and evaluated them against human assessments.
    METHODS: Thirty open access articles published in the journals Advances in Simulation and Simulation in Healthcare prior to 2022 were selected, and the article introductions were extracted. Five experimental conditions were examined: (1) 100% human written; (2) human written, light AI editing; (3) human written, heavy AI editing; (4) AI written text from human content; and (5) 100% AI written from article title. The resulting materials were assessed by three open-access AI detection tools and five blinded human raters. Results were summarized descriptively and compared using repeated measures analysis of variance (ANOVA), intraclass correlation coefficients (ICC), and Bland-Altman plots.
    RESULTS: The three AI detection tools were able to differentiate between the five test conditions (p < 0.001 for all), but varied significantly in absolute score, with ICC ranging from 0.57 to 0.95, raising concerns regarding overall reliability of these tools. Human scoring was far less consistent, with an overall accuracy of 19%, indistinguishable from chance.
    CONCLUSION: While existing AI detection tools can meaningfully distinguish plausible AI use conditions, reliability across these tools is variable. Human scoring accuracy is uniformly low. Use of AI detection tools by scholars and journal editors may assist in determining potentially unethical use, but they should not be relied upon alone at this time.
    Keywords:  Academic writing; Artificial intelligence; ChatGPT; Detection; Ethics; Large language models
    DOI:  https://doi.org/10.1186/s41077-025-00396-6
  10. BMC Prim Care. 2025 Nov 17. 26(1): 368
       BACKGROUND: Artificial intelligence (AI) is increasingly integrated into family medicine research and practice, enhancing diagnostics, data analysis, and care delivery. Yet, its rapid adoption has outpaced the development of standardized editorial policies, raising concerns about transparency, ethics, and reproducibility. Clear guidance from journals is urgently needed to ensure responsible use of AI in research and publishing.
    OBJECTIVE: To evaluate editorial policies and reporting guideline endorsements related to AI across leading family medicine (FM) journals.
    METHODS: Using the SCImago Journal Rank database, we conducted a cross-sectional analysis of FM journals. From November 2024 to January 2025, we reviewed publicly available Instructions for Authors for AI-related policies, including authorship, manuscript writing, content/image generation, and disclosure. We also assessed whether journals endorsed AI-specific reporting guidelines (RGs; e.g., CONSORT-AI, SPIRIT-AI). Data were extracted in duplicate using a standardized form. Reproducibility was supported through protocol registration on the Open Science Framework.
    RESULTS: Of 57 FM journals identified, 40 met inclusion criteria. Among these, 82.5% (33/40) referenced AI in their policies. Most (77.5%) prohibited AI authorship and required disclosure of AI use, while 72.5% permitted AI-assisted manuscript writing. Policies on AI-generated content and images varied, with 47.5% and 50.0% of journals allowing their use, respectively. Only 5.0% (2/40) endorsed AI-specific RGs. No correlation was observed between journal characteristics and AI policy adoption.
    CONCLUSIONS: Most family medicine journals now address AI use, but notable gaps remain, particularly in endorsing AI-specific reporting guidelines. Without broader adoption of structured guidance, AI-integrated research risks inconsistency, limited reproducibility, and ethical challenges. Strengthening journal policies and endorsing standardized reporting frameworks are essential to ensure high-quality, trustworthy AI research in family medicine.
    Keywords:  AI journal policies; Artificial intelligence; Editorial policies; Family medicine; Reporting guidelines
    DOI:  https://doi.org/10.1186/s12875-025-03044-0
  11. Gates Open Res. 2025;9: 103
       Background: There has been steady progress and advancement of research in Africa. However, African researchers face numerous challenges, among them limited international recognition, which stems from the low discoverability and inclusion of their research outputs by indexers and databases. Many initiatives have attempted to address this challenge; however, further support is needed to enhance the discoverability and inclusion of research outputs from Africa.
    Methods: We conducted a desk review of 1,116 journals hosted on the Sabinet journal repository and the African Journal Online (AJOL) platform. The factors that were considered to influence journals' discoverability and inclusion include (i) the journals' Open Access (OA) status, (ii) OA journals' listing in the Directory of Open Access Journals (DOAJ), (iii) the journals' presence on the International Standard Serial Number (ISSN) portal, (iv) the membership of the journals' publishers on the Committee on Publication Ethics (COPE), (v) the journals' hosting on International Network for Advancing Science and Policy (INASP) and (vi) geographic location of the journals' online publisher.
    Findings: A total of 1,116 journals were identified from the Sabinet and AJOL platforms. The largest proportion of journals (63.2%) was neither discoverable by Google Scholar nor included in Scopus. The study established one significant predictor of both discoverability by Google Scholar and inclusion in Scopus: listing on the ISSN portal, which increased the odds of being discoverable by Google Scholar and of being included in Scopus by factors of 2.033 and 5.451, respectively. Journals listed in the DOAJ but whose publishers were COPE members had significantly reduced odds of being discoverable by Google Scholar (OR = 0.334) and of being included in Scopus (OR = 0.161). This suggests that journal discoverability and inclusion are more nuanced and not always straightforward, and that quality markers need to be better aligned.
    Keywords:  African Journals; Capacity strengthening; Journal indexing; Research evaluation; discoverability; international standards
    DOI:  https://doi.org/10.12688/gatesopenres.16372.1
  12. J Empir Res Hum Res Ethics. 2025 Nov 18. 15562646251395350
      Ethical authorship practices ensure both accountability and credibility. In this study, we estimated the frequency with which researchers at Hamad Medical Corporation (HMC) in Qatar had encountered honorary and ghost authorship at least once. Additionally, we evaluated researchers' familiarity with standard authorship guidelines. Using a cross-sectional design, we administered a pre-developed anonymous online survey to 4043 researchers. Descriptive statistics in the form of percentages and frequencies, along with two-sided chi-square tests, were used. Significance was defined as p ≤ .05. Overall, researchers demonstrated low awareness of adopted authorship guidelines. While 24% of respondents reported never having heard of the International Committee of Medical Journal Editors (ICMJE) guidelines, 76% were aware of them but unfamiliar with their content. This low awareness coincided with reported frequencies of having encountered honorary and ghost authorship at least once (70.5% and 45.5%, respectively). In conclusion, authorship misuse is a significant issue in Qatar and appears to occur at levels consistent with those found in international surveys. It remains a delicate matter that can be approached by promoting awareness, educating researchers, and encouraging adherence to guidelines.
    Keywords:  Qatar; authorship misuse; ethics; ghost authorship; honorary authorship; research integrity
    DOI:  https://doi.org/10.1177/15562646251395350
  13. Nursing. 2025 Dec 01. 55(12): 37-40
       ABSTRACT: Reporting guidelines indicate the minimum information required in a manuscript when reporting specific types of research, evidence syntheses, quality improvement (QI) projects, and other study types. These guidelines provide a list of content areas for clinicians to include, along with the recommended order in which they should appear, to ensure complete and transparent reporting. Adherence to reporting guidelines can improve the quality of a manuscript describing a research study, an evidence-based practice project, or a QI initiative. This article explains reporting guidelines and discusses their use in preparing manuscripts. It is essential to identify the relevant guidelines before initiating a research study, so that the necessary information is available for dissemination upon completion.
    Keywords:  checklists; dissemination; nursing journals; reporting guidelines; research reports
    DOI:  https://doi.org/10.1097/NSG.0000000000000285
  14. eNeuro. 2025 Nov;12(11): pii: ENEURO.0486-24.2025. [Epub ahead of print]
      Ongoing efforts over the last 50 years have made data and methods more reproducible and transparent across the life sciences. This openness has led to transformative insights and vastly accelerated scientific progress (Gražulis et al., 2012; Munafò et al., 2017). For example, structural biology (Bruno and Groom, 2014) and genomics (Benson et al., 2013; Porter and Hajibabaei, 2018) have undertaken systematic collection and publication of protein sequences and structures over the past half century. These data, in turn, have led to scientific breakthroughs that were unthinkable when data collection first began (Jumper et al., 2021). We believe that neuroscience is poised to follow the same path, and that principles of open data and open science will transform our understanding of the nervous system in ways that are impossible to predict at the moment. New social structures supporting an active and open scientific community are essential (Saunders, 2022) to facilitate and expand the still limited adoption of open science practices in our field (Schottdorf et al., 2024). Unified by shared values of openness, we set out to organize a symposium for open data in neurophysiology (ODIN) to strengthen our community and facilitate transformative open neuroscience research at large. In this report, we synthesize insights from this first ODIN event. We also lay out plans for how to grow this movement, document emerging conversations, and propose a path toward a better and more transparent science of tomorrow.
    Keywords:  code; collaboration; datasets; neurophysiology; open science; sharing
    DOI:  https://doi.org/10.1523/ENEURO.0486-24.2025
  15. Nat Comput Sci. 2025 Nov;5(11): 975
      
    DOI:  https://doi.org/10.1038/s43588-025-00931-5
  16. J Appl Clin Med Phys. 2025 Dec;26(12): e70387
       BACKGROUND: Although medical physics residents frequently engage with academic literature, many have limited exposure to the peer review process from the reviewer's perspective. Mentored peer review offers a structured, accessible opportunity for residents to develop critical appraisal skills, understand scholarly communication workflows, and participate in academic service. However, many faculty mentors and residency programs lack clear guidance on how to support residents through this process. This work outlines how medical physics residency educators can support a mentored peer review experience for medical physics residents.
    LEARNING OBJECTIVES FOR RESIDENTS: (1) identify and explain key components of the peer review process; (2) demonstrate the ability to draft a structured, ethical peer review; and (3) reflect on how peer review supports scholarly growth and professional identity formation.
    DISCUSSION: This article outlines actionable strategies to help mentors integrate peer review into medical physics residency training. Topics include orienting residents to editorial workflows, modeling professional feedback, emphasizing ethics, and building independence. These strategies are grounded in published literature and professional best practices and aim to cultivate both competent reviewers and reflective, engaged contributors to the medical physics scholarly community.
    Keywords:  education; mentorship; peer review; residency; scholarly development
    DOI:  https://doi.org/10.1002/acm2.70387
  17. Cureus. 2025 Oct;17(10): e94465
      The academic publishing landscape increasingly demands precision in research reporting and article classification. However, confusion persists over the distinctions between original studies and systematic, scoping, integrative, and narrative reviews, particularly when studies use secondary or aggregated data. This paper critically examines the defining features of each article type, highlights frequent misconceptions in peer review (e.g., the expectation for systematic data extraction in narrative reviews), and proposes a clear taxonomy based on methodological rigor and knowledge generation. We argue that originality should be defined by creating new knowledge, not by the exclusive use of primary data. Through literature examples and classification criteria, we call for harmonization across journals and editorial policies to improve clarity, transparency, and the integrity of scientific reporting.
    Keywords:  category; journal; medicine; original article; review article; science; taxonomy
    DOI:  https://doi.org/10.7759/cureus.94465
  18. Nutr Rev. 2025 Nov 16. pii: nuaf233. [Epub ahead of print]
      
    Keywords:  education; error; nomenclature; scientific communication; scientific literacy
    DOI:  https://doi.org/10.1093/nutrit/nuaf233