bims-skolko Biomed News
on Scholarly communication
Issue of 2026-02-15
28 papers selected by
Thomas Krichel, Open Library Society



  1. Nature. 2026 Feb;650(8101): 516
      
    Keywords:  Careers; Lab life; Publishing; Research management
    DOI:  https://doi.org/10.1038/d41586-026-00419-w
  2. Ann Med Surg (Lond). 2026 Feb;88(2): 1842-1848
      The rise of predatory journals threatens the integrity of academic publishing by exploiting open-access models and bypassing rigorous peer review. The lack of standardized criteria complicates their identification. This study systematically reviews existing predatory journal lists, assessing their effectiveness in enhancing transparency and safeguarding scholarly publishing. This systematic review adhered to PRISMA guidelines, including lists identifying predatory journals from peer-reviewed sources or reputable organizations. Using relevant keywords, a comprehensive search was conducted across academic databases (PubMed, Web of Science, Scopus, DOAJ), grey literature, and publisher websites. Key variables extracted included governance, accessibility, update mechanisms, and identification criteria. A comparative analysis assessed transparency, evaluation processes, and gaps such as historical tracking and evolving criteria. Descriptive statistics, including frequency, percentage, median, and range, were calculated using SPSS Version 26.0. Ten lists identifying predatory journals were analyzed; six (60.0%) were established after 2017, and nine (90.0%) were publicly accessible. The majority (seven, 70.0%) covered journals and publishers, with nine (90.0%) relying on a manual review process for identification. Delisting criteria were unclear in eight (80.0%) of the lists. Most lists (six, 60.0%) were available in database format. In terms of updating frequency, one list (10.0%) was updated daily, and six lists (60.0%) did not specify their update frequency. While these lists help identify fraudulent publishing practices, inconsistencies in criteria, updates, and delisting reduce their reliability. Standardized methodologies, transparency, and sustained efforts are needed to keep them relevant, ensuring they safeguard academic integrity and guide researchers toward credible publishing.
    Keywords:  Beall list; Cabell list; Kscien list; predatory journal; predatory publisher
    DOI:  https://doi.org/10.1097/MS9.0000000000004733
  3. Gates Open Res. 2026 ;10: 6
       Introduction: Information on journal visibility helps researchers decide where to publish. Some quality indicators used are directly associated with the journal's editorial practices. By understanding the barriers, challenges, and opportunities, this study aims to explore existing editorial practices among African journals, explore the underlying factors affecting the editorial practices of African journals, and understand the views and preferences of authors regarding the choice of journals for publication.
    Methodology: This study triangulated the sources of information and qualitative design data-gathering techniques to allow for nuances and deeper insights into the performance and visibility of African Journals. We conducted In-depth Interviews (IDIs), Key Informant Interviews (KIIs), and Focus Group Discussions (FGDs) in Kenya, Ethiopia, Nigeria and Mozambique. The study population comprised journal editors-in-chief, representatives from African-wide journal databases/indexers, institutional repository representatives, and authors. A purposive sampling technique was used to identify participants. Ethical approval was obtained from the relevant bodies. Qualitative data from the audio-recorded interviews were transcribed using MS Word and exported to NVivo software for analysis.
    Results: The key structural issues on editorial practices among African journals established by the study included adherence to internationally accepted editorial practices on peer review decision-making and challenges in implementing measures of transparency and rigor. Some of the underlying factors affecting African journal editorial practices that were highlighted included financial constraints, challenges in peer review, challenges in maintaining editorial integrity, and challenges in technological and digital infrastructure. African journals also face challenges of credibility and trustworthiness among authors. Participants outlined how the longstanding neglect of African journals and lack of funding have created cultures of editorial mismanagement, publishing inconsistency, and other logistical issues, all of which contribute to perceptions of African journals as inferior to Northern ones.
    Keywords:  Discoverability and indexing of African journals; Editorial practices; Existing capacity; Journal credibility; Journal visibility; Journals trustworthiness; LMIC; Underlying factors
    DOI:  https://doi.org/10.12688/gatesopenres.16376.1
  4. Reprod Biol Endocrinol. 2026 Feb 10. 24(1): 22
      Retraction of scientific papers may occur when the peer-review or publication process is compromised, even in cases where authors have no responsibility for the identified shortcomings. Using a recent case in which a peer-reviewed open-access mega-journal retracted a series of articles due to compromised peer review, including one from our group, this work examines the implications of limited editorial transparency in the retraction process. While failures in peer review can undermine the integrity of the scientific literature, inadequate communication by journal editors may have a substantial negative effect on affected authors, particularly early-career researchers, including disorientation, humiliation, and a sense of perceived injustice. This analysis highlights the factors contributing to these outcomes, such as the sense of loss associated with the substantial time and effort devoted to the research, as well as the practical impossibility of submitting the retracted article to alternative journals. Transparency represents a frontline defence against research misconduct, but the call for increased transparency cannot be one-sided. Transparency needs to be a useful tool for the entire system, for those who report data and for those who publish data.
    DOI:  https://doi.org/10.1186/s12958-026-01523-2
  5. PLoS One. 2026 ;21(2): e0342225
      The causes of the reproducibility crisis include lack of standardization and transparency in scientific reporting. Checklists such as ARRIVE and CONSORT seek to improve transparency, but they are not always followed by authors, and peer review often fails to identify missing items. To address these issues, several automated tools have been designed to check different rigor criteria. We have conducted a broad comparison of 11 automated tools across 9 different rigor criteria from the ScreenIT group. For some criteria, including detection of open data, the comparison showed a clear winner: one tool performed much better than the others. In other cases, including detection of inclusion and exclusion criteria, the combination of tools exceeded the performance of any one tool. We also identified key areas where tool developers should focus their effort to make their tools maximally useful. We conclude with a set of insights and recommendations for stakeholders in the development of rigor and transparency detection tools. The code and data for the study are available at https://github.com/PeterEckmann1/tool-comparison.
    DOI:  https://doi.org/10.1371/journal.pone.0342225
  6. J Am Acad Dermatol. 2026 Feb 05. pii: S0190-9622(26)00138-6. [Epub ahead of print]
      
    Keywords:  Artificial intelligence; academia; academic publishing; dermatology; disclosure; ethics; journalology; plagiarism; scientific integrity; transparency
    DOI:  https://doi.org/10.1016/j.jaad.2026.01.084
  7. EJIFCC. 2026 Feb;37(1): 177-180
       Introduction: Printing allowed the scientific revolution. Scientific journals established peer review. AI is driving the next wave of scientific progress. Ethical aspects of AI in publishing are an emerging area of concern.
    Key issues: AI tools are used in generating papers. This raises questions about authorship and accountability: who is responsible? If AI contributes, should they be credited as authors? Are researchers accountable for AI-generated content? If AI is involved in writing, this should be disclosed to maintain transparency. Otherwise, there could be concerns about misrepresentation or lack of rigor.
    Another consequence concerns intellectual property: if AI generates portions of a paper, who owns the rights to that work? Frameworks for intellectual property were designed for human creators, so they may need to be rethought. Many journals require a written statement regarding AI use. AI use in publishing could exacerbate inequality in research access, leading to a divide between well-funded and less-funded institutions. Global inequality in science could sharpen: AI might skew research toward countries with more technological resources.
    AI can be used to assist peer review. This challenges peer review integrity: relying on AI could undermine the integrity of human oversight. AI does not replace but complements reviewers' expertise. AI-driven tools might lack nuanced human understanding. Over-reliance on AI could compromise publishing quality.
    Conclusion: AI offers possibilities to speed up and to improve scientific publishing, but it is essential to judge and to address the ethical implications. This requires guidelines and rules warranting an honest, transparent, and ethical approach to publishing.
    Keywords:  artificial intelligence; ethics; medical publishing
  8. Nature. 2026 Feb 13.
      
    Keywords:  Computer science; Peer review; Research data
    DOI:  https://doi.org/10.1038/d41586-025-03967-9
  9. J Minim Invasive Gynecol. 2026 Feb 11. pii: S1553-4650(26)00105-6. [Epub ahead of print]
      The rapid integration of Large Language Models (LLMs) into biomedical publishing necessitates clear ethical frameworks, yet current policies in obstetrics and gynecology remain heterogeneous. This study systematically evaluated the "Instructions for Authors" and ethical guidelines of ten high-impact obstetrics and gynecology journals, representing Elsevier, Wolters Kluwer, Oxford University Press, and Wiley, to identify areas of consensus and critical policy gaps. Data collected through January 4, 2026, revealed that while some journals rely on broad publisher mandates, others maintain specific, divergent guidelines. Universal consensus exists regarding the prohibition of artificial intelligence (AI) authorship, the requirement for full human accountability, and the exclusion of AI from peer review to preserve confidentiality. However, significant variation was identified in disclosure protocols, the permissibility of AI for content drafting versus linguistic editing, and the management of AI-generated visual content. Critical gaps remain regarding standardized taxonomies, bias mitigation, and post-publication accountability. To address these disparities, we propose a unified, tiered framework, categorizing AI use as prohibited, supervised, or allowed, alongside a standardized disclosure statement. Harmonizing these standards is essential to maintain research integrity and reduce author confusion, while preserving scientific rigor and trust in obstetrics and gynecology research.
    Keywords:  Academic publishing; Artificial intelligence; Editorial policy; Gynecology; Research integrity
    DOI:  https://doi.org/10.1016/j.jmig.2026.02.014
  10. Int J Gynaecol Obstet. 2026 Feb 14.
      Manuscript review is crucial to scientific progress. Although artificial intelligence (AI) may appeal to time-pressed clinicians for its efficiency, peer review should be performed by "peers", not AI. While AI use in writing has gained wider permission, its use in reviewing remains more strictly regulated. Some argue that this inequity and reviewer shortages justify broader AI use in review. However, the long-term impact of AI on publishing and human cognition remains unknown. Given this uncertainty, strict regulation of AI use in peer review should be maintained until its usefulness and safety are confirmed.
    Keywords:  ChatGPT; artificial intelligence; author; review; reviewer
    DOI:  https://doi.org/10.1002/ijgo.70885
  11. J Obstet Gynecol Neonatal Nurs. 2026 Feb 09. pii: S0884-2175(26)00006-7. [Epub ahead of print]
      JOGNN's Editor in Chief examines the responsible use of citations and references, with attention to the influence of artificial intelligence and common errors.
    DOI:  https://doi.org/10.1016/j.jogn.2026.01.004
  12. JMIR AI. 2026 Feb 11. 5: e84322
       BACKGROUND: Peer review remains central to ensuring research quality, yet it is constrained by reviewer fatigue and human bias. The rapid rise in scientific publishing has worsened these challenges, prompting interest in whether large language models (LLMs) can support or improve the peer review process.
    OBJECTIVE: This study aimed to address critical gaps in the use of LLMs for peer review of papers in the field of organ transplantation by (1) comparing the performance of 5 recent open-source LLMs; (2) evaluating the impact of author affiliations (prestigious, less prestigious, and none) on LLM review outcomes; and (3) examining the influence of prompt engineering strategies, including zero-shot prompting, few-shot prompting, tree of thoughts (ToT) prompting, and retrieval-augmented generation (RAG), on review decisions.
    METHODS: A dataset of 200 transplantation papers published between 2024 and 2025 across 4 journal quartiles was evaluated using 5 state-of-the-art open-source LLMs (Llama 3.3, Mistral 7B, Gemma 2, DeepSeek r1-distill Qwen, and Qwen 2.5). The 4 prompting techniques (zero-shot prompting, few-shot prompting, ToT prompting, and RAG) were tested under multiple temperature settings. Models were instructed to categorize papers into quartiles. To assess fairness, each paper was evaluated 3 times: with no affiliation, a prestigious affiliation, and a less prestigious affiliation. Accuracy, decisions, runtime, and computing resource use were recorded. Chi-square tests and adjusted Pearson residuals were used to examine the presence of affiliation bias.
    RESULTS: RAG with a temperature of 0.5 achieved the best overall performance (exact match accuracy: 0.35; loose match accuracy: 0.78). Across all models, LLMs frequently assigned manuscripts to quartile 2 and quartile 3 while avoiding extreme quartiles (quartile 1 and quartile 4). None of the models demonstrated statistically significant affiliation bias, though Gemma 2 (P=.08) and Qwen 2.5 (P=.054) approached significance. Each model displayed unique "personalities" in quartile predictions, influencing consistency. Mistral had the highest exact match accuracy (0.35) while also having the lowest average runtime (1246.378 seconds) and computing resource use (7 billion parameters). While accuracy was insufficient for independent review, LLMs showed value in supporting preliminary triage tasks.
    CONCLUSIONS: Current open-source LLMs are not reliable enough to replace human peer reviewers. The largely absent affiliation bias suggests potential advantages in fairness, but these benefits do not offset the low decision accuracy. Mistral demonstrated the greatest accuracy and computational efficiency, and RAG with a moderate temperature emerged as the most effective prompting strategy. If LLMs are used to assist in peer review, their outputs require nonnegotiable human supervision to ensure correct judgment and appropriate editorial decisions.
    Keywords:  AI; artificial intelligence; bias; large language models; peer review; prompt engineering; retrieval-augmented generation; scholarly publishing; transplantation
    DOI:  https://doi.org/10.2196/84322
  13. Postgrad Med J. 2026 Feb 09. pii: qgag003. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1093/postmj/qgag003
  14. Appl Environ Microbiol. 2026 Feb 10. e0006626
      This commentary focuses on my experiences as an editor of Applied and Environmental Microbiology (AEM) from 1988 to 1996. I reflect on the challenges of the pre-internet world when all communications, including paper manuscripts, traveled by post. It was a time when editors chose the reviewers without the "benefit" of reviewers suggested by the authors. I describe the advantages and disadvantages of being an editor in those times, using seven memorable papers as examples of our efforts to advance both the field and our esteemed journal. I do so in the hope that this perspective brings relevance to present-day authors, reviewers, and editors.
    Keywords:  editor; reviewer
    DOI:  https://doi.org/10.1128/aem.00066-26
  15. J Neuropsychol. 2026 Feb 09.
      Involving people with lived experience in research (patient and public involvement or co-production) is one principle of open research (transparent research practices). Involvement of experts by experience helps ensure that clinical and health research is relevant, ethical and accessible. While public contributors are likely to view co-production as important, what do public contributors know and think about other open research practices (e.g., pre-registration, data sharing)? We carried out a mixed methods online survey investigating what public contributors already know and would like to know about different open research practices, working with public contributors to shape the study. The 64 participants had contributed a range of lived experience to research and were passionate about the benefits of co-production. Although many participants did not know the term 'open research', they rated specific practices as familiar and important, seeing the moral imperative. Participants described the balance of practical benefits (e.g., efficiency, transparency) and potential risks (e.g., data sharing, pre-prints). Some practices (e.g., pre-registration) were less well understood, and participants learnt more about open research from the survey. Most participants were interested in learning more, and over 70% indicated an interest in further training. Overall, there is a need and an opportunity to share accessible information and training about open research with those who contribute their lived experience to research. This has the potential to improve research involvement and co-production, as well as the quality and applicability of research more broadly.
    Keywords:  Co‐production; data sharing; involvement; open research; open science; transparency
    DOI:  https://doi.org/10.1111/jnp.70034
  16. BMC Med Ethics. 2026 Feb 11.
      
    Keywords:  Acute febrile illness; Data sharing; Epidemic setting; Genetic data sharing; Non-epidemic setting; PEARL barriers; Sample sharing
    DOI:  https://doi.org/10.1186/s12910-026-01399-2
  17. J Community Hosp Intern Med Perspect. 2025 ;15(6): 1-5
      JCHIMP's Editor-in-Chief and a member of the Editorial Board acknowledge the importance of authors and peer reviewers to the success of JCHIMP in 2024. They discuss the value of peer review and how it satisfies ACGME directives for scholarship. Publication costs and sources of publication funding are also examined.
    Keywords:  ACGME scholarly activity; Cost of publications; Peer review
    DOI:  https://doi.org/10.55729/2000-9666.1558
  18. Br J Anaesth. 2026 Feb 09. pii: S0007-0912(26)00035-8. [Epub ahead of print]
      Social media has fundamentally transformed anaesthesia education, research dissemination, and professional networking. The British Journal of Anaesthesia uses a multi-platform strategy overseen by a dedicated Social Media Editor and Fellows. Despite challenges including misinformation, artificial intelligence-generated content, and platform fragmentation, social media remain essential for bridging the gap between research publications and clinical practice while fostering global academic communities.
    Keywords:  Bluesky; FOAMed; Twitter; X; medical education; social media
    DOI:  https://doi.org/10.1016/j.bja.2026.01.011
  19. J Voice. 2026 Feb 07. pii: S0892-1997(26)00033-0. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.jvoice.2026.01.032
  20. Ann Jt. 2026 ;11: 15
      
    Keywords:  Evidence-based medicine; academic publishing; research integrity; scholarly criticism; scientific methodology
    DOI:  https://doi.org/10.21037/aoj-2025-1-91