bims-skolko Biomed News
on Scholarly communication
Issue of 2026-04-12
thirty-two papers selected by
Thomas Krichel, Open Library Society



  1. Science. 2026 Apr 09. 392(6794): 133
      Now the ERROR project is promising an additional incentive: a publication.
    DOI:  https://doi.org/10.1126/science.aeh8588
  2. Nature. 2026 Apr 07.
      
    Keywords:  Diseases; History; Publishing
    DOI:  https://doi.org/10.1038/d41586-026-00913-1
  3. Oncotarget. 2026 Feb 06. 17(1): 50-53
    Keywords:  COVID-19; cancer; haematopoietic malignancies; mRNA COVID-19 vaccine
    DOI:  https://doi.org/10.18632/oncotarget.28829
  4. Angle Orthod. 2026 Apr 10. pii: e081525-693.1. [Epub ahead of print]
       Objectives: To determine the rate, characteristics, and reasons for retraction of orthodontic publications, and how often these articles continued to be cited after retraction.
    Materials and Methods: Retracted orthodontic publications were identified via PubMed and the Retraction Watch Database up to January 2025. Each article was screened and categorized using standardized taxonomy. Data collected included journal type, publisher, research design, study theme, country, and gender of the corresponding author, reasons for retraction, and post-retraction citation counts. Retraction rates were calculated using orthodontic output indexed in Scopus from 2010 to 2024.
    Results: A total of 39 retracted articles were identified, yielding a retraction rate of 0.08%. Nearly 28% (n = 11) occurred in 2023. Most retractions (66.7%, n = 26) came from multidisciplinary journals; only 15.4% appeared in orthodontic (n = 5) or dental journals (n = 5). Authors from Asian countries contributed to 76.9% (n = 30) of the retractions, with male corresponding authors making up 80% (n = 32). Clinical and in vitro/in vivo studies were the most affected. Retractions were primarily related to data integrity and editorial misconduct. About 60% (n = 23) continued to be cited after retraction.
    Conclusions: Orthodontic journals show relatively strong integrity but require continued vigilance. Although retractions in orthodontics are rare, they are increasing, particularly in multidisciplinary journals. Persistent citation of retracted research highlights the need for better editorial oversight and researcher awareness.
    Keywords:  Fraud; Orthodontics; Plagiarism; Research misconduct; Retraction
    DOI:  https://doi.org/10.2319/081525-693.1
  5. Int J Gynecol Cancer. 2026 Mar 09. pii: S1048-891X(26)00170-2. 36(5): 104639. [Epub ahead of print]
       OBJECTIVE: Retractions in scientific publishing have increased sharply in the past 2 decades, with more than 10,000 articles withdrawn globally in 2023. Despite this growth, the scope, causes, and temporal patterns of retractions within gynecologic oncology have not been systematically characterized. Understanding these patterns is essential to safeguard research integrity and maintain confidence in the oncologic evidence base.
    METHODS: We conducted a descriptive observational analysis of retracted gynecologic oncology publications using the Retraction Watch Database from its inception and publication outputs indexed in Web of Science between 1989 and 2024. Retracted articles were identified across all gynecologic oncology disease sites, and bibliometric characteristics, study type, country of origin, publisher, citation impact, time to retraction, and stated reasons for retraction were analyzed.
    RESULTS: We identified 220 retracted gynecologic-oncology articles published across 83 journals in all specialties. These retracted publications were cited 4955 times, with median citations per retraction of 6 (range: 0-1855). Ovarian (101, 45.9%), cervical (76, 34.5%), and endometrial cancer (34, 15.5%) were the most represented disease sites, and 126 (57.3%) of retracted articles were basic-science studies. Median time to retraction was 1 year (range: 0-14). Data concerns accounted for the majority of withdrawals (118, 53.6%), followed by compromised peer review (35, 15.9%), image duplication (15, 6.8%), and authorship or ethics issues (15, 6.8%). China accounted for the largest proportion of identified retractions (80.9%), followed distantly by the United States (4.1%) and Japan (2.7%). Two publishers accounted for 47.0% (n = 104) of retractions. When adjusted for overall publication volume (n = 265,102 gynecologic oncology articles), the global rate of retractions rose markedly from 0.7 per 1000 publications in 2000 to 10.1 per 1000 publications in 2024.
    CONCLUSIONS: These findings suggest opportunities to strengthen editorial and institutional safeguards, robust research-integrity training, and systematic implementation of fraud-detection tools to protect the quality of gynecologic-oncology literature.
    Keywords:  Article Retraction; Gynecologic Oncology; Plagiarism; Research Misconduct; Retractions
    DOI:  https://doi.org/10.1016/j.ijgc.2026.104639
  6. J Microbiol Biol Educ. 2026 Apr 10. e0032725
      Open educational resources (OERs) are freely accessible and adaptable teaching materials. In biology, OERs in the form of published lesson plans have steadily increased over the past 20 years. These lesson plans cover core concepts in biology and act as guides to incorporate evidence-based teaching practices into courses. The development and publication of these resources also provide an opportunity for the recognition of teaching-focused scholarship. Journals that publish peer-reviewed OERs provide credit through citable references, allowing authors to be recognized for tenure, promotion, and advancement decisions. Yet, despite this potential for recognition, little is known about how authors perceive the value of these publications or how well they align with institutional reward systems. Here, we surveyed first authors of published peer-reviewed OERs and found that authors across institutional contexts personally valued their publications. However, there are significant differences in how authors at Non-Doctoral and Doctoral-granting institutions perceive how their institutions value OER publications. These results provide a foundation by which biology departments and institutions can strengthen support and recognition for OER authorship. Moreover, having guidance for how peer-reviewed OER publications count for decisions around professional advancement and recognition may be beneficial, especially for faculty in teaching-focused positions.
    Keywords:  institutional support; open educational resources; promotion; survey; teaching-focused; tenure
    DOI:  https://doi.org/10.1128/jmbe.00327-25
  7. Autism. 2026 Apr 09. 13623613261433165
    Researchers' false, incomplete, or missing disclosures of conflicts of interest (COIs) can introduce bias into research, can erode public trust in research findings, and represent ethical violations of most academic journal policies. A 2020 study discovered that publications in applied behavior analysis (ABA) journals are particularly problematic in adherence to COI disclosure ethics. The current study is a 5-year update of this previously conducted study. We examined autism intervention research articles published over a 1-year period in eight ABA journals. Two coders extracted author names and COI disclosure statements from each study and conducted web searches to determine if authors were affiliated with organizations providing ABA services or consulting. One hundred and nineteen studies met our inclusion criteria, from which we compiled a database of 450 authors. Seventy-eight percent of authors held clinical and/or consultancy COIs. At the study level, 93% of studies were written by at least one author with a clinical and/or consultancy COI. Only 8% of studies disclosed any author COIs, and only 2% disclosed clinical and/or consultancy COIs. Ninety-three percent of statements claiming no COIs were false. COIs are increasingly pervasive in ABA autism intervention research, and the vast majority remain undisclosed.
    Lay Abstract: This study looked at how often researchers who publish about autism interventions in journals focused on one type of intervention called Applied Behavior Analysis (ABA) tell readers about their conflicts of interest (COIs). COIs happen when researchers benefit from showing something specific in their research, such as an intervention making things better for autistic people. The COIs we looked at are when researchers also receive money to provide ABA to autistic people or help other researchers provide ABA to autistic people (i.e., they worked as a consultant).
COIs can negatively affect how research is designed, interpreted, and presented. We wanted to see if researchers tell readers about their COIs, or if they say they do not have COIs when they do. We reviewed autism-related intervention papers published over 1 year in eight ABA journals. For every paper, we copied the COI statement. Then, we searched online to see if authors were working as or consulting with ABA service providers. We looked at 119 papers with a total of 450 authors. This study is a five-year update of a 2020 study that found widespread but rarely reported financial COIs among ABA researchers. In our updated study, we found that 78% of authors had a COI. Some worked in ABA clinics, some offered paid consulting to other ABA providers, and some did both. Almost all papers (93%) had at least one author with these kinds of connections. But very few (8%) mentioned any COIs, and only 2% of papers stated that the authors worked as ABA providers or consultants. Most papers said the authors had no conflicts at all, but this was often not true. In fact, 93% of "no COI" statements were false. Although more ABA journals now require disclosure than in the past, many statements are still inaccurate, showing that the problem has not improved. The people in charge of publishing research, and the people who write research papers, need to do much better to let readers know about researchers' COIs.
    Keywords:  applied behavior analysis; autism; conflicts of interest; research ethics
    DOI:  https://doi.org/10.1177/13623613261433165
  8. Int J Med Inform. 2026 Apr 06. pii: S1386-5056(26)00158-9. 214: 106418. [Epub ahead of print]
       BACKGROUND: Artificial intelligence (AI) is increasingly integrated into scholarly publishing workflows, extending beyond manuscript preparation into editorial triage, reviewer assistance, and policy development. Peer review simultaneously faces long-standing problems including reviewer fatigue, bias, opacity, and publish-or-perish incentives. How AI interacts with these structural weaknesses remains unclear.
    OBJECTIVE: To map how AI is currently used in scholarly peer review, synthesize reported benefits and risks, and identify governance and research gaps relevant to health sciences.
    METHODS: A scoping review following Arksey and O'Malley was conducted and reported according to PRISMA-ScR. Scopus, Web of Science, PubMed/MEDLINE, and IEEE Xplore were searched (January 1, 2024-August 31, 2025) using terms combining artificial intelligence and peer review. Grey literature (publisher policies, professional guidelines, editorials) was identified through targeted searches of COPE, ICMJE, WAME, major publisher portals, and preprint servers. Duplicate screening/extraction with adjudication were done. Data were synthesized using inductive thematic analysis.
    RESULTS: Of 2,908 records, 189 met inclusion criteria. AI is used in assistive (triage, reviewer assistance) and autonomous (review generation, prediction) roles. Reported benefits include improved workflow efficiency, standardized checks, and clearer feedback. However, current systems lack domain reasoning and ethical judgment for autonomous evaluation. Key risks are confidentiality breaches when manuscripts are submitted to third-party tools, algorithmic bias favoring elite institutions or male authors, and homogenization of scholarly voice. As of August 31, 2025, governance policies across publishers, journals, and professional societies remain fragmented. In many documented cases, reviewer use of generative AI is more restricted than author-side use; however, policies vary by publisher, journal, and society, and continue to evolve.
    CONCLUSIONS: AI can strengthen peer review when deployed as a transparent, auditable, privacy-preserving support tool under human oversight. Responsible integration in medical informatics requires coordinated governance, bias monitoring, secure infrastructures, and reforms to evaluation incentives.
    Keywords:  Artificial intelligence; Editorial processes; Large language models; Peer review; Research integrity; Scholarly publishing
    DOI:  https://doi.org/10.1016/j.ijmedinf.2026.106418
  9. JMA J. 2026 Mar 16. 9(2): 578-579
      
    Keywords:  ChatGPT; artificial intelligence; paper; regulation; writing
    DOI:  https://doi.org/10.31662/jmaj.2025-0517
  10. Tunis Med. 2025 Nov 01. 103(11): 1577-1584
       INTRODUCTION: Journal selection is a critical step in the scientific publishing process, influencing the visibility, impact, and credibility of the published work. This task has become increasingly complex due to the proliferation of journals, predatory practices, and the diversity of editorial criteria. This narrative review presented an overview of classical tools, artificial intelligence (AI)-driven platforms, and generative models (ChatGPT, Grok) used to recommend suitable journals for an unpublished manuscript.
    METHODS: Six tools were tested (Springer Journal Finder, Jane, Manuscript Matcher, Trinka Journal Finder, ChatGPT, and Grok) using either the abstract or full text of a clinical article on nonspecific low back pain. The results were compared based on thematic relevance, availability of bibliometric indicators, and transparency of the recommendations.
    RESULTS: Classical tools are limited by their narrow editorial scope and the absence of key indicators. AI platforms offer broader coverage but sometimes lack precision for targeted topics. Generative tools stand out for their ability to structure recommendations, although the data provided (impact factor, fees, timelines) are often inaccurate or unverifiable. Several technological biases and algorithmic limitations impact the overall reliability of these systems.
    CONCLUSION: While AI tools expedite initial journal identification, they frequently suggest journals outside the manuscript's scope and provide incorrect journal metrics. These systems function best as exploratory instruments rather than authoritative advisors. The most successful approach positions the researcher as the primary decision-maker who employs computational assistance to survey options while exercising scholarly judgment for final determinations.
    Keywords:  Algorithmic Bias; Bibliometrics; Editorial Ethics; Impact Factor; Information Retrieval; Publication Standards; Research Dissemination; Scimago; Scopus; Software Validation; Web of Science
    DOI:  https://doi.org/10.62438/tunismed.v103i11.6265
  11. PLoS One. 2026; 21(4): e0343163
    Research article abstracts are vital in scientific publications for readers to assess a study's significance. The increasing use of AI tools, such as Kimi, ChatGPT and DeepSeek, to generate abstracts raises concerns about their readability and writing styles compared to human-written ones. The study aims to compare the differences in text readability and writing styles between human-written and AI-generated abstracts. A total of 150 abstracts of high-impact journal articles in the field of linguistics and computer science, 75 from each discipline, and another 150 AI-generated abstracts from the same corpus of articles served as the source texts for analysis. The Readability Scoring System, a computational tool, yielded readability and writing style metrics, while expert evaluation was performed to assess the quality of AI-generated academic abstracts. The quantitative data generated were analysed using SPSS 27 with non-parametric statistical methods. Key findings revealed: (1) AI-generated abstracts exhibited significantly lower readability across eight metrics, indicating greater complexity and lower readability; (2) Discipline-specific analysis showed five differing metrics in linguistics and eight in computer science; (3) Interdisciplinary comparisons revealed non-significant differences across nine readability metrics, highlighting AI's potential to mimic natural writing. However, it still faces challenges in generating lexically diverse content. These results underscore the current limitations of AI in generating readable and human-like abstracts, especially in technical fields.
    DOI:  https://doi.org/10.1371/journal.pone.0343163
  12. Plast Reconstr Surg. 2026 Apr 09.
       BACKGROUND: The rise of artificial intelligence (AI) and large language models in academic writing has raised concerns regarding research integrity and authorship transparency. This study evaluated the prevalence of AI-generated content in plastic surgery publications following the release of ChatGPT.
    METHODS: We conducted a cross-sectional study of 1,627 manuscripts published in 10 major plastic surgery journals between January 2024 and May 2025. ZeroGPT was used to quantify AI-generated content. A baseline threshold for substantial AI involvement (22.5%) was established using 300 pre-ChatGPT manuscripts (2010-2011). Outcomes included the proportion of manuscripts exceeding this threshold, average AI content, and associations with publication year, journal, and evidence level.
    RESULTS: Overall, 21.5% of 2024-2025 articles exceeded the threshold for substantial AI involvement. The median proportion of AI-generated text rose from 7.4% in 2024 to 12.2% in 2025, while the percentage of manuscripts with substantial involvement increased from 17% to 29%. AI involvement varied widely across journals (0-41%). In multivariable analysis, 2025 publication year (OR 1.86, p<0.001) and certain journals were independently associated with substantial AI involvement. Higher evidence level studies demonstrated greater AI involvement, with Level 4 studies showing significantly lower odds than Level 1 (OR 0.47, p=0.001).
    CONCLUSION: More than one in five recent plastic surgery manuscripts contain substantial AI involvement, with marked variation across journals and evidence levels. These findings highlight the need for standardized editorial guidelines governing AI use to maintain research integrity and transparency in plastic surgery literature.
    Keywords:  Artificial intelligence; ChatGPT; large language models; plastic surgery; writing
    DOI:  https://doi.org/10.1097/PRS.0000000000013114
  13. Clin Gastroenterol Hepatol. 2026 Apr 02. pii: S1542-3565(26)00243-0. [Epub ahead of print]
       BACKGROUND AND AIMS: Artificial intelligence (AI) tools, including large language models (LLMs) such as ChatGPT, Claude, and Gemini, are increasingly used in biomedical research. However, practical guidance for integrating these tools into the manuscript writing workflow remains limited. This narrative review provides a structured, step-by-step guide for GI researchers and clinicians seeking to use AI responsibly across the stages of manuscript preparation.
    METHODS: We identified AI tools through literature review, expert peer recommendation, and direct evaluation by the authors. Tools were selected based on relevance to the GI manuscript workflow, public availability, and active maintenance as of March 2026. We evaluated tools across four domains: literature searching, data analysis and statistical computation, table and figure generation, and manuscript drafting and editing. Each tool was assessed using GI-specific examples, including head-to-head comparisons of outputs. Risk levels were assigned based on potential for fabrication, bias, or misuse.
    RESULTS: AI tools demonstrate significant utility in synthesizing and organizing literature and data, generating complex statistical code for data visualization, creating initial drafts of tables and figures, and refining academic writing. Tools vary substantially in maturity, from established infrastructure (reference managers, academic language tools) to narrow-purpose AI research platforms (Elicit, Consensus) to rapidly evolving experimental tools (AI-generated presentations and images). For data analysis, AI can generate statistical code for common visualizations when provided with raw data, though all outputs require independent verification. However, significant limitations persist; reference fabrication rates for general-purpose LLMs remain high, depending on the specificity of the query. Moreover, AI-generated images often lack the anatomical and technical precision required for medical publication.
    CONCLUSIONS: AI tools can meaningfully accelerate the GI manuscript writing workflow when used with appropriate safeguards. We propose a risk-stratified framework and emphasize that human verification of all AI-generated content remains essential. Transparent disclosure of AI use, consistent with current journal policies, should accompany all submissions.
    Keywords:  artificial intelligence; gastroenterology; large language models; manuscript writing; scientific publishing
    DOI:  https://doi.org/10.1016/j.cgh.2026.03.032
  14. Int J Med Sci. 2026; 23(4): 1395-1407
       Background: Biostatistics is essential in personalized medicine, enabling the analysis of complex data, optimizing treatment strategies, and ensuring robust clinical trial designs for patient-specific therapies. The aim of the article was to find out the opinion of Global Burden of Disease Collaborators on the statistical recommendations that should be implemented in medical journals.
    Materials and Methods: The study involved 150 GBD Collaborators who authored articles between 2018 and 2023 under the research coordination of the Institute for Health Metrics and Evaluation. The analysis included 11 statistical recommendations and parameters such as the Hirsch index, number of published articles, and scientific seniority. Additionally, opinions were assessed regarding the percentage of accepted scientific manuscripts that meet statistical validity.
    Results: The key recommendation highlighted by the GBD collaborators is to ensure regular statistical reviews when there is uncertainty about the quality of the authors' analyses (p < 0.001). The remaining recommended guidelines primarily involve the publication of statistical recommendations (50%) and their inclusion on journal websites (53%). The GBD Collaborators, who assert that a lower percentage of accepted articles in medical journals are statistically correct, recommend that authors consult the statistical recommendations posted on journal websites before submitting an article (p = 0.03) and advocate for uniform publication guidelines across journals (p = 0.01).
    Conclusion: More emphasis should be placed on implementing statistical recommendations in medical journals, not just publishing them.
    Keywords:  biostatistics; medical journals; statistical reviews; surveys and questionnaires
    DOI:  https://doi.org/10.7150/ijms.119771
  15. Can J Nurs Res. 2026 Apr 09. 8445621261440315
      The academic nursing community across Canada and worldwide relies on literature with technically accurate terms. Despite this, scientific literature contains 'tortured phrases' (TPs)-linguistically and scientifically inaccurate representations of established technical terms or jargon-that can arise when texts are synonymized using tools based on artificial intelligence. TPs pose a threat to the integrity of nursing literature. We exemplify these issues in this editorial with several health-related examples of TPs that authors, peer reviewers, or editors of nursing journals might encounter in scientific literature they read, cite, peer review, or edit.
    Keywords:  accountability; health literature; jargon; linguistic; scientific and technical accuracy
    DOI:  https://doi.org/10.1177/08445621261440315
  16. Tunis Med. 2025 Nov 01. 103(11): 1565-1571
       BACKGROUND: Publication is crucial for disseminating research findings and advancing scientific knowledge. However, medical researchers in developing countries face significant challenges in publishing their work due to limited resources, mentorship, and access to high-impact journals. This study aimed to identify strategies for successful medical publication, drawing on the experiences of Tunisian researchers.
    METHODS: This perspective-based study combines a comprehensive literature review with expert-facilitated group discussions. A research session held at the Faculty of Medicine of Sousse (Tunisia) brought together 44 participants from diverse medical specialties. The session included group discussions and expert presentations to explore strategies for successful medical publication.
    RESULTS: Key strategies for successful publication were identified, including defining the target manuscript, choosing the appropriate journal, preparing a structured manuscript, ensuring clear and concise writing, following journal-specific guidelines, and adhering to ethical considerations. Other important aspects, such as identifying authorship, avoiding predatory journals, disclosure of conflicts, acknowledgements, cover letters, and responses to peer reviews, were often neglected in the feedback of Tunisian researchers.
    CONCLUSION: Strengthening the publishing capacity of researchers in developing countries requires targeted training programs and institutional support. By implementing best practices in manuscript preparation and submission, researchers can enhance their chances of publication in high-quality medical journals.
    Keywords:  Biomedical Research; Global Health; Low- and Middle-Income Countries; Manuscripts; Medical Writing; Publication
    DOI:  https://doi.org/10.62438/tunismed.v103i11.6091
  17. Adv Rehabil Sci Pract. 2026 Jan-Dec; 15: 27536351261437943
      In today's era of evidence-based medicine, scholarly publishing plays a crucial role in advancing medical knowledge and academic careers. While manuscript (MS) writing has been widely addressed in previous literature, practical guidance for the subsequent submission and publishing process remains relatively underexplored. This article aims to provide a comprehensive roadmap for novice authors navigating the often complex journey of medical manuscript submission. Key steps were discussed, including assessing MS readiness, selecting the appropriate journal, adhering to submission guidelines, preparing a compelling cover letter, and managing the online submission system. The peer review process, responding to reviewer comments, handling rejections, and ensuring ethical conduct were also elaborated. Additional topics such as post-acceptance production, promoting published work, and the ethical use of artificial intelligence were underscored. Emphasis was placed on common pitfalls and actionable advice to improve the overall success and integrity of academic publishing.
    Keywords:  academic writing; peer review; researcher; scientific publishing; submission
    DOI:  https://doi.org/10.1177/27536351261437943
  18. Clin J Oncol Nurs. 2026 Mar 24. 30(2): 139-146
       BACKGROUND: The Oncology Nursing Society has adopted the International Committee of Medical Journal Editors (ICMJE) authorship guidelines for its journals. Real-world application of these guidelines can be challenging if individuals and organizations are not familiar with the guidelines or ethical practices regarding assigning authorship. Ethical assignments of authorship promote professional integrity and scholarship.
    OBJECTIVES: The ICMJE authorship guidelines are reviewed and defined for oncology nurses to develop an understanding of how to apply them within their scholarly work.
    METHODS: Consequences and potential harms of unethical authorship are reviewed via case studies. Implications for practice are discussed to demonstrate how oncology nurses can influence their professional work environments to achieve ethical authorship through application of the guidelines.
    FINDINGS: The ICMJE authorship guidelines provide a framework to guide oncology nurses in achieving ethical authorship in dissemination. Safeguards to promote ethical authorship practices include being aware of the guidelines, discussing authorship at the initiation of a project, continuing discussion during the project, and speaking up when guidelines are disregarded.
    Keywords:  author; authorship; ethics; ghost authorship; honorary authorship; publishing
    DOI:  https://doi.org/10.1188/26.CJON.139-146
  19. Can J Anaesth. 2026 Apr 10.
       PURPOSE: In this study, we sought to evaluate the presence, quality, and accessibility of data sharing statements (DSS) in research articles published in five high-impact anesthesiology journals from 2020 to 2023. Data sharing is foundational to research transparency and reproducibility. As anesthesiology evolves, understanding how DSS are implemented in selected high-impact journals can inform open science efforts within anesthesiology research.
    METHODS: We conducted a cross-sectional study of five top-ranked anesthesiology journals selected using 2023 Clarivate Journal Impact Factor (JIF) rankings. Eligible studies (2020-2023) were screened in duplicate using Rayyan, and data were extracted using a structured Google Form. We used a large language model (ChatGPT, GPT-4) to aid in the exploratory thematic development of DSS, with manual validation by investigators.
    RESULTS: Among 1,123 included articles, DSS prevalence varied by journal and year. In Anaesthesia, Critical Care & Pain Medicine, articles with DSS increased from 15% (4/26) in 2020 to 30% (9/30) in 2023, whereas the prevalence of DSS remained below 8% in Anesthesia & Analgesia. Government-funded studies were more likely to include DSS (β = 0.734, P = 0.047), while higher JIF was negatively associated with DSS inclusion (β = -0.298, P = 0.008). Thematic analysis showed "Conditional Data Availability" was the most frequent DSS type (74%). Of authors contacted, 28% responded, and 14% ultimately agreed to share data for replication.
    CONCLUSIONS: We found that DSS were underused in leading anesthesiology journals. Strengthening journal policies, funder mandates, and education on data sharing practices may promote greater transparency in anesthesia research. Because our analysis focused on a limited sample of journals, findings may not be generalizable to the entire field of anesthesiology.
    Keywords:  anesthesiology; data sharing; data sharing statements; meta-research; transparency
    DOI:  https://doi.org/10.1007/s12630-026-03093-8
  20. J Trauma Acute Care Surg. 2026 Mar 23.
     ABSTRACT: This manuscript synthesizes the authors' anecdotal experience as editors and peer reviewers by summarizing 10 commonly encountered flaws in the presentation of submitted research manuscripts in the field of trauma and acute care surgery that may contribute to editorial decisions to reject. Notably, most of the listed mistakes are preventable through strict adherence to scientific reporting, coherent formatting, and proofreading before submission. (J Trauma Acute Care Surg. 2026;00:00-00. Copyright © 2026 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Association for the Surgery of Trauma.)
    Keywords:  Research integrity; large language models; manuscript preparation; peer review; publication ethics
    DOI:  https://doi.org/10.1097/TA.0000000000004940
  21. Int Dent J. 2026 Apr 08. pii: S0020-6539(26)00137-1. [Epub ahead of print]76(3): 109543
      To evaluate the frequency and types of data availability statements (DASs) in publications in journals indexed in the Dentistry, Oral Surgery and Medicine (DOM) category of the Journal Citation Reports (JCR) database, and to identify risk indicators associated with the presence of DASs. We searched PubMed on October 18, 2024, for publications presenting original research involving human subjects, published after July 1, 2023, in journals indexed in the DOM category of the JCR database. Each included publication was assessed for a DAS, which was categorised into types using Springer Nature's standard DAS framework. Risk indicators at the author, study, and journal levels were extracted. Logistic regression analyses were performed to assess the association between the risk indicators and the presence of DASs. A total of 998 publications were included. Fewer than half (49.7%) of the included publications contained a DAS. The 2 most common DAS types were datasets available from the corresponding author upon reasonable request (40.4%, N = 403) and authors directly providing a repository and/or weblink for the datasets (3.0%, N = 30). The presence of a DAS was significantly associated with the funding status of the publication, the journal impact factor, and the journal's requirement for a DAS. DASs appear infrequent in publications in journals indexed in the DOM category of the JCR database. Funded studies, and studies published in journals that require a DAS or have a higher impact factor, were more likely to contain a DAS than unfunded studies published in lower-impact journals without a DAS requirement.
    Keywords:  Data sharing; Meta-research; Open data; Open science; Research integrity
    DOI:  https://doi.org/10.1016/j.identj.2026.109543
  22. Oncotarget. 2026 Jan 06. 17(1): 173-177
    Scientific Integrity Office at Oncotarget
      
    DOI:  https://doi.org/10.18632/oncotarget.28852
  23. Med Arch. 2026 ;80(1): 4-20
      This year the journal "Medical Archives" celebrates the 80th anniversary of its establishment (1947-2026). The journal was founded in 1947 under the name "Medicinski Arhiv" as the official journal of the Society of Physicians of Bosnia and Herzegovina (B&H). The first Editorial Board consisted of professors of the Faculty of Medicine of the University of Sarajevo, which opened on November 16, 1947: Vladimir Čavka, Blagoje Kovačević, Bogdan Zimonjić, and Ibro Brkić. "Medical Archives" was a key milestone in the education of the academic and professional staff who became the foundation of Bosnian and Herzegovinian medicine as a science and of health care as a profession. The oldest medical journal in B&H was the "Jahrbuch des Bosnisch-Herzegowinischen Landesspitales in Sarajevo" ("Annual of the National Hospital in Sarajevo"), established in 1897 and printed in German from 1897 until 1900. From 1950, "Medicinski Arhiv" was included in the most influential and important index database, MEDLINE, as one of the oldest medical journals in South-Eastern Europe. The journal's first aim was to give young researchers from B&H an opportunity to present their scientific and research work to a wider community. The tradition of publishing "Medical Archives", the most recognizable journal in B&H, was maintained by Editor Professor Izet Masic, who in extraordinary and difficult circumstances (during the wartime in B&H from 1992 until 1995) re-established the journal in 1993 and continued printing the war issues. Fortunately, MEDLINE did not stop receiving "Medicinski Arhiv" or accepting its published articles, and the journal remained indexed in Medline/PubMed, SCOPUS, EMBASE, HINARI, EBSCO, and many other databases. From 2013, articles published in "Medical Archives" have been included in PubMed Central as full papers (in extenso).
Today "Medical Archives" belongs among the most cited journals of the former Yugoslavia countries, with an h-index of 38 in the SCImago ranking (meaning 38 of its published papers have each been cited at least 38 times in other indexed scientific journals worldwide), an SJR of 0.33, and a Q3 ranking in 2024.
    Keywords:  Academy of Medical Sciences of Bosnia and Herzegovina; MEDLINE; PMC; “Medical Archives”
    DOI:  https://doi.org/10.5455/medarh.2026.80.4-20
  24. Can J Hosp Pharm. 2026 ;79(2): e3863
       Background: Open access publishing has broadened research dissemination, but it has also enabled the rise of predatory journals and conferences, posing challenges for health care professionals, including pharmacists.
    Objectives: To analyze unsolicited professional emails received by a hospital pharmacist and to characterize potentially predatory solicitations.
    Methods: All email messages received over a 31-day period in 2024 by a senior Canadian hospital pharmacist involved in research were reviewed and assessed according to 12 indicators of predation, including false impact factors, suggestion to submit manuscript by email, flattery, solicitation for an unrelated field, and short deadlines.
    Results: Of 1228 emails received over the study period, 453 (37%) contained at least one predatory indicator, with a total of 494 distinct solicitations: 347 (70%) for manuscript submission, 116 (24%) for conference attendance, 15 (3%) for republication of a previously published article, 11 (2%) for peer review, and 5 (1%) for webinar participation. The emails contained an average of 3.6 (standard deviation 1.7) indicators.
    Conclusions: More than one-third (37%) of the emails received were predatory in nature, highlighting the scale of the phenomenon.
    Keywords:  ethics; pharmacist; predatory journals; publishing; spam
    DOI:  https://doi.org/10.4212/cjhp.3863