bims-skolko Biomed News
on Scholarly communication
Issue of 2025-02-09
twenty-six papers selected by
Thomas Krichel, Open Library Society



  1. J Korean Med Sci. 2025 Feb 03. 40(4): e83
      Cartoons in scholarly publishing are effective ways to improve communication and engagement. They can transform scientific concepts into visually interesting and easy-to-understand formats, thereby helping the concepts reach larger groups more easily. Benefits may vary by cartoon type (concept cartoons, humor illustrations, and editorial cartoons). It is recommended to maintain relevance, elegance, and quality in cartoon presentations. Potential insults must be carefully avoided to maintain respectful communication. The objectives of the current article were to examine the complex roles of cartoons in scholarly publishing, to provide recommendations for drawing scientific cartoons, and to discuss challenges in incorporating cartoons into scientific publishing.
    Keywords:  Caricature; Cartoon; Medical Writing; Scholarly Communication
    DOI:  https://doi.org/10.3346/jkms.2025.40.e83
  2. J Clin Orthop Trauma. 2025 Apr;63 102918
      The role of medical journal editors is evolving in the 21st century, highlighting the challenges they face in ensuring the quality, accuracy, and relevance of published research. As gatekeepers of medical information, editors are inundated with a high volume of submissions, necessitating efficient triage and a keen understanding of diverse scientific fields. The peer review process presents its own hurdles, particularly in securing timely assessments from busy experts, while ethical considerations around bias and misconduct demand rigorous scrutiny. Additionally, the emergence of artificial intelligence in research poses both opportunities and complications, calling for clear guidelines to maintain the value of human authorship. Overall, the findings emphasize the critical role of editors in sustaining the integrity of medical publishing amidst an increasingly complex landscape.
    Keywords:  Artificial intelligence; Editor; Ethics; Journal; Publication; Research
    DOI:  https://doi.org/10.1016/j.jcot.2025.102918
  3. Nature. 2025 Feb 04.
      
    Keywords:  Government; Public health; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-00367-x
  4. J Biomech. 2025 Jan 31. pii: S0021-9290(25)00071-5. [Epub ahead of print]181 112560
      Articles published in the Journal of Biomechanics still reflect bias, with males positioned as the default in human research. This meta-analysis of the 2024 articles reveals a large disparity in female representation. One in four studies showed an imbalance (<30% female representation) favouring male participants, while only 8% favoured females. Male-only studies outnumbered female-only studies by over fivefold. Of particular concern is that male-only studies often lack justification for their single-gender focus, whereas female-only studies typically provide clear reasoning. This inconsistency not only lacks accountability but also reinforces the notion that male data is the standard in biomechanics research. I named this issue "biasmechanics" to encourage efforts to address it. While there are valid scientific reasons for focusing on specific gender/sex groups, this should not be the default. Authors must consider sex- and gender-based differences, and reviewers and editors should adopt stricter standards for accepting articles with unjustified imbalances. The Journal of Biomechanics could establish standardized guidelines promoting equitable representation in research. Exclusions of any sex or gender must include clear scientific justification in the introduction and methodology sections. The discussion and limitations sections should assess the implications of such exclusions, including their effects on validity, generalizability, and bias. If appropriate, titles and abstracts should clearly indicate single-sex or gender-specific studies to ensure transparency about the research's scope and applicability. By collectively affirming as a scientific community that, except for legitimate scientific justification, we oppose the exclusion of female participants, we can shift the default approach in our research studies.
    Keywords:  Bias; Biasmechanics; Gender; Sex
    DOI:  https://doi.org/10.1016/j.jbiomech.2025.112560
  5. Nature. 2025 Feb 05.
      
    Keywords:  Conservation biology; Ecology; Ethics; Public health; Publishing
    DOI:  https://doi.org/10.1038/d41586-025-00159-3
  6. Nature. 2025 Jan 31.
      
    Keywords:  Authorship; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-00257-2
  7. Clin Hematol Int. 2025; 7(1): 10-13
      High-quality peer review is a cornerstone of credible and impactful scientific and medical publishing. This manuscript provides a comprehensive overview of the best practices, responsibilities, and evaluation criteria for peer reviewers in clinical and translational research. By adhering to high standards of objectivity, rigor and professionalism, peer reviewers support the integrity of scientific research and contribute to the evolution of evidence-based medicine. We elaborate on the principles and structured processes that ensure a thorough, impartial review, aiming to guide reviewers in producing evaluations that enrich the scientific discourse and foster innovation in clinical practice.
    Keywords:  integrity; medical literature; peer-review; reviewer; scientific research
    DOI:  https://doi.org/10.46989/001c.128601
  8. Environ Toxicol Chem. 2025 Feb 01. 44(2): 318-323
      I make six arguments for why double-blind peer review practices increase vulnerability to scientific integrity lapses over more transparent peer review practices: (1) obscuring data from reviewers is detrimental; (2) obscuring sponsorship makes bias harder to detect; (3) author networks can be revealing; (4) undue trust and responsibility are placed on editors; (5) double-blind reviews are not really all that blind; and (6) willful blindness is not the answer to prestige bias. I offer an alternative approach that could provide a more transparent approach for improving scientific integrity and equity in publishing.
    Keywords:  open science; peer review; scientific integrity
    DOI:  https://doi.org/10.1093/etojnl/vgae046
  9. Acad Radiol. 2025 Feb 05. pii: S1076-6332(25)00017-0. [Epub ahead of print]
       RATIONALE AND OBJECTIVES: We aimed to evaluate the efficacy of perplexity scores in distinguishing between human-written and AI-generated radiology abstracts and to assess the relative performance of available AI detection tools in detecting AI-generated content.
    METHODS: Academic articles were curated from PubMed using the keywords "neuroimaging" and "angiography." Filters included English-language, open-access articles with abstracts without subheadings, published before 2021, and within Chatbot processing word limits. The first 50 qualifying articles were selected, and their full texts were used to create AI-generated abstracts. Perplexity scores, which estimate sentence predictability, were calculated for both AI-generated and human-written abstracts. The performance of three AI tools in discriminating human-written from AI-generated abstracts was assessed.
    RESULTS: The selected 50 articles consisted of 22 review articles (44%), 12 case or technical reports (24%), 15 research articles (30%), and one editorial (2%). The perplexity scores for human-written abstracts (median 35.9, IQR 25.11-51.8) were higher than those for AI-generated abstracts (median 21.2, IQR 16.87-28.38) (p=0.057), with an AUC of 0.7794. One AI tool performed worse than chance in distinguishing human-written from AI-generated abstracts, with an accuracy of 36% (p>0.05), while another tool achieved an accuracy of 95% with an AUC of 0.8688.
    CONCLUSION: This study underscores the potential of perplexity scores in detecting AI-generated and potentially fraudulent abstracts. However, more research is needed to further explore these findings and their implications for the use of AI in academic writing. Future studies could also investigate other metrics or methods for distinguishing between human-written and AI-generated texts.
    Keywords:  Artificial intelligence; GPT; Large language model; Natural Language Processing; Perplexity score
    DOI:  https://doi.org/10.1016/j.acra.2025.01.017
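    The detection approach in the item above rests on the perplexity metric: the exponential of the average negative log-probability a language model assigns to each token, so text the model finds more predictable (as AI-generated text often is) scores lower. The following minimal Python sketch, with an invented function name and toy probabilities for illustration only, shows the arithmetic; a real detector would obtain the per-token probabilities from a language model such as GPT-2.

```python
import math

def perplexity(token_probs):
    # Perplexity = exp(mean negative log-probability per token).
    # Lower values mean the model finds the text more predictable.
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Toy example: a model that assigns probability 1/4 to every token
# yields a perplexity of 4 (up to floating-point rounding).
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 6))
```

    Under this metric, a lower score for an abstract suggests the scoring model found it more predictable, which is the signal the study uses to flag possible AI generation.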
  10. J Obstet Gynaecol Res. 2025 Feb;51(2): e16226
       AIM: To determine whether ChatGPT, given appropriate inputs, can generate a manuscript with a "human touch," and, if so, how human writing differs from ChatGPT writing. The presence or absence of a human touch may be what characterizes human writing.
    METHODS: A descriptive study. The first author wrote a Disagreement Letter (Letter 1). Then, disagreement points and "human touch" were provided as input into ChatGPT-4 and tasked with generating a Letter (Letter 2). The authors, seven experienced researchers, and ChatGPT evaluated the readability of Letters 1 and 2.
    RESULTS: The authors, researchers, and ChatGPT, all reached the same conclusions: the human-written Letter 1 and the ChatGPT-generated Letter 2 had similar readability and similarly involved human touch. Some researchers and ChatGPT recognized slight differences in formal or informal and personal or nonpersonal tones between them, which they considered may not affect paper acceptance.
    CONCLUSIONS: The human touch is not humans' exclusive possession. The distinction between human and ChatGPT writing appears to lie not in the output (the manuscript) but in the process of writing, that is, in the presence or absence of the joy of writing. Artificial intelligence should enhance, or at the very least not impede, that joy. This discussion deserves ongoing exploration.
    Keywords:  ChatGPT; artificial intelligence; human; manuscript; writing
    DOI:  https://doi.org/10.1111/jog.16226
  11. Nature. 2025 Feb 04.
      
    Keywords:  Machine learning; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-00343-5
  12. Am J Vet Res. 2025 Jan 30. 1-6
       Objective: To survey academic journals for the presence and clarity of author instructions for submitting veterinary systematic reviews.
    Methods: Instructions to authors for submitting systematic reviews were surveyed across the 10 academic journals publishing the greatest number of veterinary systematic reviews listed in VetSRev, a citation database exclusively listing systematic reviews of topics relevant to veterinary medicine. Two investigators independently reviewed each author instructions section to answer predetermined survey questions. Data were collected and reviewed from October 21, 2023, through April 9, 2024.
    Results: Instructions to authors varied across journals, and the requirements for compliance with established reporting guidelines (eg, Preferred Reporting Items for Systematic Reviews and Meta-Analyses) were inconsistent. Four of 10 journals clearly stated the need to follow systematic reporting guidelines, 4 recommended or encouraged the use of guidelines, and 2 had no specific instructions for systematic reviews or reporting guidelines.
    Conclusions: Instructions for authors submitting veterinary medical systematic reviews are often incomplete or unclear.
    Clinical Relevance: In the absence of clear and consistent journal requirements for compliance with established systematic review reporting guidelines, the risk of publishing bias or misleading systematic reviews may be increased, which may negatively impact clinical decision making. Ensuring clear and concise instructions for authors will improve the quality of evidence and reporting. Greater clarity and consistency of author instructions and reporting requirements across all journals and increasing author awareness of the need to use reporting guidelines will improve the quality of veterinary systematic reviews.
    Keywords:  academia; academic journals; reporting guidelines; systematic reviews; veterinary
    DOI:  https://doi.org/10.2460/ajvr.24.10.0304
  13. J Med Internet Res. 2025 Feb 07. 27 e64069
       BACKGROUND: Data sharing plays a crucial role in health informatics, contributing to improving health information systems, enhancing operational efficiency, informing policy and decision-making, and advancing public health surveillance including disease tracking. Sharing individual participant data in public, environmental, and occupational health trials can help improve public trust and support by enhancing transparent reporting and reproducibility of research findings. The International Committee of Medical Journal Editors (ICMJE) requires all papers to include a data-sharing statement. However, it is unclear whether journals in the field of public, environmental, and occupational health adhere to this requirement.
    OBJECTIVE: This study aims to investigate whether public, environmental, and occupational health journals requested data-sharing statements from clinical trials submitted for publication.
    METHODS: In this bibliometric survey of "Public, Environmental, and Occupational Health" journals, defined by the Journal Citation Reports (as of June 2023), we included 202 journals with clinical trial reports published between 2019 and 2022. The primary outcome was a journal request for a data-sharing statement, as identified in the paper submission instructions. Multivariable logistic regression analysis was conducted to evaluate the relationship between journal characteristics and journal requests for data-sharing statements, with results presented as odds ratios (ORs) and corresponding 95% CIs. We also investigated whether the journals included a data-sharing statement in their published trial reports.
    RESULTS: Among the 202 public, environmental, and occupational health journals included, there were 68 (33.7%) journals that did not request data-sharing statements. Factors significantly associated with journal requests for data-sharing statements included open access status (OR 0.43, 95% CI 0.19-0.97), high journal impact factor (OR 2.31, 95% CI 1.15-4.78), endorsement of Consolidated Standards of Reporting Trials (OR 2.43, 95% CI 1.25-4.79), and publication in the United Kingdom (OR 7.18, 95% CI 2.61-23.4). Among the 134 journals requesting data-sharing statements, 26.9% (36/134) did not have statements in their published trial reports.
    CONCLUSIONS: Over one-third of the public, environmental, and occupational health journals did not request data-sharing statements in clinical trial reports. Among the journals that requested data-sharing statements in their submission guidance pages, more than one quarter published trial reports with no data-sharing statements. These results reveal an inadequate practice of requesting data-sharing statements by public, environmental, and occupational health journals, requiring more effort at the journal level to implement ICMJE recommendations on data-sharing statements.
    Keywords:  ICMJE; International Committee of Medical Journal Editors; clinical trial; clinical trials; data sharing; decision-making; health informatics; journal request; occupational health; patient data; public health
    DOI:  https://doi.org/10.2196/64069
  14. Prehosp Disaster Med. 2025 Feb 05. 1-2
      For 2025, three new additions will be made to the instructions for authors. These are an updated policy on P values, more detailed instructions for educational studies, and the use of existing reporting guidelines for many study designs.
    Keywords:  P values; educational research; instruction for authors; reporting guidelines
    DOI:  https://doi.org/10.1017/S1049023X25000019
  15. BMC Med Res Methodol. 2025 Feb 01. 25(1): 31
    Methodology And Evidence Statistics for Transparency in medical Research and Outcome reporting (MAESTRO) working group
       BACKGROUND: Increasing transparency in clinical research is crucial to avoid misleading conclusions. Registering clinical trials prior to participant enrolment is mandatory, and the publication of trial protocols could further enhance transparency. However, the impact of protocol publication on primary outcomes (PO) and sample sizes (SS) remains unclear. This study aimed to determine the rates of trial protocol publication and registration for a sample of randomized controlled trials (RCTs) and to compare the consistency of published and registered PO and SS.
    METHODS: A search was conducted in MEDLINE via PubMed® for RCT reports indexed in May and June 2023 across various medical specialties, focusing on general and high-impact factor journals. Data were extracted regarding trial registration, protocol publication, and comparisons were made between PO and SS in articles, registries, and published protocols.
    RESULTS: Out of 1119 references, 589 (52.6%) were RCTs. The corresponding protocol was published for 146 RCTs (24.8%), including 40 of 140 (28.6%; 6 without an available end date) published after the trial had ended. Sixty-two (42.4%) protocols were published before the trial's conclusion, with no significant differences in PO and SS between published protocols and their corresponding articles. Five hundred and twenty-eight (89.6%) RCTs were registered; 225 of 510 (44%) were registered before the study start, with no differences in PO and SS between article and registry. Articles published in generalist or high-impact-factor journals were associated with higher frequencies of published protocols and trial registration and a lower frequency of differences in PO and SS between articles, registries, and published protocols.
    CONCLUSIONS: While publishing trial protocols may enhance transparency in the peer review process, the initial registered protocol alone appears sufficient for ensuring consistency in primary outcomes and sample sizes. Protocol publication does not seem to provide additional significant benefits in terms of outcome reporting.
    Keywords:  Protocol publication; RCT; Registration
    DOI:  https://doi.org/10.1186/s12874-025-02471-y
  16. J Exp Psychol Learn Mem Cogn. 2025 Jan;51(1): 1-3
      One of my clearest memories from graduate school is a piece of advice offered by Dr. Janet McDonald, a wise and caring mentor who taught a rigorous first-year statistics course: "You really want to aim for the top-tier cognitive journals, like JEP: LMC." That was my first introduction to what it means to publish in the Journal of Experimental Psychology: Learning, Memory, and Cognition (JEP:LMC). Over the ensuing 2 decades or so, I had the chance to learn more about what makes the journal unique as a contributing author, a reviewer on the Consulting Editor board, and a guest action editor. Based on these experiences, I came to see JEP:LMC as a premier outlet for studies that employ careful methodology, produce informative results, and articulate a clear theoretical framework for interpreting these results and relating them to other phenomena. As editor in chief, I will seek to maintain this high standard and build on the journal's strengths. This document describes the general approach that I plan to take as editor, and I hope to convey information that will be helpful for submitting authors, reviewers, and action editors. In general, I do not plan to make any sweeping changes. I am humbled by the accomplishments of my predecessor, Aaron Benjamin, and a long line of editors in chief before him. JEP: LMC has a consistent record of promoting the practices and values that support solid science, and my main focus will be to uphold this reputation and continue to make progress on research reforms. I see this editorship as a chance to play a small role in ushering in the future of cognitive psychology, a field that is currently in a transition period in terms of both methodological practices and theoretical perspective.
I think JEP:LMC can play a leading role in this transition, and I hope to gently guide authors, reviewers, and editors in the direction of both more rigorous theorizing (ideally relying on formal models) and more sophisticated methodological practices that optimize the value of open science tools and modern approaches to statistical inference. I also hope to expand the pool of contributors by helping a diverse array of early-career scientists learn the keys to success in the journal.
    DOI:  https://doi.org/10.1037/xlm0001457
  17. Am J Med. 2025 Jan 30. pii: S0002-9343(25)00053-1. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.amjmed.2025.01.017