bims-skolko Biomed News
on Scholarly communication
Issue of 2025-12-07
thirty-six papers selected by
Thomas Krichel, Open Library Society



  1. J Korean Med Sci. 2025 Dec 01. 40(46): e300
     BACKGROUND: Scientific medical research has progressed tremendously during the last 50 years, but concerns about research integrity, publishing ethics, and retraction trends have grown. Retractions are essential for correcting the scientific record and maintaining credibility, yet extensive long-term assessments of retracted medical publications remain limited.
    METHODS: We performed a descriptive analysis of 50 years of retracted medical publications from the Retraction Watch Database. Data were refined to encompass solely medicine-related retractions, omitting corrections, expressions of concern, and reinstatements. We classified retraction reasons into 68 categories, emphasizing the top 10 most frequently encountered reasons. Temporal trends were evaluated employing semi-logarithmic linear regression models. The geographical distribution and journal-specific retractions were also examined.
    RESULTS: An analysis was conducted on 16,041 retracted medical documents from 1975 to 2024. The leading reasons for retraction included data concerns (31.47%), fraud (11.37%), peer review issues (11.21%), referencing issues (7.54%), and ethical violations (7.09%). The largest numbers of retractions were noted in Computational and Mathematical Methods in Medicine (5.91%), Journal of Healthcare Engineering (5.85%), and Evidence-Based Complementary and Alternative Medicine (4.36%). Approximately 45.28% of retracted papers included at least one author from China, followed by the United States and India. The medical subfields most impacted were oncology (19.87%), cardiovascular medicine (15.62%), and pharmacology (14.49%). Temporal analysis indicated a steady rise in retractions, with retractions for data concerns and fraud doubling on average every 5.5 and 5.2 years, respectively.
    CONCLUSION: The rising number of retractions underscores heightened scrutiny and enhanced detection techniques while highlighting ongoing research integrity issues. Data integrity, fraudulent activities, and compromised peer review remain major concerns. Fortifying editorial policies, augmenting transparency, and bolstering research ethics education are essential for reducing misconduct and maintaining the integrity of medical publications.
    Keywords:  Editorial Policies; Ethics; Medicine; Peer Review; Publishing; Research Ethics; Retraction Notice; Retraction of Publication
    DOI:  https://doi.org/10.3346/jkms.2025.40.e300
  2. Genomics Inform. 2025 Dec 02. 23(1): 26
      Artificial intelligence (AI)-assisted scientific writing is now a common practice in academic publishing, yet concerns persist regarding the authenticity and reproducibility of AI-generated content. While AI tools offer significant advantages, particularly for non-native English speakers who face substantial linguistic barriers in scientific communication, the risk of AI hallucinations and fabricated citations threatens the integrity of scholarly discourse. Journals often require disclosure of the entire AI prompt rather than of the author's meaningful intellectual contributions, but this is becoming increasingly impractical as prompts grow longer and more complex. In this paper, I argue that transparency in AI-assisted writing should focus on capturing the author's core research perspective and section-specific key points: the foundational elements that drive meaningful scientific communication. To address this challenge, I developed a web-based tool that implements a human-in-the-loop approach, requiring authors to define their research perspective and create detailed outlines with key points before any AI text generation occurs. The tool mitigates AI hallucination by allowing only user-provided citations and by generating transparency reports documenting the key elements used for text generation. I validated this approach by writing this paper using the tool itself, demonstrating how the transparency reporting method works in practice. This methodology ensures that AI serves as a linguistic tool rather than a content generator, preserving scientific integrity while democratizing access to high-quality academic writing across linguistic and cultural boundaries.
    Keywords:  AI-assisted writing; Language barriers; Reproducibility; Scientific integrity; Transparency
    DOI:  https://doi.org/10.1186/s44342-025-00057-0
  3. J Allied Health. 2025;54(4): e437
      Artificial intelligence (AI) has increasingly been integrated into medical publishing. As expressed in a recent publication, serious concerns persist regarding ethical implications, authorship attribution, and content reliability. Clearly, AI has a significant role to play in the realm of publications in periodicals, including the Journal of Allied Health (JAH). Some members of the JAH editorial board recently considered the following kinds of questions: When a paper is submitted, is it worthwhile to determine whether the author(s) used AI in any way when preparing it? If so, how relevant is it to identify (a) which portions of the manuscript are involved and (b) whether proper acknowledgements are included in the list of references? Similarly, should these concerns also be applied to peer reviewers, so that it becomes apparent which portions of their respective analyses represent genuine personal contributions and which have been acquired from AI?
  4. Nature. 2025 Dec;648(8092): 9
      
    Keywords:  Computer science; Publishing; Scientific community
    DOI:  https://doi.org/10.1038/d41586-025-03909-5
  5. Facts Views Vis Obgyn. 2025 Dec 03.
      
    Keywords:  Academia; accountability; artificial intelligence; authorship; evidence-based care
    DOI:  https://doi.org/10.52054/FVVO.2025.276
  6. Anaesth Crit Care Pain Med. 2025 Nov 27. pii: S2352-5568(25)00246-2. [Epub ahead of print] 101714
      
    DOI:  https://doi.org/10.1016/j.accpm.2025.101714
  7. POCUS J. 2025 Nov;10(2): 7-10
      Peer review is a fundamental element of the modern scientific publishing process. It serves an important role in evaluating the quality of research and refining submitted manuscripts into accurate and impactful contributions to the existing scientific literature. Over the last two decades, opportunities for publication have skyrocketed, and the demand for peer reviewers has grown exponentially. Although the peer review process provides significant benefits, recruiting individuals as peer reviewers can be challenging. The most common obstacles include the time commitment needed to provide meaningful reviews and uncertainty about how to prepare a cohesive and beneficial peer review. This article offers prospective peer reviewers structured guidance to build confidence and enable them to perform effective reviews.
    Keywords:  Critical appraisal; Manuscript evaluation; Peer review; Reviewer guidance; Scientific publishing
    DOI:  https://doi.org/10.24908/pocusj.v10i02.20046
  8. Sex Health. 2025 Dec 03. pii: SH25165. [Epub ahead of print]
      Generous peer review is essential to the scientific review process, yet many researchers experience the current review culture as unkind. Harsh reviews disproportionately affect women, researchers of color, and other under-represented groups, contributing to reduced confidence, diminished job satisfaction, and delayed career progression. In contrast, generous reviewing strengthens research quality, promotes equity, and reinforces the collaborative foundations of science. This article outlines four principles to guide a more generous approach to manuscript review: empathizing with authors, providing constructive criticism, focusing on areas of genuine expertise, and advocating for systemic improvements in academic publishing. Empathy encourages reviewers to recognize the emotional and professional labor behind every submission, framing feedback in a manner that supports learning rather than discouragement. Constructive criticism emphasizes specificity, actionable guidance, and a distinction between major and minor concerns. Reviewing within one's expertise enhances accuracy and protects against misplaced or misleading critique, while acknowledging limitations promotes humility and transparency. Finally, reviewers are well positioned to promote broader equity, including advocating for fair article processing charge policies, inclusive language support, and greater recognition of reviewer labor. Cultivating generosity in peer review does not diminish rigor; it enhances the effectiveness, fairness, and impact of scholarly communication. By adopting these principles, reviewers can help shift the culture of peer review towards one that encourages improvement, supports diverse researchers, and advances high-quality science.
    DOI:  https://doi.org/10.1071/SH25165
  9. J Prof Nurs. 2025 Nov-Dec;61: 142-159. pii: S8755-7223(24)00058-9
      Requesting manuscript feedback from a friend or colleague plays an important role in preparing a manuscript for journal submission. Writers benefit from honest feedback to identify strengths and weaknesses of a manuscript. A critical friend balances the roles of critic and friend, enabling the writer to view their manuscript from a different perspective. This article reviews the critical friend literature related to providing manuscript feedback. Suggestions for selecting a critical friend, developing a writing feedback group, and providing quality feedback are discussed. Benefits and challenges exist when developing critical friendships for manuscript review.
    Keywords:  Critical friend; Feedback; Manuscript; Publishing; Writing; Writing groups
    DOI:  https://doi.org/10.1016/j.profnurs.2024.04.005
  10. Am Surg. 2025 Dec 03. 31348251405197
      The American Surgeon, as the official journal of the Southeastern Surgical Congress (SESC), serves as a central component of the society's educational mission to support the development of surgeons, trainees, and clinical educators across the southeastern United States. Historically, manuscripts originating from annual meeting presentations were reviewed by publication committees rather than through formal peer review. As the journal matured and its citation record and Impact Factor grew in importance, The American Surgeon adopted a uniform policy requiring that all submissions, whether meeting-derived or independently submitted, undergo full anonymized peer review. This transition heightened expectations for scholarly rigor and placed new demands on trainees and clinical faculty whose primary responsibilities often center on education and patient care rather than research. To meet this challenge while preserving the society's educational ethos, the journal introduced a structured Pre-Peer Review (Pre-PR) process grounded in four evaluative elements: Suitability Assessment, Editorial Domain Alignment, Readability and Language Assessment, and Positioning, Contextualization, and Currency. Together, these components provide a pedagogically oriented framework that improves manuscript clarity, coherence, and relevance prior to peer review. Authors receive a concrete revision plan and a concise coaching note that identifies actionable steps for strengthening their work. This approach transforms what might otherwise be a discouraging editorial barrier into a formative experience that enhances the quality of submissions and fosters scholarly growth. By integrating structured coaching with transparent editorial standards, The American Surgeon advances both its mission as a peer-reviewed journal and its longstanding commitment to the educational objectives of the SESC.
    Keywords:  editorial domain; peer review; position, context, currency; readability and language; suitability assessment
    DOI:  https://doi.org/10.1177/00031348251405197
  11. Eur Rev Aging Phys Act. 2025 Dec 02. 22(1): 24
       OBJECTIVE: This reporting guideline was developed to address the gap in methodological reporting standards for trials investigating physical exercise or training in older adults, aiming to enhance the quality, transparency, and replicability of such research. The aim is to improve the reporting of key elements, including population characteristics, intervention components [e.g., Frequency, Intensity, Time, Type (FITT) principles, tailoring, use of technology], study design and methods (e.g., recruitment, randomization, statistical analysis), as well as study results, including outcomes and adherence measures.
    METHODS: A six-stage process was used to develop this guideline. This included a three-round Delphi process involving experts from a large European network (COST Action PhysAgeNet), a comprehensive literature review of existing reporting guidelines, consensus meetings with international experts, and validation with journal editors who evaluated and refined the guideline.
    RESULTS: The final PETIO guideline includes an expanded checklist of items to report in the context of physical exercise interventions in older adults. Experts and editors agreed on essential items for improving quality, transparency, and replicability, such as intervention components (FITT) and setting, tailoring aspects, adverse events, and control group specifications. Notably, it was also emphasized that standardized reporting is critical for future meta-analyses and the implementation of future research protocols.
    CONCLUSION: The guideline is expected to support researchers, peer reviewers, and journal editors in improving the quality and transparency of research on physical exercise interventions in older adults.
    RELEASE DATE: 2025 (original version).
    AVAILABILITY: The guideline is freely accessible online in the supplemental material.
    Keywords:  CONSORT; Consensus; Consistency; Delphi; Exercise intensity; Older adults
    DOI:  https://doi.org/10.1186/s11556-025-00390-x
  12. Quant Plant Biol. 2025;6: e33
      The impact factor has become a defining feature of scientific journals. However, such reductionism can be toxic to science. As Quantitative Plant Biology, published by Cambridge University Press, celebrates its 5-year anniversary and its first impact factor, this is an opportunity to set things straight. It is a call to value what a scientific journal is about: a community of scientists, a guarantee of rigour and quality, an invitation to explore the complexity of our world, a fair and ethical environment, and an engaging, diverse, and creative arena.
    Keywords:  Goodhart law; impact factor; publishing; quantitative plant biology; systems thinking
    DOI:  https://doi.org/10.1017/qpb.2025.10024
  13. Implement Sci. 2025 Dec 04.
     BACKGROUND: Qualitative methods are central to implementation research. Qualitative research provides rich contextual insight into lived experiences of health and illness, healthcare systems and care delivery, and complex implementation processes. However, quantitative methods have historically been favored by editors and reviewers who serve as gatekeepers to scientific knowledge. Thus, we underscore that editors and reviewers must be familiar with the underlying principles and strengths of qualitative methods to avoid perpetuating inappropriate evaluation criteria that hinder qualitative research dissemination and funding opportunities. We aim to help authors and researchers provide sufficient detail to dispel misperceptions, and to help editors and reviewers better evaluate studies using qualitative methods, thereby maximizing dissemination of high-impact implementation research.
    METHODS: We convened a panel of six researchers with extensive experience in: designing, conducting, and reporting on qualitative research in implementation science and other healthcare research; training and mentoring others on qualitative methods; and serving as journal editors and manuscript/grant peer reviewers. We reviewed existing literature, published and unpublished reviewer critiques of qualitative grants and manuscripts, and discussed challenges facing qualitative methodologists when disseminating findings. Over the course of one year, we identified candidate topics, ranked each by priority, and used a consensus-based process to finalize the inventory and develop written guidance for handling each topic.
    RESULTS: We identified and dispelled 10 common misperceptions that limit the impact of qualitative methods in implementation research. Five misperceptions were associated with the application of inappropriate quantitative evaluation standards (subjectivity, sampling, generalizability, numbers/statistics, interrater reliability). Five misperceptions were associated with overly prescribed qualitative evaluation standards (saturation, member checking, coding, themes, qualitative data analysis software). For each misperception, we provide guidance on key considerations, responses to common critiques, and citations to appropriate literature.
    CONCLUSIONS: Unaddressed misperceptions can impede the contributions of qualitative methods in implementation research. We offer a resource for editors, reviewers, authors, and researchers to clarify misunderstandings and promote more nuanced and appropriate evaluation of qualitative methods in manuscripts and grant proposals. This article encourages a balanced assessment of the strengths of qualitative methods to enhance understanding of key problems in implementation research and, ultimately, to strengthen the impact of qualitative findings.
    Keywords:  Evaluation standards; Implementation science; Mixed methods; Qualitative analysis; Qualitative methods; Rigor; Subjectivity
    DOI:  https://doi.org/10.1186/s13012-025-01474-z
  14. Sci Data. 2025 Dec 02.
      Scientific data has become a cornerstone of contemporary biomedical research, yet the academic recognition of data contributions remains underexplored. In this study, we leveraged the open-access biomedical literature in PMC (PubMed Central) to identify GEO (Gene Expression Omnibus) datasets and extract their associated original papers. By examining the authorship relationships between dataset contributors and paper authors, we quantitatively assessed the academic recognition of scientific data. Our findings reveal that approximately 80% of dataset contributors play pivotal roles in their respective original papers, either as first or corresponding authors, with this proportion continuing to rise. This trend highlights the growing importance of data collection, processing, and analysis in the research process, along with its increasing recognition by the scientific community. Furthermore, we observed that high-impact journals invest more resources in enhancing data quality, thereby improving research credibility, academic influence, and overall research outcomes. These results underscore the gradual shift toward recognizing the value of scientific data work, which is critical for advancing research quality.
    DOI:  https://doi.org/10.1038/s41597-025-06340-7
  15. Cleft Palate Craniofac J. 2025 Dec 02. 10556656251398916
    OBJECTIVE: To propose recommendations for ethical, participant-centered clinical data sharing in craniofacial research.
    DESIGN: Series of deliberative multidisciplinary expert working group meetings to develop recommendations.
    SETTING: Two 1-h virtual meetings; one all-day hybrid meeting.
    PATIENTS, PARTICIPANTS: Working group (n = 16) comprised individuals with expertise in craniofacial research, bioethics, and patient/caregiver advocacy, as well as lived experience of craniofacial conditions.
    INTERVENTIONS: The working group first reviewed prior empirical data about research participant attitudes about data sharing, then drafted initial recommendations that were built on the data and the group's collective expertise. Recommendations were iteratively refined until the group agreed upon their final presentation.
    MAIN OUTCOME MEASURES: Working group endorsement of recommendations.
    RESULTS: The working group produced 16 recommendations that addressed considerations for primary and secondary researchers, data repositories, and the craniofacial research community across 5 domains. These domains address: (1) research team communication with participants, (2) data collection and protections, (3) data governance, (4) education for researchers, and (5) remaining research gaps. Recommendations highlight the importance of prioritizing the experiences of those with lived experience of craniofacial conditions in decision-making about data sharing, navigating varied perspectives on privacy protections for facial images, and striving to implement trustworthy data sharing and governance practices.
    CONCLUSIONS: This summary of recommendations offers guidance for the craniofacial research community to advance participant-centered clinical data sharing practices. Ethical data sharing that accounts for participants' experiences and values has potential to advance scientific research and improve outcomes for individuals with craniofacial conditions, their families, and their communities.
    Keywords:  ethics/health policies; health policies; pediatrics
    DOI:  https://doi.org/10.1177/10556656251398916
  16. Clin Microbiol Infect. 2025 Nov 27. pii: S1198-743X(25)00602-0. [Epub ahead of print]
      
    Keywords:  data sharing; open science; open-access databases
    DOI:  https://doi.org/10.1016/j.cmi.2025.11.030
  17. J Exp Psychol Hum Percept Perform. 2025 Dec;51(12): 1623-1625
      The year 2025 marked the 50th anniversary of the Journal of Experimental Psychology: Human Perception & Performance (JEP:HPP). JEP:HPP started as a standalone journal in January 1975 under the editorship of Michael Posner. The semicentennial birthday is a special occasion and warrants special recognition. To celebrate, the editorial team curated a series of articles that explored the impact, the reach, and the value of research published in JEP:HPP. The articles were published throughout 2025. While scientific journals are often evaluated through metrics like impact factor, our celebratory articles show that the influence of JEP:HPP extends far beyond such simple measures. The 27 articles that made up the celebratory series demonstrated the vast reach and diverse influence of research published in JEP:HPP, crossing millennia from Plato to modern feminism to address questions ranging from perception of beauty (Grzywacz, 2025), music (Prpic, 2025; Sears, 2025), human reasoning (Fischhoff, 2025), and language acquisition (Nazzi, 2025) to attention (Olivers et al., 2025; Sauter, 2025; Zhang et al., 2025), representation of space (Yamamoto & Phillbeck, 2025), mental imagery (Martarelli & Mast, 2025), working memory (Olivers et al., 2025), cognitive (Logan, 2025) and attentional control (Montakhaby Nodeh, 2025), as well as social perception and action (Ferier & Heurley, 2025; Hafri & Papeo, 2025; Oswald, 2025). The anniversary series of articles included seven invited literature reviews, three editorial perspectives, and 17 readers' perspectives. Each contribution gave a window into a finding, a researcher, and a time. We got a peek into research from 50 years ago: the ways in which "subjects" were tested, data were plotted, and graphs were physically printed. The authors shared the story of data, how they came about, and how they continued to "live" in the literature. This capture of time, from 1975 to today, was one of the motivating factors in planning the celebratory series. The idea was to honor not only the journal but also people connected to the journal over those 50 years: those who led the journal, those who published in the journal, and, of course, those who read the journal. Thus, as the series celebrated the contributions of research published in JEP:HPP, it also celebrated the community of researchers making JEP:HPP, the diversity of our research questions, opinions, methods, and data, as well as the unity with which we converge in our keen interest in understanding the human mind. The editorial team received enthusiastic contributions from both established and up-and-coming junior scientists who are just embarking on a career journey similar to that taken by their predecessors. And while the two groups may differ in their training and methodological affinities, they appear to share the same passion and fire for scientific discovery (PsycInfo Database Record (c) 2025 APA, all rights reserved).
    DOI:  https://doi.org/10.1037/xhp0001361
  18. J Burn Care Res. 2025 Dec 06. pii: iraf225. [Epub ahead of print]
      Quality improvement (QI) is essential to advancing burn care, yet most locally successful QI initiatives are not disseminated beyond individual centres. Although QI activity is common in burn care, only a small proportion of projects progress to peer-reviewed publication. This restricts shared learning and slows the spread of evidence-based, context-adaptable practices. We highlight persistent barriers to QI publication, including unclear reporting expectations and limited reviewer familiarity with improvement methodology. To address this gap, we propose three practical strategies for burn centres: intentionally developing 1-2 publishable QI projects annually, adopting SQUIRE 2.0 as a reporting scaffold, and expanding QI-trained peer reviewer capacity. We also present a 10-Point QII Scoring Framework to guide project planning and scholarly dissemination.
    Keywords:  Burn care; Implementation science; Multidisciplinary systems; PDSA cycles; Quality improvement; SQUIRE guidelines
    DOI:  https://doi.org/10.1093/jbcr/iraf225
  19. Aesthetic Plast Surg. 2025 Dec 05.
       BACKGROUND: Generative artificial intelligence (AI) is rapidly transforming scientific publishing, offering new opportunities in surgical planning, imaging, and research. Plastic surgery's reliance on visual documentation creates unique ethical and methodological challenges, yet the extent of generative AI policy adoption in specialty journals is unclear.
    METHODS: We reviewed author guidelines from 30 leading plastic surgery journals (April 24, 2025) identified through the 2023 Journal Citation Reports, SCImago Journal Rank, and Google Scholar Metrics. Two reviewers independently assessed each journal for generative AI policy presence and characteristics, including permitted uses, disclosure requirements, authorship restrictions, and geographic distribution.
    RESULTS: Fourteen journals (46.7%) had generative AI-related policies, most prohibiting AI authorship (85.7%) and allowing language assistance. Disclosure requirements were inconsistent: only 28.6% of policies explained how to disclose use, and 64.3% specified a disclosure location. References to large language models or AI-generated images appeared in 28.6% of policies, and 35.7% required naming the tool used. Policies were more common among US-based journals (64.3%) and absent in South America and Africa.
    CONCLUSION: This study presents the first specialty-specific analysis of generative AI use policies in plastic surgery journals. Fewer than half of major plastic surgery journals have generative AI policies, with significant variation in scope and global disparities that may create inequitable publication standards. Specialty-specific guidelines addressing AI-generated visual content, surgical planning, and aesthetic analysis are needed to maintain transparency and research integrity while enabling safe, ethical integration of generative AI into plastic surgery research.
    LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
    Keywords:  Artificial intelligence; Editorial policies; Generative artificial intelligence; ICMJE; Research ethics
    DOI:  https://doi.org/10.1007/s00266-025-05468-6
  20. J Exp Psychol Hum Percept Perform. 2025 Dec;51(12): 1619-1622
      The Journal of Experimental Psychology: Human Perception and Performance is commemorating its 50th anniversary. To celebrate, this commentary examines the adoption of open science practices as a function of the editorial policies implemented in the journal over a period of 8 years (2016-2023). Between 2016 and 2017, no open science policy was in effect. Accordingly, the rates of materials/data/code sharing and preregistration were nearly zero in published articles. In 2018, the policy changed to require sample size justification and to recommend the inclusion of open science practices. This produced almost 100% compliance with sample size justification between 2019 and 2020 and an increase in the adoption of all open science practices. Finally, in 2021, the journal adopted the Transparency and Openness Promotion (TOP) guidelines. Between 2022 and 2023, the adoption of all open science practices further increased, with over 88% of articles sharing their data and about half sharing the analysis code. This analysis shows that editorial policies can play a pivotal role in driving authors toward more transparent and replicable practices in their published articles. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
    DOI:  https://doi.org/10.1037/xhp0001299
  21. J Diabetes Sci Technol. 2025 Dec 01. 19322968251399653
      There is a plethora of medical journals, including many devoted to diabetes. Only a limited number of these journals are listed in databases such as PubMed. A number of other diabetes journals approach potential authors directly and ask for submission of manuscripts. They promise rapid publication; however, one wonders what kind of impact these journals have and how seriously they handle the review process. One also wonders what the economic basis (i.e., the business model) of these journals is, as the publication fees can be considerable. Apparently, some journals only pretend to publish manuscripts, and publication never actually happens, even though the authors have paid the publication fee. In the same vein, the quality of the publications in these journals is at least questionable.
    Keywords:  diabetes; fraud; impact factor; professional journals; quality
    DOI:  https://doi.org/10.1177/19322968251399653