bims-skolko Biomed News
on Scholarly communication
Issue of 2025-04-20
twenty-six papers selected by
Thomas Krichel, Open Library Society



  1. PLoS One. 2025;20(4): e0320334
     METHODS: A short survey was distributed to 40,402 authors of papers cited in Wikipedia (n=21,854 surveys sent, n=750 complete responses received). The survey gathered published authors' views on the trustworthiness of Wikipedia's citations to their published works. The survey findings were analysed using a mix of quantitative and qualitative methods in Python, Google BigQuery, and Looker Studio.
    RESULTS: Overall, authors expressed positive sentiment towards research citation in Wikipedia and researcher engagement practices (mean scores >7/10). Sub-analyses revealed significant differences in sentiment based on publication type (articles vs. books) and discipline (Humanities and Social Sciences vs. Science, Technology, and Medicine), but not access status (open vs. closed access).
    CONCLUSIONS: This study provides unique insights into author perceptions of Wikipedia's trustworthiness. Further research is needed to deepen the understanding of the benefits for researchers and publishers of including academic citations in Wikipedia.
    DOI:  https://doi.org/10.1371/journal.pone.0320334
  2. J Law Med Ethics. 2025 Apr 14. 1-7
      Retracted research publications reached an all-time high in 2023, and COVID-19 publications may have higher retraction rates than other publications. To better understand the impact of COVID-19 on the research literature, we analyzed 244 retracted publications related to COVID-19 in the PubMed database and the reasons for their retraction. Peer-review manipulation (18.4%) and error (20.9%) were the most common reasons for retraction, and retraction occurred far more quickly than in the past (13.2 months, compared with 32.9 months in a 2012 study). Publications focused on controversial topics were retracted rapidly (mean time to retraction 10.8 months) but continued to receive media attention, suggesting that retraction alone may be insufficient to prevent the spread of scientific misinformation. More than half of the retractions resulted from problems that could have been detected prior to publication, including compromise of the peer review process, plagiarism, authorship issues, lack of ethics approvals, or journal errors, suggesting that more robust screening and peer review by journals can help to mitigate the recent rise in retractions.
    Keywords:  COVID-19; misconduct; peer review; research integrity; retractions
    DOI:  https://doi.org/10.1017/jme.2025.33
  3. J Oral Maxillofac Pathol. 2025 Jan-Mar;29(1): 137-139
      One of the growing concerns in scientific publishing is the rise of cloned journals. With increasing pressure on academic institutions for publications, many faculty members fall victim to these cloned journals, leading to not only financial loss but also the misuse of valuable research data. Post-graduate students are particularly vulnerable, making awareness and education about this issue crucial. Therefore, it is of paramount importance for faculty members to be aware of such scams and take steps to protect themselves from academic embarrassment. This short communication discusses the identification of cloned journals, government initiatives, and proper methods for verifying legitimate submission websites.
    Keywords:  Journal cloning; malpractice; oral pathology; publication
    DOI:  https://doi.org/10.4103/jomfp.jomfp_528_23
  4. Int J Gynecol Cancer. 2024 May;34(5): 669-674
       OBJECTIVE: To determine if reviewer experience impacts the ability to discriminate between human-written and ChatGPT-written abstracts.
    METHODS: Thirty reviewers (10 seniors, 10 juniors, and 10 residents) were asked to differentiate between 10 ChatGPT-written and 10 human-written (fabricated) abstracts. For the study, 10 gynecologic oncology abstracts were fabricated by the authors. For each human-written abstract, we generated a matching ChatGPT abstract using the same title and the fabricated results of the human-written abstract. A web-based questionnaire was used to gather demographic data and to record the reviewers' evaluation of the 20 abstracts. Comparative statistics and multivariable regression were used to identify factors associated with a higher correct identification rate.
    RESULTS: The 30 reviewers each evaluated 20 abstracts, giving a total of 600 abstract evaluations. The reviewers correctly identified 300/600 (50%) of the abstracts: 139/300 (46.3%) of the ChatGPT-generated abstracts and 161/300 (53.7%) of the human-written abstracts (p=0.07). Human-written abstracts had a higher rate of correct identification (median (IQR) 56.7% (49.2-64.1%) vs 45.0% (43.2-48.3%), p=0.023). Senior reviewers had a higher correct identification rate (60%) than junior reviewers and residents (45% each; p=0.043 and p=0.002, respectively). In a linear regression model including the experience level of the reviewers, familiarity with artificial intelligence (AI), and the country in which the majority of medical training was achieved (English speaking vs non-English speaking), the experience of the reviewer (β=10.2 (95% CI 1.8 to 18.7)) and familiarity with AI (β=7.78 (95% CI 0.6 to 15.0)) were independently associated with the correct identification rate (p=0.019 and p=0.035, respectively). In a correlation analysis, the number of publications by the reviewer was positively correlated with the correct identification rate (r(28)=0.61, p<0.001).
    CONCLUSION: A total of 46.3% of abstracts written by ChatGPT were detected by reviewers. The correct identification rate increased with reviewer and publication experience.
    Keywords:  Gynecologic Surgical Procedures
    DOI:  https://doi.org/10.1136/ijgc-2023-005162
  5. Aesthetic Plast Surg. 2025 Apr 14.
       BACKGROUND: Since its 2022 release, ChatGPT has gained recognition for its potential to expedite time-consuming writing tasks like scientific writing. Well-written scientific abstracts are essential for clear and efficient communication of research findings. This study aims to explore ChatGPT-4's capability to produce well-crafted abstracts.
    METHODS: Ten abstract-less plastic surgery articles from PubMed were uploaded to ChatGPT, each with a prompt to generate one abstract. Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES) were calculated for all abstracts. Additionally, three physician evaluators blindly assessed the ten original and ten ChatGPT-generated abstracts using a 5-point Likert scale. Results were compared and analyzed using descriptive statistics with mean and standard deviation (SD).
    RESULTS: The original abstracts averaged an FKGL of 14.1 (SD 2.9) and an FRES of 25.2 (SD 14.2), while ChatGPT-generated abstracts had scores of 15.6 (SD 2.4) and 15.4 (SD 13.1), respectively. Collectively, evaluators identified two-thirds of the ChatGPT abstracts, but preferred the ChatGPT abstracts 90% of the time. On average, the evaluators found the ChatGPT abstracts to be more "well written" (4.23 vs. 3.50, p value < 0.001) and "clear and concise" (4.30 vs. 3.53, p value < 0.001) compared to the original abstracts.
    CONCLUSIONS: Despite a slightly higher reading level, evaluators generally preferred ChatGPT abstracts, which received higher ratings overall. These findings suggest ChatGPT holds promise in expediting the creation of high-quality scientific abstracts, potentially enhancing efficiency in research and scientific writing tasks. However, due to its exploratory nature, this study calls for additional research to validate these promising findings.
    LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
    Keywords:  Abstract; Artificial intelligence (AI); ChatGPT; Plastic surgery; Research; Scientific writing
    DOI:  https://doi.org/10.1007/s00266-025-04836-6
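The two readability metrics in the study above are simple closed-form formulas. As a minimal sketch, the functions below compute the standard Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES) from word, sentence, and syllable counts; the example counts are invented for illustration, not taken from the study, and the counting itself would normally be done by a text-analysis tool.

```python
# Standard Flesch readability formulas, computed from raw counts.
# The counts passed in are hypothetical example values.

def fkgl(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: higher means harder to read."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def fres(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease Score: lower means harder to read."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Example: a 250-word abstract with 10 sentences and 450 syllables.
print(round(fkgl(250, 10, 450), 1))  # 15.4
print(round(fres(250, 10, 450), 1))  # 29.2
```

With these illustrative counts, the scores land in the same range the study reports for dense scientific abstracts (FKGL around 14-16, FRES well below 30).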
  6. Adv Simul (Lond). 2025 Apr 18. 10(1): 22
      Generative artificial intelligence (AI) tools have been selectively adopted across the academic community to help researchers complete tasks in a more efficient manner. The widespread release of the Chat Generative Pre-trained Transformer (ChatGPT) platform in 2022 has made these tools more accessible to scholars around the world. Despite their tremendous potential, studies have uncovered that large language model (LLM)-based generative AI tools have issues with plagiarism, AI hallucinations, and inaccurate or fabricated references. This raises legitimate concern about the utility, accuracy, and integrity of AI when used to write academic manuscripts. Currently, there is little clear guidance for healthcare simulation scholars outlining the ways that generative AI could be used to legitimately support the production of academic literature. In this paper, we discuss how widely available, LLM-powered generative AI tools (e.g. ChatGPT) can help in the academic writing process. We first explore how academic publishers are positioning the use of generative AI tools and then describe potential issues with using these tools in the academic writing process. Finally, we discuss three categories of specific ways generative AI tools can be used in an ethically sound manner and offer four key principles that can help guide researchers to produce high-quality research outputs with the highest of academic integrity.
    Keywords:  Academic writing; Artificial intelligence; ChatGPT; Ethics; Large language models
    DOI:  https://doi.org/10.1186/s41077-025-00350-6
  7. Int J Gynecol Cancer. 2024 Oct;34(10): 1495-1498
      
    Keywords:  Carcinoma, Ovarian Epithelial; Cervical Cancer; Uterine Cancer; Vulvar and Vaginal Cancer
    DOI:  https://doi.org/10.1136/ijgc-2024-005691
  8. Cell Rep Med. 2025 Apr 15;6(4): 102080
      Analyses of large-scale health data in biomedical data science can help uncover new treatments and deepen our understanding of disease and fundamental biology. Here we examine the balance between ethical and responsible data sharing and open science practices that are essential for reproducible research in biomedical data science.
    DOI:  https://doi.org/10.1016/j.xcrm.2025.102080
  9. BMJ. 2025 Apr 14;389: e081123
       BACKGROUND: Well designed and properly executed randomised trials are considered the most reliable evidence on the benefits of healthcare interventions. However, there is overwhelming evidence that the quality of reporting is not optimal. The CONSORT (Consolidated Standards of Reporting Trials) statement was designed to improve the quality of reporting and provides a minimum set of items to be included in a report of a randomised trial. CONSORT was first published in 1996, then updated in 2001 and 2010. Here, we present the updated CONSORT 2025 statement, which aims to account for recent methodological advancements and feedback from end users.
    METHODS: We conducted a scoping review of the literature and developed a project-specific database of empirical and theoretical evidence related to CONSORT, to generate a list of potential changes to the checklist. The list was enriched with recommendations provided by the lead authors of existing CONSORT extensions (Harms, Outcomes, Non-pharmacological Treatment), other related reporting guidelines (TIDieR) and recommendations from other sources (eg, personal communications). The list of potential changes to the checklist was assessed in a large, international, online, three-round Delphi survey involving 317 participants and discussed at a two-day online expert consensus meeting of 30 invited international experts.
    RESULTS: We have made substantive changes to the CONSORT checklist. We added seven new checklist items, revised three items, deleted one item, and integrated several items from key CONSORT extensions. We also restructured the CONSORT checklist, with a new section on open science. The CONSORT 2025 statement consists of a 30-item checklist of essential items that should be included when reporting the results of a randomised trial and a diagram for documenting the flow of participants through the trial. To facilitate implementation of CONSORT 2025, we have also developed an expanded version of the CONSORT 2025 checklist, with bullet points eliciting critical elements of each item.
    CONCLUSION: Authors, editors, reviewers, and other potential users should use CONSORT 2025 when writing and evaluating manuscripts of randomised trials to ensure that trial reports are clear and transparent.
    DOI:  https://doi.org/10.1136/bmj-2024-081123
  10. BMC Cancer. 2025 Apr 17. 25(1): 720
     BACKGROUND: Medical writing services, initially developed to streamline manuscript preparation, have raised ethical concerns due to their association with industry influence and spin. While prevalent in oncology and malignant hematology clinical trials, medical writing involvement in review articles remains underexplored, particularly in the hematology literature. Furthermore, conflicts of interest of the writers may also affect the content of review articles. This study investigates the prevalence, characteristics, and funding sources of medical writing in malignant hematology review articles and their relationship with financial conflicts of interest (CoI) among authors.
    METHODS: We conducted a cross-sectional analysis of review articles published in the five-year period between January 2019 and December 2023 in the ten highest-rated hematology journals (by 2023 Journal Citation Report Impact Factor). Inclusion criteria encompassed narrative and systematic reviews, guidelines, and clinical advice articles, excluding studies focused solely on benign hematology or basic science.
    RESULTS: Among 663 included reviews, medical writing involvement was disclosed in 5.7% of articles; in no instance was the medical writer included as a co-author. In one journal, as many as 21% of review articles disclosed medical writing assistance. Medical writers were primarily industry-sponsored (89%). Reviews on plasma cell malignancies had the highest medical writing usage (11%). Direct CoIs were identified in 28% and 34% of first and last authors, respectively, rising to 71% in drug-specific reviews. Only one journal had explicit policies regulating medical writing in reviews.
    CONCLUSIONS: Although the prevalence of medical writing in malignant hematology review articles remains low, at least one journal had over 20% of review articles disclosing medical writer usage. Review articles about specific drugs are often written by authors with direct payments from the manufacturer of the drug in question.
    Keywords:  Malignant hematology; Medical writing; Reviews
    DOI:  https://doi.org/10.1186/s12885-025-14137-5
  11. PLoS One. 2025 ;20(4): e0320347
      This study compares the geographical and disciplinary coverage of OA journals in three databases: OpenAlex, Scopus and the Web of Science (WoS). We used the Directory of Open Access Scholarly Resources (ROAD), provided by the ISSN International Centre, as a reference to identify active OA journals (as of May 2024). Among the 62,701 active OA journals listed in ROAD, the WoS indexes 6,157 journals, Scopus indexes 7,351, while OpenAlex indexes 34,217. A striking observation is the presence of 24,976 OA journals exclusively in OpenAlex, whereas only 182 journals are exclusively present in the WoS and 373 in Scopus. The geographical analysis focuses on two levels: continents and countries. For the disciplinary comparison, we use the ten disciplinary levels of the ROAD database. Our findings reveal a similarity in OA journal coverage between the WoS and Scopus. However, while OpenAlex offers better inclusivity and indexing, it is not without biases. The WoS and Scopus predictably favor journals from Europe, North America and Oceania. Although OpenAlex presents a much more balanced indexing, certain regions and countries remain relatively underrepresented. Notably, Africa is proportionally as underrepresented in OpenAlex as it is in Scopus, and some emerging countries are proportionally less represented in OpenAlex than in the WoS and Scopus. These results underscore a marked similarity in OA journal indexing between the WoS and Scopus, while OpenAlex aligns more closely with the distribution observed in the ROAD database, although it also exhibits some representational biases.
    DOI:  https://doi.org/10.1371/journal.pone.0320347
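Exclusive-coverage counts like those reported above (e.g. 24,976 journals found only in OpenAlex) reduce to set differences over each database's list of journal identifiers. A toy sketch with invented placeholder ISSNs, not real journal identifiers:

```python
# Toy illustration of exclusive-coverage comparison across three indexes.
# A journal is "exclusive" to one database if it appears in that database's
# set but in neither of the other two. ISSNs below are invented placeholders.

openalex = {"1111-1111", "2222-2222", "3333-3333", "4444-4444"}
scopus   = {"2222-2222", "3333-3333", "6666-6666"}
wos      = {"3333-3333", "7777-7777"}

only_openalex = openalex - scopus - wos
only_scopus   = scopus - openalex - wos
only_wos      = wos - openalex - scopus

print(sorted(only_openalex))  # ['1111-1111', '4444-4444']
print(sorted(only_scopus))    # ['6666-6666']
print(sorted(only_wos))       # ['7777-7777']
```

The study's reference set is ROAD; in practice each database's set would first be intersected with the ROAD list before computing the differences.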
  12. J Pediatr Health Care. 2025 Apr 15. pii: S0891-5245(25)00066-5. [Epub ahead of print]
      The new health policy department for the Journal of Pediatric Health Care (JPHC) will be referred to as Child Health Policy Perspectives, and abbreviated as Policy Perspectives. A major goal for this new department is to invigorate JPHC readers as policy advocates for all pediatric populations and their families in government, community, healthcare delivery, education, research, and quality improvement projects. Another goal is for National Association of Pediatric Nurse Practitioners members and all pediatric-focused nurse practitioners to submit their policy analyses for publication in the JPHC. The U.S. Centers for Disease Control and Prevention's Policy Analytical Framework is the recommended framework for all manuscript submissions, and the CDC's Overview of Policy Process will be used to guide authors in developing health policy manuscripts. The intent is to publish health policy articles that improve the health of pediatric populations.
    Keywords:  National health policies; child health policies; global health policies; reporting guidance
    DOI:  https://doi.org/10.1016/j.pedhc.2025.03.003
  13. Eur J Orthod. 2025 Apr 08;47(3): cjaf019
       BACKGROUND/OBJECTIVES: The inclusion of a participant flow diagram in randomized clinical trials (RCTs) is a requirement of the CONSORT guidelines. The aim of this study was to assess the reporting quality of flow diagrams of RCTs published in orthodontic journals in relation to the CONSORT Flow Diagram for Parallel Group RCTs.
    MATERIALS/METHODS: RCTs published between January 2011 and December 2023 in five orthodontic journals were identified and trial characteristics were extracted. The reporting of the flow diagram (if included) was assessed for completeness against the CONSORT flow diagram template. Descriptive statistics and cross tabulations between RCT characteristics and the presence/absence of a flow diagram were performed. On an exploratory basis, univariable associations between RCT characteristics and the presence/absence of a flow diagram were examined, and univariable logistic regression was used to assess the effect of publication year on flow diagram reporting.
    RESULTS: Three hundred and thirty-four RCTs met the inclusion criteria. The majority were published in 2021 (n = 39, 11.7%) and had 2 arms (n = 279, 83.5%). Three hundred and seven (92.0%) RCTs were published in journals endorsing the CONSORT guidelines. Two hundred and thirty-three (69.8%) RCTs included a flow diagram; of these, 48.1% (n = 112) were fully compliant with flow diagram reporting and 121 (51.9%) omitted at least one item of the CONSORT reporting template. Significant associations were observed between the presence/absence of a flow diagram and journal type, CONSORT endorsement by authors, ethical approval status, presence of a published protocol, significance of the primary outcome, involvement of a statistician, presence of conflict of interest, center type, and type of analysis undertaken. Across the study timeframe, the odds of inclusion of an RCT flow diagram increased per additional year (OR: 1.47; 95% CI: 1.34, 1.61; p < .001).
    LIMITATIONS: Only five orthodontic journals were examined.
    CONCLUSIONS/IMPLICATIONS: Despite improvements over time, the inclusion and reporting of CONSORT flow diagram for parallel group RCTs in trials published in orthodontic journals requires improvement. To mitigate potential biased interpretation of trial results, journal editors should ensure a complete CONSORT flow diagram is submitted by researchers.
    Keywords:  CONSORT statement; flow diagram; orthodontics; randomized controlled trials
    DOI:  https://doi.org/10.1093/ejo/cjaf019
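The per-year odds ratio reported above (OR = 1.47) compounds multiplicatively: each additional year multiplies the odds of a flow diagram being included by 1.47. A short sketch of that arithmetic; the 50% baseline probability is an assumed illustration, not a figure from the study.

```python
# Compounding a per-year odds ratio into a predicted probability.
# baseline_prob is a hypothetical starting probability, not study data.

def prob_after_years(baseline_prob: float, odds_ratio: float, years: int) -> float:
    odds = baseline_prob / (1 - baseline_prob)  # probability -> odds
    odds *= odds_ratio ** years                 # apply the per-year OR
    return odds / (1 + odds)                    # odds -> probability

# From an assumed 50% baseline, five years at OR 1.47 per year:
print(round(prob_after_years(0.5, 1.47, 5), 3))  # 0.873
```

This also shows why an odds ratio is not a risk ratio: the probability rises from 0.5 to about 0.87, not by a factor of 1.47 per year.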
  14. J Exp Psychol Learn Mem Cogn. 2025 Apr 14.
      This short review summarizes the ways in which articles published in Journal of Experimental Psychology: Learning, Memory, and Cognition have changed over the past 25 years, with a special focus on the 6 years of my recently completed editorial term (2018-2024). We evaluated the content of articles in the journal with respect to areas of priority outlined in my inaugural Editorial (Benjamin, 2019), including sample sizes, statistical approaches, and a number of other factors. Enhancements that stand to increase replicability, reproducibility, and open scientific exchange are evident but in certain areas are more modest than others. Establishing changes to a scientific culture requires consistent assays of the field and its behaviors, as well as a long time horizon for measuring change.
    DOI:  https://doi.org/10.1037/xlm0001487
  15. JB JS Open Access. 2025 Apr-Jun;10(2): e24.00166
       Introduction: The PubMed database is used by many organizations as the benchmark for quality publications. This study aimed to identify quality metrics distinguishing orthopaedic journals indexed in PubMed from nonindexed journals. A second aim was to compare metrics of orthopaedic journals indexed in other major databases vs. nonindexed journals. We hypothesized that indexed orthopaedic journals would have several measurable attributes differentiating them from nonindexed journals.
    Methods: A list of all current orthopaedic journals in publication in 2021 was compiled. The journals were characterized based on their index status in PubMed, Master Journal List, Journal Citation Reports (JCR), MEDLINE, or Directory of Open Access Journals. Various journal attributes were collected and compared by indexed status. Each variable's association with indexed journals was determined through statistical analysis.
    Results: Of 478 evaluated journals, 271 were indexed by PubMed. Univariate analysis demonstrated significant associations between PubMed indexing and society affiliation, physician editorial leadership, print version availability, subscription availability, impact factor (IF) listed in JCR, use of Creative Commons licenses, Committee on Publication Ethics (COPE) membership, journals that require an article processing charge (APC), and earlier year of first issue (all with p < 0.001). Logistic regression of journals listed in any index demonstrates that COPE membership had the highest impact on indexed status (odds ratio = 34.19; 95% confidence interval 5.69-105.59; p < 0.001). The regression model also demonstrated that society affiliation, subscription availability, MD designation on website for editors, year of first issue, and number of physician editors have significant associations with indexed journals.
    Conclusion: COPE membership was the most distinguishing characteristic of indexed orthopaedic journals. Physician editorial leadership and society affiliation were also strong predictors of a journal having indexed status. In addition, having a print version and/or subscription available, use of Creative Commons licenses, higher IF, longer publication history, and higher APC were associated more frequently with indexed journals. Authors should consider these factors when submitting articles to a journal and use caution when submitting to journals that do not meet these quality metrics.
    DOI:  https://doi.org/10.2106/JBJS.OA.24.00166
  16. Rehabilitacion (Madr). 2025 Apr 16. pii: S0048-7120(25)00029-5. [Epub ahead of print]59(2): 100909
      
    DOI:  https://doi.org/10.1016/j.rh.2025.100909
  17. J Vitreoretin Dis. 2025 Apr 15. 24741264251331647
      The thoughtful process of peer review allows for the vetting and improvement of scientific work that leads to quality research and, eventually, advancement of the field of retina. Progress in medicine would not occur without dedicated researchers, but just as important are the peer reviewers who take the time to assess their work, weigh in on its validity, and help bring these papers to life. Here, we discuss how to effectively review a journal manuscript in a way that helps both the beginning reviewer and the seasoned expert develop a framework for providing meaningful peer review. Although this guide was specifically written with JVRD aims in mind, these suggestions can be broadly applied to the process of manuscript review in general.
    Keywords:  JVRD; peer review; research; retina
    DOI:  https://doi.org/10.1177/24741264251331647