bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒06‒30
25 papers selected by
Thomas Krichel, Open Library Society



  1. Reg Anesth Pain Med. 2024 Jun 27. pii: rapm-2024-105490. [Epub ahead of print]
      BACKGROUND: Peer review represents a cornerstone of the scientific process, yet few studies have evaluated its association with scientific impact. The objective of this study is to assess the association of peer review scores with measures of impact for manuscripts submitted and ultimately published.
    METHODS: 3173 manuscripts submitted to Regional Anesthesia & Pain Medicine (RAPM) between August 2018 and October 2021 were analyzed; those containing an abstract were included. Articles were categorized by topic, type, acceptance status, author demographics, and open-access status. Each article was scored as the mean of its initial peer review recommendations, with each reviewer's recommendation assigned a number: 5 for 'accept', 3 for 'minor revision', 2 for 'major revision', and 0 for 'reject' (a minimal sketch of this scoring appears after this entry). Articles were further classified by whether any reviewer recommended 'reject'. Rejected articles were analyzed to determine whether they were subsequently published in an indexed journal; their citations were compared with those of accepted articles when the publishing journal's impact factor was less than 1.4 points below RAPM's impact factor of 5.1 (ie, above 3.7). The main outcome measure was the number of Clarivate citations within 2 years of publication. Secondary outcome measures were Google Scholar citations within 2 years and Altmetric score.
    RESULTS: 422 articles met inclusion criteria for analysis. There was no significant correlation between reviewer rating score and 2-year Clarivate citations (r=0.038, p=0.47), Google Scholar citations (r=0.053, p=0.31), or Altmetric score (p=0.38). There was no significant difference in 2-year Clarivate citations between accepted manuscripts (median (IQR) 5 (2-10)) and rejected manuscripts subsequently published in journals with impact factors >3.7 (median 5 (2-7); p=0.39). Altmetric score was significantly higher for RAPM-published papers than for RAPM-rejected ones (median 10 (5-17) vs 1 (0-2); p<0.001).
    CONCLUSIONS: Peer review rating scores were not associated with citations, though the impact of peer review on quality and association with other metrics remains unclear.
    Keywords:  CHRONIC PAIN; EDUCATION; OUTCOMES
    DOI:  https://doi.org/10.1136/rapm-2024-105490
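
    A minimal sketch (not the authors' code) of the scoring scheme described above: each recommendation maps to a number, a manuscript's score is the mean across its reviewers, and scores are then correlated with 2-year citation counts. The manuscripts below are hypothetical, and since the abstract does not name the correlation statistic, Spearman's rank correlation is assumed.

      from statistics import mean
      from scipy.stats import spearmanr  # assumed dependency

      # Mapping given in the abstract: accept=5, minor=3, major=2, reject=0.
      SCORE = {'accept': 5, 'minor revision': 3, 'major revision': 2, 'reject': 0}

      def review_score(recommendations):
          # A manuscript's score is the mean of its reviewers' numeric scores.
          return mean(SCORE[r] for r in recommendations)

      # Hypothetical manuscripts: (reviewer recommendations, 2-year citations).
      manuscripts = [
          (['accept', 'minor revision'], 9),
          (['major revision', 'reject'], 4),
          (['minor revision', 'minor revision', 'accept'], 7),
          (['reject', 'major revision'], 2),
      ]
      scores = [review_score(recs) for recs, _ in manuscripts]
      citations = [cites for _, cites in manuscripts]
      r, p = spearmanr(scores, citations)  # a near-zero r would echo the paper's finding
      print(f'r={r:.3f}, p={p:.3f}')
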
  2. Indian J Psychiatry. 2024 May;66(5): 472-476
      In research, outcomes are often categorized as primary and secondary. The primary outcome is the most important one; it determines whether the study is considered 'successful' or not. Secondary outcomes are chosen because they provide supporting evidence for the results of the primary outcome or additional information about the subject being studied. For reasons explained in this paper, among them the inflation of type I error when many outcomes are tested, secondary outcomes should be interpreted cautiously (a worked example follows this entry). There are varying practices regarding publishing secondary outcomes. Some authors publish these separately, while others include them in the main publication. In some contexts, the former can lead to concerns about the quality and relevance of the data being published. In this article, we discuss primary and secondary outcomes, the importance and interpretation of secondary outcomes, and considerations for publishing multiple outcomes in separate papers. We also discuss the special case of secondary analyses and post hoc analyses and provide guidance on good publishing practices. Throughout the article, we use relevant examples to make these concepts easier to understand. While the article is primarily aimed at early-career researchers, it offers insights that may be helpful to researchers, reviewers, and editors at all levels of expertise.
    Keywords:  Post hoc analyses; primary outcomes; redundant publications; salami slicing; secondary outcomes; type I error
    DOI:  https://doi.org/10.4103/indianjpsychiatry.indianjpsychiatry_404_24
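
    A quick illustrative calculation (not from the paper) of one reason the abstract urges caution with secondary outcomes, the type I error inflation named in its keywords: if k independent outcomes are each tested at significance level alpha, the chance of at least one false-positive finding is 1 - (1 - alpha)**k.

      # Family-wise false-positive probability for k independent tests at alpha = 0.05.
      alpha = 0.05
      for k in (1, 5, 10, 20):
          print(k, round(1 - (1 - alpha) ** k, 3))
      # Prints 0.05, 0.226, 0.401, 0.642: with 10 secondary outcomes there is
      # already a ~40% chance of at least one spurious 'significant' result.
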
  3. Account Res. 2024 Jun 25. 1-12
      The frequency of scientific retractions has grown substantially in recent years. However, there is as yet no standardized retraction notice format to which journals and their publishers adhere voluntarily, let alone compulsorily. We developed a rubric specifying seven criteria for judging whether retraction notices are easily and freely accessible, informative, and transparent. We mined the Retraction Watch database and evaluated a total of 768 retraction notices from two publishers (Springer and Wiley) across three years (2010, 2015, and 2020). Per our rubric, both publishers tended to score higher on openness/availability, accessibility, and clarity about why a paper was retracted than on acknowledging institutional investigations, confirming whether there was consensus among authors, and specifying which parts of a given paper warranted retraction. Springer retraction notices appeared to improve over time with respect to the rubric's seven criteria. We observed some discrepancies among raters, indicating the difficulty of developing a robust, objective rubric for evaluating retraction notices.
    Keywords:  Scientific retractions; publishing; retraction notices
    DOI:  https://doi.org/10.1080/08989621.2024.2366281
  4. Pol Arch Intern Med. 2024 Jun 17. pii: 16778. [Epub ahead of print]
      INTRODUCTION: In recent years, there has been a decline in the quality of statistical reporting.
    OBJECTIVES: The aim of this survey was to gather the opinions of members of the World Association of Medical Editors (WAME) on the statistical reviews conducted in their journals and on the related recommendations that should be implemented.
    METHODS: A questionnaire containing 25 questions on a range of statistical aspects was distributed to WAME members and their journals.
    RESULTS: The survey was completed by 141 respondents, the largest proportion of whom were editors-in-chief (36.9%). According to 40% of them, only 31-50% of accepted manuscripts are statistically correct; the higher respondents rated their own statistical knowledge, the lower they believed this percentage to be (P = 0.02). Most respondents estimated that statistical peer review is performed for only 1-10% of manuscripts; the main reasons given were the difficulty of finding people with the right skills and the lack of funding in this area. Among respondents without a statistical editor on the editorial board, 49% believed that statistical reviews enhance the quality of published manuscripts, whereas 84% of those with a statistical editor believed so (P < 0.001). Only 5% of respondents said that their journal uses the SAMPL (Statistical Analyses and Methods in the Published Literature) recommendations.
    CONCLUSIONS: Editorial board members currently face significant problems in conducting statistical reviews in their journals. For this reason, it is imperative to start implementing statistical peer review in biomedical journals.
    DOI:  https://doi.org/10.20452/pamw.16778
  5. Health Care Sci. 2022 Aug;1(1): 4-6
      
    Keywords:  medical journals; peer review; publication
    DOI:  https://doi.org/10.1002/hcs2.8
  6. Am J Med. 2024 Jul;137(7): e133. pii: S0002-9343(24)00130-X.
      
    DOI:  https://doi.org/10.1016/j.amjmed.2024.03.006
  7. J Oral Pathol Med. 2024 Jun 25.
      The challenges posed by the massive increase in scientific publications parallel the Larsen effect, in which an amplified sound loop leads to escalating noise. This phenomenon has resulted in information overload, making it difficult for researchers to stay current and identify significant findings. To address this, knowledge synthesis techniques are recommended. These methods help synthesize and visualize large bodies of literature, aiding researchers in navigating the expanding information landscape. Furthermore, artificial intelligence (AI) and natural language processing tools, such as text summarization, offer innovative solutions for managing information overload. However, the overuse of AI in producing scientific literature raises concerns about the quality and integrity of research. This manuscript highlights the need for balanced use of AI tools and for collaborative efforts to maintain high-quality scientific output while leveraging the benefits of extensive research.
    DOI:  https://doi.org/10.1111/jop.13569
  8. Int J Older People Nurs. 2024 Jul;19(4): e12625
      
    DOI:  https://doi.org/10.1111/opn.12625
  9. Res Integr Peer Rev. 2024 Jun 25. 9(1): 7
      BACKGROUND: As the production of scientific manuscripts and the number of journal options both increase, the peer review process remains at the center of quality control. Recent advances in understanding reviewer biases and behaviors, along with electronic manuscript-handling records, have allowed unprecedented investigations into the peer review process.
    METHODS: We examined a sample of six journals within the field of fisheries science (all published by the American Fisheries Society), looking specifically for changes in reviewer invitation rates, review time, patterns of reviewer agreement, and rejection rates relative to different forms of blinding.
    RESULTS: Data from 6,606 manuscripts from 2011-2021 showed increases in reviewer invitations: four journals showed statistically significant increases, while two showed no change. Review times changed relatively little (± 2 weeks), and we found no concerning patterns in reviewer agreement. However, we documented a consistently higher rejection rate (around 20% higher) for double-blinded manuscripts compared with single-blinded manuscripts.
    CONCLUSIONS: Our findings likely reflect broader trends across fisheries science publications, and possibly extend to other life science disciplines. Because peer review remains a primary tool for scientific quality control, authors and editors are encouraged to understand the process and evaluate its performance at whatever level helps in the creation of trusted science. Minimally, our findings can help the six journals we investigated better understand and improve their peer review processes.
    Keywords:  Double-blinding; Rejection rate; Reviewer invitations; Time in review
    DOI:  https://doi.org/10.1186/s41073-024-00146-8
  10. J Am Acad Orthop Surg. 2024 Jun 27.
      INTRODUCTION: While most orthopaedic journals permit the use of artificial intelligence (AI) in article development, they require that AI not be listed as an author, that authors take full responsibility for its accuracy, and that AI use be disclosed. This study aimed to assess the prevalence and disclosure of AI-generated text in abstracts published in high-impact orthopaedic journals.
    METHODS: Abstracts published from January 1, 2024, to February 19, 2024, in five orthopaedic journals were analyzed: the American Journal of Sports Medicine; the Journal of Arthroplasty; the Journal of Bone and Joint Surgery; Knee Surgery, Sports Traumatology, Arthroscopy (KSSTA); and BMC Musculoskeletal Disorders (BMC MD). AI detection software was used to evaluate each abstract for AI-generated text. Disclosure of AI use, country of origin, and article type (clinical, preclinical, review, or AI/machine learning) were documented. To evaluate the accuracy of the AI detection software, 60 consecutive articles published in the Journal of Bone and Joint Surgery in 2014, before AI writing software was available, were also evaluated; these abstracts were evaluated again after being rewritten with AI writing software. The sensitivity and specificity of the software for detecting AI-generated text were calculated.
    RESULTS: A total of 577 abstracts were included in the analysis. AI-generated text was detected in 4.8% of abstracts, ranging from 0% to 12% by journal. Only one (3.6%) of the 28 abstracts with AI-generated text disclosed its use. Abstracts with AI-generated text were more likely to originate from Asia (57.1% vs. 28.0%, P = 0.001) and to involve topics of AI or machine learning (21.4% vs. 0.6%, P < 0.0001). The sensitivity and specificity of the AI detection software were determined to be 91.7% (55/60) and 100% (60/60), respectively (a worked calculation follows this entry).
    DISCUSSION: A small percentage of abstracts published in high-impact orthopaedic journals contained AI-generated text, and most of these did not disclose the use of AI despite journal requirements.
    LEVEL OF EVIDENCE: Diagnostic Level III.
    DOI:  https://doi.org/10.5435/JAAOS-D-24-00318
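
    The validation numbers above follow directly from the standard definitions; a minimal sketch, treating the 60 pre-AI (2014) abstracts as known negatives and their AI-rewritten versions as known positives:

      def sensitivity(tp, fn):
          # Flagged true positives / all true positives.
          return tp / (tp + fn)

      def specificity(tn, fp):
          # Cleared true negatives / all true negatives.
          return tn / (tn + fp)

      print(sensitivity(55, 5))  # 55/60 = 0.9166... -> the reported 91.7%
      print(specificity(60, 0))  # 60/60 = 1.0       -> the reported 100%
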
  11. Farm Hosp. 2024 Jun 25. pii: S1130-6343(24)00096-5. [Epub ahead of print]
      This article examines the impact of artificial intelligence on scientific writing, with a particular focus on its application in hospital pharmacy. It analyzes artificial intelligence tools that enhance information retrieval, literature analysis, writing quality, and manuscript drafting. Chatbots like Consensus, along with platforms such as Scite and SciSpace, enable precise searches in scientific databases, providing evidence-based responses and references. SciSpace facilitates the generation of comparative tables and the formulation of queries about studies, while ResearchRabbit maps the scientific literature to identify trends. Tools like DeepL and ProWritingAid improve writing quality by correcting grammatical and stylistic errors and flagging plagiarism. A.R.I.A. enhances reference management, and Jenny AI assists in overcoming writer's block. Python libraries such as LangChain enable advanced semantic searches and the creation of agents (a generic sketch of the semantic-search idea follows this entry). Despite their benefits, these tools raise ethical concerns, including biases, misinformation, and plagiarism, so responsible use and critical review by experts are essential. In hospital pharmacy, artificial intelligence can enhance efficiency and precision in research and scientific communication. Pharmacists can use these tools to stay current, enhance the quality of their publications, optimize information management, and facilitate clinical decision-making. In conclusion, artificial intelligence is a powerful tool for hospital pharmacy, provided it is used responsibly and ethically.
    Keywords:  AI Tools; Artificial Intelligence; Chatbots; Ethics; Hospital Pharmacy; Research; Scientific Publications; Scientific Writing
    DOI:  https://doi.org/10.1016/j.farma.2024.06.002
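
    A generic sketch of the semantic-search idea behind tools like those named above (this is not LangChain's actual API; embed() is a toy stand-in for a trained embedding model): documents and a query are embedded as vectors and ranked by cosine similarity.

      import numpy as np

      def embed(text):
          # Toy stand-in embedding: a normalized bag-of-letters vector.
          # Real tools use learned models that capture meaning, not spelling.
          v = np.zeros(26)
          for ch in text.lower():
              if ch.isalpha():
                  v[ord(ch) - ord('a')] += 1
          n = np.linalg.norm(v)
          return v / n if n else v

      docs = ['drug interactions in hospital pharmacy',
              'statistical methods for clinical trials',
              'pharmacist-led medication reconciliation']
      q = embed('pharmacy medication safety')
      # On normalized vectors, cosine similarity reduces to a dot product.
      ranked = sorted(docs, key=lambda d: -float(embed(d) @ q))
      print(ranked[0])
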
  12. J Med Internet Res. 2024 Jun 26. 26: e52001
      BACKGROUND: Due to recent advances in artificial intelligence (AI), language model applications can generate logical text output that is difficult to distinguish from human writing. ChatGPT (OpenAI) and Bard (subsequently rebranded as "Gemini"; Google AI) were developed using distinct approaches, but little is known about how their abstract-generating capabilities differ. The use of AI to write scientific abstracts in the field of spine surgery is the center of much debate and controversy.
    OBJECTIVE: The objective of this study is to assess the reproducibility of structured abstracts generated by ChatGPT and Bard compared with human-written abstracts in the field of spine surgery.
    METHODS: In total, 60 abstracts dealing with spine sections were randomly selected from 7 reputable journals; their titles were supplied to ChatGPT and Bard as prompts to generate abstracts. A total of 174 abstracts, divided into human-written, ChatGPT-generated, and Bard-generated abstracts, were evaluated for compliance with the structured format of journal guidelines and for consistency of content. The likelihood of plagiarism and of AI output was assessed using the iThenticate and ZeroGPT programs, respectively. A total of 8 reviewers in the spinal field evaluated 30 randomly extracted abstracts to determine whether they were produced by AI or by human authors.
    RESULTS: The proportion of abstracts that met journal formatting guidelines was greater among ChatGPT abstracts (34/60, 56.6%) compared with those generated by Bard (6/54, 11.1%; P<.001). However, a higher proportion of Bard abstracts (49/54, 90.7%) had word counts that met journal guidelines compared with ChatGPT abstracts (30/60, 50%; P<.001). The similarity index was significantly lower among ChatGPT-generated abstracts (20.7%) compared with Bard-generated abstracts (32.1%; P<.001). The AI-detection program predicted that 21.7% (13/60) of the human group, 63.3% (38/60) of the ChatGPT group, and 87% (47/54) of the Bard group were possibly generated by AI, with an area under the curve value of 0.863 (P<.001; see the sketch after this entry). The mean detection rate by human reviewers was 53.8% (SD 11.2%), achieving a sensitivity of 56.3% and a specificity of 48.4%. A total of 56.3% (63/112) of the actual human-written abstracts were recognized as human-written by human reviewers, and 55.9% (62/128) of the AI-generated abstracts were recognized as AI-generated.
    CONCLUSIONS: Both ChatGPT and Bard can be used to help write abstracts, but most AI-generated abstracts are currently considered unethical due to high plagiarism and AI-detection rates. ChatGPT-generated abstracts appear to be superior to Bard-generated abstracts in meeting journal formatting guidelines. Because humans are unable to accurately distinguish abstracts written by humans from those produced by AI programs, it is crucial to exercise special caution and examine the ethical boundaries of using AI programs, including ChatGPT and Bard.
    Keywords:  AI; Bard; ChatGPT; abstract; artificial intelligence; chatbot; ethics; formatting guidelines; journal guidelines; language model; orthopedic surgery; plagiarism; scientific abstract; spine; spine surgery; surgery
    DOI:  https://doi.org/10.2196/52001
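
    A minimal sketch (hypothetical scores, not the study's data) of the area-under-the-curve statistic reported above: AUC measures how well a detector's scores separate human-written (0) from AI-generated (1) abstracts, with 0.5 meaning chance and 1.0 perfect separation.

      from sklearn.metrics import roc_auc_score  # assumed dependency

      y_true = [0, 0, 0, 0, 1, 1, 1, 1]   # 0 = human-written, 1 = AI-generated
      y_score = [0.10, 0.40, 0.35, 0.80,  # detector scores for human abstracts
                 0.70, 0.90, 0.65, 0.95]  # detector scores for AI abstracts
      print(roc_auc_score(y_true, y_score))  # 0.875 for these toy numbers
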
  13. JACC Adv. 2023 Nov;2(9): 100647
      
    Keywords:  research; statistics; virtual education
    DOI:  https://doi.org/10.1016/j.jacadv.2023.100647
  14. Front Bioeng Biotechnol. 2024;12: 1409763
      Women and racial minorities are underrepresented in the synthetic biology community. Developing a scholarly identity by engaging in a scientific community through writing and communication is an important component of STEM retention, particularly for underrepresented individuals. Several excellent pedagogical tools have been developed to teach scientific literacy and to measure competency in reading and interpreting scientific literature. However, fewer tools exist to measure learning gains with respect to writing, or to teach the more abstract processes of peer review and scientific publishing, which are essential for developing scholarly identity and publication currency. Here we describe our approach to teaching scientific writing and publishing to undergraduate students within a synthetic biology course. Using gold-standard practices in project-based learning, we created a writing project in which students became experts in a specific application area of synthetic biology with relevance to an important global problem or challenge. To measure learning gains associated with our learning outcomes, we adapted and expanded the Student Attitudes, Abilities, and Beliefs (SAAB) concept inventory to include additional questions about the process of scientific writing, authorship, and peer review. Our results suggest the project-based approach was effective in achieving the learning objectives with respect to writing and peer-reviewed publication, and resulted in high student satisfaction and self-reported learning gains. We propose that these educational practices could contribute directly to the development of undergraduate students' scientific identity as synthetic biologists, and will be useful in creating a more diverse synthetic biology research enterprise.
    Keywords:  authorship; higher education; peer review; primary literature; synthetic biology
    DOI:  https://doi.org/10.3389/fbioe.2024.1409763
  15. J Infect Dis. 2024 Jun 24. pii: jiae326. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1093/infdis/jiae326
  16. IJTLD Open. 2024 Jan;1(1): 1-2
      
    Keywords:  International Journal of Tuberculosis and Lung Disease; The Union; cOAlition S; open access
    DOI:  https://doi.org/10.5588/ijtldopen.23.0598
  17. Indian J Dent Res. 2024 Jan 01. 35(1): 18-22
      BACKGROUND: Epistemic injustice and so-called "predators" or illegitimate publishers are challenges for Southern scholarly publishing. Even though open access (OA) publishing has been revolutionary for academic publishing, the increasing charges levied on authors in the form of article processing charges (APCs) by commercial publishers have marginalized knowledge creation in the Global South. The purpose of this study was to map the nature and scope of dental journal publishing in India.
    METHODS: We searched databases and lists including Scopus, WoS, DOAJ, and the UGC CARE list for dental journals published in India.
    RESULTS: There are currently 35 active dental journals, which mostly belong to or are affiliated with non-profit organizations (26, 55.9%) or educational institutions (9, 25.8%). The publication of 25 journals has been outsourced to international commercial publishers, with most of these linked to non-profit organizations. About 39.8% of Indian dental journals are OA, and almost half charge APCs. Around 60% of the Indian journals are indexed in Scopus, and slightly less than half (12) are included in the Web of Science (WoS).
    DISCUSSION: The monopoly of international commercial publishers and the presence of APCs are the real culprits behind epistemic injustice in Indian dental journal publishing. In addition, identifying legitimate regional publishers would help demarcate the term "predatory publishing".
    CONCLUSION: The post-colonial world witnessed an emergence of Southern scholarly publishing. However, the hegemony and neoliberal exploitation of international commercial publishers, and the prolonged use of "predators" in scholarly debates, have marginalized the knowledge produced in the Global South.
    DOI:  https://doi.org/10.4103/ijdr.ijdr_738_23