bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒10‒27
thirty papers selected by
Thomas Krichel, Open Library Society



  1. Anesthesiol Clin. 2024 Dec;42(4): 607-616. pii: S1932-2275(24)00012-0. [Epub ahead of print]
      This review highlights the increasing prevalence of fraudulent data and publications in medical research, emphasizing the potential harm to patients and the erosion of trust in the medical community. It discusses the impact of low-quality studies on clinical guidelines and patient safety and stresses the need for prompt identification of such studies. The review proposes machine learning and artificial intelligence as tools for detecting anomalies, plagiarism, and data manipulation, which could improve the peer review process. Despite growing acknowledgment of the problem and an increasing number of retractions, the review notes a lack of focus on the clinical implications of forged evidence.
    Keywords:  Artificial intelligence; Fabrication; Fraud; Research; Retraction
    DOI:  https://doi.org/10.1016/j.anclin.2024.02.004
  2. J Am Acad Dermatol. 2024 Oct 21. pii: S0190-9622(24)03022-6. [Epub ahead of print]
      
    Keywords:  authorship; citation count; citation manipulation; citations; co-authorships; ethics; fraudulent research; h-index; impact factor; research
    DOI:  https://doi.org/10.1016/j.jaad.2024.10.015
  3. Nature. 2024 Oct 22.
      
    Keywords:  Authorship; Publishing; Research management; Scientific community
    DOI:  https://doi.org/10.1038/d41586-024-03321-5
  4. J Chem Inf Model. 2024 Oct 22.
      This application note addresses a challenge that many academics and publishing professionals have faced in recent years: ensuring the integrity of academic writing in universities and publishing houses given advances in artificial intelligence (AI). It distinguishes AI-generated from human-generated English manuscripts using classifier models such as decision tree, random forest, extra trees, and AdaBoost. It uses the scikit-learn library to report performance statistics (precision, accuracy, recall, F1, MCC, and Cohen's kappa scores) and the confusion matrix, giving users confidence in the classifications. Model accuracy for the classification task ranges from 0.97 to 0.99. The data set comprises approximately 400 AI-generated and roughly 400 human-generated texts, used for training and testing with a 50/50 random split. The AI texts were generated from detailed prompts describing the format of abstracts, introductions, discussions, and conclusions of scientific manuscripts in specific subjects. The tutorials for Gotcha GPT are written in Python on the highly versatile Google Colaboratory platform and are freely available via GitHub (https://github.com/andresilvapimentel/Gotcha-GPT). A minimal illustrative sketch of such a classification pipeline follows this entry.
    DOI:  https://doi.org/10.1021/acs.jcim.4c01203
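      The following is a hedged, hypothetical sketch of the kind of pipeline the abstract describes, not the Gotcha GPT code itself: a scikit-learn classifier (random forest, one of the model families named) trained on a 50/50 split and evaluated with the listed statistics. The TF-IDF feature representation, the toy example texts, and the hyperparameters are assumptions for illustration only.

      # Hypothetical sketch of an AI-vs-human text classifier (not the Gotcha GPT code).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import (accuracy_score, precision_score, recall_score, f1_score,
                                   matthews_corrcoef, cohen_kappa_score, confusion_matrix)

      # Toy placeholder corpus; the study used ~400 AI-generated and ~400 human-written texts.
      texts = [
          "This study examines peer review outcomes in twelve clinical journals.",
          "We report a randomized trial comparing two anesthetic protocols.",
          "Our interviews with researchers explored data sharing practices.",
          "The case series describes three patients with a rare disorder.",
          "This abstract summarizes the findings of the study in a concise manner.",
          "The generated introduction outlines the topic in a formulaic manner.",
          "This synthetic discussion restates the results without adding new insight.",
          "The produced conclusion mirrors the structure of the prompt closely.",
      ]
      labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = human-written, 1 = AI-generated

      # 50/50 random split for training and testing, as described in the abstract.
      X_train, X_test, y_train, y_test = train_test_split(
          texts, labels, test_size=0.5, random_state=42, stratify=labels)

      # Turn raw text into TF-IDF features (the feature representation is an assumption).
      vectorizer = TfidfVectorizer(ngram_range=(1, 2))
      X_train_vec = vectorizer.fit_transform(X_train)
      X_test_vec = vectorizer.transform(X_test)

      # Fit one of the classifier families mentioned; the others are used analogously.
      clf = RandomForestClassifier(n_estimators=200, random_state=42)
      clf.fit(X_train_vec, y_train)
      y_pred = clf.predict(X_test_vec)

      # Report the statistics listed in the abstract.
      print("accuracy :", accuracy_score(y_test, y_pred))
      print("precision:", precision_score(y_test, y_pred, zero_division=0))
      print("recall   :", recall_score(y_test, y_pred, zero_division=0))
      print("F1       :", f1_score(y_test, y_pred, zero_division=0))
      print("MCC      :", matthews_corrcoef(y_test, y_pred))
      print("kappa    :", cohen_kappa_score(y_test, y_pred))
      print("confusion matrix:", confusion_matrix(y_test, y_pred).tolist())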
  5. Indian J Ophthalmol. 2024 Nov 01. 72(Suppl 5): S719-S720
      
    DOI:  https://doi.org/10.4103/IJO.IJO_2465_24
  6. Skinmed. 2024;22(5): 361-364
      Judging whether an editor is good at the job is essential; however, this task may be difficult or even impossible. Several factors are involved, many of which are beyond the control of an editor. We examined some of these situations, as follows: (1) a reviewer's abuse of privileged information, in which a reviewer or an associate, who is likely to be a competitor, directs members of their laboratory to rapidly replicate the data and submit the resulting paper to the same or another journal while delaying publication of the submitted paper; (2) defective micromanagement by a stakeholder or owner, such as failure to order paper for the publication of a journal; (3) penny-wise, dollar-foolish mismanagement by the owner, such as limiting the number of figures allowed to an absurdly low number in a dermatology journal (we have a visual specialty); (4) factional abuse, such as when members of a society use a gimmick to exercise outsized influence to effect a change in a journal's content; and (5) "sto tavo (who is in charge)?," in which changes in the governance of an ownership society or publisher affect the quality of the journal.
  7. Neurosurg Rev. 2024 Oct 23. 47(1): 814
      Peer review stands as a cornerstone of academic publishing, especially in the era of evidence-based neurosurgery, in which the scientific literature relies on proficient peer reviewers. Providing a constructive peer review is an art and a learned skill that requires knowledge of study design and expertise in the neurosurgical subspeciality. Peer reviewers guard against arbitrary decision-making and are essential in ensuring that published manuscripts are of the highest quality. However, formal training in the peer review process remains scarce. The objective of this article is therefore to shed light on this process through the lens of the Editorial Board. We encourage our invited peer reviewers to make use of this guide when appraising potential manuscripts.
    Keywords:  Neurosurgery; Peer review; Research methodology; Systematic review
    DOI:  https://doi.org/10.1007/s10143-024-03047-y
  8. World Neurosurg. 2024 Oct 21. pii: S1878-8750(24)01745-5. [Epub ahead of print]
      
    Keywords:  ChatGPT; GPT Builder; artificial intelligence; large language models; medical literature; scientific writing
    DOI:  https://doi.org/10.1016/j.wneu.2024.10.041
  9. Anesthesiol Clin. 2024 Dec;42(4): 617-630. pii: S1932-2275(24)00014-4. [Epub ahead of print]
      The medical literature guides ethical clinical care by providing information on medical innovations, clinical care, the history of medical advances, explanations for past mistakes and inspiration for future discoveries. Ethical authorship practices are thus imperative to preserving the integrity of medical publications and fulfilling our obligations to ethical patient care. Unethical authorship practices such as plagiarism, guest authorship, and ghost authorship are increasing and pose serious threats to the medical literature. The rise of artificial intelligence in assisting scholarly work poses particular concerns. Authors may face severe and career-changing penalties for engaging in unethical authorship.
    Keywords:  Artificial intelligence; Authorship misconduct; Copyright; Ethics; Ghost authorship; Guest authorship; Plagiarism; Publication fraud
    DOI:  https://doi.org/10.1016/j.anclin.2024.02.006
  10. Account Res. 2024 Oct 24. 1-21
      Background: This autoethnographic study examines email invitations for health researchers to publish in journals outside their expertise, exploring implications for interdisciplinary research and knowledge production.
    Methods: Over three months, email invitations to publish outside the author's field were documented and analysed thematically and through reflexive journaling.
    Results: Five main themes in publication invitations were identified: emphasising novelty, promising rapid publication, appealing to research impact, flattering language, and persistent messaging. Reflexive analysis revealed complex factors shaping responses, including publication pressures, desires for prestige, and tensions between disciplinary norms and interdisciplinary collaboration. While invitations may present opportunities for novel collaborations, they often reflect predatory publishing practices.
    Conclusions: Navigating this landscape requires careful discernment, commitment to academic integrity, and reflexivity about one's positionality. The study underscores the need for researchers to critically interrogate the motivations behind such invitations. Further research could explore decision-making processes across disciplines and implications for academic publishing integrity and equity.
    Keywords:  Publication ethics; academic pressures; academic publishing; autoethnography; decision-making; interdisciplinary; research integrity
    DOI:  https://doi.org/10.1080/08989621.2024.2419823
  11. Fam Med. 2024 Oct 16.
      BACKGROUND AND OBJECTIVES: Case reports are a popular publication type, especially for medical learners. They also are an excellent educational vehicle that can spark a long-term interest in scholarship for medical learners. To maximize publication potential, authors need a framework when writing a case report.
    METHODS: We did a manifest content analysis of case reports published in 12 peer-reviewed medical journals between 2010 and 2019. We classified the case reports as detection, extension, diffusion, or fascination. The objective of our study was to determine whether case reports can successfully be classified by their primary contribution to the medical literature as detection, extension, diffusion, or fascination case reports.
    RESULTS: Using a predefined search strategy, we identified 1,005 manuscripts labeled as case reports published from 2010 to 2019 in 12 journals from a variety of medical specialties. Only 673 of the 1,005 (67.0%) met our criteria for a case report. Of these, 59.1% most closely fit the category of diffusion case reports. Fascination case reports were the least common (1.2%). The format of published case reports varied widely among journals.
    CONCLUSIONS: Case reports can be categorized according to their main contribution to the medical literature. Nearly 60% of all published case reports in this study were not published for the purpose of introducing a novel clinical entity. Instead, they were used as a vehicle to educate clinicians about previously described phenomena. Authors seeking to publish case reports should understand how the framing of their report is likely to influence their chances of being published.
    DOI:  https://doi.org/10.22454/FamMed.2024.976230
  12. Lab Anim. 2024 Oct 24. 236772241271039
      For over a decade, the non-publication of negative results from preclinical studies has been identified as a significant concern in biomedical research. Such underreporting is considered a contributor to the reproducibility crisis in the field and has been acknowledged by leading journals such as Science and Nature. In response to the consistently high non-publication rates of preclinical animal research in Europe, a survey was conducted among the biomedical research community to gather their views on publishing negative results. Using the EUSurvey platform, over 200 researchers directly working with animals were surveyed. The study aimed to understand the frequency of negative results, the reasons behind their non-publication, and the perceived pros and cons of making such results public. Insights from the survey could guide steps toward promoting transparency in science, refining research methodologies, reducing animal use in experiments, and minimizing research waste.
    Keywords:  3Rs; ethics and welfare; public policy; reduction
    DOI:  https://doi.org/10.1177/00236772241271039
  13. J Law Med Ethics. 2024;52(2): 399-411
      As the federal government has expanded and improved its data sharing policies over the past 20 years, complex challenges remain. Our interviews with U.S. academic genetic researchers (n=23) found that the burden, translation, industry limitations, and consent structure of data sharing remain major governance challenges.
    Keywords:  Data Sharing; Genetic Testing; Genetics; National Institutes Of Health
    DOI:  https://doi.org/10.1017/jme.2024.123
  14. Clin Med (Lond). 2024 Oct 18. pii: S1470-2118(24)05442-3. [Epub ahead of print] 100257
      BACKGROUND: Contemporary observations indicate insufficient quality in the reporting of statistical data. Despite the publication of the SAMPL Guidelines in 2015, they have not been widely adopted. The aim of this article is to highlight how the SAMPL Guidelines have been incorporated into statistical reviews of articles related to clinical medicine, as well as the changes authors made in revised manuscripts as a result of such guidance. An additional objective is to provide recommendations for biomedical journals on the need to integrate the SAMPL Guidelines into their daily practice.
    METHODS: The study incorporated 100 selected statistical reviews of original clinical medicine articles from 8 biomedical journals, conducted between 2016 and 2023. Each of these reviews suggested specific SAMPL Guidelines to be implemented in the revised manuscript. We then evaluated which specific SAMPL Guidelines were most frequently acted upon and what changes resulted from their implementation.
    RESULTS: Seventy-five percent of the manuscripts were accepted after a single round of statistical evaluation. The SAMPL Guidelines most frequently recommended and subsequently implemented by the authors were a more thorough description of the purpose of the applied statistical tests (65%), indication of the practical significance of the obtained results, including calculation of relevant effect size measures (64%), analysis of the assumptions necessary for the application of specific statistical tests (58%), and consideration of the impact of outlier values on the obtained results (34%). A brief illustrative sketch of these reporting items follows this entry.
    CONCLUSION: To improve the quality of statistical reporting in biomedical journals, greater emphasis should be placed on implementing SAMPL Guidelines.
    Keywords:  Biostatistics; SAMPL Guidelines; Statistical analysis; Statistical reviews
    DOI:  https://doi.org/10.1016/j.clinme.2024.100257
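      The following is a minimal, hypothetical Python sketch, not taken from the article, illustrating the reporting items the reviews most often requested: stating the purpose of a statistical test, checking its assumptions, reporting an effect size for practical significance, and inspecting outliers. The two-group comparison, the toy data, and the specific tests chosen are assumptions for illustration only.

      # Hypothetical sketch of SAMPL-style reporting items; toy data, not from the article.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      treatment = rng.normal(loc=5.2, scale=1.1, size=40)  # toy outcome, treated group
      control = rng.normal(loc=4.6, scale=1.0, size=40)    # toy outcome, control group

      # Purpose of the test: compare mean outcomes between two independent groups.
      # Assumption check: approximate normality in each group (Shapiro-Wilk).
      print("Shapiro-Wilk p (treatment):", stats.shapiro(treatment).pvalue)
      print("Shapiro-Wilk p (control):  ", stats.shapiro(control).pvalue)

      # Assumption check: equality of variances (Levene's test).
      print("Levene p:", stats.levene(treatment, control).pvalue)

      # The test itself; Welch's t-test avoids assuming equal variances.
      t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
      print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")

      # Practical significance: Cohen's d as an effect size, using the pooled SD.
      pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
      print(f"Cohen's d = {(treatment.mean() - control.mean()) / pooled_sd:.2f}")

      # Outlier inspection: flag points more than 1.5 * IQR beyond the quartiles.
      for name, data in (("treatment", treatment), ("control", control)):
          q1, q3 = np.percentile(data, [25, 75])
          iqr = q3 - q1
          outliers = data[(data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)]
          print(f"{name}: {outliers.size} outlier(s) by the 1.5 * IQR rule")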
  15. Zookeys. 2024;1215: 65-90
      Large numbers of new taxa are described annually, and while there is a great need to make them identifiable, there seems to be little consistency in how this is facilitated. A total of 427 papers published in 2021 and 2022 were surveyed, which described 587 new insect genera. Only 136 of these papers included keys, and these allowed the identification of 233 of the new genera (31.9% of papers and 39.7% of the new genera). The proportion of papers that included a key varied significantly among insect orders but not among the handful of journals in which the bulk of the new genera were described. Overall, for 17 key-related variables assessed in a binary fashion (optimal vs suboptimal), the average key had almost six criteria that were scored as suboptimal. For example, less than one-fifth facilitated retracing, and less than 12% had illustrated keys in which the images were conveniently located close to the relevant key couplets. Progress towards confirming a putative identification was possible in all papers through the inclusion of a diagnosis, habitus images, or both. Based upon this analysis, and expanding on previous suggestions for key construction, 23 recommendations are made on how to make an identification key maximally useful for users, and I indicate the relative ease with which each could be adhered to. Identification keys should accompany all new taxon descriptions, guidelines for effective key construction should be added to journals' instructions to authors, editors and reviewers should check keys carefully, and publishers should be attentive to the needs of users by, for example, permitting duplication of images to make keys easier to use. These recommendations are likely relevant to all levels of the taxonomic hierarchy and to all organisms, even though the data were derived from generic-level keys for insects.
    Keywords:  Best practices; biodiversity assessment; ease-of-use; entomology; identification keys; images; key construction guidelines; taxonomy
    DOI:  https://doi.org/10.3897/zookeys.1215.130416
  16. Clin Hematol Int. 2024;6(4): 67-68
      
    Keywords:  audience engagement; body language; presentations; scientific communication
    DOI:  https://doi.org/10.46989/001c.124436
  17. Arthroscopy. 2024 Oct 18. pii: S0749-8063(24)00794-1. [Epub ahead of print]
      Orthopaedic surgeons are fascinated with artificial intelligence (AI). Since the release of ChatGPT to the general public on November 30, 2022, there has been a flurry of articles on the use of large language models (LLMs) in our field. Most of these revolve around the accuracy of the models on orthopaedic topics (spoiler alert: the accuracy is good, yet unreliable, but improving). Unfortunately, the research around LLMs is largely repetitive, applying them to the same essential tasks. LLM AI systems show amazing capabilities in processing, collating, and organizing data and in recognizing patterns. Now, research scientists need to innovate. Journals must encourage authors to investigate how AI systems can improve patient care.
    DOI:  https://doi.org/10.1016/j.arthro.2024.10.010
  18. J Obstet Gynecol Neonatal Nurs. 2024 Oct 21. pii: S0884-2175(24)00301-0. [Epub ahead of print]
      JOGNN's Associate Editor, Qualitative Methods, addresses the issues surrounding the use of positionality statements in published articles.
    DOI:  https://doi.org/10.1016/j.jogn.2024.09.010
  19. Int J Cardiol. 2024 Oct 22. pii: S0167-5273(24)01296-8. [Epub ahead of print] 132674
      
    Keywords:  Cardiology board; ChatGPT; MKSAP; Review, question
    DOI:  https://doi.org/10.1016/j.ijcard.2024.132674
  20. Science. 2024 Oct 25. 386(6720): 372-375
      Indigenous researchers and communities are reshaping how Western science thinks about open access to data.
    DOI:  https://doi.org/10.1126/science.adu0429