bims-skolko Biomed News
on Scholarly communication
Issue of 2024‒03‒31
25 papers selected by
Thomas Krichel, Open Library Society



  1. J Biomed Inform. 2024 Mar 26. pii: S1532-0464(24)00046-7. [Epub ahead of print] 104628
      OBJECTIVE: Acknowledging study limitations in a scientific publication is a crucial element of scientific transparency and progress. However, limitation reporting is often inadequate. Natural language processing (NLP) methods could support automated reporting checks, improving research transparency. In this study, our objective was to develop a dataset and NLP methods to detect and categorize self-acknowledged limitations (e.g., sample size, blinding) reported in randomized controlled trial (RCT) publications.
    METHODS: We created a data model of limitation types in RCT studies and annotated a corpus of 200 full-text RCT publications using this data model. We fine-tuned BERT-based sentence classification models to recognize limitation sentences and their types. To address the small size of the annotated corpus, we experimented with data augmentation approaches, including Easy Data Augmentation (EDA) and Prompt-Based Data Augmentation (PromDA). We applied the best-performing model to a set of about 12K RCT publications to characterize self-acknowledged limitations at a larger scale.
    RESULTS: Our data model consists of 15 categories and 24 sub-categories (e.g., Population and its sub-category DiagnosticCriteria). We annotated 1090 instances of limitation types in 952 sentences (4.8 limitation sentences and 5.5 limitation types per article). A fine-tuned PubMedBERT model for limitation sentence classification improved upon our earlier model by about 1.5 absolute percentage points in F1 score (0.821 vs. 0.8) with statistical significance (p<.001). Our best-performing limitation type classification model, PubMedBERT fine-tuning with PromDA (Output View), achieved an F1 score of 0.7, improving upon the vanilla PubMedBERT model by 2.7 percentage points, with statistical significance (p<.001).
    CONCLUSION: The model could support automated screening tools which can be used by journals to draw the authors' attention to reporting issues. Automatic extraction of limitations from RCT publications could benefit peer review and evidence synthesis, and support advanced methods to search and aggregate the evidence from the clinical trial literature.
    Keywords:  Large language models; Natural language processing; Randomized controlled trials; Reporting quality; Self-acknowledged limitations; Text classification
    DOI:  https://doi.org/10.1016/j.jbi.2024.104628
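    For readers curious about the classification setup in entry 1, below is a minimal sketch of fine-tuning a PubMedBERT checkpoint for binary limitation-sentence classification with Hugging Face transformers. It is not the authors' code: the checkpoint id, toy examples, and hyperparameters are illustrative assumptions, and the paper's annotated corpus and 15-category label scheme are not reproduced here.
```python
# Minimal sketch: fine-tune a PubMedBERT checkpoint for binary
# limitation-sentence classification. Checkpoint id, toy data, and
# hyperparameters are assumptions, not the paper's actual setup.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

CHECKPOINT = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"

# Toy stand-ins for annotated RCT sentences (1 = limitation, 0 = other).
train = Dataset.from_dict({
    "text": ["Our small sample size limits generalizability.",
             "Patients received 10 mg of the study drug daily."],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT,
                                                           num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="limitation-clf",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=train,
)
trainer.train()
```
    The paper's augmentation step (EDA or PromDA) would expand the training split before the `trainer.train()` call; the sketch omits it.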
  2. Tunis Med. 2024 Jan 05. 102(1): 13-18
      INTRODUCTION: Peer review is a crucial process in ensuring the quality and accuracy of scientific research. It allows experts in the field to assess manuscripts submitted for publication and provide feedback to authors to improve their work.
    AIM: To describe mistakes encountered while peer reviewing scientific manuscripts submitted to "La Tunisie Médicale" journal.
    METHOD: This was a bibliometric study of research manuscripts submitted to "La Tunisie Médicale" and reviewed during 2022. The data collected included the type of the manuscripts and the number of reviews conducted per manuscript. The study also identified variables related to writing mistakes encountered during the peer review process.
    RESULTS: A total of 155 manuscripts (68% original articles) were peer reviewed, and 245 reviews were delivered by two reviewers. Of the 62 mistakes detected, 21% concerned the results section. In 60% of the manuscripts, the keywords used were not MeSH (Medical Subject Headings) terms. The introduction lacked in-text citations in 30% of the reviewed manuscripts, and the methods section lacked a clear study framework in 27%. The two major mistakes detected in the results section were the misuse of abbreviations in tables/figures and failure to follow the scientific nomenclature for tables/figures, found in 39% and 19% of manuscripts, respectively.
    CONCLUSION: This study identified 62 mistakes while reviewing scientific manuscripts submitted to "La Tunisie Médicale" journal. Scholars can benefit from participation in scientific writing seminars and the use of a safety checklist for scientific medical writing to avoid basic mistakes.
    Keywords:  Manuscripts, Medical as Topic; Medical Writing; Peer Review; Tunisia; Writing style
    DOI:  https://doi.org/10.62438/tunismed.v102i1.4715
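    One recurring mistake in entry 2 is the use of keywords that are not MeSH terms. As a rough illustration (not part of the study), an author or reviewer could check a keyword against the MeSH database via NCBI's public E-utilities; note that esearch matching is loose, so a nonzero hit count means the query matched something in MeSH, not necessarily an exact heading.
```python
# Minimal sketch: check whether a keyword appears in the MeSH database
# via NCBI E-utilities. A nonzero count indicates a match somewhere in
# MeSH, not an exact heading; verify at https://www.ncbi.nlm.nih.gov/mesh.
import requests

def in_mesh(keyword: str) -> bool:
    r = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "mesh", "term": keyword, "retmode": "json"},
        timeout=10,
    )
    r.raise_for_status()
    return int(r.json()["esearchresult"]["count"]) > 0

print(in_mesh("Peer Review"))   # an established MeSH heading
print(in_mesh("writing style")) # partial matches possible; verify manually
```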
  3. Cureus. 2024 Mar;16(3): e56920
      In the competitive arena of medical publishing, manuscript rejection remains a significant barrier to disseminating research findings. This editorial delves into the multifaceted nature of manuscript rejection, elucidating common reasons and proposing actionable strategies for authors to enhance their chances of acceptance. Key rejection factors include a mismatch with journal scope, lack of novelty, methodological flaws, inconclusive results, ethical issues, poor presentation, data inaccessibility, author misconduct, and plagiarism. Ethical lapses, such as a lack of informed consent, and submissions fraught with grammatical errors further doom manuscripts. In addressing these pitfalls, authors are advised to ensure content originality, methodological rigor, ethical compliance, and clear presentation. Aligning the manuscript with the journal's audience, scope, and editorial standards is crucial, as is professional conduct and responsiveness to feedback. Leveraging technological tools for citation management, grammar checking, and plagiarism detection can also significantly bolster manuscript quality. Ultimately, understanding and addressing common rejection reasons can empower authors to improve their submissions, contributing to the advancement of medical knowledge and their professional growth.
    Keywords:  manuscript rejection; medical writing; peer reviews; publication success; systematic review
    DOI:  https://doi.org/10.7759/cureus.56920
  4. J Am Med Inform Assoc. 2024 Mar 26. pii: ocae063. [Epub ahead of print]
      OBJECTIVES: Advances in informatics research come from academic, nonprofit, and for-profit industry organizations, and from academic-industry partnerships. While scientific studies of commercial products may offer critical lessons for the field, manuscripts authored by industry scientists are sometimes categorically rejected. We review historical context, community perceptions, and guidelines on informatics authorship.
    PROCESS: We convened an expert panel at the American Medical Informatics Association 2022 Annual Symposium to explore the role of industry in informatics research and authorship with community input. The panel summarized session themes and prepared recommendations.
    CONCLUSIONS: Authorship for informatics research, regardless of affiliation, should be determined by International Committee of Medical Journal Editors uniform requirements for authorship. All authors meeting criteria should be included, and categorical rejection based on author affiliation is unethical. Informatics research should be evaluated based on its scientific rigor; all sources of bias and conflicts of interest should be addressed through disclosure and, when possible, methodological mitigation.
    Keywords:  authorship; conflict of interest; health care sector; informatics; publication bias
    DOI:  https://doi.org/10.1093/jamia/ocae063
  5. J Health Psychol. 2024 Mar 28. 13591053241239109
      Qualitative research plays a pivotal role in health psychology, offering insights into the intricacies of health-related issues. However, the specificity of qualitative methodology presents challenges in adhering to standard open science principles, including data sharing. The guidelines to address these issues are limited. Drawing from the author's experience in conducting in-depth interviews with middle-aged and older adults regarding their sexuality, this article discusses various challenges in implementing data sharing requirements. It emphasizes factors like participants' reasonable reluctance to share in specific populations, the depth of personal information gleaned from comprehensive interviews, concerns surrounding potential data misuse both within and outside academic circles, and the complex issue of obtaining informed consent. A universal approach to data sharing in qualitative research proves impractical, emphasizing the necessity for adaptable, context-specific guidelines that acknowledge the methodology's nuances. Striking a balance between transparency and ethical responsibility requires tailored strategies and thoughtful consideration.
    Keywords:  data sharing; ethics; informed consent; older adults; qualitative methods; sexuality
    DOI:  https://doi.org/10.1177/13591053241239109
  6. Teach Learn Med. 2024 Mar 29. 1-15
      Problem: Syrian medical research synthesis lags behind that of neighboring countries. The Syrian war has exacerbated the situation, creating obstacles such as destroyed infrastructure, inflated clinical workload, and deteriorated medical training. Poor scientific writing skills have ranked first among perceived obstacles that could be modified to improve Syrian research conduct at every academic level. However, limited access to personal and physical resources in conflict areas consistently hampers the implementation of standard professional-led interventions.
    Intervention: We designed a peer-run online academic writing and publishing workshop as a feasible, affordable, and sustainable training method for low-resource settings. The workshop covered the structure of scientific articles, academic writing basics, plagiarism, and the publication process. It was supplemented by six practical assignments to exercise the learned skills.
    Context: The workshop targeted healthcare professionals and medicine, dentistry, and pharmacy trainees (undergraduate and postgraduate) at all Syrian universities. We employed a systematic design to evaluate the workshop's short- and long-term impact under different instructional delivery methods and assignment formats. Participants were assigned in a stratified manner to four groups; two groups attended the workshop synchronously, and the other two attended asynchronously. One arm in each group underwent supervised peer-review evaluation of the practical writing exercises (active), while the other arm self-reviewed their work on the same exercises using exemplary solutions (passive). We assessed knowledge (30 questions), confidence in the learned skills (11 questions), and the need for further guidance in academic writing (1 question) before the workshop and one month and one year after it.
    Impact: One hundred twenty-one participants completed the workshop, showing improved knowledge, greater confidence, and reduced need for guidance. At one-year follow-up, these gains remained stable. Outcomes for the synchronous and asynchronous groups were similar. Completing practical assignments was associated with greater knowledge and confidence only in the active arms. Participants in the active arms who engaged in the peer-review process showed a greater increase in knowledge and reported less need for guidance than those who did not.
    Lessons learned: Peer-run interventions can provide an effective, affordable alternative for improving scientific writing skills in settings with limited resources and expertise. Online academic writing training can yield improvements regardless of the method of attendance (i.e., synchronous versus asynchronous). Participation in supplementary practical exercises, especially when paired with peer review, may improve knowledge and confidence.
    Keywords:  Academic writing; e-learning; medical education; online course; peer training
    DOI:  https://doi.org/10.1080/10401334.2024.2332890
  7. J Nurs Scholarsh. 2024 Mar 30.
      INTRODUCTION: Systematic reviews are considered the highest level of evidence that can help guide evidence-informed decisions in nursing practice, education, and even health policy. Systematic review publications have increased from a sporadic few in the 1980s to more than 10,000 published every year, with around 30,000 registered in prospective registries.
    METHODS: A cross-sectional design and a variety of data sources were triangulated to identify the journals from which systematic reviews would be evaluated for adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 reporting guidelines and scope. Specifically, this study used the PRISMA 2020 reporting guidelines to assess the reporting of the introduction, methods, information sources and search strategy, study selection process, quality/bias assessments, and results and discussion aspects of the included systematic reviews.
    RESULTS: Upon review of the 215 systematic reviews published in 10 top-tier nursing journals in 2019 and 2020, this study identified several opportunities to improve the reporting of systematic reviews in the context of the PRISMA 2020 statement. Priority areas for improved reporting include: (1) information sources, (2) search strategies, (3) the study selection process, (4) bias reporting, (5) explicit discussion of implications for policy, and (6) prospective protocol registration.
    DISCUSSION: The use of the PRISMA 2020 guidelines by authors, peer reviewers, and editors can help to ensure the transparent and detailed reporting of systematic reviews published in the nursing literature.
    CLINICAL RELEVANCE: Systematic reviews are considered strong research evidence that can guide evidence-based practice and even clinical decision-making. This paper addresses some common methodological and process issues among systematic reviews that can guide clinicians and practitioners to be more critical in appraising research evidence that can shape nursing practice.
    Keywords:  nursing; systematic reviews
    DOI:  https://doi.org/10.1111/jnu.12969
  8. Int Med Case Rep J. 2024;17: 195-200
      Case reports provide scientific knowledge and opportunities for new clinical research. However, it is estimated that less than 5% of cases presented by Japanese generalists at academic conferences are published, owing to barriers such as the complexity of writing articles and conducting literature searches, the significant time required, reluctance to write in English, and the challenge of selecting appropriate journals for publication. Therefore, the purpose of this opinion paper is to provide clinicians with practical tips for writing case reports that promote diagnostic excellence. In recent years, clinical practitioners have been striving for diagnostic excellence: optimal methods to accurately and comprehensively understand the patient's condition. To write a case report, it is essential to be mindful of the elements of diagnostic excellence and consider the quality of the diagnostic reasoning process. We (the authors) are seven academic generalists, members of the Japanese Society of Hospital General Medicine (JSHGM) - Junior Doctors Association, with a median of 7 years since graduation and extensive experience publishing case reports in international peer-reviewed journals. We conducted a narrative review and discussed ways to write case reports that promote diagnostic excellence, leveraging our unique perspectives as academic generalists. Our review did not identify any reports addressing the critical points of writing case reports that embody diagnostic excellence. This report therefore proposes a methodology for writing diagnostic excellence-promoting case reports and provides an overview of the lessons learned. Based on our review and discussion, we explain the essential points for promoting diagnostic excellence through case reports, categorized into seven components of clinical reasoning. These strategies are useful in daily clinical practice and instrumental in promoting diagnostic excellence through case reports.
    Keywords:  case reports; clinical reasoning; diagnostic excellence
    DOI:  https://doi.org/10.2147/IMCRJ.S449310
  9. J Am Coll Radiol. 2024 Mar 23. pii: S1546-1440(24)00302-8. [Epub ahead of print]
      BACKGROUND: The accuracy and completeness of authors' self-disclosures of the value of industry payments in radiology journals are not well known.
    OBJECTIVE: The aim of this study was to assess the accuracy of financial disclosures by US authors in five prominent radiology journals.
    METHODS: We reviewed financial disclosures provided by US-based authors of original research and review articles published in 2021 in five prominent radiology journals. For each author, payment reports covering the previous 36 months were extracted from the Open Payments Database (OPD) for the general, research, and ownership payment categories. We analyzed each author individually to determine whether the reported disclosures matched the OPD records.
    RESULTS: A total of 4076 authorships, comprising 3406 unique authors, were identified from 643 articles across the five journals. Of these, 1388 authorships (1032 unique authors) received industry payments within the previous 36 months, with a median total amount received per authorship of $6,650 (interquartile range = $355 to $87,725). Sixty-one (4.4%) authors disclosed all industry relationships, 205 (14.8%) disclosed some of the OPD-reported relationships, and 1122 (80.8%) failed to disclose any relationship. Undisclosed payments totaled $186,578,350, representing 67.2% of all payments. Radiology had the highest proportion of authorships that disclosed some or all OPD-reported relationships (32.3%), compared with JVIR (18.2%), AJNR (17.3%), JACR (13.1%), and AJR (10.3%).
    CONCLUSIONS: Financial relationships with industry are common among US physician authors in prominent radiology journals and non-disclosure rates are high.
    Keywords:  Industry; open payments; radiology journals
    DOI:  https://doi.org/10.1016/j.jacr.2024.01.027
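    Entry 9's OPD extraction can be approximated in a few lines of pandas. The following is a rough sketch, not the study's actual pipeline: it assumes the public Open Payments CSV exports, and the file name and column names below are assumptions that vary across data years.
```python
# Rough sketch: aggregate Open Payments general-payment records per
# recipient over a 36-month window. File and column names are assumed
# from public OPD CSV exports and may differ by data year.
import pandas as pd

df = pd.read_csv(
    "OP_DTL_GNRL_PGYR2021.csv",  # hypothetical export file
    usecols=["Covered_Recipient_Profile_ID",
             "Total_Amount_of_Payment_USDollars",
             "Date_of_Payment"],
    parse_dates=["Date_of_Payment"],
)

# Keep payments within the 36 months preceding the publication year.
window = df[df["Date_of_Payment"] >= "2018-01-01"]

totals = (window.groupby("Covered_Recipient_Profile_ID")
                ["Total_Amount_of_Payment_USDollars"]
                .sum()
                .sort_values(ascending=False))

print(totals.describe())  # summary comparable to the study's median/IQR
```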
  10. J Am Coll Radiol. 2024 Mar 23. pii: S1546-1440(24)00303-X. [Epub ahead of print]
      Lack of disclosure of conflicts of interest (COI) in radiology research can undermine trust in medical recommendations and patient care. A recent study found significant discrepancies between disclosed COIs and those listed in the Open Payments Database (OPD). This commentary discusses the importance of transparency in financial and nonfinancial COIs, the implications of undisclosed COIs for research integrity and clinical decision-making, and the challenges and controversies surrounding current disclosure practices. The field of radiology should discuss and update COI management and ethical standards to achieve more practical accountability in radiology publishing.
    DOI:  https://doi.org/10.1016/j.jacr.2024.03.014
  11. Acad Radiol. 2024 Mar 26. pii: S1076-6332(24)00145-4. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.acra.2024.03.003
  12. Nature. 2024 Mar;627(8005): 703-704
      
    Keywords:  Authorship; Careers; Lab life; Publishing
    DOI:  https://doi.org/10.1038/d41586-024-00891-2
  13. J Stomatol Oral Maxillofac Surg. 2024 Mar 21. pii: S2468-7855(24)00078-8. [Epub ahead of print] 101842
      The attainment of academic superiority relies heavily upon the accessibility of scholarly resources and the expression of research findings through faultless language usage. Although modern tools, such as the Publish or Perish software program, are proficient in sourcing academic papers based on specific keywords, they often fall short of extracting comprehensive content, including crucial references. The challenge of linguistic precision remains a prominent issue, particularly for research papers composed by non-native English speakers who may encounter word usage errors. This manuscript serves a twofold purpose: first, it reassesses the effectiveness of ChatGPT-4 in the context of retrieving pertinent references tailored to specific research topics. Second, it introduces a suite of language editing services that are skilled in rectifying word usage errors, ensuring the refined presentation of research outcomes. The article also provides practical guidelines for formulating precise queries to mitigate the risks of erroneous language usage and the inclusion of spurious references. In the ever-evolving realm of academic discourse, leveraging the potential of advanced AI, such as ChatGPT-4, can significantly enhance the quality and impact of scientific publications.
    Keywords:  AI finding references; ChatGPT-4; Language editing services
    DOI:  https://doi.org/10.1016/j.jormas.2024.101842
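    For readers who want to experiment with the reference-retrieval workflow that entry 13 evaluates, here is a minimal sketch using the OpenAI Python SDK (v1+). The prompt wording is illustrative, and, as the article cautions, every returned citation must be verified before use, since models can fabricate references.
```python
# Minimal sketch: ask GPT-4 for candidate references via the OpenAI SDK
# (openai>=1.0). The prompt is illustrative; returned citations must be
# verified against a bibliographic database, as models can fabricate them.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "List peer-reviewed articles on self-acknowledged limitations in "
    "randomized controlled trials. For each, give authors, year, title, "
    "journal, and DOI. If you are unsure a reference exists, say so."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
# Resolve every returned DOI (e.g., via https://doi.org) before citing it.
```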