bims-skolko Biomed News
on Scholarly communication
Issue of 2022‒05‒08
27 papers selected by
Thomas Krichel
Open Library Society


  1. Inquiry. 2022 Jan-Dec;59: 469580221090393
      According to research lore, the second peer reviewer (Reviewer 2) is believed to rate research manuscripts more harshly than the other reviewers. The purpose of this study was to empirically investigate this common belief. We measured word count, positive phrases, negative phrases, question marks, and use of the word "please" in 2546 open peer reviews of 796 manuscripts published in the British Medical Journal. There was no difference in the content of peer reviews between Reviewer 2 and other reviewers for word count (630 vs 606, respectively, P = .16), negative phrases (8.7 vs 8.4, P = .29), positive phrases (4.2 vs 4.1, P = .10), question marks (4.8 vs 4.6, P = .26), and uses of "please" (1.0 vs 1.0, P = .86). In this study, Reviewer 2 provided reviews of equal sentiment to other reviewers, suggesting that popular beliefs surrounding Reviewer 2 may be unfounded.
    Keywords:  journals; peer review; publication; research; reviewer 2
    DOI:  https://doi.org/10.1177/00469580221090393
  2. Indian J Psychol Med. 2022 Jan;44(1): 59-65
      Background: A proportion of manuscripts submitted to scientific journals get rejected, for varied reasons. A systematic analysis of the reasons for rejection will be relevant to editors, reviewers, and prospective authors. We aimed to analyze the reasons for rejection of manuscripts submitted to the Indian Journal of Psychological Medicine, the flagship journal of the Indian Psychiatric Society South Zonal Branch.
    Methods: We performed a content analysis of the rejection reports of all the articles submitted to the journal between January 1, 2018, and May 15, 2020. Rejection reports were extracted from the manuscript management website and divided into three types: desk rejections, post-peer-review rejections, and post-editorial-re-review rejections. They were analyzed separately for the rejection reasons, using a predefined coding frame.
    Results: A total of 898 rejection reports were available for content analysis. Rejection was a common fate for manuscripts across the types of submission; figures ranged from 26.7% for viewpoint articles to 72.1% for review articles. The median time to desk rejection was 3 days, while the median time to post-peer-review rejection and post-editorial-re-review rejection was 42 days and 96 days, respectively. The most common reasons for desk rejection were lack of novelty or being out of the journal's scope. Inappropriate study designs, poor methodological descriptions, poor quality of writing, and weak study rationale were the most common rejection reasons mentioned by both peer reviewers and editorial re-reviewers.
    Conclusions: Common reasons for rejection included poor methodology and poorly written manuscripts. Prospective authors should pay adequate attention to conceptualization, design, and presentation of their study, apart from selecting an appropriate journal, to avoid rejection and enhance their manuscript's chances of publication.
    Keywords:  India; Peer review; manuscript; psychiatry; rejection; research
    DOI:  https://doi.org/10.1177/0253717620965845
  3. Proc (Bayl Univ Med Cent). 2022;35(3): 394-396
      Peer review continues to be a crucial part of the scientific publishing process. Many editors have reported difficulty recruiting potential reviewers and receiving timely recommendations. Poor reviewer acceptance and completion rates can complicate and delay publication. However, few studies have examined these rates in detail. Here we analyze reviewer invitation, acceptance, and completion data from Baylor University Medical Center Proceedings.
    Keywords:  Medical publication; peer review
    DOI:  https://doi.org/10.1080/08998280.2022.2035189
  4. J Comp Physiol A Neuroethol Sens Neural Behav Physiol. 2022 May 07.
      Peer review, a core element of the editorial processing of manuscripts submitted for publication in scientific journals, is widely criticized as being flawed. One major criticism is that many journals allow or request authors to suggest reviewers, and that these 'preferred reviewers' assess papers more favorably than do reviewers not suggested by the authors. To test this hypothesis, a retrospective analysis was conducted of 162 manuscripts submitted to the Journal of Comparative Physiology A between 2015 and 2021. Of these manuscripts, 83 were finally rejected and 79 were finally accepted for publication. In neither group could a statistically significant difference be detected in the rating of manuscripts between reviewers suggested by the authors and reviewers not suggested by the authors. Similarly, pairwise comparison of the same manuscripts assessed by one reviewer suggested by the authors and one reviewer not suggested by the authors did not reveal any significant difference in the median recommendation scores between these two reviewer types. Thus, author-suggested reviewers are not necessarily, as commonly assumed, less neutral than reviewers not suggested by the authors, especially if their qualifications and impartiality are vetted by the editor before they are selected for peer review.
    Keywords:  Editor; Peer review; Preferred reviewer; Research evaluation; Scientific publishing
    DOI:  https://doi.org/10.1007/s00359-022-01553-2
  5. PLoS One. 2022;17(5): e0267971
      Retractions have been on the rise in the life and clinical sciences in the last decade, likely due to both broader accessibility of published scientific research and increased vigilance on the part of publishers. In this same period, there has been a greater than ten-fold increase in the posting of preprints by researchers in these fields. While this development has significantly accelerated the rate of research dissemination and has benefited early-career researchers eager to show productivity, it has also introduced challenges with respect to provenance tracking, version linking, and, ultimately, back-propagation of events such as corrigenda, expressions of concern, and retractions that occur on the journal-published version. The aim of this study was to understand the extent of this problem among preprint servers that routinely link their preprints to the corollary versions published in journals. To present a snapshot of the current state of downstream retractions of articles preprinted in three large preprint servers (Research Square, bioRxiv, and medRxiv), the DOIs of the journal-published versions linked to preprints were matched to entries in the Retraction Watch database. A total of 30 retractions were identified, representing only 0.01% of all content posted on these servers. Of these, 11 retractions were clearly noted by the preprint servers; however, the existence of a preprint was acknowledged by the retracting journal in only one case. The time from publication to retraction averaged 278 days, notably lower than the average for articles overall (839 days). In 70% of cases, retractions downstream of preprints were due, at least in part, to ethical or procedural misconduct. In 63% of cases, the nature of the retraction suggested that the conclusions were no longer reliable. Over time, the lack of propagation of critical information across the publication life cycle will pose a threat to the scholarly record and to scientific integrity. It is incumbent on preprint servers, publishers, and the systems that connect them to address these issues before their scale becomes untenable.
    DOI:  https://doi.org/10.1371/journal.pone.0267971
  6. PLoS One. 2022;17(5): e0267312
      The proliferation of team-authored academic work has led to the proliferation of two kinds of authorship misconduct: ghost authorship, in which contributors are not listed as authors, and honorary authorship, in which non-contributors are listed as authors. Drawing on data from a survey of 2,222 social scientists from around the globe, we study the prevalence of authorship misconduct in the social sciences. Our results show that ghost and honorary authorship occur frequently and may be driven by social scientists' misconceptions about authorship criteria: respondents frequently deviate from a common point of authorship reference (the ICMJE authorship criteria). On the one hand, they tend to award authorship more broadly to more junior scholars; on the other hand, they may withhold authorship from senior scholars if those scholars are engaged in collaborations with junior scholars. Authorship misattribution, even if it is based on a misunderstanding of authorship criteria rather than egregious misconduct, alters academic rankings and may constitute a threat to the integrity of science. Based on our findings, we call for journals to implement contribution disclosures and to define authorship criteria more explicitly to guide and inform researchers as to what constitutes authorship in the social sciences. Our results also hold implications for research institutions, universities, and publishers to move beyond authorship-based citation and publication rankings in hiring and tenure processes and instead to focus explicitly on contributions in team-authored publications.
    DOI:  https://doi.org/10.1371/journal.pone.0267312
  7. Clin Rheumatol. 2022 May 06.
      "Paper mills" are unethical outsourcing agencies proficient in fabricating fraudulent manuscripts submitted to scholarly journals. In earlier years, the activity of such companies involved plagiarism, but their processes have gained complexity, involving the fabrication of images and fake results. The objective of this study is to examine the main features of retracted paper mill articles registered in the Retraction Watch database, from inception to the present, analyzing the number of articles per year, their number of citations, and their authorship network. Eligibility criteria: retracted articles in any language attributed to paper mill activity were included; retraction letters, notes, and notices were excluded. We collected the associated citations and the journals' impact factors of the retracted papers from Web of Science (Clarivate) and performed a data network analysis using VOSviewer software. This scoping review complies with the PRISMA 2020 statement and main extensions. After a thorough analysis of the data, we identified 325 articles retracted due to suspected paper mill operations, published in 31 journals (with a mean impact factor of 3.1). These retractions have produced 3708 citations. Nearly all retracted papers have come from China. A journal impact factor lower than 7, life sciences journals, and cancer and molecular biology topics were common among the retracted studies. The rapid increase of retractions is highly challenging. Paper mills damage scientific research integrity, exacerbating fraud, plagiarism, fake images, and simulated results. Rheumatologists should be fully aware of this growing phenomenon.
    Keywords:  Ethics; Ethics in publishing; Paper mills; Plagiarism; Retraction of publication; Scientific misconduct
    DOI:  https://doi.org/10.1007/s10067-022-06198-9
  8. Cell Stem Cell. 2022 May 05. pii: S1934-5909(22)00168-0. 29(5): 663-666
      This Backstory describes the development of a research article published in Cell Stem Cell that was originally submitted to Community Review, a program wherein a manuscript is simultaneously considered at multiple Cell Press journals. The article, a demonstration of stem cell-derived trophoblast organoids from the group of Thorold Theunissen (https://www.cell.com/cell-stem-cell/fulltext/S1934-5909(22)00157-6), was the first Community Review submission to be published at Cell Stem Cell. In this Backstory, I introduce the topic of the research article, discuss how the article was improved during peer review, and relate the authors' experience with the Community Review process.
    DOI:  https://doi.org/10.1016/j.stem.2022.04.015
  9. iScience. 2022 Apr 15. 25(4): 104080
      What happens when a researcher finds out that research very similar to their own is already being conducted? What if they find out that this research is also very close to being published? First, there is probably anxiety and panic. Maybe there are frantic calls to collaborators. Perhaps Twitter rants about the phenomenon of scooping that plagues all researchers, especially those early-career researchers who often feel they are in a race to get their best work out to the world.
    DOI:  https://doi.org/10.1016/j.isci.2022.104080
  10. Can J Kidney Health Dis. 2022;9: 20543581221080327
      Peer review aims to select articles for publication and to improve articles before publication. We believe that this process can be infused with kindness without losing rigor. In 2014, the founding editorial team of the Canadian Journal of Kidney Health and Disease (CJKHD) made an explicit commitment to treat authors as we would wish to be treated ourselves. This broader group of authors reaffirms this principle, for which we suggest the terminology "supportive review."
    Keywords:  humility; kindness; peer review; supportive review; truth
    DOI:  https://doi.org/10.1177/20543581221080327
  11. Wilderness Environ Med. 2022 Apr 27. pii: S1080-6032(22)00056-4. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.wem.2022.03.010
  12. Radiologia (Engl Ed). 2022 Mar-Apr;64(2): 101-102. pii: S2173-5107(22)00049-0
      
    DOI:  https://doi.org/10.1016/j.rxeng.2022.03.003
  13. Front Vet Sci. 2022;9: 810989
      Animal science researchers have the obligation to reduce, refine, and replace the use of animals in research (the 3R principles). Adherence to these principles can be improved by transparently publishing research findings, data, and protocols. Open Science (OS) can help to increase the transparency of many parts of the research process, and its implementation should thus be considered by animal science researchers as a valuable opportunity that can contribute to adherence to the 3R principles. With this article, we want to encourage animal science researchers to implement a diverse set of OS practices, such as Open Access publishing, preprinting, and the pre-registration of test protocols, in their workflows.
    Keywords:  3R; Open Access; Registered Report; pre-registration; preprints
    DOI:  https://doi.org/10.3389/fvets.2022.810989
  14. Indian J Ophthalmol. 2022 May;70(5): 1801-1807
      Purpose: This retrospective database analysis study aims to present the scientometric data of journals publishing in the field of ophthalmology and to compare the scientometric data of ophthalmology journals according to their open access (OA) publishing policies.
    Methods: The scientometric data of 48 journals were obtained from the Clarivate Analytics InCites and Scimago Journal & Country Rank websites. Journal impact factor (JIF), Eigenfactor score (ES), scientific journal ranking (SJR), and Hirsch index (HI) were included. The OA publishing policies were separated into full OA with publishing fees, full OA without fees, and hybrid OA. Fees are stated in US dollars (USD).
    Results: The four scientometric indexes had strong positive correlations; the highest correlation coefficients were observed between the SJR and JIF (R = 0.906) and between the SJR and HI (R = 0.798). However, journals in the first quartile according to JIF were in the second and third quartiles according to the SJR and HI, and in the fourth quartile according to the ES. OA articles published in hybrid journals received a median of 1.17-fold (range 0.15-2.71) more citations. Only HI was higher in hybrid OA journals; the other scientometric indexes were similar to those of full OA journals. Full OA journals charged a median of 1525 USD less than hybrid journals.
    Conclusion: The full OA model in ophthalmology journals does not have a positive effect on the scientometric indexes. In hybrid OA journals, choosing to publish OA may increase citations, but it would be more accurate to evaluate this on a journal-by-journal basis.
    Keywords:  Journal Impact Factor; open access publishing; ophthalmology; publishing; scientometrics
    DOI:  https://doi.org/10.4103/ijo.IJO_2720_21
  15. J Surg Res. 2022 Apr 29. pii: S0022-4804(22)00209-8. [Epub ahead of print] 277: 200-210
      INTRODUCTION: The prospective registration of systematic reviews represents an effective strategy for reducing the selective reporting of outcomes. However, the relationship between registration and the reporting quality of systematic reviews on surgical interventions remains unclear.
    METHODS: MEDLINE was searched for relevant systematic reviews of randomized controlled trials investigating surgical interventions published in 2020. Data concerning general characteristics and registration information were independently extracted. The reporting quality was evaluated in accordance with pre-established evaluation criteria. Univariate and multivariate linear regression were performed to identify factors associated with improved reporting quality.
    RESULTS: A total of 135 systematic reviews were analyzed, of which 50 (37%) were registered. Registered systematic reviews achieved a significantly higher compliance rate on all items compared with non-registered reviews. Registered reviews also demonstrated significantly higher proportions of the reporting of seven items. Multivariate regression analysis showed that registration status and funding support were associated with better reporting quality.
    CONCLUSIONS: Although prospective registration is associated with higher reporting quality in systematic reviews, the number of prospective registrations remains low. Therefore, prospective registration should be encouraged among authors, peer reviewers, and journal editors, as well as institutions, to enhance the value of systematic reviews in evidence-based surgical practice.
    Keywords:  Epidemiology; Meta-Analysis; Protocol; Reporting; Surgery; Systematic review
    DOI:  https://doi.org/10.1016/j.jss.2022.04.026
  16. J Med Internet Res. 2022 May 04. 24(5): e33591
      BACKGROUND: Although well recognized for its scientific value, data sharing from clinical trials remains limited. Steps toward harmonization and standardization are increasing in various pockets of the global scientific community. This issue has gained salience during the COVID-19 pandemic. Even for agencies willing to share data, data exclusivity practices complicate matters; strict regulations by funders affect this even further. Finally, many low- and middle-income countries (LMICs) have weaker institutional mechanisms. This complex of factors hampers research and rapid response during public health emergencies. This drew our attention to the need for a review of the regulatory landscape governing clinical trial data sharing.
    OBJECTIVE: This review seeks to identify regulatory frameworks and policies that govern clinical trial data sharing and to explore key elements of data-sharing mechanisms as outlined in existing regulatory documents. Based on this empirical analysis of gaps in existing policy frameworks, we aimed to systematically suggest focal areas for policy intervention to facilitate clinical trial data sharing.
    METHODS: We followed the JBI scoping review approach. Our review covered electronic databases and relevant gray literature through a targeted web search. We included records (all publication types, except for conference abstracts) available in English that describe clinical trial data-sharing policies, guidelines, or standard operating procedures. Data extraction was performed independently by 2 authors, and findings were summarized using a narrative synthesis approach.
    RESULTS: We identified 4 articles and 13 policy documents; none originated from LMICs. Most (11/17, 65%) of the clinical trial agencies mandated a data-sharing agreement; 47% (8/17) of these policies required informed consent by trial participants; and 71% (12/17) outlined requirements for a data-sharing proposal review committee. Data-sharing policies specify, a priori, milestone-based timelines for when clinical trial data can be shared. We classify clinical trial agencies as following either controlled- or open-access data-sharing models. Incentives to promote data sharing, and distinctions between mandated requirements and supportive requirements for informed consent during the data-sharing process, remain gray areas needing explication. To augment participant privacy and confidentiality, a neutral institutional mechanism to oversee dissemination of information from the appropriate data sets and more policy interventions led by LMICs to facilitate data sharing are strongly recommended.
    CONCLUSIONS: Our review outlines the immediate need for developing a pragmatic data-sharing mechanism that aims to improve research and innovations as well as facilitate cross-border collaborations. Although a one-policy-fits-all approach would not account for regional and subnational legislation, we suggest that a focus on key elements of data-sharing mechanisms can be used to inform the development of flexible yet comprehensive data-sharing policies so that institutional mechanisms rather than disparate efforts guide data generation, which is the foundation of all scientific endeavor.
    Keywords:  clinical trial; data sharing; policy; scoping review
    DOI:  https://doi.org/10.2196/33591
  17. J Neurochem. 2022 May 06.
      In this editorial, we are happy to connect with our community to explain the changes introduced to the Journal of Neurochemistry over the last year and provide some insights into new developments and exciting opportunities. We anticipate these developments, which are strongly oriented toward increasing transparency and supporting early career researchers, will increase the value of the Journal of Neurochemistry for authors and readers. Ultimately, we hope to improve the author experience with the Journal of Neurochemistry and continue to be the leading venue for fast dissemination of exciting new research focusing on how molecules, cells, and circuits regulate the nervous system in health and disease.
    Keywords:  Neurochemistry
    DOI:  https://doi.org/10.1111/jnc.15595