bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒10‒06
23 papers selected by
Thomas Krichel, Open Library Society



  1. J Healthc Leadersh. 2024;16:343-364
      Introduction: Data and information quality play a critical role in the managed healthcare sector, where accurate and reliable information is crucial for optimal decision-making, operations, and patient outcomes. However, managed care organizations face significant challenges in ensuring information quality due to the complexity of data sources, regulatory requirements, and the need for effective data management practices. The goal of this article is to develop and justify an information quality framework for managed healthcare, thereby enabling the sector to better meet its unique information quality challenges.
    Methods: The information quality framework provided here was designed using other information quality frameworks as exemplars, as well as a qualitative survey involving interviews of twenty industry leaders structured around 17 questions. The responses were analyzed and tabulated to obtain insights into the information quality needs of the managed healthcare domain.
    Results: The novel framework we present herein encompasses strategies for data integration, standardization and validation, and is followed by a justification section that draws upon existing literature and information quality frameworks in addition to the survey of leaders in the industry.
    Discussion: Emphasizing objectivity, utility, integrity, and standardization as foundational pillars, the proposed framework provides practical guidelines to empower healthcare organizations in effectively managing information quality within the managed care model.
    Keywords:  data governance; data management; data quality; information quality; information quality framework; managed health care; managed healthcare
    DOI:  https://doi.org/10.2147/JHL.S473833
  2. Artif Intell Med. 2024 Sep 26. pii: S0933-3657(24)00231-8. [Epub ahead of print]. 157:102989
      Systematic reviews (SRs) are foundational to influencing policies and decision-making in healthcare and beyond. SRs thoroughly synthesise primary research on a specific topic while maintaining reproducibility and transparency. However, the rigorous nature of SRs introduces two main challenges: the significant time involved and the continuously growing literature, which risks data omission and makes many SRs outdated even before they are published. As a solution, AI techniques have been leveraged to simplify the SR process, especially the abstract screening phase. Active learning (AL) has emerged as a preferred method among these AI techniques, allowing interactive learning through human input. Several AL software packages have been proposed for abstract screening. Despite its promise, how the various parameters involved in AL influence the software's efficacy is still unclear. This research seeks to demystify this by exploring how different AL strategies, such as the initial training set and query strategies, impact SR automation. Experimental evaluations were conducted on five complex medical SR datasets, and a generalized linear model (GLM) was used to interpret the findings statistically. Some AL variables, such as the feature extractor, initial training size, and classifiers, yielded notable observations, and practical conclusions were drawn within the context of SRs and beyond, wherever AL is deployed.
    Keywords:  Abstract screening; Active learning; Evidence-based medicine; Human-in-the-loop; Machine learning; Systematic reviews
    DOI:  https://doi.org/10.1016/j.artmed.2024.102989
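To make the screening setup concrete, the sketch below shows pool-based active learning with least-confidence (uncertainty) sampling, one common configuration among the parameters this paper varies (initial training set, query strategy, feature extractor, classifier). The TF-IDF features, logistic-regression classifier, and toy abstracts are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch of pool-based active learning for abstract screening with
# least-confidence (uncertainty) sampling. Features, classifier, and
# the toy abstracts/labels are assumptions for illustration only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

abstracts = ["randomized trial of drug A", "case report of a rare rash",
             "systematic review of drug A trials", "drug A pharmacokinetics",
             "unrelated economics paper", "drug A efficacy meta-analysis"]
labels = np.array([1, 0, 1, 0, 0, 1])            # 1 = include, 0 = exclude

X = TfidfVectorizer().fit_transform(abstracts)   # feature extractor (one AL parameter)
labeled = [0, 4]                                 # initial training set (another parameter)
pool = [i for i in range(len(abstracts)) if i not in labeled]

while pool:
    clf = LogisticRegression().fit(X[labeled], labels[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # most uncertain abstract
    print(f"reviewer screens: {abstracts[query]!r}")   # human-in-the-loop step
    labeled.append(query)                              # oracle label joins training set
    pool.remove(query)
```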
  3. Rev Esp Med Nucl Imagen Mol (Engl Ed). 2024 Sep 28. pii: S2253-8089(24)00093-4. [Epub ahead of print]. 500065
      PURPOSE: This study aimed to evaluate the reliability and readability of responses generated by two popular AI chatbots, 'ChatGPT-4.0' and 'Google Gemini', to potential patient questions about PET/CT scans.
    MATERIALS AND METHODS: Thirty potential questions for each of [18F]FDG and [68Ga]Ga-DOTA-SSTR PET/CT, and twenty-nine potential questions for [68Ga]Ga-PSMA PET/CT, were asked separately to ChatGPT-4 and Gemini in May 2024. The responses were evaluated for reliability and readability using the modified DISCERN (mDISCERN) scale, Flesch Reading Ease (FRE), Gunning Fog Index (GFI), and Flesch-Kincaid Reading Grade Level (FKRGL). The inter-rater reliability of mDISCERN scores provided by three raters (ChatGPT-4, Gemini, and a nuclear medicine physician) for the responses was assessed.
    RESULTS: The median [min-max] mDISCERN scores rated by the physician for responses about FDG, PSMA, and DOTA-SSTR PET/CT scans were 3.5 [2-4], 3 [3-4], and 3 [3-4] for ChatGPT-4 and 4 [2-5], 4 [2-5], and 3.5 [3-5] for Gemini, respectively. The mDISCERN scores assessed using ChatGPT-4 for answers about FDG, PSMA, and DOTA-SSTR PET/CT scans were 3.5 [3-5], 3 [3-4], and 3 [2-3] for ChatGPT-4, and 4 [3-5], 4 [3-5], and 4 [3-5] for Gemini, respectively. The mDISCERN scores evaluated using Gemini for responses about FDG, PSMA, and DOTA-SSTR PET/CT scans were 3 [2-4], 2 [2-4], and 3 [2-4] for ChatGPT-4, and 3 [2-5], 3 [1-5], and 3 [2-5] for Gemini, respectively. The inter-rater reliability correlation coefficients of mDISCERN scores for ChatGPT-4 responses about FDG, PSMA, and DOTA-SSTR PET/CT scans were 0.629 (95% CI = 0.32-0.812), 0.707 (95% CI = 0.458-0.853), and 0.738 (95% CI = 0.519-0.866), respectively (p < 0.001). The corresponding coefficients for Gemini responses were 0.824 (95% CI = 0.677-0.910), 0.881 (95% CI = 0.78-0.94), and 0.847 (95% CI = 0.719-0.922), respectively (p < 0.001). The mDISCERN scores assessed by ChatGPT-4, Gemini, and the physician showed that the chatbots' responses about all PET/CT scans had moderate to good statistical agreement according to the inter-rater reliability correlation coefficient (p < 0.001). There was a statistically significant difference in all readability scores (FKRGL, GFI, and FRE) between ChatGPT-4 and Gemini responses about PET/CT scans (p < 0.001). Gemini responses were shorter and had better readability scores than ChatGPT-4 responses.
    CONCLUSION: There was an acceptable level of agreement between raters on the mDISCERN score, indicating consistent assessment of the overall reliability of the responses. However, the information provided by AI chatbots cannot be easily read by the public.
    Keywords:  Artificial intelligence; Cancer; ChatGPT-4; Google Gemini; PET/CT; Patient information
    DOI:  https://doi.org/10.1016/j.remnie.2024.500065
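Several entries in this issue score patient materials with the same readability formulas (FRE, FKGL/FKRGL, GFI). As a reference, here is a minimal Python sketch of the three metrics; the vowel-group syllable counter is an assumed rough heuristic, so outputs only approximate what dedicated tools report.

```python
# Worked sketch of three readability formulas used across this issue.
import re

def syllables(word):
    # Rough heuristic: count vowel groups; real tools use syllable dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    w, syl = len(words), sum(syllables(x) for x in words)
    complex_words = sum(1 for x in words if syllables(x) >= 3)
    fre  = 206.835 - 1.015 * (w / sentences) - 84.6 * (syl / w)  # Flesch Reading Ease
    fkgl = 0.39 * (w / sentences) + 11.8 * (syl / w) - 15.59     # Flesch-Kincaid grade
    gfi  = 0.4 * ((w / sentences) + 100 * complex_words / w)     # Gunning Fog Index
    return round(fre, 1), round(fkgl, 1), round(gfi, 1)

print(readability("The scan uses a radioactive tracer. It shows how your tissue works."))
```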
  4. Cureus. 2024 Aug;16(8): e67996
      Purpose Artificial intelligence (AI) has rapidly gained popularity with the growth of ChatGPT (OpenAI, San Francisco, USA) and other large-language-model chatbots, and these programs have tremendous potential to impact medicine. One important area of consequence in medicine and public health is that patients may use these programs in search of answers to medical questions. Despite the increased utilization of AI chatbots by the public, there is little research assessing the reliability of ChatGPT and alternative programs when queried for medical information. This study seeks to elucidate the accuracy and readability of AI chatbots in answering patient questions regarding urology. As vasectomy is one of the most common urologic procedures, this study investigates AI-generated responses to frequently asked vasectomy-related questions. Five popular and free-to-access AI platforms were utilized for this investigation. Methods Fifteen vasectomy-related questions were individually queried to five AI chatbots from November to December 2023: ChatGPT (OpenAI, San Francisco, USA), Bard (Google Inc., Mountain View, USA), Bing (Microsoft, Redmond, USA), Perplexity (Perplexity AI Inc., San Francisco, USA), and Claude (Anthropic, San Francisco, USA). Responses from each platform were graded by two attending urologists, two urology research faculty, and one urology resident physician using a Likert scale (1 = completely inaccurate, 6 = completely accurate) based on comparison to existing American Urological Association guidelines. Flesch-Kincaid Grade Levels (FKGL) and Flesch Reading Ease scores (FRES, 1-100) were calculated for each response. To assess differences in Likert, FRES, and FKGL scores, Kruskal-Wallis tests were performed using GraphPad Prism V10.1.0 (GraphPad, San Diego, USA) with alpha set at 0.05. Results ChatGPT provided the most accurate responses across the five AI chatbots, with an average score of 5.04 on the Likert scale, followed by Microsoft Bing (4.91), Anthropic Claude (4.65), Google Bard (4.43), and Perplexity (4.41). All five chatbots scored, on average, at least 4.41, corresponding to a rating of at least "somewhat accurate." Google Bard received the highest Flesch Reading Ease score (49.67) and lowest grade level (10.1) compared to the other chatbots. Anthropic Claude scored 46.7 on the FRES and 10.55 on the FKGL. Microsoft Bing scored 45.57 on the FRES and 11.56 on the FKGL. Perplexity scored 36.4 on the FRES and 13.29 on the FKGL. ChatGPT had the lowest FRES of 30.4 and highest FKGL of 14.2. Conclusion This study investigates the use of AI in medicine, specifically urology, and helps to determine whether large-language-model chatbots can be reliable sources of freely available medical information. All five AI chatbots, on average, achieved at least "somewhat accurate" on the 6-point Likert scale. In terms of readability, all five AI chatbots, on average, had Flesch Reading Ease scores below 50 and grade levels above the 10th grade. In this small-scale study, several significant differences were identified between the readability scores of the AI chatbots, but no significant differences were found among their accuracies. Thus, our study suggests that major AI chatbots may perform similarly in their accuracy but differ in how easily the general public can comprehend them.
    Keywords:  artificial intelligence ai; chat gpt; chat-gpt; readability measures; vasectomy; vasectomy knowledge
    DOI:  https://doi.org/10.7759/cureus.67996
  5. Cureus. 2024 Aug;16(8): e68307
      Introduction This study assesses the readability of AI-generated brochures for common emergency medical conditions such as heart attack, anaphylaxis, and syncope, comparing the patient information guides generated by ChatGPT and Google Gemini. Methodology Brochures for each condition were created by both AI tools. Readability was assessed using the Flesch-Kincaid Calculator, evaluating word count, sentence count, and ease of understanding. Reliability was measured using the Modified DISCERN Score. The similarity between AI outputs was determined using Quillbot. Statistical analysis was performed with R (v4.3.2). Results ChatGPT and Gemini produced brochures with no statistically significant differences in word count (p = 0.2119), sentence count (p = 0.1276), readability (p = 0.3796), or reliability (p = 0.7407). However, ChatGPT provided more detailed content, with 32.4% more words (582.80 vs. 440.20) and 51.6% more sentences (67.00 vs. 44.20). In addition, Gemini's brochures were slightly easier to read, with a higher ease score (50.62 vs. 41.88). Reliability varied by topic, with ChatGPT scoring higher for heart attack (4 vs. 3) and choking (3 vs. 2), while Google Gemini scored higher for anaphylaxis (4 vs. 3) and drowning (4 vs. 3), highlighting the need for topic-specific evaluation. Conclusions AI-generated brochures from ChatGPT and Gemini are comparable for patient information on emergency medical conditions, with no statistically significant differences in readability or reliability between the two AI tools.
    Keywords:  patient education; ai-generated brochures; chatgpt; discern score; flesch-kincaid; google gemini; heart attack; life threatening anaphylaxis; readability measures; syncope
    DOI:  https://doi.org/10.7759/cureus.68307
  6. BMJ Qual Saf. 2024 Oct 01. pii: bmjqs-2024-017476. [Epub ahead of print]
      BACKGROUND: Search engines often serve as a primary resource for patients to obtain drug information. However, the search engine market is rapidly changing due to the introduction of artificial intelligence (AI)-powered chatbots. The consequences for medication safety when patients interact with chatbots remain largely unexplored.
    OBJECTIVE: To explore the quality and potential safety concerns of answers provided by an AI-powered chatbot integrated within a search engine.
    METHODOLOGY: Bing Copilot was queried on 10 frequently asked patient questions regarding the 50 most prescribed drugs in the US outpatient market. Patient questions covered drug indications, mechanisms of action, instructions for use, adverse drug reactions, and contraindications. Readability of chatbot answers was assessed using the Flesch Reading Ease Score. Completeness and accuracy were evaluated based on corresponding patient drug information in the pharmaceutical encyclopaedia drugs.com. For a preselected subset of inaccurate chatbot answers, healthcare professionals evaluated the likelihood and extent of possible harm if patients were to follow the chatbot's recommendations.
    RESULTS: Across 500 generated chatbot answers, overall readability implied that responses were difficult to read according to the Flesch Reading Ease Score. Overall median completeness and accuracy of chatbot answers were 100.0% (IQR 50.0-100.0%) and 100.0% (IQR 88.1-100.0%), respectively. Of a subset of 20 chatbot answers, experts found 66% (95% CI 50% to 85%) to be potentially harmful: 42% (95% CI 25% to 60%) of these answers were found to potentially cause moderate to mild harm, and 22% (95% CI 10% to 40%) to cause severe harm or even death, if patients followed the chatbot's advice.
    CONCLUSIONS: AI-powered chatbots are capable of providing overall complete and accurate patient drug information. Yet, experts deemed a considerable number of answers incorrect or potentially harmful. Furthermore, complexity of chatbot answers may limit patient understanding. Hence, healthcare professionals should be cautious in recommending AI-powered search engines until more precise and reliable alternatives are available.
    Keywords:  Clinical pharmacology; Information technology; Medication safety; Patient safety; Polypharmacy
    DOI:  https://doi.org/10.1136/bmjqs-2024-017476
  7. Cureus. 2024 Aug;16(8): e68085
      BACKGROUND: Patients seeking orthodontic treatment may use large language models (LLMs) such as Chat-GPT for self-education, thereby impacting their decision-making process. This study assesses the reliability and validity of Chat-GPT prompts aimed at informing patients about orthodontic side effects and examines patients' perceptions of this information.
    MATERIALS AND METHODS: To assess reliability, n = 28 individuals were asked to generate information from GPT-3.5 and GPT-4 (Generative Pretrained Transformer 4) about side effects related to orthodontic treatment using both self-formulated and standardized prompts. Three experts evaluated the content generated from these prompts regarding its validity. We asked a cohort of 46 orthodontic patients about their perceptions after reading an AI-generated information text about orthodontic side effects and compared it with the standard text from the postgraduate orthodontic program at Aarhus University.
    RESULTS: Although the GPT-generated answers mentioned several relevant side effects, the replies were diverse. The experts rated the AI-generated content generally as "neither deficient nor satisfactory," with GPT-4 achieving higher scores than GPT-3.5. The patients perceived the GPT-generated information as more useful and more comprehensive and experienced less nervousness when reading the GPT-generated information. Nearly 80% of patients preferred the AI-generated information over the standard text.
    CONCLUSIONS: Although patients generally prefer AI-generated information regarding the side effects of orthodontic treatment, the tested prompts fall short of providing thoroughly satisfactory and high-quality education to patients.
    Keywords:  ai orthodontics; artificial intelligence in dentistry; digital orthodontics; large language models (llm); patient education
    DOI:  https://doi.org/10.7759/cureus.68085
  8. Cureus. 2024 Aug;16(8): e68064
      Background In our age of technology, millions of people, patients and caregivers alike, use the Internet daily for health-related searches and guidance. However, health literacy remains notably low among U.S. adults, and this issue is particularly critical for individuals with severe mental illnesses. Poor health literacy is often linked to low socioeconomic status and correlates with adverse patient outcomes and limited healthcare access. With the average U.S. adult reading at the eighth-grade level, guidelines recommend health information be written to match. This study focuses on the readability of top Google search results for common psychotic disorders, emphasizing the need for accessible online health information to support vulnerable populations with severe mental illnesses. Methods The top five most visited websites for each of eight psychiatric conditions were included in this study: schizophrenia, schizoaffective disorder, schizophreniform disorder, delusional disorder, bipolar I disorder, major depressive disorder (MDD) with psychotic features, substance-induced psychotic disorder, and psychotic disorder due to a general medical condition. The Flesch-Kincaid (FK) reading ease and grade level scores were calculated for each webpage. Additionally, the institutions and organizations that created each webpage were noted. Results The average FK grade level was 9.9 (corresponding to a 10th-grade level), while the overall FK reading ease was 37.3 (corresponding to college-level difficulty) across all disorders analyzed. Websites on MDD with psychotic features had the lowest average FK grade level, 8.6, and the best reading ease score. Websites discussing delusional disorder had the highest average FK grade level, 11.2, while those with information on schizophreniform disorder had the lowest average reading ease, with a score of 31.7, corresponding to "difficult (college)" level reading. Conclusion Both patient education and compliance can be improved with more accessible and readable patient education materials. Our study shows significant opportunities for improvement in the readability and comprehensibility of online educational materials for eight of the most common psychotic disorders. Physicians and other healthcare providers should be aware of this barrier and recommend specific websites, literature, and resources for patients and their caregivers. Further efforts should be aimed at creating new, easy-to-comprehend online material for mental health disorders, ensuring the best quality of care for these patients.
    Keywords:  flesch-kincaid; mental health education; online patient education; psychiatry & mental health; psychotic disorders; readability measures; schizophrenia and other psychotic disorders
    DOI:  https://doi.org/10.7759/cureus.68064
  9. Hisp Health Care Int. 2024 Oct 03. 15404153241286720
      Introduction: Because there is limited online health information in Spanish and it is critical to raise health literacy among Spanish-speaking people, it is essential to assess the readability level of Spanish materials. Method: This systematic review included all articles published up to January 3, 2024, identified through the CINAHL, MEDLINE, and PubMed databases. The objective was to synthesize the body of knowledge from published articles on the readability levels of Spanish-language, web-based health information intended for lay audiences. Results: There were 27 articles in the final review. Across these articles, 11 readability tools were applied to Spanish-language text. Of these, INFLESZ was the most frequently used, while the FRY formula, Flesch-Szigriszt Index, and Flesch Formula Index were the least used. Most articles (85.2%) reported readability levels of online Spanish information above the 8th-grade reading level. Conclusions: The findings show a lack of internet-based Spanish-language health information and materials at the recommended (e.g., 5th- to 8th-grade) reading level. More research is needed to determine which readability tests are more accurate for calculating the readability of Spanish web health information.
    Keywords:  Hispanic-Americans; Spanish; consumer health information; limited English; readability
    DOI:  https://doi.org/10.1177/15404153241286720
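For Spanish text, INFLESZ is a banding of the Flesch-Szigriszt perspicuity index rather than a new formula. Below is a minimal sketch of that index; the vowel-group Spanish syllable counter is an assumed approximation, and the example sentence is invented.

```python
# Sketch of the Flesch-Szigriszt index underlying the INFLESZ scale.
import re

def syllables_es(word):
    # Assumed approximation: count vowel groups; real tools segment syllables properly.
    return max(1, len(re.findall(r"[aeiou\u00e1\u00e9\u00ed\u00f3\u00fa\u00fc]+", word.lower())))

def flesch_szigriszt(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z\u00c1\u00c9\u00cd\u00d3\u00da\u00dc\u00d1\u00e1\u00e9\u00ed\u00f3\u00fa\u00fc\u00f1']+", text)
    syl = sum(syllables_es(w) for w in words)
    return 206.835 - 62.3 * (syl / len(words)) - (len(words) / sentences)

# INFLESZ bands: <40 very difficult, 40-55 somewhat difficult,
# 55-65 normal, 65-80 fairly easy, >80 very easy.
print(round(flesch_szigriszt("La diabetes es una enfermedad cr\u00f3nica. Controle su az\u00facar."), 1))
```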
  10. J Pregnancy. 2024;2024:4040825
      Background: Accessible health information during pregnancy is important to positively affect maternal and fetal health. However, the quality and accuracy of health information can vary greatly across sources. This narrative review is aimed at summarizing the literature on pregnant individuals' information sources and how these sources influence their habits toward gestational weight gain (GWG), physical activity (PA), and nutrition. Such data will highlight preferences and needs, reveal challenges, and identify opportunities for improvement. Methods: We searched PubMed for studies published in the last decade. Out of 299 studies initially identified, 20 (16 quantitative and four qualitative) met the eligibility criteria (investigating information sources and their influence on health habits toward GWG, PA, and nutrition; pregnant participants; adequate data reporting; and availability in full text). Results: Primary sources of health information varied. The Internet (26%-97%) and healthcare providers (HCPs) (14%-74%) predominated, followed by family/friends (12%-71%), books/magazines (49%-65%), and guidelines/brochures (25%-53%). Despite the widespread use of the Internet, HCPs were considered the most reliable source. Internet use to retrieve health information was reported at 2-4 h a week, and < 50% of users discussed the online information with their HCP. The Internet was also used as a supplementary resource on topics raised by HCPs. Regarding the influence on health habits, the Internet, HCPs, media, and family positively influenced GWG and promoted adherence to recommended guidelines (OR = 0.55-15.5). Only one study showed a positive association between Internet use and PA level. The Internet, media, HCPs, and information brochures were associated with better adherence to nutritional recommendations. Conclusions: Pregnant individuals relied on the Internet and HCPs, with a preference for the Internet despite trust in midwives. Several sources of health information were positively associated with adherence to GWG and nutrition recommendations. Improving the quality of online information should be a priority for policymakers and health authorities.
    DOI:  https://doi.org/10.1155/2024/4040825
  11. Strabismus. 2024 Oct 01. 1-8
      INTRODUCTION: Over one-third of US adults have never attended college, creating a large disparity in the readability of online health materials. Decreased health literacy and accessibility of medical information negatively affect patients, and well-informed patients are more likely to experience better health outcomes (1). The NIH and AMA recommend that patient-intended education materials be written at a sixth-grade reading level (2); therefore, this study analyzed the accessibility of the top ten web pages for "strabismus."
    METHODS: The first ten online resources returned in a Google search for "strabismus" were analyzed. Web pages were assessed for readability level (Simple Measure of Gobbledygook), complexity (PMOSE/IKIRSCH), and suitability (Suitability Assessment of Materials). Two independent raters assessed the complexity and suitability.
    RESULTS: Readability analysis of the strabismus resources revealed an average reading grade level of 11.4 ± 1.07. There was a statistically significant difference in reading grade level between the .com and .gov websites and between the .org and .com websites (p = .029 and p = .031, respectively). Complexity analysis revealed a mean score of 6.50 ± 2.29, corresponding to an 8th-12th grade reading level. The suitability assessment showed a mean value of 70.3 ± 10.1%, representing a "superior" score for the information provided to the reader. The inter-rater agreement was similar for the complexity analysis and fair for the suitability analysis.
    DISCUSSION: On average, online resources for strabismus have a low complexity level. However, the majority of the top ten articles reviewed are above the recommended literacy level, indicating a need for revision.
    CLINICAL IMPLICATIONS: The vast amount of available online health resources has significantly affected the field of medicine. Most patients research their disease process using online sources, and many reference this material before their initial ophthalmologic consultation. Considering that more than half of Americans read below the equivalent of a sixth-grade level and that the AMA/NIH recommend all patient-intended materials be written at or below this level, there is a health literacy disconnect. This limits patients' ability to educate themselves about their medical conditions and to participate in informed conversations regarding their healthcare. Patients who are unable to interpret health information accurately have increased rates of hospitalization, develop more medical conditions, and experience a higher rate of mortality. This preventable impediment to informed healthcare magnifies the urgency for easily readable online resources formatted in a manner that is clear to understand and suitable for patients with lower health literacy.
    Keywords:  Education; health literacy; ophthalmic health resources; patient education; patient-intended materials
    DOI:  https://doi.org/10.1080/09273972.2024.2408029
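The readability metric used here, the Simple Measure of Gobbledygook (SMOG), has a compact closed form. A minimal sketch follows; the vowel-group polysyllable count is an assumed stand-in for proper syllabification, and the sample text is invented.

```python
# Worked sketch of the SMOG grade (McLaughlin's formula). SMOG is defined
# for 30-sentence samples; the 30/sentences factor scales other lengths.
import math, re

def smog(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Polysyllables = words of 3+ syllables, approximated by vowel groups.
    poly = sum(1 for w in words
               if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    return 1.0430 * math.sqrt(poly * (30 / sentences)) + 3.1291

print(round(smog("Strabismus is a misalignment of the eyes. "
                 "Ophthalmologists recommend early evaluation."), 1))
```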
  12. BMC Health Serv Res. 2024 Sep 27. 24(1): 1124
      BACKGROUND: The quality and safety of information provided on online platforms for migraine treatment remain uncertain. In this cross-sectional study, we evaluated the top 10 trending websites accessed annually by Turkish patients seeking solutions for migraine treatment and assessed information quality, security, and readability.
    METHODS: A comprehensive search was conducted using Google starting from 2015, considering Türkiye's internet usage trends. Websites were evaluated using the DISCERN measurement tool and the Ateşman Turkish readability index.
    RESULTS: Ninety websites were evaluated between 2015 and 2024. According to the DISCERN measurement tool, most websites exhibited low quality and security levels. Readability analysis showed that half of the websites were understandable by readers with 9th - 10th grade educational levels. The author distribution varied, with neurologists being the most common. A significant proportion of the websites were for profit. Treatment of attacks and preventive measures were frequently mentioned, but some important treatments, such as greater occipital nerve blockade, were rarely discussed.
    CONCLUSION: This study highlights the low quality and reliability of online information websites on migraine treatment in Türkiye. These websites' readability level remains a concern, potentially hindering patients' access to accurate information, and can be a barrier to migraine care for both patients and physicians. Better supervision and cooperation with reputable medical associations are needed to ensure the dissemination of reliable information to the public.
    Keywords:  Discern measurement tool; Migraine; Readability index; Türkiye
    DOI:  https://doi.org/10.1186/s12913-024-11599-4
  13. Cureus. 2024 Aug;16(8): e68141
      Introduction The aim of this study is to evaluate the quality and educational value of surgical videos on YouTube (Alphabet Inc., Mountain View, CA) demonstrating transurethral resection of the prostate (TURP). Methods A thorough YouTube search for "TURP" or "transurethral resection of the prostate" was performed. Each video's uploader, content, duration, date of upload, time since upload, views, comments, likes, dislikes, and Video Power Index (VPI) score were recorded and evaluated. Video analysis and rating followed the LAParoscopic Surgery Video Educational Guidelines (LAP-VEGaS) recommendations, which comprise nine items scored from 0 (absence) to 2 (complete presence), giving an overall score of 0 to 18; a higher score indicates greater educational value. Results A total of 43 videos were included, 10 (23.3%) of which were academic publications. The average LAP-VEGaS score was 6.58, with 22 (51.2%), 18 (41.8%), and three (7%) videos classified as having low, medium, and high educational quality, respectively. None of the videos satisfied all the requirements outlined in the checklist. No statistically significant correlation was observed between educational score and number of views. Conclusion A significant proportion of TURP videos available on YouTube exhibit limited educational value, frequently lacking comprehensive and in-depth descriptions of the surgical procedure. Those seeking information on TURP should carefully choose which videos to view. It is recommended that academic institutions establish comprehensive criteria aimed at enhancing the educational value of surgical videos on the YouTube platform.
    Keywords:  benign prostate hyperplasia; lap-vegas; laparoscopic surgery video educational guidelines; transurethral resection of the prostate; turp
    DOI:  https://doi.org/10.7759/cureus.68141
  14. PeerJ. 2024;12:e18183
      Background: Good oral hygiene is crucial for preventing dental caries and periodontal diseases. However, proper and regular application of oral hygiene practices requires adequate knowledge. In recent years, the internet has become one of the most popular places to find health-related information, necessitating studies that analyze the quality of the content available online. The purpose of the present study was to analyze the content quality and reliability of YouTube™ videos on the topic of adult oral hygiene practices and to guide oral health care professionals who use this platform for patient education.
    Methods: A YouTube™ search was performed using the most frequent search term, 'dental hygiene'. A total of 150 videos were screened, and 51 were included in the final study. The characteristics, sources, and content of the videos were analyzed using the Global Quality Score (GQS) and DISCERN reliability indices. The IBM SPSS 25 program was used for statistical analyses.
    Results: Most of the included videos were uploaded by oral health care professionals (63%). The GQS revealed that only 17.6% of the videos were of excellent quality, whereas 23.5% were of poor quality. In the content analysis, 62.7% of the videos were deemed moderately useful. Video duration, total content score, and interaction indices were all significantly higher in the useful and very useful groups compared to the slightly useful group (p = 0.020, p < 0.001, p = 0.040). The GQS had a statistically significant, low-to-moderate positive correlation with both video duration and total content score (r = 0.235, r = 0.517; p < 0.05). The DISCERN score also had a statistically significant, low-to-moderate positive correlation with total content score (r = 0.500; p < 0.05).
    Conclusion: The study concluded that most YouTube™ videos on oral hygiene practices for adults are moderately useful. When using YouTube™ for patient education, oral health care professionals and organizations should be aware of low-quality videos and seek out accurate, useful videos. There is also a need for quality videos with expanded oral health content.
    Keywords:  Dental hygiene; Online systems; Oral health
    DOI:  https://doi.org/10.7717/peerj.18183
  15. J Am Dent Assoc. 2024 Oct 04. pii: S0002-8177(24)00502-6. [Epub ahead of print]
      BACKGROUND: Periodontal surgery for gingival defects is widely recognized by dental care professionals and researchers for its effectiveness in treating gingival recession and improving oral health outcomes. YouTube (Google LLC) is one of the health information sources patients and clinicians use, and assessing its content quality is crucial. The authors aimed to examine the content and quality of YouTube videos on gingival graft procedures.
    METHODS: The online video streaming platform YouTube was searched using the key word gingival graft. Two independent examiners analyzed a total of 120 videos; a third examiner assessed interrater reliability. Fifty videos that met the inclusion criteria were included in the study. The assessed content topics for these YouTube videos consisted of 13 different categories, and overall quality was evaluated using the Video Information and Quality Index (VIQI). Statistical analyses were performed using SAS software, Version 9.4 (SAS Institute).
    RESULTS: There were 23 videos in the high-quality content group and 27 videos in the low-quality content group. Hospitals and universities uploaded most of the included videos. The most commonly discussed topics in the included videos were the patient's condition (36 [72%]) and the area of tissue graft (34 [68%]). The total VIQI score and flow had a significant impact on the overall content score (P < .05).
    CONCLUSIONS: There was a direct correlation between total VIQI scores and total content scores and an inverse relationship between viewing rate and total content scores.
    PRACTICAL IMPLICATIONS: To ensure patients receive accurate and up-to-date information about treatment, the authors recommend guiding them toward reliable resources by means of providing direct links to trustworthy websites, creating and sharing playlists of reliable educational videos, and offering printed materials with quick-response codes linking to verified sources. These actions will help patients easily access and trust the information they need for their treatment decisions.
    Keywords:  Gingival graft; YouTube; periodontal surgery; social media
    DOI:  https://doi.org/10.1016/j.adaj.2024.09.004
  16. Cureus. 2024 Aug;16(8): e68243
      OBJECTIVE: The aim of our study was to evaluate the content and quality of heart failure posts on the video-sharing site YouTube, an easily accessible source of information that is becoming increasingly popular in society for obtaining health information.
    METHODS: In December 2023, we searched English-language postings with the keyword "Heart Failure" and, after applying the exclusion criteria, evaluated 162 videos. In addition to the technical data of the videos, such as views, duration, upload day, likes, and dislikes, we used indices such as the power index and popularity score in the analyses. We evaluated the quality of the videos using the DISCERN and global quality score scales, and the content using our content score scale. We classified the videos into three quality subgroups according to the scores they received on all three scales.
    RESULTS: The median number of views of the videos included in our study was 31092 (interquartile range (IQR): 3929-127758) and the median video duration was 336 (IQR: 189-843) seconds. The median popularity score was 28.25 (IQR: 4.7-143) and the median power index was 35139 (IQR: 2061-308128). Group 1 (low quality) included 54 videos with a total score between 10 and 14, group 2 (medium quality) included 54 videos with a total score between 15 and 22, and group 3 (high quality) included 54 videos with a total score between 23 and 30. Views, upload day, and video duration were significantly higher for group 3 videos (p = 0.008, p = 0.001, and p < 0.001, respectively). Likes and dislikes were not significantly different between groups. The popularity score was significantly higher for group 1 videos (39.5 (IQR: 6.5-200), p = 0.023), while the power index was significantly higher for group 3 videos (74206 (IQR: 9477-221408), p = 0.006).
    CONCLUSIONS: Our findings confirm that YouTube, a video-sharing website, is essential for easily sharing and spreading health-related information to a broad audience. Increased attention to videos with scientific content and high-quality scores suggests that YouTube provides accurate and quality information about heart failure. While the number of quality posts tends to increase daily, healthcare professionals should be encouraged to share high-quality scientific videos more frequently.
    Keywords:  heart failure; online videos; quality; social network; youtube
    DOI:  https://doi.org/10.7759/cureus.68243
  17. BMC Public Health. 2024 Sep 27. 24(1): 2620
      BACKGROUND: Considering the adverse clinical consequences of pathologic tachycardia and the potential anxiety caused by physiological tachycardia in some healthy individuals, it is imperative to disseminate health information related to tachycardia to promote early diagnosis and appropriate management. YouTube has been increasingly used to access health care information. The aim of this study is to assess the quality and reliability of English-language YouTube videos focusing on tachycardia and to delve into strategies for enhancing the quality of online health resources.
    METHODS: We conducted a search using the keyword "tachycardia" in the YouTube online library on December 2, 2023. The first 150 videos, ranked by "relevance", were initially recorded. After exclusions, a total of 113 videos were included. All videos were characterized and categorized by topic, source, and content. Two independent raters assessed the videos using the Journal of American Medical Association (JAMA) benchmark criteria, the Modified DISCERN (mDISCERN) tool, the Global Quality Scale (GQS), and a Tachycardia-Specific Scale (TSS), followed by statistical analyses. All continuous data in the study are presented as median (interquartile range).
    RESULTS: The videos had a median JAMA score of 2.00 (1.00), mDISCERN of 3.00 (1.00), GQS of 2.00 (1.00), and TSS of 6.00 (4.50). There were significant differences in JAMA (P < 0.001), mDISCERN (P = 0.004), GQS (P = 0.001), and TSS (P < 0.001) scores among sources, and mDISCERN (P = 0.002), GQS (P < 0.001), and TSS (P = 0.030) scores differed significantly among content categories. No significant differences were observed in any of the scores among video topics. Spearman correlation analysis revealed that the video power index (VPI) was significantly correlated with quality and reliability. Multiple linear regression analysis suggested that longer video duration and academic or healthcare-professional sources were independent predictors of higher reliability and quality, while ECG-specific content was an independent predictor of lower quality.
    CONCLUSIONS: The reliability and educational quality of current tachycardia-related videos on YouTube are low. Longer video duration and academic or healthcare-professional sources were closely associated with higher video reliability and quality. Improving the quality of internet medical information and optimizing online patient education will require collaborative effort.
    Keywords:  Online videos; Patient education; Tachycardia; YouTube
    DOI:  https://doi.org/10.1186/s12889-024-20062-2
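As an illustration of the correlation step reported above, the sketch below runs a Spearman test between a popularity measure and rater scores. All values are invented placeholders, not the study's data, and the variable names are assumptions.

```python
# Minimal sketch of a Spearman rank correlation between popularity and quality.
import numpy as np
from scipy.stats import spearmanr

vpi = np.array([12.0, 3.5, 40.2, 8.8, 25.1, 1.9])  # hypothetical video power index values
gqs = np.array([3, 2, 4, 2, 4, 1])                  # hypothetical Global Quality Scale ratings

rho, p = spearmanr(vpi, gqs)                        # rank-based, robust to skewed views
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```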
  18. Eur J Obstet Gynecol Reprod Biol. 2024 Sep 26. pii: S0301-2115(24)00532-3. [Epub ahead of print]. 302:301-305
      OBJECTIVE: This study aimed to evaluate the quality of surgical content in laparoscopic radical hysterectomy (LRH) videos on YouTube.
    STUDY DESIGN: On February 20, 2024, a search was conducted on YouTube using the keyword "laparoscopic radical hysterectomy," filtering videos with durations over 20 min and sorting by relevance. Two experienced gynecologists assessed the first 250 videos retrieved to determine if they illustrated anatomical landmarks and surgical procedures in a standardized step-by-step manner.
    RESULTS: Forty videos met the inclusion criteria for analysis. Sixty percent (24 out of 40) of these videos presented the complete list of predetermined surgical steps. According to the LAP-VEGaS assessment tool, only 32.5 % (13 out of 40) of the videos achieved a total score of 11 or higher, and 12.5 % (5 out of 40) scored 12 or higher. Videos with a LAP-VEGaS score of 11 or above had a statistically higher number of views per day (4.64 [IQR: 10.47]) compared to those with a lower score (1.48 [IQR: 3.40], p = 0.019). Additionally, videos featuring a didactic voice were significantly more popular, with higher views per day compared to those with music or no audio (8.66 [IQR: 32.75] vs. 1.69 [IQR: 3.12], p = 0.001).
    CONCLUSION: The majority of LRH videos on YouTube lacked comprehensive surgical content and received low LAP-VEGaS scores. Videos with a didactic voice and higher LAP-VEGaS scores tended to attract more viewers.
    Keywords:  Cervical cancer; LAPVEGaS; Laparoscopic radical hysterectomy; YouTube
    DOI:  https://doi.org/10.1016/j.ejogrb.2024.09.038
  19. Front Public Health. 2024;12:1379094
      Introduction: Online health communities have become a main source for people to obtain health information. However, the presence of poor-quality health information, misinformation, and rumors in online health communities increases the challenge of governing information quality. It not only affects users' health decisions but also undermines social stability. It is therefore of great significance to explore the factors that affect users' ability to discern information in online health communities.
    Methods: This study integrated the Stimulus-Organism-Response Theory, Information Ecology Theory, and the Mindsponge Theory to construct a model of factors influencing users' health information discernment abilities in online health communities. A questionnaire was designed based on the variables in the model, and data were collected. Utilizing Structural Equation Modeling (SEM) in conjunction with fuzzy-set Qualitative Comparative Analysis (fsQCA), the study analyzed the complex causal relationships among stimulus factors, user perception, and health information discernment abilities.
    Results: The results revealed that the dimensions of information, information environment, information technology, and information people all positively influenced health information discernment abilities. Four distinct configurations were identified as triggers for users' health information discernment abilities. The core conditions included information source, informational support, technological security, technological facilitation, and perceived risk. It was also observed that information quality and emotional support can act as substitutes for one another, as can informational support and emotional support.
    Discussion: This study provides a new perspective on the factors influencing the health information discernment abilities of online health community users. It offers guidance for online health community information services, information resource construction, and the development of users' health information discernment abilities.
    Keywords:  fsQCA; health information discernment abilities; information ecology theory; online health communities; perceived value
    DOI:  https://doi.org/10.3389/fpubh.2024.1379094
  20. Digit Health. 2024 Jan-Dec;10:20552076241282622
      Objective: The primary aim of this study is to analyze users' health information seeking behaviors related to child fever within online health communities. The findings will serve as a foundation for developing targeted interventions and resources that address the specific information needs related to child fever. Ultimately, this will enhance parents' ability to manage fever in children and improve the quality of communication between healthcare professionals and parents dealing with feverish children.
    Methods: This study employed data crawling to gather Q&A data on childhood fever from online health communities, specifically "haodf.com", between March 15, 2022, and March 15, 2023. A total of 47,781 texts were analyzed using a mixed research approach that combines qualitative text topic analysis with the BERTopic algorithm.
    Results: The health information needs regarding children's fever can be categorized into 6 primary topics and 17 secondary topics. Among them, parents' demand for medication consultation and medical guidance (Topic A) was the highest at 45.40%, followed by information concerning the management of fever symptoms and body temperature in children (Topic B) at 30.35%. A further 13.24% of the data focused on examination recommendations and interpretation of results (Topic C).
    Conclusions: This study proposes a mixed thematic analysis method combining qualitative text thematic analysis with the BERTopic topic model, revealing parents' information-seeking behaviors around children with fever. It emphasizes the challenges parents face in assessing their children's condition and highlights the need for continuous health information support and evidence-based medical knowledge. These findings can inform improvements in medical services, doctor-patient communication, patient information support, and the content of online health communities.
    Keywords:  BERTopic; Health information seeking behaviors; childhood fever; online health community; topic analysis
    DOI:  https://doi.org/10.1177/20552076241282622
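For readers unfamiliar with the topic-modeling step, here is a hedged, minimal BERTopic sketch. The documents are invented English stand-ins for the crawled Chinese Q&A texts, and default model settings are assumed; the paper's exact configuration is not given in the abstract.

```python
# Minimal BERTopic sketch (assumed defaults; requires the bertopic package).
from bertopic import BERTopic

# Invented placeholder questions; the study used 47,781 crawled Q&A texts.
base = [
    "My child has a 39 C fever, which medicine and what dose should I give?",
    "Fever for two days with a rash, should we go to the hospital now?",
    "How do I interpret the blood test results after the fever broke?",
]
docs = [f"{q} (case {i})" for i in range(50) for q in base]

model = BERTopic(language="multilingual")  # the source corpus is Chinese
topics, probs = model.fit_transform(docs)  # cluster texts into topics
print(model.get_topic_info().head())       # topic sizes and keyword labels
```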