bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒10‒20
twenty-six papers selected by
Thomas Krichel, Open Library Society



  1. Heliyon. 2024 Oct 15. 10(19): e38089
      The quality of the indoor lighting environment in the reading spaces of university libraries is essential for students' visual perception, emotional evaluation, and cognitive efficiency. Lighting design that accounts for changes in natural light is an important direction for improving the light environment of reading rooms and meeting students' health requirements. Approaching lighting design under natural light from the perspectives of reading vision, emotion, and cognition, this paper builds a reading lighting environment laboratory and uses subjective evaluation, task performance, and physiological measurements, first analysing the weights of different indicators at different times and then establishing the comprehensive lighting environment evaluation index I together with prediction models for artificial illuminance and desktop illuminance. The experimental results show that visual discomfort and clarity are the key factors in the early and later stages, while emotional indicators become particularly important in the transition phase. The fitted model for the dynamic lighting design scheme during 17:30-20:30 is y = 227753.1746 - 900120.63492x + 1179428.57142x² - 512000x³ (R² = 0.99702); the fitted formula for desktop illuminance over the same period is y = 566.666667 - 197.08995x + 158.73016x² - 37.03704x³ (R² = 0.95238). The results aim to provide a theoretical basis for the design and optimization of dynamic lighting under natural light in library reading rooms.
    Keywords:  Cognitive efficiency; Emotional evaluation; In the reading room; Lighting environment; Visual perception
    DOI:  https://doi.org/10.1016/j.heliyon.2024.e38089
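    A cubic model of this kind can be fitted with an ordinary least-squares polynomial fit. The sketch below, using hypothetical illuminance measurements (x = hours after 17:30, y = desktop illuminance in lux), shows how such coefficients and an R² value could be obtained; it illustrates the fitting procedure only and is not the authors' code or data.

      import numpy as np

      # Hypothetical measurements: x = hours after 17:30, y = desktop illuminance (lux)
      x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
      y = np.array([567.0, 510.0, 472.0, 455.0, 431.0, 396.0, 338.0])

      # Fit a cubic polynomial (degree 3), as in the paper's dynamic-lighting models
      coeffs = np.polyfit(x, y, deg=3)        # returns [a3, a2, a1, a0]
      y_hat = np.polyval(coeffs, x)

      # Coefficient of determination R^2
      ss_res = np.sum((y - y_hat) ** 2)
      ss_tot = np.sum((y - y.mean()) ** 2)
      r_squared = 1 - ss_res / ss_tot

      print("y = {:.3f} + {:.3f}x + {:.3f}x^2 + {:.3f}x^3".format(
          coeffs[3], coeffs[2], coeffs[1], coeffs[0]))
      print(f"R^2 = {r_squared:.5f}")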
  2. BMJ Health Care Inform. 2024 Oct 11. pii: e101017. [Epub ahead of print]31(1):
      BACKGROUND: Research commentaries have the potential for evidence appraisal in emphasising, correcting, shaping and disseminating scientific knowledge. OBJECTIVES: To identify the appropriate bibliographic source for capturing commentary information, this study compares comment data in PubMed and Web of Science (WoS) to assess their applicability in evidence appraisal.
    METHODS: Using COVID-19 as a case study, with over 27 k COVID-19 papers in PubMed as a baseline, we designed a comparative analysis for commented-commenting relations in two databases from the same dataset pool, making a fair and reliable comparison. We constructed comment networks for each database for network structural analysis and compared the characteristics of commentary materials and commented papers from various facets.
    RESULTS: For network comparison, PubMed surpasses WoS with more closed feedback loops, reaching a deeper six-level network compared with WoS' four levels, making PubMed well-suited for evidence appraisal through argument mining. PubMed excels in identifying specialised comments, displaying significantly lower author count (mean, 3.59) and page count (mean, 1.86) than WoS (authors, 4.31, 95% CI of difference of two means = [0.66, 0.79], p<0.001; pages, 2.80, 95% CI of difference of two means = [0.87, 1.01], p<0.001), attributed to PubMed's CICO comment identification algorithm. Commented papers in PubMed also demonstrate higher citations and stronger sentiments, especially significantly elevated disputed rates (PubMed, 24.54%; WoS, 18.8%; baseline, 8.3%; all p<0.0001). Additionally, commented papers in both sources exhibit superior network centrality metrics compared with WoS-only counterparts.
    CONCLUSION: Considering the impact and controversy of commented works, the accuracy of comments and the depth of network interactions, PubMed potentially serves as a valuable resource in evidence appraisal and detection of controversial issues compared with WoS.
    Keywords:  COVID-19; Data Management; Evidence-Based Medicine
    DOI:  https://doi.org/10.1136/bmjhci-2024-101017
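    The network depth reported here can be measured on a directed graph of commented-commenting links. The sketch below, using networkx and a handful of made-up identifiers, illustrates one way such levels and a simple centrality proxy might be computed; it assumes an acyclic comment chain and is not the authors' pipeline.

      import networkx as nx

      # Hypothetical commented-commenting relations: (commenting item, commented item)
      edges = [
          ("C1", "P1"), ("C2", "P1"),   # two comments on paper P1
          ("C3", "C1"), ("C4", "C3"),   # comments on comments (deeper levels)
          ("C5", "C4"),
      ]
      G = nx.DiGraph(edges)

      # Depth of the comment network = nodes on the longest commented-commenting chain.
      # Assumes no cycles; "closed feedback loops" would need nx.simple_cycles(G) first.
      depth = nx.dag_longest_path_length(G) + 1
      print("network depth (levels):", depth)

      # A basic centrality proxy: which item receives the most comments
      in_degree = dict(G.in_degree())
      print("most-commented item:", max(in_degree, key=in_degree.get))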
  3. ALTEX. 2024 Oct 10.
      Systematic reviews (SRs) are an important tool in implementing the 3Rs in preclinical research. With the ever-increasing amount of scientific literature, SRs require increasing time investments. Thus, using the most efficient review tools is essential. Most available tools aid the screening process; tools for data extraction and/or multiple review phases are relatively scarce. Using a single platform for all review phases allows for auto-transfer of references from one phase to the next, which enables work on multiple phases at the same time. We performed succinct formal tests of four multiphase review tools that are free or relatively affordable: Covidence, Eppi, SRDR+ and SYRF. Our tests comprised full-text screening, sham data extraction and discrepancy resolution in the context of parts of a systematic review. Screening was performed as per protocol. Sham data extraction comprised free text, numerical and categorical data. Both reviewers kept a log of their experiences with the platforms throughout. These logs were qualitatively summarized and supplemented with further user experiences. We show the value of all tested tools in the SR process. Which tool is optimal depends on multiple factors, including previous experience with the tool, as well as review type, review questions and review team member enthusiasm.
    Keywords:  data extraction; literature screening; systematic review
    DOI:  https://doi.org/10.14573/altex.2409251
  4. Heliyon. 2024 Oct 15. 10(19): e38448
      This study presents a comprehensive framework to enhance Wikidata as an open and collaborative knowledge graph by integrating Open Biological and Biomedical Ontologies (OBO) and Medical Subject Headings (MeSH) keywords from PubMed publications. The primary data sources include OBO ontologies and MeSH keywords, which were collected and classified using SPARQL queries for RDF knowledge graphs. The semantic alignment between OBO ontologies and Wikidata was evaluated, revealing significant gaps and distorted representations that necessitate both automated and manual interventions for improvement. We employed pointwise mutual information to extract biomedical relations among the 5000 most common MeSH keywords in PubMed, achieving an accuracy of 89.40 % for superclass-based classification and 75.32 % for relation type-based classification. Additionally, Integrated Gradients were utilized to refine the classification by removing irrelevant MeSH qualifiers, enhancing overall efficiency. The framework also explored the use of MeSH keywords to identify PubMed reviews supporting unsupported Wikidata relations, finding that 45.8 % of these relations were not present in PubMed, indicating potential inconsistencies in Wikidata. The contributions of this study include improved methodologies for enriching Wikidata with biomedical information, validated semantic alignments, and efficient classification processes. This work enhances the interoperability and multilingual capabilities of biomedical ontologies and demonstrates the critical role of MeSH keywords in verifying semantic relations, thereby contributing to the robustness and accuracy of collaborative biomedical knowledge graphs.
    Keywords:  Biomedical relation identification; Crowdsourcing; MeSH keywords; Open biological and biomedical ontologies; PubMed; Wikidata
    DOI:  https://doi.org/10.1016/j.heliyon.2024.e38448
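    Pointwise mutual information over MeSH keyword co-occurrence can be computed directly from per-article keyword sets. The sketch below uses a tiny invented article collection to show the calculation; the paper applies the same idea to the 5000 most common MeSH keywords in PubMed.

      import math
      from collections import Counter
      from itertools import combinations

      # Hypothetical per-article MeSH keyword sets
      articles = [
          {"Neoplasms", "Mutation", "Genomics"},
          {"Neoplasms", "Mutation"},
          {"Diabetes Mellitus", "Insulin"},
          {"Diabetes Mellitus", "Insulin", "Genomics"},
          {"Neoplasms", "Genomics"},
      ]

      n_docs = len(articles)
      term_counts = Counter(t for a in articles for t in a)
      pair_counts = Counter(frozenset(p) for a in articles
                            for p in combinations(sorted(a), 2))

      def pmi(t1, t2):
          """Pointwise mutual information of two MeSH terms over the article set."""
          p_xy = pair_counts[frozenset((t1, t2))] / n_docs
          if p_xy == 0:
              return float("-inf")       # terms never co-occur
          p_x, p_y = term_counts[t1] / n_docs, term_counts[t2] / n_docs
          return math.log2(p_xy / (p_x * p_y))

      print(pmi("Neoplasms", "Mutation"))   # positive: strongly associated
      print(pmi("Neoplasms", "Insulin"))    # -inf: no co-occurrence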
  5. Res Synth Methods. 2024 Oct 15.
      While geographic search filters exist, few of them are validated and there are currently none that focus on Germany. We aimed to develop and validate a highly sensitive geographic search filter for MEDLINE (PubMed) that identifies studies about Germany. First, using the relative recall method, we created a gold standard set of studies about Germany, dividing it into 'development' and 'testing' sets. Next, candidate search terms were identified using (i) term frequency analyses in the 'development set' and a random set of MEDLINE records; and (ii) a list of German geographic locations, compiled by our team. Then, we iteratively created the filter, evaluating it against the 'development' and 'testing' sets. To validate the filter, we conducted a number of case studies (CSs) and a simulation study. For this validation we used systematic reviews (SRs) that had included studies about Germany but did not restrict their search strategy geographically. When applying the filter to the original search strategies of the 17 SRs eligible for CSs, the median precision was 2.64% (interquartile range [IQR]: 1.34%-6.88%) versus 0.16% (IQR: 0.10%-0.49%) without the filter. The median number-needed-to-read (NNR) decreased from 625 (IQR: 211-1042) to 38 (IQR: 15-76). The filter achieved 100% sensitivity in 13 CSs, 85.71% in 2 CSs and 87.50% and 80% in the remaining 2 CSs. In a simulation study, the filter demonstrated an overall sensitivity of 97.19% and NNR of 42. The filter reliably identifies studies about Germany, enhancing screening efficiency and can be applied in evidence syntheses focusing on Germany.
    Keywords:  MEDLINE; PubMed; bibliographic databases; geographic search filters; literature searching
    DOI:  https://doi.org/10.1002/jrsm.1763
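    Precision and number-needed-to-read (NNR) are simple ratios over the screening counts, so the effect of adding a filter can be checked with a few lines of arithmetic. The sketch below uses invented counts for one case-study review; sensitivity is the share of relevant records the filter retains.

      # Hypothetical screening counts for one case-study systematic review
      retrieved_without_filter = 12500   # records returned by the original search
      relevant_without_filter = 20       # included studies about Germany among them
      retrieved_with_filter = 760        # records left after adding the geographic filter
      relevant_with_filter = 20          # relevant records still retrieved

      def precision(relevant_hits, total_retrieved):
          return relevant_hits / total_retrieved

      def number_needed_to_read(prec):
          """Average number of records screened to find one relevant study."""
          return 1 / prec

      sensitivity = relevant_with_filter / relevant_without_filter

      p0 = precision(relevant_without_filter, retrieved_without_filter)
      p1 = precision(relevant_with_filter, retrieved_with_filter)
      print(f"without filter: precision {p0:.2%}, NNR {number_needed_to_read(p0):.0f}")
      print(f"with filter:    precision {p1:.2%}, NNR {number_needed_to_read(p1):.0f}")
      print(f"filter sensitivity: {sensitivity:.2%}")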
  6. J Shoulder Elbow Surg. 2024 Oct 15. pii: S1058-2746(24)00723-7. [Epub ahead of print]
      BACKGROUND: Increasingly, patients are turning to artificial intelligence (AI) programs such as ChatGPT to answer medical questions either before or after consulting a physician. Although ChatGPT's popularity implies its potential in improving patient education, concerns exist regarding the validity of the chatbot's responses. Therefore, the objective of this study was to evaluate the quality and accuracy of ChatGPT's answers to commonly asked patient questions surrounding total shoulder arthroplasty (TSA). METHODS: Eleven trusted healthcare websites were searched to compose a list of the 15 most frequently asked patient questions about TSA. Each question was posed to the ChatGPT user interface, with no follow-up questions or opportunity for clarification permitted. Individual response accuracy was graded by three board-certified orthopedic surgeons using an alphabetical grading system (i.e., A-F). Overall grades, descriptive analyses, and commentary were provided for each of the ChatGPT responses.
    RESULTS: Overall, ChatGPT received a cumulative grade of B-. Responses to general/preoperative and postoperative questions each received a grade of B-. ChatGPT's responses adequately addressed patient questions with sound recommendations. However, the chatbot neglected recent research in its responses, resulting in recommendations that warrant professional clarification. The interface deferred specific questions to orthopedic surgeons in 8/15 questions, suggesting an awareness of its own limitations. Moreover, ChatGPT often went beyond the scope of the question after the first two sentences, and generally made errors when attempting to supplement its own response.
    CONCLUSION: Overall, this is the first study to our knowledge to utilize AI to answer the most common patient questions surrounding TSA. ChatGPT achieved an overall grade of B-. Ultimately, while AI is an attractive tool for initial patient inquiries, at this time it cannot provide responses to TSA-specific questions that can substitute for the knowledge of an orthopedic surgeon.
    Keywords:  ChatGPT; Total shoulder replacement; anatomic; artificial intelligence; patient education; reverse
    DOI:  https://doi.org/10.1016/j.jse.2024.08.025
  7. BMJ Open Ophthalmol. 2024 Oct 17. pii: e001824. [Epub ahead of print]9(1):
      OBJECTIVE: To conduct a head-to-head comparative analysis of cataract surgery patient education material generated by Chat Generative Pre-trained Transformer (ChatGPT-4) and Google Bard. METHODS AND ANALYSIS: 98 frequently asked questions on cataract surgery in English were taken in November 2023 from 5 trustworthy online patient information resources. 59 of these were curated (20 augmented for clarity and 39 duplicates excluded) and categorised into 3 domains: condition (n=15), preparation for surgery (n=21) and recovery after surgery (n=23). They were formulated into input prompts with 'prompt engineering'. Using the Patient Education Materials Assessment Tool-Printable (PEMAT-P) Auto-Scoring Form, four ophthalmologists independently graded ChatGPT-4 and Google Bard responses. The readability of responses was evaluated using a Flesch-Kincaid calculator. Responses were also subjectively examined for any inaccurate or harmful information.
    RESULTS: Google Bard had a higher mean overall Flesch-Kincaid Level (8.02) compared with ChatGPT-4 (5.75) (p<0.001), also noted across all three domains. ChatGPT-4 had a higher overall PEMAT-P understandability score (85.8%) in comparison to Google Bard (80.9%) (p<0.001), which was also noted in the 'preparation for cataract surgery' (85.2% vs 75.7%; p<0.001) and 'recovery after cataract surgery' (86.5% vs 82.3%; p=0.004) domains. There was no statistically significant difference in overall (42.5% vs 44.2%; p=0.344) or individual domain actionability scores (p>0.10). None of the generated material contained dangerous information.
    CONCLUSION: In comparison to Google Bard, ChatGPT-4 fared better overall, scoring higher on the PEMAT-P understandability scale and exhibiting more faithfulness to the prompt engineering instruction. Since input prompts might vary from real-world patient searches, follow-up studies with patient participation are required.
    Keywords:  Cataract; Medical Education; Treatment Surgery
    DOI:  https://doi.org/10.1136/bmjophth-2024-001824
  8. Cureus. 2024 Oct;16(10): e71691
      Introduction It is now commonplace for patients to consult the internet with health-related questions. Unfortunately, the quality of information provided to them online is highly variable. Ensuring that patients get high-quality, reliable information is essential for all pathologies. Gastric cancer (GC), with its often subtle early symptoms and signs, is one such pathology where early identification is crucial. Ensuring high-quality information availability online for GC is thus essential to increasing rates of early detection. Aims This study aimed to assess the quality and readability of information posted on websites related to GC. Materials and methods We applied the search term "gastric cancer" or "stomach cancer" to the top three search engines, namely Google, Yahoo, and Bing. Using predefined inclusion and exclusion criteria, we identified 20 unique websites posting information related to gastric cancer (GC). We then assessed the quality and readability of the information posted on these websites. We used recognized tools to complete these assessments, including the JAMA benchmark criteria, the DISCERN tool, the Flesch Reading Ease score (FRES), and the Flesch-Kincaid Grade Level (FKGL). We also developed and used a novel GC-specific content assessment tool. Furthermore, we assessed whether or not each website was awarded the Health on the Internet Seal of Approval. Results The average JAMA score was 1.55, with none of the twenty unique websites scoring the maximum 4 points. The average DISCERN score was 54.8 (68.5%), with no website achieving the maximum of 80. The HON seal was present in only six websites (30%). The average GCSCS score was 11, with only five websites achieving a maximum score of 13 (25%). The average FRES and FKGL were 52.7 and 9.7, respectively. Conclusion Our study underscores the critical need for more high-quality, reliable information about GC online. We also emphasize the importance of ensuring this information is comprehensible to most patients, as it directly impacts their health outcomes.
    Keywords:  gastric cancer; health information; health literacy; health on the internet; human factors
    DOI:  https://doi.org/10.7759/cureus.71691
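    The FRES and FKGL figures reported above come from fixed formulas over sentence length and syllable counts, so they are easy to approximate in code. The sketch below uses a crude vowel-group syllable heuristic and an invented sample sentence; dedicated readability libraries use more careful syllable estimation.

      import re

      def count_syllables(word):
          # Crude heuristic: count groups of vowels in the word
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def readability(text):
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          wps = len(words) / sentences      # average words per sentence
          spw = syllables / len(words)      # average syllables per word
          fres = 206.835 - 1.015 * wps - 84.6 * spw    # Flesch Reading Ease Score
          fkgl = 0.39 * wps + 11.8 * spw - 15.59       # Flesch-Kincaid Grade Level
          return fres, fkgl

      sample = ("Gastric cancer often causes subtle early symptoms. "
                "See a doctor if you notice persistent stomach pain or weight loss.")
      fres, fkgl = readability(sample)
      print(f"FRES: {fres:.1f}, FKGL: {fkgl:.1f}")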
  9. Neurol Clin Pract. 2025 Feb;15(1): e200366
      Background and Objectives: We evaluated the performance of 3 large language models (LLMs) in generating patient education materials (PEMs) and enhancing the readability of prewritten PEMs on idiopathic intracranial hypertension (IIH). Methods: This cross-sectional comparative study compared 3 LLMs, ChatGPT-3.5, ChatGPT-4, and Google Bard, for their ability to generate PEMs on IIH using 3 prompts. Prompt A (control prompt): "Can you write a patient-targeted health information handout on idiopathic intracranial hypertension that is easily understandable by the average American?", Prompt B (modifier statement + control prompt): "Given patient education materials are recommended to be written at a 6th-grade reading level, using the SMOG readability formula, can you write a patient-targeted health information handout on idiopathic intracranial hypertension that is easily understandable by the average American?", and Prompt C: "Given patient education materials are recommended to be written at a 6th-grade reading level, using the SMOG readability formula, can you rewrite the following text to a 6th-grade reading level: [insert text]." We compared generated and rewritten PEMs, along with the first 20 googled eligible PEMs on IIH, on readability (Simple Measure of Gobbledygook [SMOG] and Flesch-Kincaid Grade Level [FKGL]), quality (DISCERN and Patient Education Materials Assessment tool [PEMAT]), and accuracy (Likert misinformation scale).
    Results: Generated PEMs were of high quality, understandability, and accuracy (median DISCERN score ≥4, PEMAT understandability ≥70%, Likert misinformation scale = 1). Only ChatGPT-4 was able to generate PEMs at the specified 6th-grade reading level (SMOG: 5.5 ± 0.6, FKGL: 5.6 ± 0.7). With Prompt C, only ChatGPT-4 rewrote the original published PEMs to below a 6th-grade reading level without a decrease in quality, understandability, or accuracy (SMOG: 5.6 ± 0.6, FKGL: 5.7 ± 0.8, p < 0.001, DISCERN ≥4, Likert misinformation = 1).
    Discussion: In conclusion, LLMs, particularly ChatGPT-4, can produce high-quality, readable PEMs on IIH. They can also serve as supplementary tools to improve the readability of prewritten PEMs while maintaining quality and accuracy.
    DOI:  https://doi.org/10.1212/CPJ.0000000000200366
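    The SMOG grade used to judge the 6th-grade target is another fixed formula, based on the count of words with three or more syllables. A minimal sketch, again with a rough syllable heuristic and an invented handout excerpt, is shown below; the formula is intended for samples of at least 30 sentences, so short texts give only an approximation.

      import math
      import re

      def count_syllables(word):
          # Rough vowel-group heuristic; readability tools estimate syllables more carefully
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def smog_grade(text):
          """SMOG = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291."""
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          polysyllables = sum(1 for w in re.findall(r"[A-Za-z']+", text)
                              if count_syllables(w) >= 3)
          return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

      handout = ("Idiopathic intracranial hypertension means high pressure around the brain. "
                 "Doctors check your eyes and may order a brain scan. "
                 "Losing weight and taking medicine can lower the pressure.")
      print(f"SMOG grade: {smog_grade(handout):.1f}")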
  10. Natl Med J India. 2024 May-Jun;37(3):pii: 10.25259/NMJI_327_2022. [Epub ahead of print]37(3): 124-130
      Background There are concerns over the reliability and comprehensibility of health-related information on the internet. We analyzed the readability, reliability and quality of online patient education materials obtained from websites associated with chronic low back pain (cLBP). Methods On 26 April 2022, the term 'cLBP' was used to perform a search on Google, and 95 eligible websites were identified. The Flesch Reading Ease Score (FRES) and Gunning Fog (GFOG) index were used to evaluate the readability. The Journal of the American Medical Association (JAMA) score was used to assess the reliability and the Health on the Net Foundation code of conduct (HONcode) was used to assess quality. Results The mean (SD) FRES of the websites reviewed was 55.74 (13.57) (very difficult) and the mean (SD) GFOG was 12.76 (2.8) (very difficult). According to the JAMA scores, 28.4% of the websites had a high reliability rating and 33.7% adhered to the HONcode. Websites of different typologies were found to significantly differ in their reliability and quality scores (p<0.05). Conclusion The reading level required for cLBP-related information on the internet was found to be considerably higher than that recommended by the National Health Institute, and the information had low reliability and poor quality. We believe that online information should have readability appropriate for most readers and must have reliable content that is appropriate to educate the public, particularly for websites that provide patient education material.
    DOI:  https://doi.org/10.25259/NMJI_327_2022
  11. Dent Traumatol. 2024 Oct 17.
      AIM: This study aimed to assess the validity and reliability of AI chatbots, including Bing, ChatGPT 3.5, Google Gemini, and Claude AI, in addressing frequently asked questions (FAQs) related to dental trauma. METHODOLOGY: A set of 30 FAQs was initially formulated by collecting responses from four AI chatbots. A panel comprising expert endodontists and maxillofacial surgeons then refined these to a final selection of 20 questions. Each question was entered into each chatbot three times, generating a total of 240 responses. These responses were evaluated using the Global Quality Score (GQS) on a 5-point Likert scale (5: strongly agree; 4: agree; 3: neutral; 2: disagree; 1: strongly disagree). Any disagreements in scoring were resolved through evidence-based discussions. The validity of the responses was determined by categorizing them as valid or invalid based on two thresholds: a low threshold (scores of ≥ 4 for all three responses) and a high threshold (scores of 5 for all three responses). A chi-squared test was used to compare the validity of the responses between the chatbots. Cronbach's alpha was calculated to assess the reliability by evaluating the consistency of repeated responses from each chatbot.
    CONCLUSION: The results indicate that the Claude AI chatbot demonstrated superior validity and reliability compared to ChatGPT and Google Gemini, whereas Bing was found to be less reliable. These findings underscore the need for authorities to establish strict guidelines to ensure the accuracy of medical information provided by AI chatbots.
    Keywords:  AI chatbots; Artificial Intelligence; Bing; ChatGPT; Claude AI; Dental trauma; Google Gemini; Traumatic dental injury
    DOI:  https://doi.org/10.1111/edt.13000
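    Cronbach's alpha for repeated chatbot answers treats the three repetitions of each question as items and the questions as cases. The sketch below implements the standard formula on invented GQS ratings; it illustrates the consistency check described in the methods rather than the authors' analysis.

      import numpy as np

      def cronbach_alpha(scores):
          """Cronbach's alpha for an (n_questions x n_repetitions) rating matrix."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]                          # repetitions per question
          item_vars = scores.var(axis=0, ddof=1)       # variance of each repetition
          total_var = scores.sum(axis=1).var(ddof=1)   # variance of per-question totals
          return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

      # Hypothetical GQS ratings (1-5) for six questions, each posed three times to one chatbot
      gqs = [
          [5, 5, 4],
          [4, 4, 4],
          [3, 4, 3],
          [5, 5, 5],
          [2, 3, 2],
          [4, 4, 5],
      ]
      print(f"Cronbach's alpha: {cronbach_alpha(gqs):.2f}")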
  12. JMIR Form Res. 2024 Oct 18. 8 e57720
      BACKGROUND: Oral diabetes medications are important for glucose management in people with diabetes. Although there are many health-related videos on Douyin (the Chinese version of TikTok), the quality of information and the effects on user comment attitudes are unclear. OBJECTIVE: The purpose of this study was to analyze the quality of information and user comment attitudes related to oral diabetes medication videos on Douyin.
    METHODS: The key phrase "oral diabetes medications" was used to search Douyin on July 24, 2023, and the final samples included 138 videos. The basic information in the videos and the content of user comments were captured using Python. Each video was assigned a sentiment category based on the predominant positive, neutral, or negative attitude, as analyzed using the Weiciyun website. Two independent raters assessed the video content and information quality using the DISCERN (a tool for assessing health information quality) and PEMAT-A/V (Patient Education Materials Assessment Tool for Audiovisual Materials) instruments.
    RESULTS: Doctors were the main source of the videos (136/138, 98.6%). The overall information quality of the videos was acceptable (median 3, IQR 1). Videos on Douyin showed relatively high understandability (median 75%, IQR 16.6%) but poor actionability (median 66.7%, IQR 48%). Most content on oral diabetes medications on Douyin related to the mechanism of action (75/138, 54.3%), precautions (70/138, 50.7%), and advantages (68/138, 49.3%), with limited content on indications (19/138, 13.8%) and contraindications (14/138, 10.1%). It was found that 10.1% (14/138) of the videos contained misinformation, of which 50% (7/14) were about the method of administration. Regarding user comment attitudes, the majority of videos garnered positive comments (81/138, 58.7%), followed by neutral comments (46/138, 33.3%) and negative comments (11/138, 8%). Multinomial logistic regression revealed 2 factors influencing a positive attitude: user comment count (adjusted odds ratio [OR] 1.00, 95% CI 1.00-1.00; P=.02) and information quality of treatment choices (adjusted OR 1.49, 95% CI 1.09-2.04; P=.01).
    CONCLUSIONS: Despite most videos on Douyin being posted by doctors, with generally acceptable information quality and positive user comment attitudes, some content inaccuracies and poor actionability remain. Users show more positive attitudes toward videos with high-quality information about treatment choices. This study suggests that health care providers should ensure the accuracy and actionability of video content, enhance the information quality of treatment choices of oral diabetes medications to foster positive user attitudes, help users access accurate health information, and promote medication adherence.
    Keywords:  Douyin; diabetes; information quality; oral diabetes medication; user comment attitude; video analysis
    DOI:  https://doi.org/10.2196/57720
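    Adjusted odds ratios from a multinomial model of comment attitude can be obtained by exponentiating the fitted coefficients. The sketch below simulates video-level data (comment count and a DISCERN treatment-choices score) purely for illustration and fits statsmodels' MNLogit; the variable names, effect sizes, and data are invented, not taken from the study.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Simulated video-level data (illustration only)
      rng = np.random.default_rng(0)
      n = 138
      df = pd.DataFrame({
          "comments": rng.integers(0, 2000, n),
          "treatment_quality": rng.integers(1, 6, n),   # DISCERN "treatment choices" item, 1-5
      })
      # Outcome: 0 = negative, 1 = neutral, 2 = positive, loosely tied to the predictors
      logit = 0.001 * df["comments"] + 0.4 * df["treatment_quality"] - 2
      p_pos = 1 / (1 + np.exp(-logit))
      df["attitude"] = np.where(rng.random(n) < p_pos, 2, rng.integers(0, 2, n))

      X = sm.add_constant(df[["comments", "treatment_quality"]])
      fit = sm.MNLogit(df["attitude"], X).fit(disp=False)

      # Exponentiated coefficients = adjusted odds ratios vs. the reference category
      print(np.exp(fit.params))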
  13. Ocul Immunol Inflamm. 2024 Oct 14. 1-7
      PURPOSE: Birdshot uveitis is a rare ophthalmic condition that can be challenging to control. The readability of online patient resources may impact the management of patients with birdshot. Thus, we examined the readability of online patient resources and identified differences in readability among sources and sections of websites. METHODS: We queried 3 search engines (Google, Yahoo, Bing) for search results based on a series of terms related to birdshot uveitis. One hundred and twenty results were retrieved and 17 articles were assessed for readability analysis using validated readability and grade-level metrics. Articles were scored based on their entire textual content and, when feasible, also based on sections (e.g. background, diagnosis, treatment). Statistical analyses were conducted using ANOVA and Tukey's honestly significant difference.
    RESULTS: The websites analyzed were from hospitals and academic centers (5), private practices (3), patient advocacy organizations (4), and other non-profits (5). On average, online patient resources are too difficult to read, with readability scores and grade levels ranging from late high school to college graduate. Articles written by non-profits other than advocacy organizations had an average of 6.5% more complex words than articles written by hospitals and academic centers (p < 0.05). Multiple metrics revealed that the treatment sections were less readable than the causes and symptoms sections.
    CONCLUSION: The readability of online patient resources for birdshot far exceeds reading levels recommended by the AMA, NIH, and patient safety organizations. Efforts should be made to improve the readability of patient education materials and patient understanding of their disease.
    Keywords:  Birdshot uveitis; online patient resources; patient education; readability
    DOI:  https://doi.org/10.1080/09273948.2024.2413904
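    The ANOVA-plus-Tukey comparison of readability across website sections can be run with scipy and statsmodels. The sketch below uses invented grade-level scores for three sections; the group labels and values are placeholders, not the study's data.

      import numpy as np
      from scipy import stats
      from statsmodels.stats.multicomp import pairwise_tukeyhsd

      # Hypothetical grade-level scores by article section
      background = [12.1, 13.4, 11.8, 12.9, 13.0, 12.2]
      diagnosis = [13.5, 14.1, 12.9, 13.8, 14.4, 13.2]
      treatment = [15.0, 14.6, 15.8, 14.9, 16.1, 15.3]

      # One-way ANOVA across sections
      f_stat, p_value = stats.f_oneway(background, diagnosis, treatment)
      print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

      # Tukey's honestly significant difference for pairwise comparisons
      scores = np.concatenate([background, diagnosis, treatment])
      labels = ["background"] * 6 + ["diagnosis"] * 6 + ["treatment"] * 6
      print(pairwise_tukeyhsd(scores, labels, alpha=0.05))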
  14. J Pediatr Nurs. 2024 Oct 16. pii: S0882-5963(24)00367-1. [Epub ahead of print]
      AIM: This study aimed to analyze the accuracy, quality, and reliability of the content of YouTube videos on safe sleep for infants in relation to the safe sleep recommendations from the American Academy of Pediatrics (AAP). METHODS: The research was conducted by searching the video-sharing platform YouTube for the keywords "safe sleep." The videos were subjected to a review and evaluation process conducted by two independent reviewers. The modified DISCERN and Global Quality Scale (GQS) were employed to assess the quality and reliability of the videos. The content of the videos was evaluated using an eight-item checklist prepared by the researchers in accordance with the recommendations of the AAP. The Kruskal-Wallis-H, Mann-Whitney U, and Pearson correlation analyses were employed for the purpose of data analysis. All statistical data were deemed significant at the 0.05 level.
    RESULTS: The 100 most relevant videos were viewed, and 85 videos that met the inclusion criteria were subjected to analysis. The mean values for the quality and reliability of the videos were 2.98 for the modified DISCERN score and 3.26 for the GQS. The mean value for the total checklist score was 4.78 out of 8. As indicated by the checklist developed in this study for the assessment of safe sleep video content, four of the eight items were present in over 80 % of the videos. The remaining four items were present in less than 42 % of the videos. A strong correlation was observed between the total score on the checklist and the modified DISCERN score (r = 0.915, p < 0.001) and the GQS (r = 0.918, p < 0.001).
    CONCLUSION: The evidence presented in this study indicates that improvements are needed in the quality and reliability of content on safe sleep practices for infants on YouTube.
    Keywords:  E-health; Infant; Safe sleep; YouTube
    DOI:  https://doi.org/10.1016/j.pedn.2024.10.007
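    The reported correlation between the AAP-based checklist and the quality scores is a plain Pearson correlation, which scipy computes directly. The sketch below uses invented per-video scores to show the call; the values are placeholders.

      from scipy import stats

      # Hypothetical per-video scores: AAP checklist total (0-8) vs. modified DISCERN (1-5)
      checklist = [7, 3, 5, 8, 2, 6, 4, 7, 5, 3]
      discern = [4, 2, 3, 5, 1, 4, 3, 4, 3, 2]

      r, p = stats.pearsonr(checklist, discern)
      print(f"Pearson r = {r:.3f}, p = {p:.4f}")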
  15. Disabil Health J. 2024 Oct 12. pii: S1936-6574(24)00168-7. [Epub ahead of print] 101719
      BACKGROUND: In the digital age, social media platforms such as YouTube have become significant channels for disseminating health information, including content related to autism spectrum disorder (ASD). The quality and reliability of this information, especially when produced by healthcare professionals, are crucial for public health education and promotion. This study aims to analyze the content of Portuguese-language videos about the treatment of ASD on YouTube, produced by healthcare providers from 2019 to 2023, assessing their quality and alignment with evidence-based practices. METHODS: A qualitative exploratory descriptive approach was used, with content analysis based on Bardin's method. A total of 41 videos were selected using keywords related to ASD. Transcriptions were analyzed for discussions on treatment approaches, best practices, and professional recommendations according to DSM-V and ICD-10 guidelines. The quality of information was assessed using the DISCERN questionnaire.
    RESULTS: The analysis revealed significant variability in the quality of the information. Videos were categorized into four quality groups based on DISCERN scores: good (n = 6), moderate (n = 11), poor (n = 20), and very poor (n = 4). Good quality videos had the highest engagement metrics and overall quality scores. Common themes identified included defining and understanding ASD, ABA interventions and strategies, family and social impact, skills development, and challenges and solutions.
    CONCLUSION: While some videos provided accurate, evidence-based information, a substantial portion did not meet minimum quality criteria. This highlights the need for improved mechanisms to ensure the dissemination of reliable health information on social media platforms.
    Keywords:  Autism spectrum disorder; Content analysis; DISCERN; Health information quality; Healthcare providers; Social media; YouTube
    DOI:  https://doi.org/10.1016/j.dhjo.2024.101719
  16. J Pediatr Nurs. 2024 Oct 16. pii: S0882-5963(24)00363-4. [Epub ahead of print]
      PURPOSE: The purpose of this study was to evaluate the quality, content, and reliability of YouTube videos that address ostomy bag change techniques in children. As digital platforms are increasingly used for health-related information, especially for those caring for pediatric ostomy patients, this study aims to identify the strengths and limitations of available online resources. DESIGN: A descriptive, retrospective, and cross-sectional research design was used to evaluate YouTube videos focused on pediatric ostomy bag change techniques.
    SUBJECTS AND SETTING: The study included a total of 33 YouTube videos identified through searches conducted between May 3 and May 30, 2024. Videos included infants, children, and adolescents and were selected based on their relevance to pediatric double pouch ostomy care.
    METHODS: Videos were scored using the modified DISCERN score and the Global Quality Scale (GQS) to assess video quality and reliability. A checklist based on established ostomy care guidelines was used for content analysis and identification of common procedural errors. The view rates, video/likes ratio, and popularity of the videos were calculated as the video power index. Data were analyzed using SPSS 27 and statistical significance was determined with a p-value of less than 0.05.
    RESULTS: The analysis showed that 54.5 % of the videos were uploaded by independent publishers and 45.5 % by healthcare institutions. Videos aimed at caregivers were the most common (66.7 %). The mean number of views was 24,026.57, with a mean modified DISCERN score of 2.53 and a GQS score of 2.80. There was also a positive correlation between video length and quality scores. Significant differences in video quality were found between those published by healthcare organizations and independent publishers, with healthcare organization videos generally scoring higher. The most common errors in the videos included inadequate stoma coverage and improper disposal procedures.
    CONCLUSIONS: The study shows that there is significant variability in the quality and reliability of YouTube videos on how to change an ostomy pouch in children. Compared to videos produced by independent publishers, videos produced by healthcare institutions had higher quality and reliability. The findings underscore the need for improved educational resources and quality control in digital platforms in order to better support the caregivers of pediatric ostomy patients.
    Keywords:  Pediatric ostomy pouch changing techniques; Pediatric stoma care; Video quality assessment; YouTube videos
    DOI:  https://doi.org/10.1016/j.pedn.2024.10.002
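    The abstract does not give the exact video power index formula the authors used; one formulation common in video-quality studies combines a daily view rate with the like ratio, as sketched below on invented metadata. Treat this definition as an assumption, not as the study's method.

      from datetime import date

      def video_power_index(views, likes, dislikes, upload_date, accessed=date(2024, 5, 30)):
          """One common formulation: (views per day) x (percentage of likes) / 100.
          The exact definition used in the study may differ."""
          days_online = max(1, (accessed - upload_date).days)
          view_rate = views / days_online
          like_ratio = 100 * likes / max(1, likes + dislikes)
          return view_rate * like_ratio / 100

      # Hypothetical video metadata
      vpi = video_power_index(views=24026, likes=310, dislikes=12,
                              upload_date=date(2023, 9, 1))
      print(f"video power index: {vpi:.1f}")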
  17. PLoS One. 2024 ;19(10): e0310508
      BACKGROUND: Recently, there has been an increase in scabies cases among young children in low- and middle-income countries. With the rise of online health information, platforms such as YouTube have become popular sources of disease-related content, but the accuracy of this information remains a concern. AIM: This study evaluates the reliability and quality of YouTube videos concerning scabies in children to address the lack of research in this area.
    MATERIALS AND METHODS: A cross-sectional analysis was conducted on April 1, 2024, reviewing the first 200 relevant YouTube videos with the search terms "scabies" and "scabies in children." Videos were assessed using modified DISCERN (mDISCERN), Global Quality Score (GQS), and Journal of the American Medical Association (JAMA) scoring systems. Statistical analysis included descriptive statistics, Kruskal-Wallis tests, and Spearman correlation analysis.
    RESULTS: Out of 200 videos, 107 met the inclusion criteria. The average mDISCERN score was 2.17, GQS was 2.63, and JAMA was 2.05, indicating generally poor quality. Videos by patients had the highest quality scores, while those from academic institutions had the highest JAMA scores. Longer videos with higher view counts were associated with better quality.
    CONCLUSION: This study reveals that the majority of YouTube videos on scabies in children are of low quality. There is a need for healthcare professionals to produce more accurate and reliable content to improve the quality of information available on YouTube. Further research should focus on enhancing the quality of health information on digital platforms.
    DOI:  https://doi.org/10.1371/journal.pone.0310508
  18. BMC Public Health. 2024 Oct 18. 24(1): 2880
      BACKGROUND: Osteoporosis is currently considered the most common bone disease in the world and is characterized by low bone mass, deterioration of the bone tissue microstructure, and decreased bone strength. With the increasing popularity of smartphones and short videos, many patients search for various types of health information through social media, such as short videos. As one of China's short video giants, TikTok has played a significant role in spreading health information. We found that there are many videos about osteoporosis on TikTok; however, the quality of these short videos has not yet been evaluated. OBJECTIVE: The purpose of this study was to evaluate the information quality of osteoporosis videos on the domestic TikTok platform.
    METHODS: We retrieved and screened 100 videos about osteoporosis from TikTok, extracted the basic information, encoded the video content, and recorded the source of each video. Two independent raters evaluated the information quality of each video via the DISCERN rating scale.
    RESULT: The videos were divided into three groups according to their source: medical personnel, science communicators, and news media, with medical personnel posting the most videos. The video content was divided into 7 groups, namely, disease prevention, disease diagnosis, disease symptoms, disease overview, lifestyle, drug knowledge, and drug treatment, with the most videos related to disease overview. The average DISCERN score of the videos was 37.69 (SD = 6.78), mainly within the 'poor' (54/100, 54%) and 'appropriate' (43/100, 43%) rating ranges, with overall quality being low. Further analysis revealed a positive correlation between the number of shares, comments, likes, and favorites, and a positive correlation between the DISCERN score and the number of shares and favorites.
    CONCLUSION: The overall quality of videos concerning osteoporosis on TikTok is low, although quality varies significantly across different sources. Viewers should be selective and cautious when watching videos about osteoporosis on TikTok.
    Keywords:  Information quality; Internet; Osteoporosis; Social media; TikTok
    DOI:  https://doi.org/10.1186/s12889-024-20375-2
  19. Australas Psychiatry. 2024 Oct 18. 10398562241291956
      OBJECTIVES: TikTok is being increasingly used as an easily accessible source of information on Attention-Deficit/Hyperactivity Disorder (ADHD). This study aimed to assess the quality of information on ADHD screening or self-testing in TikTok videos with the hashtag #adhdtest and the engagement of these videos with their viewers. METHOD: The content of the top 50 TikTok videos with the hashtag #ADHDtest was analyzed cross-sectionally and categorized as "useful" or "misleading" after comparison of their content with the "Adult ADHD Self-Report Scale" (ASRS-v1.1). Videos were categorized as "useful" if their content covered at least 4 of the 6 questions on the ASRS-v1.1 screener. The level of engagement of each video was quantified by measuring the number of times the video was liked, commented on, or added to favorites. Descriptive statistics were used for analysis.
    RESULT: Out of the 50 included #adhdtest videos, 92% (n = 46) were misleading. Furthermore, useful videos had minimal engagement, with only 4% of the total likes, 1% of the total comments, and 7% of the total favorites.
    CONCLUSION: There is misleading information related to adult ADHD screening and testing on TikTok. There is a need to address this misinformation.
    Keywords:  ADHD misinformation; ADHD social media; ADHD test; TikTok; social media misinformation
    DOI:  https://doi.org/10.1177/10398562241291956
  20. JMIR Form Res. 2024 Oct 18. 8 e54827
      BACKGROUND: Stroke is a leading cause of death and disability worldwide. As health resources become digitized, it is important to understand how people who have experienced stroke engage with online health information. This understanding will aid in guiding the development and dissemination of online resources to support people after stroke. OBJECTIVE: This study aims to explore the online health information-seeking behaviors of people who have experienced stroke and any related barriers or navigational needs.
    METHODS: Purposeful sampling was used to recruit participants via email between March and November 2022. The sampling was done from an existing cohort of Australian stroke survivors who had previously participated in a randomized controlled trial of an online secondary prevention program. The cohort consisted of people with low levels of disability. Semistructured one-on-one interviews were conducted via phone or video calls. These calls were audio recorded and transcribed verbatim. The data were analyzed by 2 independent coders using a combined inductive-deductive approach. In the deductive analysis, responses were mapped to an online health information-seeking behavior framework. Inductive thematic analysis was used to analyze the remaining raw data that did not fit within the deductive theoretical framework.
    RESULTS: A sample of 15 relatively independent, high-functioning people who had experienced stroke from 4 Australian states, aged between 29 and 80 years, completed the interview. A broad range of online health information-seeking behaviors were identified, with most relating to participants wanting to be more informed about medical conditions and symptoms of their own or of a family member or a friend. Barriers included limited eHealth literacy and too much generalization of online information. Online resources were described to be more appealing and more accessible if they were high-quality, trustworthy, easy to use, and suggested by health care providers or trusted family members and friends. Across the interviews, there was an underlying theme of disconnection that appeared to impact not only the participants' online health information seeking, but their overall experience after stroke. These responses were grouped into 3 interrelated subthemes: disconnection from conventional stroke narratives and resources, disconnection from the continuing significance of stroke, and disconnection from long-term supports.
    CONCLUSIONS: People who have experienced stroke actively engage with the internet to search for health information with varying levels of confidence. The underlying theme of disconnection identified in the interviews highlights the need for a more comprehensive and sustained framework for support after stroke beyond the initial recovery phase. Future research should explore the development of tailored and relatable internet-based resources, improved communication and education about the diversity of stroke experiences and ongoing risks, and increased opportunities for long-term support.
    Keywords:  consumer health information; digital health; eHealth; health-risk behaviors; information-seeking behavior; long-term care; mobile phone; online health information seeking; qualitative research; stroke
    DOI:  https://doi.org/10.2196/54827
  21. Otolaryngol Head Neck Surg. 2024 Oct 16.
      OBJECTIVE: No studies describe what patients search for online in relation to retrograde cricopharyngeal dysfunction (RCPD). Our objectives were to describe the Google search volume for RCPD, identify the most common queries related to RCPD, and evaluate the available online resources. STUDY DESIGN: Observational.
    SETTING: Google Database.
    METHODS: Using Ahrefs and Search Response, Google search volume for RCPD and "People Also Ask" (PAA) questions were documented. PAA questions were categorized based on intent, and the websites were categorized by source. The quality and readability of the sources were determined using the Journal of the American Medical Association (JAMA) benchmark criteria, Flesch Reading Ease score, and Flesch-Kincaid Grade Level.
    RESULTS: Search volume for RCPD-related content has continually increased since 2021, with a combined average volume of 6287 searches per month. Most PAA questions were related to technical details (61.07%) and treatments (32.06%) for RCPD. Websites provided to answer these questions were most often from academic (25.95%) and commercial (22.14%) sources. None of the sources met the criteria for universal readability, and only 15% met all quality metrics set forth by JAMA.
    CONCLUSION: Interest in RCPD is at an all-time high, with information related to its diagnosis and treatment most popular among Google users. Significantly, none of the resources provided by Google met the criteria for universal readability, preventing many patients from fully comprehending the information presented. Future work should aim to address questions related to RCPD in a suitable way for all patient demographics.
    Keywords:  Internet; RCPD; quality; readability; retrograde cricopharyngeus dysfunction; search engine
    DOI:  https://doi.org/10.1002/ohn.1022
  22. Nutrients. 2024 Sep 30. pii: 3314. [Epub ahead of print]16(19):
      INTRODUCTION: Therapeutic nutrition plays an imperative role during a patient's hospital course. There is a tremendous body of literature that emphasizes the systematic delivery of information regarding hospital nutrition diets. A major component of delivering healthcare information is the principle of providing quality healthcare information, but this has not yet been investigated for hospital nutrition diets. This study aimed to evaluate the comprehension and readability of patient education materials regarding therapeutic hospital diets. METHODOLOGY: Publicly available questions regarding hospital nutrition diets were collected and categorized according to Rothwell's Classification of Questions. Each question was extracted online along with its associated digital article, and these articles were analysed for readability scores.
    RESULTS: This study's findings reveal that patient education materials for most hospital diets do not meet the recommended grade-reading levels.
    CONCLUSIONS: This underscores the need for healthcare providers to enhance patient education regarding hospital diets. The prevalence of "Fact" questions showcases the importance of clearly explaining diets and dietary restrictions to patients.
    Keywords:  digital education; hospital diets; nutrition; patient education; readability
    DOI:  https://doi.org/10.3390/nu16193314
  23. J Med Internet Res. 2024 Oct 18. 26 e54135
      BACKGROUND: The internet has become an increasingly vital platform for health-related information, especially in upper-middle-income countries such as China. While previous research has suggested that online health information seeking (OHIS) can significantly impact individuals' engagement in health behaviors, most research has focused on patient-centered health communication. OBJECTIVE: This study aims to examine how OHIS influences health behavior engagement among Chinese internet users, focusing on the roles of eHealth literacy and perceived information quality in these relationships.
    METHODS: An online cross-sectional survey was conducted in November 2021 among 10,000 Chinese internet users, using quota sampling based on sex, age, and urban and rural residence, in line with the 48th Statistical Report on Internet Development of China. Nonparametric tests were used to examine the differences in eHealth literacy across sociodemographic groups. Partial correlation analysis and stepwise linear regression were conducted to test the associations between key variables. Confirmatory factor analysis and structural equation modeling were conducted to test the hypotheses.
    RESULTS: Our study identified significant disparities in functional and critical eHealth literacy between urban and rural residents across age groups, income levels, education backgrounds, and health conditions (all P<.001). In terms of sex and regional differences, we found higher functional literacy among female users than male users, and critical literacy varied significantly across different regions. The proposed structural model showed excellent fit (χ²(404)=4183.6, χ²/df=10.4, P<.001; root mean square error of approximation value of 0.031, 95% CI 0.030-0.031; standardized root mean square residual value of 0.029; and comparative fit index value of 0.955), highlighting reciprocal associations between 2 types of eHealth literacy and OHIS. Participants' functional eHealth literacy, critical eHealth literacy, and OHIS have positive impacts on their health behavioral engagement. Perceived information quality was found to mediate the influence of OHIS on health behavior (b=0.003, 95% CI 0.002-0.003; P<.001).
    CONCLUSIONS: The study revealed the pathways linking sociodemographic factors, eHealth literacy, OHIS, and perceived information quality and how they together influenced health outcomes. The findings underscore the significance of enhancing eHealth literacy and improving information quality to promote better health outcomes among Chinese internet users.
    Keywords:  China; eHealth literacy; health behavior; health promotion; mobile phone; online health information seeking; perceived information quality
    DOI:  https://doi.org/10.2196/54135
  24. Psychol Psychother. 2024 Oct 14.
      BACKGROUND: Many young people (YP) struggle with their mental health and look online for help. To capitalise on their digital presence, we need to better understand how and where they seek information online and what they think of what they find. METHOD: We recruited 24 YP (aged 13-18 years). Online interviews were co-conducted by research team members and trained young researchers. We presented a persona with depression symptoms and asked about potential sources of information/support they might seek. They were also asked to think aloud while searching online and reviewing mental health resources (NHS, Young Minds). We used reflexive thematic analysis.
    RESULTS: Analysis generated four themes: (1) the online help-seeking process, showcasing where YP look for information and why; (2) the mismatch between the information YP expected to find and the reality; (3) the strategies YP employed to determine a source's trust and credibility and (4) individual differences that can influence help-seeking.
    CONCLUSION: Participants initiated their online search by Googling symptoms. They trusted NHS websites for basic medical information, while charities provided detailed content. Despite scepticism about content, social media offered validation. Online resources should prioritise visual appeal, user-friendliness, age-appropriate and personalised content and peer insights. Codesign is imperative to ensure high-quality, impactful research.
    Keywords:  adolescents; coproduction; depression symptoms; early help; mental health; online help‐seeking; qualitative; think aloud
    DOI:  https://doi.org/10.1111/papt.12550
  25. J Adv Nurs. 2024 Oct 18.
      AIM: To develop and test the validity of an artificial intelligence-assisted patient education material for ostomy patients. DESIGN: A methodological study.
    METHODS: The study was carried out in two main stages and five steps: (1) determining the information needs of ostomy patients, (2) creating educational content, (3) converting the educational content into patient education material, (4) validation of patient education material based on expert review and (5) measuring the readability of the patient education material. We used ChatGPT 4.0 to determine the information needs and create patient education material content, and Publuu Online Flipbook Maker was used to convert the educational content into patient education material. Understandability and actionability scores were assessed using the Patient Education Materials Assessment Tool submitted to 10 expert reviewers. The inter-rater reliability of the tool was determined via the intraclass correlation coefficient. Readability was analysed using the Flesch-Kincaid Grade Level, Gunning Fog Index and Simple Measure of Gobbledygook formula.
    RESULTS: The mean Patient Education Materials Assessment Tool understandability score of the patient education material was 81.91%, and the mean Patient Education Materials Assessment Tool actionability score was 85.33%. The scores for the readability indicators were calculated to be Flesch-Kincaid Grade Level: 8.53, Gunning Fog: 10.9 and Simple Measure of Gobbledygook: 7.99.
    CONCLUSIONS: The AI-assisted patient education material for ostomy patients provided accurate information with understandable and actionable responses to patients, but is at a high reading level for patients.
    IMPLICATIONS FOR THE PROFESSION AND PATIENT CARE: Artificial intelligence-assisted patient education materials can significantly increase patient information rates in the health system owing to their ease of use in practice. Artificial intelligence is currently not a stand-alone option for creating patient education materials, and its impact on patients is not fully known.
    REPORTING METHOD: The study followed the STROBE checklist guidelines.
    PATIENT OR PUBLIC CONTRIBUTION: No patient or public contributions.
    Keywords:  artificial intelligence; methodological study; nursing; ostomy; patient education; patient education handout; readability; validity
    DOI:  https://doi.org/10.1111/jan.16542
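    PEMAT understandability and actionability are reported as percentages: each rated item scores 1 (agree) or 0 (disagree), not-applicable items are dropped, and the score is the share of points earned. The sketch below shows that calculation on hypothetical ratings from a single reviewer; the item counts and values are invented.

      def pemat_score(item_ratings):
          """Percentage PEMAT score: 1 = agree, 0 = disagree, None = not applicable."""
          applicable = [r for r in item_ratings if r is not None]
          return 100 * sum(applicable) / len(applicable)

      # Hypothetical ratings from one expert reviewer of the ostomy education material
      understandability = [1, 1, 1, 0, 1, 1, None, 1, 1, 1, 0, 1, 1]
      actionability = [1, 1, 0, 1, None, 1, 1]

      print(f"understandability: {pemat_score(understandability):.1f}%")
      print(f"actionability: {pemat_score(actionability):.1f}%")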