bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒08‒25
eighteen papers selected by
Thomas Krichel, Open Library Society



  1. PLoS One. 2024;19(8): e0307699
      In the pursuit of digital transformation, college libraries have increasingly embraced the promotion of digital reading as a critical initiative. While numerous studies have delved into the strategies employed by college libraries in their digital transformation endeavors, there remains a lack of research elucidating the direct influence of digital reading on reader service satisfaction within these institutions. Drawing upon the service quality model, this paper addresses this gap by examining the multifaceted influence of digital reading on reader service satisfaction in college libraries. Considering the various dimensions of digital reading services, this study employs the fuzzy-set qualitative comparative analysis (fsQCA) approach to uncover specific combinations of conditions that contribute to heightened levels of reader service satisfaction. The results reveal three distinct configurations that explain a high level of reader service satisfaction. By elucidating these critical relationships, this research not only contributes to scholarship on the evolving role of college libraries but also offers practical insights for college libraries aspiring to realize digital transformation by promoting digital reading.
    DOI:  https://doi.org/10.1371/journal.pone.0307699
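The abstract above does not spell out the fsQCA mechanics. For readers new to the method, the minimal sketch below computes the two standard set-theoretic measures used to evaluate a configuration, consistency and coverage (Ragin's formulas); the membership scores are invented for illustration and are not the paper's data.

```python
# Minimal sketch of fsQCA consistency and coverage (Ragin's formulas).
# Membership scores below are illustrative, not data from the paper.

def consistency(x, y):
    """Consistency of "X is sufficient for Y": sum(min(xi, yi)) / sum(xi)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def coverage(x, y):
    """Coverage of Y by X: sum(min(xi, yi)) / sum(yi)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Hypothetical fuzzy memberships in one configuration (e.g., "rich digital
# resources AND responsive service") and in the outcome (high satisfaction).
configuration = [0.8, 0.6, 0.9, 0.3, 0.7]
satisfaction  = [0.9, 0.7, 0.8, 0.4, 0.9]

print(f"consistency = {consistency(configuration, satisfaction):.2f}")  # 0.97
print(f"coverage    = {coverage(configuration, satisfaction):.2f}")     # 0.86
```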
  2. Stud Health Technol Inform. 2024 Aug 22. 316 652-653
      This study explores the application of Retrieval-Augmented Generation (RAG) in enhancing medical information retrieval from the PubMed database. By integrating RAG with Large Language Models (LLMs), we aim to improve the accuracy and relevance of medical information provided to healthcare professionals. Our evaluation on a labeled dataset of 1,000 queries demonstrates promising results in answer relevance, while highlighting areas for improvement in groundedness and context relevance.
    Keywords:  LLM; PubMed; Retrieval-Augmented Generation (RAG)
    DOI:  https://doi.org/10.3233/SHTI240498
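The abstract does not describe the pipeline's internals. As a rough, hypothetical illustration of the RAG pattern it names, the sketch below retrieves candidate abstracts with TF-IDF and conditions a placeholder language model on them; the corpus, the generate() stub, and all names are invented, and the paper's actual retriever and model may differ entirely.

```python
# Rough sketch of the RAG pattern: retrieve relevant abstracts, then
# condition an LLM on them. The corpus and generate() are placeholders,
# not the paper's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # stand-in for indexed PubMed abstracts
    "Metformin is a first-line therapy for type 2 diabetes...",
    "Statins reduce LDL cholesterol and cardiovascular risk...",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k abstracts most similar to the query by TF-IDF cosine."""
    vec = TfidfVectorizer().fit(corpus + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def generate(prompt: str) -> str:
    # Placeholder: substitute a real LLM call here.
    return "[LLM answer grounded in the retrieved abstracts]"

query = "What is the first-line drug for type 2 diabetes?"
context = "\n".join(retrieve(query))
print(generate(f"Answer using only this context:\n{context}\n\nQ: {query}"))
```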
  3. Health Info Libr J. 2024 Sep;41(3): 213-215
      Core collections have been produced since 1952 by CILIP's Health Libraries Group (then called the Library Association's Medical Section). Maintained by a Working Group of health librarians based in the UK NHS, higher education and specialist libraries, the collections provide an up-to-date curated list of reliable titles essential to health libraries. The core collections currently cover nursing, midwifery, medicine and dentistry. The newest core collection is being developed in collaboration with the African Hospital Libraries to provide a list of key resources relevant to sub-Saharan Africa. Expressions of interest in helping to develop this latest collection are invited.
    Keywords:  Africa, east; Africa, south; Africa, west; collection development; libraries, health care; libraries, medical; midwifery; nursing
    DOI:  https://doi.org/10.1111/hir.12547
  4. Campbell Syst Rev. 2024 Sep;20(3): e1432
      The search methods used in systematic reviews provide the foundation for establishing the body of literature from which conclusions are drawn and recommendations made. Searches should aim to be comprehensive, and reporting of search methods should be transparent and reproducible. Campbell Collaboration systematic reviews strive to adhere to the best methodological guidance available for this type of searching. The current work aims to assess the conduct and reporting of searches in Campbell Collaboration systematic reviews. Our objectives were to examine how searches are currently conducted in Campbell systematic reviews, how search strategies, search methods and search reporting adhere to the Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) and PRISMA standards, and to identify emerging or novel methods used in searching in Campbell systematic reviews. We also investigated the role of information specialists in Campbell systematic reviews. We handsearched the Campbell Systematic Reviews journal tables of contents from January 2017 to March 2024. We included all systematic reviews published since 2017. We excluded other types of evidence synthesis (e.g., evidence and gap maps), updates to systematic reviews when search methods were not changed from the original pre-2017 review, and systematic reviews that did not conduct their own original searches. We developed a data extraction form based in part on the conduct and reporting items in MECCIR and PRISMA. In addition, we extracted information about the general quality of searches based on the use of Boolean operators, keywords, database syntax and subject headings. Data extraction included information about reporting of sources searched, some aspects of search quality, the use and reporting of supplementary search methods, reporting of the search strategy, the involvement of information specialists, the date of the most recent search, and citation of the Campbell search methods guidance. Items were rated as fully, partially or not conducted or reported. We cross-walked our data extraction items to the 2019 MECCIR standards and 2020 PRISMA guidelines and provide descriptive analyses of the conduct and reporting of searches in Campbell systematic reviews, indicating the level of adherence to standards where applicable. We included 111 Campbell systematic reviews across all coordinating groups published since 2017 up to the search date. Almost all (98%) included reviews searched at least two relevant databases, and all reported the databases searched. All reviews searched grey literature and most (82%) provided a full list of grey literature sources. Detailed information about databases, such as platform and date range coverage, was lacking in 16% and 77% of the reviews, respectively. In terms of search strategies, most used Boolean operators, search syntax and phrase searching correctly, but subject headings in databases with controlled vocabulary were used in only about half of the reviews. Most reviews reported at least one full database search strategy (90%), with 63% providing full search strategies for all databases. Most reviews conducted some supplementary searching, most commonly searching the references of included studies, whereas handsearching of journals and forward citation searching were less commonly reported (51% and 62%, respectively). Twenty-nine percent of reviews involved an information specialist co-author, and about 45% did not mention the involvement of any information specialist. When information specialists were co-authors, there was a concomitant increase in adherence to many reporting and conduct standards and guidelines, including reporting website URLs, reporting methods for forward citation searching, using database syntax correctly and using subject headings. No longitudinal trends in adherence to conduct and reporting standards were found, and the Campbell search methods guidance published in 2017 was cited in only twelve reviews. We also found a median time lag of 20 months between the most recent search and the publication date. In general, the included Campbell systematic reviews searched a wide range of bibliographic databases and grey literature, and conducted at least some supplementary searching, such as searching references of included studies or contacting experts. Reporting of mandatory standards was variable, with some items frequently unreported (e.g., website URLs and database date ranges) and others well reported in most reviews; database search strategies, for example, were reported in detail in most reviews. For grey literature, source names were well reported but search strategies were less so. The findings will be used to identify opportunities for advancing current practices in Campbell reviews through updated guidance, peer review processes, and author training and support.
    Keywords:  Campbell Collaboration; MECCIR; evidence synthesis methods; information retrieval; reporting standards; systematic review
    DOI:  https://doi.org/10.1002/cl2.1432
  5. Appl Ergon. 2024 Aug 16. pii: S0003-6870(24)00144-3. [Epub ahead of print] 121 104367
      With the diversification of Internet uses, online content types have become richer. Alongside organic results, search engine results pages now provide tools to improve information searching and learning. The People also ask (PAA) box is intended to reduce users' cognitive costs by offering easily accessible information. Nevertheless, there has been scant research on how users actually process it, compared with more traditional content types (i.e., organic results and online documents). The present eye-tracking study explored this question by considering the search context (complex lookup task vs. exploratory task) and users' prior domain knowledge (high vs. low). The main results show that users fixated the PAA box and online documents more to achieve exploratory goals, and fixated organic results more to achieve lookup goals. Users with low knowledge processed PAA content at an early stage of their search, in contrast to their counterparts with high knowledge. Given these results, information system developers should diversify PAA content according to search context and users' prior domain knowledge.
    Keywords:  People also ask; Prior domain knowledge; Search context
    DOI:  https://doi.org/10.1016/j.apergo.2024.104367
  6. Bioinformatics. 2024 Aug 22. pii: btae519. [Epub ahead of print]
      MOTIVATION: Integrating information from data sources representing different study designs has the potential to strengthen evidence in population health research. However, this concept of evidence "triangulation" presents a number of challenges for systematically identifying and integrating relevant information. These include the harmonization of heterogeneous evidence with common semantic concepts and properties, as well as the prioritization of the retrieved evidence for triangulation with the question of interest.
    RESULTS: We present ASQ (Annotated Semantic Queries), a natural language query interface to the integrated biomedical entities and epidemiological evidence in EpiGraphDB, which enables users to extract "claims" from a piece of unstructured text and then investigate evidence that could support or contradict the claims, or offer additional information on the query. This approach has the potential to support the rapid review of preprints, grant applications, conference abstracts and articles submitted for peer review. ASQ implements strategies to harmonize biomedical entities in different taxonomies and evidence from different sources, to facilitate evidence triangulation and interpretation.
    AVAILABILITY AND IMPLEMENTATION: ASQ is openly available at https://asq.epigraphdb.org and its source code is available at https://github.com/mrcieu/epigraphdb-asq under GPL-3.0 license.
    SUPPLEMENTARY INFORMATION: Further information can be found in the Supplementary Materials as well as on the ASQ platform via https://asq.epigraphdb.org/docs.
    Keywords:  Annotation; Data mining; information retrieval; knowledge representation; natural language processing; ontology
    DOI:  https://doi.org/10.1093/bioinformatics/btae519
  7. JMIR AI. 2024 Aug 19. 3 e56537
      BACKGROUND: With the rapid evolution of artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT-4 (OpenAI), there is increasing interest in their potential to assist in scholarly tasks, including conducting literature reviews. However, the efficacy of AI-generated reviews compared with traditional human-led approaches remains underexplored.
    OBJECTIVE: This study aims to compare the quality of literature reviews conducted by the ChatGPT-4 model with those conducted by human researchers, focusing on the relational dynamics between physicians and patients.
    METHODS: We included 2 literature reviews on the same topic, namely, exploring factors affecting relational dynamics between physicians and patients in medicolegal contexts. One review used GPT-4 (knowledge last updated in September 2021) and the other was conducted by human researchers. The human review involved a comprehensive literature search using medical subject headings and keywords in Ovid MEDLINE, followed by a thematic analysis of the literature to synthesize information from selected articles. The AI-generated review employed a new prompt engineering approach, with iterative and sequential prompts used to generate results. Comparative analysis was based on qualitative measures such as accuracy, response time, consistency, breadth and depth of knowledge, contextual understanding, and transparency.
    RESULTS: GPT-4 rapidly produced an extensive list of relational factors. The AI model demonstrated an impressive breadth of knowledge but exhibited limited depth and contextual understanding, occasionally producing irrelevant or incorrect information. In comparison, human researchers provided a more nuanced and contextually relevant review. The comparative analysis assessed the reviews on the criteria listed above. While GPT-4 showed advantages in response time and breadth of knowledge, the human-led review excelled in accuracy, depth of knowledge, and contextual understanding.
    CONCLUSIONS: The study suggests that GPT-4, with structured prompt engineering, can be a valuable tool for conducting preliminary literature reviews by providing a broad overview of topics quickly. However, its limitations necessitate careful expert evaluation and refinement, making it an assistant rather than a substitute for human expertise in comprehensive literature reviews. Moreover, this research highlights the potential and limitations of using AI tools like GPT-4 in academic research, particularly in the fields of health services and medical research. It underscores the necessity of combining AI's rapid information retrieval capabilities with human expertise for more accurate and contextually rich scholarly outputs.
    Keywords:  AI; AI vs. human; Chat GPT performance evaluation; OpenAIs; algorithm; algorithms; artificial intelligence; chatGPT; large language models; literature review; literature reviews; literature search; predictive model; predictive models
    DOI:  https://doi.org/10.2196/56537
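The paper's exact prompts are not given in the abstract. A generic sketch of the iterative, sequential prompting style it describes, using the OpenAI Python client, might look as follows; the model name, prompts, and topic framing are illustrative assumptions, not the study's actual protocol.

```python
# Generic sketch of iterative, sequential prompting for a literature
# overview. Prompts and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(history: list[dict], prompt: str) -> str:
    """Send one prompt in an ongoing conversation and record the reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

history = [{"role": "system", "content": "You are a careful research assistant."}]
topic = "factors affecting physician-patient relational dynamics in medicolegal contexts"

# Each step builds on the model's previous answers (sequential prompting).
ask(history, f"List the major factors discussed in the literature on {topic}.")
ask(history, "For each factor, summarize the supporting evidence and note gaps.")
print(ask(history, "Synthesize the above into a structured review outline."))
```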
  8. Medicine (Baltimore). 2024 Aug 16. 103(33): e39305
      There has been no study comprehensively evaluating the readability and quality of "palliative care" information provided by the artificial intelligence (AI) chatbots ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity®. Our study was an observational, cross-sectional original research study in which the AI chatbots ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity® were asked to answer the 100 questions most frequently asked by patients about palliative care. Responses from each of the 5 AI chatbots were analyzed separately. This study did not involve any human participants. The results revealed significant differences between the readability assessments of responses from all 5 AI chatbots (P < .05). When different readability indexes were evaluated holistically, the readability of the AI chatbot responses, ranked from easiest to most difficult, was Bard®, Copilot®, Perplexity®, ChatGPT®, Gemini® (P < .05). The median readability indexes of the responses of each of the 5 AI chatbots were compared to the "recommended" 6th-grade reading level; statistically significant differences were observed for all formulas (P < .001). The answers of all 5 AI chatbots were at an educational level well above the 6th-grade level. The modified DISCERN and Journal of the American Medical Association scores were highest for Perplexity® (P < .001), while Gemini® responses had the highest Global Quality Scale score (P < .001). It is emphasized that patient education materials should be written at a 6th-grade reading level. The current answers of the 5 AI chatbots evaluated, Bard®, Copilot®, Perplexity®, ChatGPT®, and Gemini®, were well above the recommended levels in terms of readability of text content, and their text content quality assessment scores were also low. Both the quality and the readability of such texts should be brought within the recommended limits.
    DOI:  https://doi.org/10.1097/MD.0000000000039305
  9. Am J Rhinol Allergy. 2024 Aug 21. 19458924241273055
      BACKGROUND: Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below sixth-grade literacy, those pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries, but its utility in improving patient education materials has not been explored.
    OBJECTIVE: To examine the current state of readability and quality of online patient education materials and to determine the utility of ChatGPT for improving articles and generating patient education materials.
    METHODS: An article search was performed utilizing 10 different search terms related to ESBS. The ten least readable existing patient-facing articles were modified with ChatGPT and iterative queries were used to generate an article de novo. The Flesch Reading Ease (FRE) and related metrics measured overall readability and content literacy level, while DISCERN assessed article reliability and quality.
    RESULTS: Sixty-six articles were located. ChatGPT improved the FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, p < 0.001), from university to 10th-grade reading level. The generated article was more readable than 48.5% of the online articles (38.9 vs. 39.4 ± 12.4) and of higher quality than 94% (51.0 vs. 37.6 ± 6.1); 56.7% of the online articles were of "poor" quality.
    CONCLUSIONS: ChatGPT improves the readability of articles, though most still remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate more reliable and higher quality patient education materials compared to most existing online articles and can be tailored to match readability of average online articles.
    Keywords:  AI; ChatGPT; artificial intelligence; endoscopic surgery; skull base
    DOI:  https://doi.org/10.1177/19458924241273055
  10. Curr Probl Cardiol. 2024 Aug 17. pii: S0146-2806(24)00432-8. [Epub ahead of print] 102797
      BACKGROUND: Patient education plays a crucial role in improving the quality of life for patients with heart failure. As artificial intelligence continues to advance, new chatbots are emerging as valuable tools across various aspects of life. One prominent example is ChatGPT, a chatbot widely used by the public. Our study aims to evaluate the readability of ChatGPT's answers to common patient questions about heart failure.
    METHODS: We performed a comparative analysis between ChatGPT responses and existing heart failure educational materials from top US cardiology institutes. Validated readability calculators were employed to assess and compare the reading difficulty and grade level of the materials. Furthermore, a blind assessment using the Patient Education Materials Assessment Tool (PEMAT) was done by four advanced heart failure attendings to evaluate the readability and actionability of each resource.
    RESULTS: Our study revealed that responses generated by ChatGPT were longer and more challenging to read compared to other materials. Additionally, these responses were written at a higher educational level (undergraduate and 9-10th grade), similar to those from the Heart Failure Society of America. Despite achieving a competitive PEMAT readability score (75%), surpassing the American Heart Association score (68%), ChatGPT's actionability score was the lowest (66.7%) among all materials included in our study.
    CONCLUSION: Despite their current limitations, artificial intelligence chatbots have the potential to revolutionize the field of patient education, especially given their ongoing improvement. However, further research is necessary to ensure the integrity and reliability of these chatbots before endorsing them as reliable resources for patient education.
    Keywords:  ChatGPT; PEMAT; artificial intelligence; heart failure; patient education; readability
    DOI:  https://doi.org/10.1016/j.cpcardiol.2024.102797
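PEMAT results like the 75% and 66.7% reported above are percent-of-applicable-items scores: each item is rated agree, disagree, or not applicable, and the domain score is the share of applicable items rated agree. A minimal sketch of that arithmetic, with invented item ratings:

```python
# Minimal sketch of PEMAT percent scoring: agree = 1, disagree = 0,
# not applicable = None. Domain score = agrees / applicable items * 100.
# The ratings below are invented, not the study's data.

def pemat_score(ratings: list) -> float:
    applicable = [r for r in ratings if r is not None]
    return 100.0 * sum(applicable) / len(applicable)

# Hypothetical ratings for one material (item counts are illustrative).
understandability = [1, 1, 0, 1, None, 1, 1, 0, 1, 1, 1, 1]
actionability = [1, 0, 1, None]

print(f"Understandability: {pemat_score(understandability):.0f}%")  # 82%
print(f"Actionability:     {pemat_score(actionability):.0f}%")      # 67%
```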
  11. Cureus. 2024 Jul;16(7): e64880
      BACKGROUND: Osteoporosis is a prevalent metabolic bone disease in the Middle East. Middle Easterners rely on the Internet as a source of information about osteoporosis and its treatment. Adequate awareness can help to prevent osteoporosis and its complications. Websites covering osteoporosis in Arabic must be of good quality and readability to be beneficial for people in the Middle East.
    METHODS: Two Arabic terms for osteoporosis were searched on Google.com (Google Inc., Mountain View, CA), and the first 100 results for each term were examined for eligibility. Two independent raters evaluated the websites using the DISCERN and Journal of the American Medical Association (JAMA) criteria for quality and reliability. The Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Flesch Reading Ease (FRE) scale were used to evaluate the readability of each website's content.
    RESULTS: Twenty-five websites were included and evaluated in our study. The average DISCERN score was 28.36±12.18 out of a possible 80 points. The average JAMA score was 1.05±1.15 out of a possible 4 points. The readability scores of all websites were, on average, 50.71±21.96 on the FRE scale, 9.25±4.89 on the FKGL, and 9.74±2.94 on the SMOG. There were significant differences in DISCERN and JAMA scores (p = 0.026 and 0.044, respectively) between the websites on the first Google results page and those seen on later pages.
    CONCLUSION: The study found Arabic websites covering osteoporosis to be of low quality and difficult readability. Because these websites are a major source of patient education, improving their quality and readability is a must. Simpler language is needed, as is coverage of more aspects of the disease, such as prevention.
    Keywords:  educational websites; osteoporosis; patient education; quality analysis; readability; reliability
    DOI:  https://doi.org/10.7759/cureus.64880
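Several entries in this issue (items 8-11) rely on the same readability indexes. For reference, the sketch below implements the standard published formulas for FRE, FKGL, and SMOG. The syllable counter is a crude heuristic, and these formulas are calibrated for English text; the study above applied established calculators to Arabic content, which carries additional caveats.

```python
# Standard readability formulas over sentence, word, and syllable counts.
# The syllable counter is a rough vowel-group heuristic; published
# calculators differ in detail.
import math, re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    n = len(words)
    return {
        # Flesch Reading Ease: higher = easier (60-70 ~ plain English)
        "FRE":  206.835 - 1.015 * n / sentences - 84.6 * syllables / n,
        # Flesch-Kincaid Grade Level: approximate US school grade
        "FKGL": 0.39 * n / sentences + 11.8 * syllables / n - 15.59,
        # SMOG grade, from polysyllabic word density
        "SMOG": 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291,
    }

print(readability("Osteoporosis weakens bones. Calcium and exercise help."))
```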
  12. J Burn Care Res. 2024 Aug 21. pii: irae161. [Epub ahead of print]
      Recent studies indicate that YouTube has become a primary source of healthcare information for patients. Videos about skin graft procedures on YouTube have accumulated millions of views, yet no published study has investigated the educational quality of this content. With current literature revealing misleading healthcare information on YouTube, this study aims to evaluate the educational quality of videos related to skin graft procedures. YouTube was searched for terms such as "Skin Graft Procedures" and "Skin Graft Surgery." A total of 105 videos were assessed, with 21 excluded. Four independent reviewers rated the material with the Global Quality Scale (5 = highest quality, 1 = lowest quality) to judge educational value. Viewership, source, modality, and date of upload were also collected for each video and compiled for further analysis. The average Global Quality Scale score was 2.60 across all videos, with videos led by physicians receiving significantly higher scores than those not led by physicians (p<0.01). Comparing educational modalities, physician-led presentations provided the highest educational value, whereas live surgeries and consumer-friendly content had low educational quality (p<0.01). When videos were split into cohorts based on viewership, those with lower view counts had significantly higher Global Quality Scale scores (p<0.05). Skin graft videos on YouTube largely provide low-quality information. Videos featuring physicians, particularly physician-led presentations, had significantly higher educational quality. Physicians must involve themselves in enhancing the quality of online content to better guide patients in navigating treatment options and making healthcare decisions.
    Keywords:  Skin graft; YouTube; education; reconstruction; social media
    DOI:  https://doi.org/10.1093/jbcr/irae161
  13. Cureus. 2024 Jul;16(7): e64743
      Background The widespread availability of Internet access and the rising popularity of social media platforms have facilitated the dissemination of health-related information, including dental health practices. However, assessing the quality and effectiveness of such information remains a challenge, particularly concerning traditional practices such as Miswak (Salvadora persica) usage. This study aims to assess the description, use, and effectiveness of the Miswak (Salvadora persica) chewing stick as presented in video clips on YouTube™ and to provide considerations for future interventions. Methodology YouTube videos were searched using the terms "Miswak," "Siwak," "Salvadora persica," and "Chewing stick." Each video's descriptive features, i.e., title, links, country of origin, upload date, running time, views, comments, likes, and dislikes, were recorded. Content quality was assessed using the DISCERN tool, which rates the reliability, dependability, and trustworthiness of online sources across 16 items. Scores were aggregated for analysis. The statistical analysis examined video features and associations between the speaker, video type, source, and quality, with significance set at a p-value <0.05, using SPSS Statistics Version 20 (IBM Corp., Armonk, NY, USA). Results A total of 45 videos were included in the study, with the majority (62%) created by the "other professionals" category. Almost three-quarters (73.3%) of the videos were educational. Video quality was associated with the speaker and the "other" category, with information rated as high quality more often when the source was someone other than a dentist. However, a video's source category did not elicit differences in opinions of video quality. Conclusions This social media analysis provides considerations and implications for future research on the potential use of YouTube as a platform for Miswak educational interventions.
    Keywords:  dental health; internet; miswak; social media; youtube
    DOI:  https://doi.org/10.7759/cureus.64743
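The DISCERN totals reported in this issue (e.g., 28.36 out of 80 in an earlier entry) come from summing 16 items each scored 1-5, typically per rater and then averaged. A minimal aggregation sketch with invented ratings; the quality bands shown are a commonly used convention, not part of the instrument itself:

```python
# DISCERN aggregation sketch: 16 items scored 1-5, summed per rater
# (total range 16-80) and averaged across raters. Ratings are invented.

def discern_total(item_scores: list[int]) -> int:
    assert len(item_scores) == 16 and all(1 <= s <= 5 for s in item_scores)
    return sum(item_scores)

def quality_band(total: float) -> str:
    # One commonly used banding convention, stated here as an assumption.
    for cutoff, label in [(63, "excellent"), (51, "good"),
                          (39, "fair"), (27, "poor")]:
        if total >= cutoff:
            return label
    return "very poor"

rater_a = [3, 2, 4, 3, 2, 3, 3, 2, 4, 3, 2, 3, 3, 2, 3, 3]
rater_b = [3, 3, 3, 3, 2, 2, 3, 2, 3, 3, 2, 3, 2, 2, 3, 3]
mean_total = (discern_total(rater_a) + discern_total(rater_b)) / 2
print(mean_total, quality_band(mean_total))  # 43.5 fair
```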
  14. Stud Health Technol Inform. 2024 Aug 22. 316 1891-1895
      INTRODUCTION: Autistic individuals, parents, organizations, and healthcare systems worldwide are actively sharing content aimed at increasing awareness about autism. This study aims to analyze the types of content presented in TikTok and YouTube Shorts videos under the hashtag #actuallyautistic and their potential to increase autism awareness.
    METHODS: A sample of 60 videos was downloaded and analyzed (n=30 from TikTok and n=30 from YouTube Shorts). Video contents were analyzed using both thematic analysis and the AFINN sentiment analysis tool. The understandability and actionability of the videos were assessed with the Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT A/V).
    RESULTS: The contents of these videos covered five main themes: Stigmatization; Sensory difficulties; Masking; Stimming; and Communication difficulties. No statistically significant differences were found in the sentiment expressed in videos from the two platforms. TikTok videos received significantly more views, comments, and likes than videos on YouTube Shorts. The PEMAT A/V showed a high level of understandability but little reference to actionability.
    DISCUSSION: The content of autistic people's videos spreads valid and reliable information in the hope of normalizing difficulties and providing hope and comfort to others in similar situations.
    CONCLUSIONS: Social media videos posted by autistic individuals provide accurate portrayals of autism but lack information on actionability. These shared personal stories can help increase public literacy about autism, dispel autism stigmas, and emphasize individuality.
    Keywords:  Autism; Health Education; Health Literacy; Social Media
    DOI:  https://doi.org/10.3233/SHTI240802
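AFINN sentiment scoring, as used in the study above, assigns each lexicon word an integer valence from -5 to +5 and sums over the words of a text. A toy sketch with a stand-in lexicon (the real AFINN list has thousands of entries):

```python
# Illustrative AFINN-style scoring: sum of per-word integer valences.
# TINY_AFINN is a stand-in for the full AFINN lexicon; the sample
# sentence is invented.
TINY_AFINN = {"hope": 2, "comfort": 2, "valid": 1, "difficulties": -1,
              "stigma": -2, "struggle": -2, "love": 3}

def afinn_score(text: str) -> int:
    return sum(TINY_AFINN.get(w, 0) for w in text.lower().split())

print(afinn_score("sharing these videos gives hope and comfort despite stigma"))
# 2 + 2 - 2 = 2 (mildly positive)
```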
  15. Plast Reconstr Surg Glob Open. 2024 Aug;12(8): e6056
      Background: With the rising influence of social media on healthcare perceptions, this study investigates TikTok's role in educating the public about autologous breast reconstruction, specifically focusing on deep inferior epigastric perforator flaps.
    Methods: We conducted a systematic analysis of 152 TikTok videos related to deep inferior epigastric perforator flap procedures, evaluating the accuracy of the content, viewer engagement metrics, and the influence of content creator characteristics on viewer interactions.
    Results: Our analysis identified a wide variance in the quality of information, with many videos lacking in-depth educational content, thereby posing a risk of misinformation. Despite the presence of high-quality educational videos, there was a discrepancy between the educational value provided and viewer engagement levels. Thematic analysis highlighted common concerns among patients, providing insights for healthcare professionals to better tailor their social media content.
    Conclusions: The study underscores the significant impact of platforms like TikTok on patient education and emphasizes the need for healthcare professionals to guide the narrative on social media and ensure the dissemination of accurate and helpful information, ultimately aiding patients in making informed decisions about their healthcare.
    DOI:  https://doi.org/10.1097/GOX.0000000000006056
  16. Cureus. 2024 Jul;16(7): e64704
      Introduction Fibromyalgia, characterized by chronic musculoskeletal pain and associated symptoms, poses significant challenges in diagnosis and management. While social media platforms like TikTok have emerged as popular sources of health information, their variable content quality necessitates critical evaluation. This study aimed to assess the quality and reliability of TikTok videos related to fibromyalgia, thereby enhancing the understanding of their impact on patient education and self-management. Methods A cross-sectional observational study was conducted in June 2024, analyzing 150 TikTok videos found using search terms such as "Fibromyalgia", "Fibromyalgia Symptoms", and "Fibromyalgia Treatment". Videos were included based on relevance and language (English) and were assessed using the Global Quality Scale (GQS) and the Quality Criteria for Consumer Health Information (DISCERN) score. Statistical analysis was performed using IBM SPSS Statistics v21.0 (IBM Corp., Armonk, NY). The Kruskal-Wallis test was employed, and a p-value less than 0.05 was deemed statistically significant. Results Of the 150 videos initially reviewed, 96 (64%) met the inclusion criteria. Content categories included disease description (34, 35.42%), symptoms (81, 84.38%), management (64, 66.67%), and personal experiences (63, 65.63%). The videos were uploaded by doctors (8, 8.33%), patients (63, 65.63%), healthcare workers (7, 7.29%), and others (18, 18.75%). Mean GQS scores varied significantly by uploader type: doctors (4.63 ± 0.52), healthcare workers (3.43 ± 0.79), patients (2.37 ± 0.81), and others (2.11 ± 0.47) (p<0.001). DISCERN scores followed a similar trend: doctors (3.88 ± 0.64), healthcare workers (2.14 ± 1.46), patients (1.08 ± 0.27), and others (1.61 ± 0.50) (p<0.001). Conclusions TikTok serves as a pivotal platform for fibromyalgia-related discourse, predominantly shaped by patient-generated content. However, even though it provides insights into symptoms and management strategies, gaps exist in comprehensive medical guidance and preventive measures. The study underscores the critical role of healthcare professionals in enhancing content reliability and educational value on social media. Future research should explore cultural and linguistic diversity to broaden the accessibility and relevance of health information on platforms like TikTok.
    Keywords:  fibromyalgia; health information; patient education; social media; tiktok
    DOI:  https://doi.org/10.7759/cureus.64704
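The group comparison above is a standard Kruskal-Wallis test across the four uploader types. A minimal sketch with invented GQS scores, using SciPy:

```python
# Kruskal-Wallis comparison of GQS scores across uploader types, in the
# spirit of the study above. The score lists are invented, not its data.
from scipy.stats import kruskal

gqs_doctors = [5, 4, 5, 4, 5]
gqs_healthcare = [4, 3, 3, 4]
gqs_patients = [2, 3, 2, 2, 3, 2]
gqs_others = [2, 2, 3, 2]

stat, p = kruskal(gqs_doctors, gqs_healthcare, gqs_patients, gqs_others)
print(f"H = {stat:.2f}, p = {p:.4f}")  # p < 0.05 => group medians differ
```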
  17. J Med Internet Res. 2024 Aug 20. 26 e55403
      BACKGROUND: In China, mitral valve regurgitation (MR) is the most common cardiovascular valve disease; patients experience a high incidence of the condition, coupled with low health knowledge and a relatively low rate of surgical treatment. TikTok hosts a vast amount of content related to diseases and health knowledge, providing viewers with access to relevant information. However, there has been no investigation or evaluation of the quality of videos specifically addressing MR.
    OBJECTIVE: This study aims to assess the quality of videos about MR on TikTok in China.
    METHODS: A cross-sectional study was conducted on the Chinese version of TikTok on September 9, 2023. The top 100 videos on MR were included and evaluated using quantitative scoring tools such as the modified DISCERN (mDISCERN), the Journal of the American Medical Association (JAMA) benchmark criteria, the Global Quality Score (GQS), and the Patient Education Materials Assessment Tool for Audio-Visual Content (PEMAT-A/V). Correlation and stepwise regression analyses were performed to examine the relationships between video quality and various characteristics.
    RESULTS: We obtained 88 valid video files, of which most (n=81, 92%) were uploaded by certified physicians, primarily cardiac surgeons and cardiologists. News agencies/organizations and physicians had higher GQS scores compared with individuals (news agencies/organizations vs individuals, P=.001; physicians vs individuals, P=.03). Additionally, news agencies/organizations had higher PEMAT understandability scores than individuals (P=.01). Videos focused on disease knowledge scored higher in GQS (P<.001), PEMAT understandability (P<.001), and PEMAT actionability (P<.001) compared with videos covering surgical cases. PEMAT actionability scores were higher for outpatient cases compared with surgical cases (P<.001). Additionally, videos focused on surgical techniques had lower PEMAT actionability scores than those about disease knowledge (P=.04). The strongest correlations observed were between thumbs up and comments (r=0.92, P<.001), thumbs up and favorites (r=0.89, P<.001), thumbs up and shares (r=0.87, P<.001), comments and favorites (r=0.81, P<.001), comments and shares (r=0.87, P<.001), and favorites and shares (r=0.83, P<.001). Stepwise regression analysis identified "length (P<.001)," "content (P<.001)," and "physicians (P=.004)" as significant predictors of GQS. The final model (model 3) explained 50.1% of the variance in GQS scores. The predictive equation for GQS is as follows: GQS = 3.230 - 0.294 × content - 0.274 × physicians + 0.005 × length. This model was statistically significant (P=.004) and showed no issues with multicollinearity or autocorrelation.
    CONCLUSIONS: Our study reveals that while most MR-related videos on TikTok were uploaded by certified physicians, ensuring professional and scientific content, the overall quality scores were suboptimal. Despite the educational value of these videos, the guidance provided was often insufficient. The predictive equation for GQS developed from our analysis offers valuable insights but should be applied with caution beyond the study context. It suggests that creators should focus on improving both the content and presentation of their videos to enhance the quality of health information shared on social media.
    Keywords:  GQS; Global Quality Score; JAMA; Journal of American Medical Association; PEMAT-A/V; Poisson regression analysis; Spearman correlation analysis; TikTok; mitral valve regurgitation; video quality
    DOI:  https://doi.org/10.2196/55403
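The reported regression equation can be applied directly once the predictor coding is known. The abstract does not give the coding of "content" and "physicians" or the units of "length", so the 0/1 dummies and seconds used below are assumptions made purely to illustrate the arithmetic:

```python
# Applying the reported equation
#   GQS = 3.230 - 0.294 x content - 0.274 x physicians + 0.005 x length
# The 0/1 coding of 'content' and 'physicians' and length-in-seconds are
# assumptions for illustration; the paper's actual coding is not stated
# in the abstract.

def predicted_gqs(content: int, physicians: int, length_seconds: float) -> float:
    return 3.230 - 0.294 * content - 0.274 * physicians + 0.005 * length_seconds

# A hypothetical 120-second video with content=1 and physicians=1:
print(f"{predicted_gqs(1, 1, 120):.2f}")  # 3.230 - 0.294 - 0.274 + 0.600 = 3.26
```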
  18. Stud Health Technol Inform. 2024 Aug 22. 316 279-283
      In recent years, the Internet has evolved into a prominent information source for many people worldwide. Recent research has shown that an ever-increasing number of citizens and patients go online to access health information and seek support in managing their health, including understanding their condition, adopting life-saving lifestyle adjustments, and keeping up with treatment or aftercare guidelines. Owing to this rising demand for online health information, health-related sites have increased substantially, each striving to be the most comprehensive and reliable source of health and medical information on the Internet. This paper presents a survey conducted among the Greek population aimed at exploring participants' general attitudes toward using the Internet to access health information, as well as their views on a specific Greek health-related website, namely Iatronet. To this end, an online Greek version of the eHealth Impact Questionnaire was used, developed on the REDCap platform.
    Keywords:  Greek population; Health-related web sites; REDCap; e-Health Impact Questionnaire; eHealth; health management
    DOI:  https://doi.org/10.3233/SHTI240398