bims-librar Biomed News
on Biomedical librarianship
Issue of 2024-12-22
thirty papers selected by
Thomas Krichel, Open Library Society



  1. JAMIA Open. 2024 Dec;7(4): ooae139
     Objectives: In public health, access to research literature is critical for informing decision-making and identifying knowledge gaps. However, identifying relevant research is not a straightforward task, since public health interventions are often complex, can have positive and negative impacts on health inequalities, and are applied in diverse and rapidly evolving settings. We developed a "living" database of public health research literature to facilitate access to this information using Natural Language Processing tools.
    Materials and Methods: Classifiers were developed to identify the study design (eg, cohort study or clinical trial) and the relationship to factors that may be relevant to inequalities, using the PROGRESS-Plus classification scheme. Training data were obtained from existing MEDLINE labels and from a set of systematic reviews in which studies were annotated with PROGRESS-Plus categories.
    Results: Evaluation of the classifiers showed that the study type classifier achieved average precision and recall of 0.803 and 0.930, respectively. The PROGRESS-Plus classification proved more challenging with average precision and recall of 0.608 and 0.534. The FAIR database uses information provided by these classifiers to facilitate access to inequality-related public health literature.
    Discussion: Previous work on automation of evidence synthesis has focused on clinical areas rather than public health, despite the need being arguably greater.
    Conclusion: The development of the FAIR database demonstrates that it is possible to create a publicly accessible and regularly updated database of public health research literature focused on inequalities. The database is freely available from https://eppi.ioe.ac.uk/eppi-vis/Fair.
    NETSCC ID number: NIHR133603.
    Keywords:  automatic database curation; evidence synthesis; inequalities; machine learning; public health; research synthesis
    DOI:  https://doi.org/10.1093/jamiaopen/ooae139
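    For orientation, the precision and recall figures reported above are plain confusion-matrix arithmetic; the sketch below (illustrative Python with made-up counts, not the authors' code) shows how the two metrics relate.

        def precision_recall(tp: int, fp: int, fn: int) -> tuple:
            precision = tp / (tp + fp)  # share of predicted positives that are correct
            recall = tp / (tp + fn)     # share of actual positives that are found
            return precision, recall

        # Hypothetical counts: 93 of 100 relevant studies found (recall 0.93)
        # at the cost of 23 false positives (precision ~0.80), matching the
        # study-type classifier's reported averages of 0.803 and 0.930.
        p, r = precision_recall(tp=93, fp=23, fn=7)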
  2. Health Soc Work. 2024 Dec 18. pii: hlae037. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1093/hsw/hlae037
  3. Health Info Libr J. 2024 Dec;41(4): 337-338
      Health librarians and knowledge specialists are well placed to make the most of policy work to develop and embed health libraries and information services. Search and evidence summary skills allow staff to identify existing policies that can be of benefit to health library services, respond to policy consultations and develop policies. This editorial introduces the importance of policy work to health library and information services and how policy can be used as a lever for change. It also provides practical tips on where to start in identifying relevant policies, policy consultations and developing policies for health libraries and information services.
    Keywords:  governance; health policy; leadership; libraries, health care
    DOI:  https://doi.org/10.1111/hir.12551
  4. BMC Med Res Methodol. 2024 Dec 18. 24(1): 302
       BACKGROUND: A barrier to evidence-informed exercise programming is locating studies of exercise training programs. The purpose of this study was to create a search filter for studies of exercise training programs for the PubMed electronic bibliographic database.
    METHODS: Candidate search terms were identified from three sources: exercise-relevant MeSH terms and their corresponding Entry terms, word frequency analysis of articles in a gold-standard reference set curated from systematic reviews focused on exercise training, and retrospective searching of articles retrieved in the search filter development and testing steps. These terms were assembled into an exercise training search filter, and its performance was assessed against a basic search string applied to six case studies. Search string performance was measured as sensitivity (relative recall), precision, and number needed to read (NNR). We aimed to achieve a relative recall ≥ 85% and an NNR ≤ 2.
    RESULTS: The reference set consisted of 71 articles drawn from six systematic reviews. Sixty-one candidate search terms were evaluated for inclusion, 21 of which were included in the finalized exercise-training search filter. The relative recall of the search filter was 96% for the reference set and the precision mean ± SD was 54 ± 16% across the case studies, with the corresponding NNR = ~ 2. The exercise training search filter consistently outperformed the basic search string.
    CONCLUSION: The exercise training search filter fosters more efficient searches for studies of exercise training programs in the PubMed electronic bibliographic database. This search string may therefore support evidence-informed practice in exercise programming.
    Keywords:  Evidence-based practice; Exercise training; Information storage and retrieval; Kinesiology; PubMed; Search filter; Search hedge
    DOI:  https://doi.org/10.1186/s12874-024-02414-z
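    The number needed to read (NNR) used above is the reciprocal of precision: how many retrieved records must be screened, on average, to find one relevant record. A quick sketch of the arithmetic (an illustration, not the authors' code):

        precision = 0.54        # mean precision reported across the six case studies
        nnr = 1 / precision     # ~1.85, i.e. roughly every second record screened is relevant

        # Relative recall is the share of a reference set the filter retrieves;
        # e.g. retrieving 68 of the 71 reference-set articles gives ~0.96.
        relative_recall = 68 / 71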
  5. Rev Esp Anestesiol Reanim (Engl Ed). 2024 Dec 16. pii: S2341-1929(24)00160-4. [Epub ahead of print] 501656
    This report shows an example of using a literature search for healthcare management decision making, specifically how anesthesiologists can enhance operating room (OR) productivity. A search was conducted using Scopus to gather relevant research on increasing surgical case numbers. References and citations were then examined. The search identified strategies to reduce non-operative times, facilitate overlapping surgeries, and optimize OR scheduling. Key findings indicate that reducing anesthesia-controlled times alone is insufficient to reliably add extra surgical cases within an eight-hour workday. Instead, significant productivity gains are realized by managing OR turnover times, using induction rooms, and revising workflows to maximize efficiency. Studies show that overlapping surgeries and strategic use of adjacent spaces can significantly increase the number of daily surgical cases performed. Most surgical growth is driven by accommodating low-caseload surgeons across multiple specialties. Facilitating OR time access for these surgeons through flexible scheduling and re-sequencing of cases is crucial. Additionally, anesthesiologists should be engaged in daily OR scheduling and case sequencing, particularly within two days of surgery. The dual goals are to increase OR utilization and reduce patient wait times. These results from the management case report underscore the importance of evidence-based OR management practices and proactive involvement of anesthesiologists in scheduling decisions to enhance surgical productivity effectively.
    Keywords:  OR scheduling; Surgical productivity; anesthesia-controlled time; industrial engineering; managerial epidemiology; operating room management; operations research; overlapping surgeries; turnover time
    DOI:  https://doi.org/10.1016/j.redare.2024.501656
  6. Cureus. 2024 Nov;16(11): e73874
       INTRODUCTION: Within plastic surgery, a patient's most commonly used first point of information before consulting a surgeon is the internet. Free-to-use artificial intelligence (AI) websites like ChatGPT (Generative Pre-trained Transformers) are attractive applications for patient information due to their ability to instantaneously answer almost any query. Although relatively new, ChatGPT is now one of the most popular artificial intelligence conversational software tools. The aim of this study was to evaluate the quality and readability of information given by ChatGPT-4 on key areas in plastic and reconstructive surgery.
    METHODS: The ten plastic and aesthetic surgery topics with the highest worldwide search volume over the past 15 years were identified. These were rephrased into question format to create nine individual questions, which were then input into ChatGPT-4. Response quality was assessed using the DISCERN instrument. The readability and grade reading level of the responses were calculated using the Flesch-Kincaid Reading Ease Index and the Coleman-Liau Index. Twelve physicians working in a plastic and reconstructive surgery unit were asked to rate the clarity and accuracy of the answers on a scale of 1-10 and to state 'yes' or 'no' as to whether they would share the generated response with a patient.
    RESULTS: All answers were scored as poor or very poor according to the DISCERN tool. The mean DISCERN score for all questions was 34. The responses also scored low in readability and understandability. The mean FKRE index was 33.6, and the mean CL index was 15.6. Clinicians working in plastic and reconstructive surgery rated the responses well for clarity and accuracy. The mean clarity score was 7.38, and the mean accuracy score was 7.4.
    CONCLUSION: This study found that according to validated quality assessment tools, ChatGPT-4 produced low-quality information when asked about popular queries relating to plastic and aesthetic surgery. Furthermore, the information produced was pitched at a high reading level. However, the responses were still rated well in clarity and accuracy, according to clinicians working in plastic surgery. Although improvements need to be made, this study suggests that language models such as ChatGPT could be a useful starting point when developing written health information. With the expansion of AI, improvements in content quality are anticipated.
    Keywords:  aesthetic surgery; artificial intelligence; chatbot; medical technology; plastic surgery
    DOI:  https://doi.org/10.7759/cureus.73874
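    Both readability indices used above are easy to reproduce; the open-source Python package textstat implements them. A hypothetical check (not the study's code, and textstat's formulas may differ slightly from those the authors applied):

        import textstat

        answer = "Rhinoplasty reshapes the nose. Most swelling settles within a few weeks."
        fre = textstat.flesch_reading_ease(answer)  # 0-100 scale, higher = easier; study mean 33.6
        cli = textstat.coleman_liau_index(answer)   # approximate US grade level; study mean 15.6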
  7. Colorectal Dis. 2024 Dec 17.
     AIM: Artificial intelligence (AI) chatbots such as Chat Generative Pretrained Transformer-4 (ChatGPT-4) have made significant strides in generating human-like responses. Trained on an extensive corpus of medical literature, ChatGPT-4 has the potential to augment patient education materials. These chatbots may be beneficial to populations facing a diagnosis of colorectal cancer (CRC). However, the accuracy and quality of patient education materials are crucial for informed decision-making. Given workforce demands impacting holistic care, AI chatbots can bridge gaps in CRC information, reaching wider demographics and crossing language barriers. Rigorous evaluation is nonetheless essential to ensure accuracy, quality and readability. Therefore, this study aims to evaluate the efficacy, quality and readability of answers generated by ChatGPT-4 on CRC, utilizing patient-style question prompts.
    METHOD: To evaluate ChatGPT-4, eight CRC-related questions were derived using peer-reviewed literature and Google Trends. Eight colorectal surgeons evaluated the AI responses for accuracy, safety, appropriateness, actionability and effectiveness. Quality was assessed using validated tools: the Patient Education Materials Assessment Tool (PEMAT-AI), modified DISCERN (DISCERN-AI) and Global Quality Score (GQS). Several readability measures were applied, including Flesch Reading Ease (FRE) and the Gunning Fog Index (GFI).
    RESULTS: The responses were generally accurate (median 4.00), safe (4.25), appropriate (4.00), actionable (4.00) and effective (4.00). Quality assessments rated PEMAT-AI as 'very good' (71.43), DISCERN-AI as 'fair' (12.00) and GQS as 'high' (4.00). Readability scores indicated difficulty (FRE 47.00, GFI 12.40), suggesting a higher educational level was required.
    CONCLUSION: This study concludes that ChatGPT-4 is capable of providing safe but nonspecific medical information, suggesting its potential as a patient education aid. However, enhancements in readability through contextual prompting and fine-tuning techniques are required before considering implementation into clinical practice.
    Keywords:  artificial intelligence; colorectal cancer; education models; patient education
    DOI:  https://doi.org/10.1111/codi.17267
  8. J Pediatr Urol. 2024 Dec 07. pii: S1477-5131(24)00619-3. [Epub ahead of print]
     INTRODUCTION: Vesicoureteral reflux (VUR) is a common congenital or acquired urinary disorder in children. Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence-driven platform offering medical information. This research aims to assess the reliability and readability of ChatGPT-4o's answers regarding pediatric VUR for a general, non-medical audience.
    MATERIALS AND METHODS: Twenty of the most frequently asked English-language questions about VUR in children were used to evaluate ChatGPT-4o's responses. Two independent reviewers rated the reliability and quality using the Global Quality Scale (GQS) and a modified version of the DISCERN tool. The readability of ChatGPT responses was assessed through the Flesch Reading Ease (FRE) Score, Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), and Simple Measure of Gobbledygook (SMOG).
    RESULTS: Median mDISCERN and GQS scores were 4 (4-5) and 5 (3-5), respectively. Most ChatGPT responses showed moderate (55%) or good (45%) reliability according to the mDISCERN score, and high quality (95%) according to the GQS. The mean ± standard deviation scores for FRE, FKGL, SMOG, GFI, and CLI were 26 ± 12, 15 ± 2.5, 16.3 ± 2, 18.8 ± 2.9, and 15.3 ± 2.2, respectively, indicating a high level of reading difficulty.
    DISCUSSION: While ChatGPT-4o offers accurate and high-quality information about pediatric VUR, its readability poses challenges, as the content is difficult to understand for a general audience.
    CONCLUSION: ChatGPT provides high-quality, accessible information about VUR. However, improving readability should be a priority to make this information more user-friendly for a broader audience.
    Keywords:  Artificial intelligence; ChatGPT; Vesicoureteral reflux
    DOI:  https://doi.org/10.1016/j.jpurol.2024.12.002
  9. Eye (Lond). 2024 Dec 16.
     BACKGROUND/OBJECTIVES: Dry eye disease (DED) is an exceedingly common diagnosis, yet recent analyses have demonstrated patient education materials (PEMs) on DED to be of low quality and readability. Our study evaluated the utility and performance of three large language models (LLMs) in enhancing and generating new PEMs on DED.
    SUBJECTS/METHODS: We evaluated PEMs generated by ChatGPT-3.5, ChatGPT-4, and Gemini Advanced using three separate prompts. Prompts A and B requested that the models generate PEMs on DED, with Prompt B specifying a 6th-grade reading level per the SMOG (Simple Measure of Gobbledygook) readability formula. Prompt C asked for a rewrite of existing PEMs at a 6th-grade reading level. Each PEM was assessed on readability (SMOG; FKGL: Flesch-Kincaid Grade Level), quality (PEMAT: Patient Education Materials Assessment Tool; DISCERN), and accuracy (Likert misinformation scale).
    RESULTS: All LLM-generated PEMs in response to Prompts A and B were of high quality (median DISCERN = 4), understandable (PEMAT understandability ≥70%), and accurate (Likert score = 1). LLM-generated PEMs were not actionable (PEMAT actionability <70%). ChatGPT-4 and Gemini Advanced rewrote existing PEMs (Prompt C) from a baseline readability level (FKGL: 8.0 ± 2.4; SMOG: 7.9 ± 1.7) to the targeted 6th-grade reading level; rewrites contained little to no misinformation (median Likert misinformation score = 1, range 1-2). However, only ChatGPT-4 rewrote PEMs while maintaining high quality and reliability (median DISCERN = 4).
    CONCLUSION: LLMs (notably ChatGPT-4) were able to generate and rewrite PEMs on DED that were readable, accurate, and of high quality. Our study underscores the value of leveraging LLMs as supplementary tools for improving PEMs.
    DOI:  https://doi.org/10.1038/s41433-024-03476-5
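    For reference, the two grade-level formulas named above, in their commonly published forms (a sketch for orientation, not the authors' implementation):

        import math

        def smog_grade(polysyllables: int, sentences: int) -> float:
            # SMOG (McLaughlin, 1969), normalized to a 30-sentence sample
            return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

        def fk_grade(words: int, sentences: int, syllables: int) -> float:
            # Flesch-Kincaid Grade Level
            return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59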
  10. Nutr Res. 2024 Nov 19. pii: S0271-5317(24)00150-7. [Epub ahead of print] 133: 46-53
    Patients with polycystic ovary syndrome (PCOS) often have many questions about nutrition and turn to chatbots such as Chat Generative Pretrained Transformer (ChatGPT) for advice. This study aims to evaluate the reliability, quality, and readability of ChatGPT's responses to nutrition-related questions asked by women with PCOS. Frequently asked nutrition-related questions from women with PCOS were reviewed in both Turkish and English. The reliability and quality of the answers were independently evaluated by 2 authors and a panel of 10 expert dietitians, using the modified DISCERN and the global quality score. Additionally, the readability of the answers was calculated using frequently used readability formulas. The mean modified DISCERN scores for the English and Turkish versions were 27.6±0.87 and 27.2±0.87, respectively, indicating a fair level of reliability in the responses (16-31 points or 40%-79%). According to the global quality score, 100% of the responses in English and 90.9% of the responses in Turkish were rated as high quality. The readability of responses was classified as "difficult to read," with reading levels assessed at college level and above for both English and Turkish. The correlation and regression analyses indicated no relationship between reliability, quality, and readability in English. However, a significant relationship was observed between quality and readability indexes in Turkish (P < .05). Our results suggest that ChatGPT's responses to nutrition-related questions about PCOS are generally of high quality, but improvements in both reliability and readability are still necessary. Although ChatGPT can offer general information and guidance on nutrition for PCOS, it should not be considered a substitute for personalized medical advice from health care professionals for effective management of the syndrome.
    Keywords:  ChatGPT; Diet; Nutrition; Polycystic ovary syndrome; Quality; Readability; Reliability
    DOI:  https://doi.org/10.1016/j.nutres.2024.11.005
  11. Health Info Libr J. 2024 Dec 16.
     INTRODUCTION: Although thematic analysis of health librarianship (HL) presentations at conferences in the USA exists, no similar research focused on HL presentations at UK conferences has been reported.
    OBJECTIVES: To determine trends in HL conference presentations from 2017 to 2022 at three UK-based HL conferences and the Chartered Institute of Library and Information Professionals (CILIP) conferences.
    METHODS: Thematic analysis of conference programmes obtained from websites, the Internet Archive Wayback Machine and conference organisers.
    RESULTS: A total of 226 HL-related conference presentations were identified across all the examined conference programmes. Eight themes emerged: Being a Healthcare Librarian; Digital Working; Finding the Evidence; Generating Research; Strategic Library Management; Literacies; Other; and Using the Evidence. 'Being a Healthcare Librarian' (n = 54) and 'Strategic Library Management' (n = 53) were the most prominent cross-conference themes.
    DISCUSSION: Presentations at HL-specific conferences provide a wider range of themes than CILIP conferences, with 'Being a Healthcare Librarian' absent from CILIP conferences but 'Literacies' appearing in similar numbers at both. Differences in conference formats and the COVID-19 pandemic likely influenced presentation numbers.
    CONCLUSION: HL conference themes are not directly reflected in CILIP conferences. NHS Knowledge and Library Services staff should be encouraged to undertake and disseminate original research, creating a UK evidence base for healthcare librarianship.
    Keywords:  United Kingdom (UK); continuing professional development (CPD); evaluation; evidence‐based library and information practice (EBLIP); professional development
    DOI:  https://doi.org/10.1111/hir.12561
  12. J Fr Ophtalmol. 2024 Dec 13. pii: S0181-5512(24)00326-7. [Epub ahead of print] 48(2): 104381
       PURPOSE: To evaluate the appropriateness, understandability, actionability, and readability of responses provided by ChatGPT-3.5, Bard, and Bing Chat to frequently asked questions about keratorefractive surgery (KRS).
    METHOD: Thirty-eight frequently asked questions about KRS were directed three times each to fresh ChatGPT-3.5, Bard, and Bing Chat interfaces. Two experienced refractive surgeons categorized the chatbots' responses according to their appropriateness, and the accuracy of the responses was assessed using the Structure of the Observed Learning Outcome (SOLO) taxonomy. The Flesch Reading Ease (FRE) score and Coleman-Liau Index (CLI) were used to evaluate the readability of the chatbots' responses. Furthermore, the understandability of the responses was evaluated using the Patient Education Materials Assessment Tool (PEMAT).
    RESULTS: The appropriateness of the ChatGPT-3.5, Bard, and Bing Chat responses was 86.8% (33/38), 84.2% (32/38), and 81.5% (31/38), respectively (P>0.05). According to the SOLO test, ChatGPT-3.5 (3.91±0.44) achieved the highest mean accuracy, followed by Bard (3.64±0.61) and Bing Chat (3.19±0.55). For understandability (mean PEMAT-U scores: ChatGPT-3.5, 68.5%; Bard, 78.6%; Bing Chat, 67.1%; P<0.05) and actionability (mean PEMAT-A scores: ChatGPT-3.5, 62.6%; Bard, 72.4%; Bing Chat, 60.9%; P<0.05), Bard scored better than the other chatbots. The two readability analyses showed that Bing Chat had the highest readability, followed by ChatGPT-3.5 and Bard; however, both the understandability and readability scores were more challenging than the recommended levels.
    CONCLUSION: Artificial intelligence-supported chatbots have the potential to provide detailed and appropriate responses at acceptable levels in KRS. While promising for patient education in KRS, chatbots require further progress, especially in their readability and understandability.
    Keywords:  Artificial intelligence; Chatbot; Keratorefractive surgery; Patient education; Readability; Understandability
    DOI:  https://doi.org/10.1016/j.jfo.2024.104381
  13. J Exp Orthop. 2024 Oct;11(4): e70114
       Purpose: To determine the scope and accuracy of medical information provided by ChatGPT-4 in response to clinical queries concerning total shoulder arthroplasty (TSA), and to compare these results to those of the Google search engine.
    Methods: A patient-replicated query for 'total shoulder replacement' was performed using both Google Web Search (the most frequently used search engine worldwide) and ChatGPT-4. The top 10 frequently asked questions (FAQs), answers, and associated sources were extracted. This search was performed again independently to identify the top 10 FAQs necessitating numerical responses such that the concordance of answers could be compared between Google and ChatGPT-4. The clinical relevance and accuracy of the provided information were graded by two blinded orthopaedic shoulder surgeons.
    Results: Concerning FAQs with numeric responses, 8 out of 10 (80%) had identical answers or substantial overlap between ChatGPT-4 and Google. Accuracy of information was not significantly different (p = 0.32). Google sources included 40% medical practices, 30% academic, 20% single-surgeon practice, and 10% social media, while ChatGPT-4 used 100% academic sources, representing a statistically significant difference (p = 0.001). Only 3 out of 10 (30%) FAQs with open-ended answers were identical between ChatGPT-4 and Google. The clinical relevance of FAQs was not significantly different (p = 0.18). Google sources for open-ended questions included academic (60%), social media (20%), medical practice (10%) and single-surgeon practice (10%), while 100% of sources for ChatGPT-4 were academic, representing a statistically significant difference (p = 0.0025).
    Conclusion: ChatGPT-4 provided trustworthy academic sources for medical information retrieval concerning TSA, while sources used by Google were heterogeneous. Accuracy and clinical relevance of information were not significantly different between ChatGPT-4 and Google.
    Level of Evidence: Level IV cross-sectional.
    Keywords:  ChatGPT; LLM; information retrieval; large language model; total shoulder arthroplasty
    DOI:  https://doi.org/10.1002/jeo2.70114
  14. J Am Acad Audiol. 2024 Dec 18.
       OBJECTIVE:  To determine the readability and quality of both English and Spanish Web sites for the topic of hearing aids.
    STUDY DESIGN:  Cross-sectional Web site analysis.
    SETTING:  Various online search engines.
    METHODS:  The term "hearing aid" was queried across four popular search engines. The first 75 English-language results and the first 75 Spanish-language results were extracted for data collection. Web sites that met the inclusion criteria were stratified by the presence of a Health on the Net Code (HONCode) certificate. Articles were then compiled to be independently reviewed by experts on hearing aids using the DISCERN criteria, which allowed assessment of the quality of the Web sites. Readability was assessed by calculating the Flesch Reading Ease Score in English and the Fernandez Huerta Formula in Spanish. Readability and quality were analyzed both within each language and across languages.
    RESULTS:  There were 37 English Web sites and 30 Spanish Web sites that met inclusion criteria. When analyzing readability, English Web sites were determined to be significantly more difficult to read (average = 55.37, standard deviation [SD] = 7.73, 95% confidence interval [CI] = 52.9-57.9) than the Spanish Web site counterparts (average = 58.64, SD = 5.26, 95% CI = 56.8-60.5, p = 0.035). For quality, Spanish Web sites (average = 38, SD = 9.7, 95% CI = 34.5-41.5) were determined to be of significantly higher quality than English Web sites (average = 32.16, SD = 10.60, 95% CI = 29.7-34.6). Additionally, there was a significant difference between the non-HONCode English Web sites versus the non-HONCode Spanish Web sites (p = 0.0081), signifying that Spanish non-HONCode certified Web sites were less reliable than non-HONCode certified English Web sites.
    DISCUSSION:  The present study highlights the importance and necessity of providing quality, readable materials to patients seeking information regarding hearing aids. This study shows that both English and Spanish Web sites are written at a level that is much higher than the American Medical Association (AMA)-recommended sixth-grade reading level, and no Web site included in this study fell at or below the AMA-recommended sixth-grade reading level. English and Spanish Web sites also lacked consistency and quality, as evidenced by their wide variability in DISCERN scores. Specifically, Hispanic patients are more likely to suffer long-term consequences of their health care due to low levels of health literacy. It is important to bridge this gap by providing adequate reading materials. It is especially important to provide evidence-based claims that are directly supported by experts in the field.
    DOI:  https://doi.org/10.1055/s-0044-1791215
  15. Eye (Lond). 2024 Dec 17.
       BACKGROUND/OBJECTIVE: This study aimed to evaluate the accuracy, comprehensiveness, and readability of responses generated by various Large Language Models (LLMs) (ChatGPT-3.5, Gemini, Claude 3, and GPT-4.0) in the clinical context of uveitis, utilizing a meticulous grading methodology.
    METHODS: Twenty-seven clinical uveitis questions were presented individually to four Large Language Models (LLMs): ChatGPT (versions GPT-3.5 and GPT-4.0), Google Gemini, and Claude. Three experienced uveitis specialists independently assessed the responses for accuracy using a three-point scale across three rounds with a 48-hour wash-out interval. The final accuracy rating for each LLM response ('Excellent', 'Marginal', or 'Deficient') was determined through a majority consensus approach. Comprehensiveness was evaluated using a three-point scale for responses rated 'Excellent' in the final accuracy assessment. Readability was determined using the Flesch-Kincaid Grade Level formula. Statistical analyses were conducted to discern significant differences among LLMs, employing a significance threshold of p < 0.05.
    RESULTS: Claude 3 and ChatGPT 4 demonstrated significantly higher accuracy compared to Gemini (p < 0.001). Claude 3 also showed the highest proportion of 'Excellent' ratings (96.3%), followed by ChatGPT 4 (88.9%). ChatGPT 3.5, Claude 3, and ChatGPT 4 had no responses rated as 'Deficient', unlike Gemini (14.8%) (p = 0.014). ChatGPT 4 exhibited greater comprehensiveness compared to Gemini (p = 0.008), and Claude 3 showed higher comprehensiveness compared to Gemini (p = 0.042). Gemini showed significantly better readability compared to ChatGPT 3.5, Claude 3, and ChatGPT 4 (p < 0.001). Gemini also had fewer words, letter characters, and sentences compared to ChatGPT 3.5 and Claude 3.
    CONCLUSIONS: Our study highlights the outstanding performance of Claude 3 and ChatGPT 4 in providing precise and thorough information regarding uveitis, surpassing Gemini. ChatGPT 4 and Claude 3 emerge as pivotal tools in improving patient understanding and involvement in their uveitis healthcare journey.
    DOI:  https://doi.org/10.1038/s41433-024-03545-9
  16. Gynecol Oncol Rep. 2024 Dec;56: 101548
     Objective: Over half of Spanish-speaking patients use the internet to understand their diagnosis. We evaluated the readability of Spanish online patient education materials (OPEMs) about gynecologic cancer to assess compliance with the National Institutes of Health (NIH) recommendation that materials be written at or below an eighth grade reading level.
    Methods: We conducted an online search using six Spanish gynecologic cancer terms on three major search engines with cookies and location disabled. The first five results by cancer type were included. Readability was analyzed by Spanish Simple Measure of Gobbledygook (SMOG) and Gilliam-Peña-Mountain (GPM) indices. One-way ANOVA with Tukey's Honestly Significant Difference (HSD) post-hoc analysis was performed.
    Results: 322 unique OPEMs were retrieved using Spanish queries. These included 132 (41%) from non-profit organizations, 114 (35.4%) from governmental organizations, and 63 (19.5%) from academic medical centers; the remainder were from professional medical society or pharmaceutical company sources. Overall, gynecologic oncology OPEMs were written at a mean 9.8 ± 1.2 grade reading level. Only 14% of OPEMs were written at or below an eighth grade reading level. There were significant differences in readability by publishing source (p < 0.001). Though there were no significant differences in readability by cancer type (p = 0.07), the mean reading level for all cancer types was between the ninth and eleventh grade levels.
    Conclusions: 86% of readily searchable Spanish gynecologic oncology OPEMs are written above recommended reading levels. Gynecologic oncologists should curate and support Spanish-speaking patients in finding high-quality online educational content.
    Keywords:  Health literacy; Online health resources; Patient education; Readability; Spanish
    DOI:  https://doi.org/10.1016/j.gore.2024.101548
  17. J Patient Exp. 2024; 11: 23743735241305533
      The Web Content Accessibility Guidelines (WCAGs) provide website development requirements with users' cognitive/sensory limitations in mind. The purpose of this study was to assess the accessibility and usability of shoulder instability surgery and open Latarjet surgery online patient education materials (OPEMs) for persons with disabilities based on WCAG compliance. OPEMs were evaluated for search engine optimization, content, design, performance, accessibility, overall scores, body text contrast ratios, and compliance error count at increasing WCAG standard levels. Analysis suggested that OPEMs across both search terms scored poorly in WCAG compliance scores and had significant increases in the number of compliance errors as standards became more stringent. These results suggest that orthopedic OPEMs place unnecessary cognitive and physical loads on users with disabilities, warranting greater scrutiny of the availability and accessibility of orthopedic OPEMs.
    Keywords:  access to care; communication; education; equity; health literacy; inclusion; medications/adherence; orthopedics
    DOI:  https://doi.org/10.1177/23743735241305533
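    The body-text contrast checks mentioned above rest on the WCAG 2.x contrast-ratio definition, which can be computed directly from sRGB colors. A minimal sketch (real audit tools also handle transparency, font size, and page context):

        def _linear(channel: int) -> float:
            # Linearize one 8-bit sRGB channel per the WCAG relative-luminance definition
            c = channel / 255
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

        def contrast_ratio(rgb1: tuple, rgb2: tuple) -> float:
            def luminance(rgb):
                r, g, b = (_linear(c) for c in rgb)
                return 0.2126 * r + 0.7152 * g + 0.0722 * b
            lighter, darker = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
            return (lighter + 0.05) / (darker + 0.05)

        # WCAG level AA requires at least 4.5:1 for normal body text;
        # pure black on white yields the maximum ratio of 21:1.
        assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2) == 21.0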
  18. Cureus. 2024 Nov;16(11): e73989
    Introduction This study aimed to evaluate and compare the quality and reliability of information provided by two widely used digital platforms, ChatGPT-4 and Google, on frequently asked questions about colon cancer. With the growing popularity of these platforms, individuals increasingly turn to them for accessible health information, yet questions remain regarding the accuracy and reliability of such content. Given that colon cancer is a prevalent and serious condition, trustworthy information is essential to support patient education, facilitate informed decision-making, and potentially improve patient outcomes. Therefore, the objective was to determine which platform offers more reliable and accurate medical information on colon cancer, using established evaluation criteria to assess the quality of information. Methods Twenty frequently asked questions about colon cancer were selected based on search popularity and relevance to patients and then searched using ChatGPT-4 and Google. Responses were evaluated using tools such as DISCERN (reliability), Global Quality Score (GQS), Journal of the American Medical Association (JAMA) criteria (accuracy), SAM (suitability), the Flesch-Kincaid Readability Test, HITS (user experience), and VPI (visibility). Statistical analyses determined significant differences between the platforms (p < 0.05). Results ChatGPT-4 scored significantly higher than Google on DISCERN, GQS, and JAMA criteria, demonstrating superior reliability, accuracy, and comprehensibility (p < 0.001). While both platforms had comparable readability scores on the Flesch-Kincaid Readability Test, ChatGPT-4 was rated as more suitable for patient education according to SAM criteria (p < 0.01). Furthermore, ChatGPT-4 was found to be more user-friendly and offered more structured information based on the HITS scale (p < 0.01). Although Google showed higher visibility according to the VPI, the limited presence of HONcode-certified results raised concerns about the reliability of its information. Conclusion ChatGPT-4 proved to be a more reliable and higher-quality source of medical information compared to Google, particularly for patient queries about colon cancer. AI-based platforms such as ChatGPT-4 hold promise for enhancing patient education and providing accurate medical information, although further research is needed to confirm these findings across different medical topics and larger populations.
    Keywords:  chatgpt-4; colon cancer; google; health quality; information quality
    DOI:  https://doi.org/10.7759/cureus.73989
  19. Med Ref Serv Q. 2024 Dec 17. 1-10
    This pilot study investigated the use of generative AI, via ChatGPT, to produce Boolean search strings for querying PubMed. The goals were to determine whether ChatGPT could be used in search string formation and, if so, which approach was most effective. Retrieval from published systematic reviews' search strings was compared with retrieval from AI-generated search strings. While moderate overlap in publication retrieval between published and AI-generated search strings was noted, the overlap was not sufficient to completely replicate published search strings, and little difference was observed between prompted and unprompted approaches to ChatGPT.
    Keywords:  Artificial intelligence; Boolean searching; ChatGPT; GenAI; generative AI; systematic reviews
    DOI:  https://doi.org/10.1080/02763869.2024.2440848
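    To make the workflow concrete: a Boolean string, whether hand-built or ChatGPT-drafted, can be run against PubMed programmatically through the NCBI E-utilities, for example via Biopython's Entrez module. An illustrative sketch (the query and email are placeholders, not the pilot's protocol):

        from Bio import Entrez

        Entrez.email = "you@example.org"  # NCBI asks for a contact address
        query = '("exercise therapy"[MeSH Terms] OR "resistance training"[tiab]) AND randomized controlled trial[pt]'
        handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
        record = Entrez.read(handle)
        handle.close()
        pmids = set(record["IdList"])
        # Overlap with a published review's retrieval: len(pmids & published_pmids)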
  20. Ear Nose Throat J. 2024 Dec 16. 1455613241307886
    Introduction: Patients frequently use social media to direct their health care. However, the quality of social media posts regarding facial paralysis and reanimation is unclear. Objective: To assess the quality of facial reanimation posts on social media. Methods: Ten key search terms were used to search YouTube and Facebook. The top 10 posts for each search term were graded using a variety of parameters, including the Global Quality Score (GQS), Modified DISCERN, Journal of the American Medical Association criteria, and a novel Social Media Quality Score (SMQS) created by the authors. Results: There was a significant difference in SMQS (P = .035) and GQS (P = .01) between YouTube and Facebook videos. For YouTube videos, there was a significant difference in SMQS scores (P = .003) between various search terms. For Facebook videos, there was a significant difference in both SMQS (P < .0001) and Modified DISCERN (P = .036) scores. The majority of videos evaluated were of moderate or low quality. Conclusion: Higher-quality posts regarding facial reanimation are needed on social media. As health care providers, we must provide patients with appropriate resources to find high-quality posts, and when posting content, we must carefully curate the "key words" so that patients can easily find high-quality content.
    Keywords:  facial paralysis; facial reanimation; social media
    DOI:  https://doi.org/10.1177/01455613241307886
  21. OTO Open. 2024 Oct-Dec;8(4): e70052
       Objective: Assessing the quality of human papillomavirus (HPV) vaccination-related content on TikTok is crucial due to its popularity among adolescents. We assessed these videos while comparing the content and quality of videos with and without physician involvement.
    Study Design: Cross-sectional cohort analysis.
    Setting: HPV vaccination-related TikTok videos.
    Methods: The TikTok library was queried using the search terms #HPVvaccine, #HPVvaccination, #Gardasil, #Gardasilvaccine, and #Gardasilvaccination. Video quality was evaluated using the DISCERN scale, assessing treatment-related information quality. Descriptive statistics were used to characterize our cohort. t tests and Fisher's exact test were used to assess for differences in video content and quality based on physician involvement. Significance was set at P < .05.
    Results: Our search yielded 131 videos, averaging 68,503.12 views, 2314.27 likes, and 89.28 comments per video. Videos frequently involved physicians (48.09%), focused on education (54.96%) or advocacy (22.90%), and were US-made (68.90%). Otolaryngologists were rarely featured (3.17%). While most videos mentioned the HPV vaccine protected against cancer generally (86.26%), and cervical cancer specifically (67.94%), few discussed its protective effect against head and neck cancer (26.72%). Videos infrequently discussed updated eligibility among all adults ≤45 years of age (26.72%) or that men can also receive the vaccine (28.24%). Physician-involved videos were more focused on education (P < .001) and focused less on patient experiences (P < .001) and advocacy (P = .036). Overall DISCERN scores were low among physician (mean = 2.46, SD = 1.13) and nonphysician (mean = 2.09, SD = 1.02) content.
    Conclusion: TikTok HPV vaccination content is poor in quality, even with physician involvement. Enhancing content quality and increasing otolaryngologist participation can boost HPV awareness and vaccination rates.
    Keywords:  HPV; TikTok; head and neck cancer; quality; reliability; social media; vaccination
    DOI:  https://doi.org/10.1002/oto2.70052
  22. Health Informatics J. 2024 Oct-Dec;30(4): 14604582241310427
    An increasing number of patients turn to YouTube for medical information, driving the growth of research on medical video content, including in dermatology. The objective was to analyze the content of YouTube videos discussing acne, psoriasis, or anti-aging skincare in Arabic. This infodemiological study analyzed the most viewed videos on these topics. A usefulness score was created to compare "useful" and "not useful" videos, along with assessments of global quality and reliability. Of the 98 most viewed videos, 75 were analyzed; non-professionals produced 53.33% of them. Median scores for quality (2/5), reliability (1/5), and usefulness (4/19) were low. Most videos (78.67%) were "not useful," while 21.33% were "useful," with the latter showing significantly higher quality and reliability. In conclusion, most videos present shortcomings in both quality and reliability. Videos from professional sources are far fewer in number and less popular.
    Keywords:  Arabic; quality; reliability; social media; usefulness; youtube
    DOI:  https://doi.org/10.1177/14604582241310427
  23. Medicine (Baltimore). 2024 Dec 13. 103(50): e40852
      Ophthalmologists and ophthalmology residents (ORs) are increasingly turning to the internet for medical information, underscoring the significant role that YouTube videos, particularly three-dimensional (3D) ones, play in lifelong learning. This study aimed to compare the content and quality of 3D YouTube videos with two-dimensional (2D) videos as supplementary educational tools for vitreoretinal surgery. Data collected included video length (minutes), time elapsed since upload (days), number of views, likes, dislikes, vitreoretinal surgery type, and visualization system. Video popularity and interaction were calculated using the video power index, interaction index, and viewing rate. Two senior ophthalmologists (SOs) and 2 ORs evaluated the videos using the DISCERN, Global Quality Score, and usefulness scoring systems. Inter-rater reliability was assessed using the intra-class correlation coefficient. A total of 392 videos were screened, with 67 2D and 67 3D videos deemed appropriate for inclusion. While 2D videos had significantly more views, likes, interaction index, and viewing rate than 3D videos (P < .001 for all), 3D videos were rated higher by ORs across all scoring systems (P < .05 for all). Inter-rater reliability was confirmed to be good, with the lowest intra-class correlation coefficient being 0.796 for SOs (95% confidence interval: 0.668-0.875) and 0.814 for ORs (95% confidence interval: 0.698-0.886). In conclusion, side-by-side 3D YouTube videos offer a valuable supplementary educational tool, enhancing depth perception and enabling both SOs and ORs to better understand the complexities of ocular surgeries, particularly vitreoretinal procedures. These videos can also be used to observe new procedures and refresh previously acquired knowledge of past surgeries.
    DOI:  https://doi.org/10.1097/MD.0000000000040852
  24. Dent Med Probl. 2024 Nov-Dec;61(6): 865-873
       BACKGROUND: Patients are increasingly turning to Internet platforms for health-related information. An example is YouTube, one of the largest media-sharing networks in the world.
    OBJECTIVES: The aim of the present study was to assess the informational value of YouTube videos on the treatment of bruxism with botulinum toxin, a procedure that is becoming increasingly popular in the field of dentistry.
    MATERIAL AND METHODS: After collecting 30 videos for each of the 5 keywords, a total of 150 videos were examined. The following search terms were used: 'bruxism Botox treatment'; 'tooth grinding Botox treatment'; 'jaw clenching Botox treatment'; 'Botox for bruxism'; and 'Botox for masseter reduction'. Two researchers independently assessed the quality of the video content using the DISCERN scoring system. Additionally, the relationships between quantitative variables, such as video duration, the source of upload and video popularity, and the DISCERN scores, were examined.
    RESULTS: The mean overall DISCERN score was 32.3. The YouTube videos were divided into the following categories based on their DISCERN scores: very poor (26.3%); poor (61.4%); fair (10.5%); good (1.8%); and excellent (0.0%). Videos that addressed risk factors during therapy, treatment outcomes, bruxism symptoms, and the muscle anatomy had significantly higher overall DISCERN scores.
    CONCLUSIONS: In general, YouTube videos on botulinum toxin treatment for bruxism had poor informational value. It is important that dentists recognize the significance of YouTube as a source of health-related information, and ensure that the content they provide is of the highest quality, accurate and up-to-date.
    Keywords:  Botox; DISCERN; YouTube; bruxism; quality
    DOI:  https://doi.org/10.17219/dmp/168410
  25. J Multidiscip Healthc. 2024; 17: 5927-5939
       Aims and Objectives: To assess the content quality and reliability of Gastroesophageal reflux disease (GERD) videos on TikTok and Bilibili.
    Background: Since many people with GERD use current online platforms to search for health information, there is a need to assess the quality of GERD videos on social media. There are many GERD videos on TikTok and Bilibili; however, the quality of information in these videos remains unknown.
    Design: A cross-sectional survey on two video platforms.
    Methods: In November 2023, we retrieved 200 videos from TikTok and Bilibili with the search term "GERD." Basic video information was extracted, the content coded, and the video source identified. Two independent raters assessed the quality of each video using the Journal of the American Medical Association (JAMA) benchmark criteria, the modified DISCERN (mDISCERN) criteria, and the Global Quality Score (GQS) tool.
    Results: A total of 156 videos were collected. Most of the videos on TikTok and Bilibili came from gastroenterologists. TikTok's GERD video quality and reliability were higher than Bilibili's. The mDISCERN and GQS scores of both platforms were positively correlated with duration, and the GQS score was positively correlated with collection and shares. Bilibili's JAMA score was negatively correlated with time-sync comments, and TikTok's JAMA score was negatively correlated with days since upload.
    Conclusion: This study indicated that the content quality scores of TikTok and Bilibili as sources of scientific information on GERD are average, and patients should carefully evaluate and select the GERD-related videos they watch on TikTok and Bilibili.
    Relevance to Clinical Practice: Evaluating the quality of GERD videos on the two platforms can provide new ideas for health education interventions in the clinic and a relevant basis for improving the quality of the videos.
    Keywords:  gastroesophageal reflux disease; health education; online health information-seeking; online video; quality; social media
    DOI:  https://doi.org/10.2147/JMDH.S485781
  26. Digit Health. 2024 Jan-Dec;10: 20552076241304594
     Objective: Thyroid-associated ophthalmopathy (TAO) is a prevalent orbital disease that significantly affects patients' daily lives. TikTok now acts as a novel tool for healthcare, and many individuals search for disease information on TikTok before professional consultation, but the videos involved need further assessment. This study aimed to assess the quality of TAO-related TikTok videos and the correlations between video variables and quality.
    Methods: The top 150 TikTok videos were collected using the keyword TAO. Duplicate, overly short, or irrelevant videos, as well as similar videos from the same source, were excluded. Two raters evaluated the included videos' overall quality, reliability, understandability, and actionability across different sources and content focuses.
    Results: The 90 included videos had received nearly 15,000 likes and 2000 shares. Ophthalmologist-uploaded and treatment-focused videos formed the two largest categories, and their quality scores were much higher than the others'. The average Patient Education Materials Assessment Tool for Audiovisual Materials scores, Global Quality Scores, and DISCERN scores indicated that these videos were easy to understand (87.6%), actionable (74.5%), and fair in quality (44.97). The number of added hashtags was positively correlated with video understandability. Additionally, popularity was negatively correlated with overall quality, while video length was positively correlated with reliability and negatively correlated with days since upload.
    Conclusion: Certified healthcare professionals uploaded most TAO videos, resulting in acceptable quality with minimal misinformation. To serve as a qualified source of patient education materials, TikTok should promote longer disease-related videos while enhancing their reliability and understandability.
    Keywords:  PEMAT-A/V; Thyroid-associated ophthalmopathy; TikTok; information quality; public health; social media
    DOI:  https://doi.org/10.1177/20552076241304594
  27. J Cancer Educ. 2024 Dec 17.
    Information is crucial for person-centered cancer care. This study investigated sociodemographic, psychological, and communicative factors associated with perceived information needs and the intention to continue seeking information among individuals with cancer experience in Hong Kong. Data were drawn from the INSIGHTS-Hong Kong (International Studies to Investigate Global Health Information Trends) survey, which included 510 respondents with cancer experience, either personally or as family members and close friends of those diagnosed with cancer. The findings revealed that 62% of participants perceived knowledge deficits and needed more cancer information, yet only 43% intended to seek additional information. Greater cancer worry, extensive effort in previous information searches, and concerns about information quality were significantly associated with heightened information needs. These results highlight key areas for prioritization in educational and supportive care initiatives to address unmet support needs. Additionally, the intention to seek further information was associated with perceived information needs, cancer severity, subjective norms, and concerns about information usefulness. These findings suggest strategies to enhance supportive care services by addressing unmet information needs: expanding access to credible and clear information, enhancing credibility-assessment skills, emphasizing cancer risks, and leveraging support networks for individuals affected by cancer. This study lays the groundwork for future research on cancer information engagement in Hong Kong and other settings.
    Keywords:  Cancer information seeking; Information needs; Supportive care; Survey
    DOI:  https://doi.org/10.1007/s13187-024-02551-5
  28. J Educ Health Promot. 2024 ;13 346
    Psychological factors play a fundamental role in people's information behavior. The aim of the current research was to identify, through a systematic review, the psychological factors that influence users' health information-seeking behavior. The novelty of this work lies in its systematic approach to identifying these factors: by employing a systematic method, the research gains scientific value and provides greater confidence in identifying and describing psychological factors related to information behavior. After searching the WoS, PubMed, and Scopus databases, 4162 articles were screened; after removing duplicates and applying the article selection criteria, 31 articles were retained for analysis. The review used the PRISMA flowchart, a valuable instrument for ensuring methodological transparency and facilitating the reporting of systematic reviews and meta-analyses. It provides a structured framework for outlining the stages of the review process, including study identification, screening, eligibility assessment, data extraction, and synthesis; by employing it, researchers can enhance the rigor and reproducibility of their systematic reviews, thereby promoting evidence-based decision making across fields of study. The findings reveal that of the 31 articles, 28 were surveys and 3 were descriptive studies; furthermore, one article employed an intervention methodology. The statistical populations targeted were community members, pregnant women, or patients. The findings highlight anxiety, uncertainty, and information avoidance as the most commonly identified psychological variables influencing health information-seeking behavior. Psychological factors play an important role in the health information behavior of users in different societies; however, published articles in this field have paid more attention to information carriers and less to the psychological characteristics of people, which originate in the human psyche and mind. The importance of addressing non-communicable diseases has been emphasized in "Research and Technology Policies and Priorities" documents, which highlight disease management, self-care, and the role of education and information in disease control and in reducing the burden of non-communicable diseases. Planners and policymakers can therefore take important steps toward improving the quality of information acquisition by focusing on these factors. This work also allows researchers, by knowing the existing gaps in this area, to study information behavior in future research with greater knowledge.
    Keywords:  Anxiety; health information-seeking behavior; information avoidance; psychological variables; uncertainty
    DOI:  https://doi.org/10.4103/jehp.jehp_973_23
  29. JMIR Infodemiology. 2024 Dec 16. 4: e64577
     BACKGROUND: After the US Supreme Court overturned Roe v. Wade, confusion followed regarding the legality of abortion in different states across the country. Recent studies found increased Google searches for abortion-related terms in restricted states after the Dobbs v. Jackson Women's Health Organization decision was leaked. As patients and providers use Wikipedia (Wikimedia Foundation) as a predominant medical information source, we hypothesized that changes in reproductive health information-seeking behavior could be better understood by examining Wikipedia article traffic.
    OBJECTIVE: This study aimed to examine trends in Wikipedia usage for abortion and contraception information before and after the Dobbs decision.
    METHODS: Page views of abortion- and contraception-related Wikipedia pages were scraped. Temporal changes in page views before and after the Dobbs decision were then analyzed to explore changes in baseline views, differences in views for abortion-related information in states with restrictive abortion laws versus nonrestrictive states, and viewer trends on contraception-related pages.
    RESULTS: Wikipedia articles related to abortion topics had significantly increased page views following both the leaked and the final Dobbs decision. There was a 103-fold increase in page views for the Wikipedia article Roe v. Wade following the Dobbs decision leak (mean 372,654, SD 135,478 vs mean 3614, SD 248; P<.001) and a 67-fold increase following the release of the final Dobbs decision (mean 595,871, SD 178,649 vs mean 8942, SD 402; P<.001). Articles about abortion in the most restrictive states had a greater increase in page views (mean 40.6, SD 12.7; 18/51, 35% of states) than articles about abortion in states with some restrictions or protections (mean 26.8, SD 7.3; 24/51, 47% of states; P<.001) and in the most protective states (mean 20.6, SD 5.7; 8/51, 16% of states; P<.001). Finally, views of pages about common contraceptive methods significantly increased after the Dobbs decision: "Vasectomy" page views increased by 183% (P<.001), "IUD" (intrauterine device) page views by 80% (P<.001), "Combined oral contraceptive pill" page views by 24% (P<.001), "Emergency Contraception" page views by 224% (P<.001), and "Tubal ligation" page views by 92% (P<.001).
    CONCLUSIONS: People sought information on Wikipedia about abortion and contraception at increased rates after the Dobbs decision. Increased traffic to abortion-related Wikipedia articles correlated to the restrictiveness of state abortion policies. Increased interest in contraception-related pages reflects the increased demand for contraceptives observed after the Dobbs decision. Our work positions Wikipedia as an important source of reproductive health information and demands increased attention to maintain and improve Wikipedia as a reliable source of health information after the Dobbs decision.
    Keywords:  Dobbs; Wikipedia; abortion; contraception; contraceptive; information seeking; internet; page view; reproduction; reproductive; trend; viewer trends; women’s health
    DOI:  https://doi.org/10.2196/64577
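    The page-view scraping described above can be reproduced against the public Wikimedia Pageviews REST API. A minimal sketch (the endpoint is real; the article title and date range here are only examples, not the study's exact parameters):

        import requests

        URL = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
               "en.wikipedia.org/all-access/user/{title}/daily/{start}/{end}")

        def daily_views(title: str, start: str, end: str) -> list:
            # start/end are YYYYMMDD strings; titles use underscores, e.g. "Roe_v._Wade"
            resp = requests.get(URL.format(title=title, start=start, end=end),
                                headers={"User-Agent": "pageviews-demo/0.1 (contact@example.org)"})
            resp.raise_for_status()
            return [item["views"] for item in resp.json()["items"]]

        views_before_leak = daily_views("Roe_v._Wade", "20220401", "20220430")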
  30. Health Inf Manag. 2024 Dec 18. 18333583241303771
      Background: Work-integrated learning (WIL) is integral to most health disciplines' profession-qualifying degree programs. Objectives: To analyse the categories, locales and foci of final-year (capstone), health information management professional practice (WIL) placements, 2012-2021, at La Trobe University, Australia. Method: A documentary analysis of 614 placement agency proposals, 2012-2021, interrogated multiple characteristics: agency type, placement (sub-) category (WIL model), project type, agency-required student capabilities, intended learning outcomes. Results: Public hospitals offered 50% of all placements. Medical research/health or disease screening/clinical registries offered 17.8%, incorporating 86.7% of "research-based" placements. Government department offerings were consistently stable; private hospital, primary care and community healthcare offerings declined. The majority (64.8%) of offerings were "project-based," followed by "internship" (28.7%: Health Information Service (14%) and "other" (14.7%)), research-based (4.9%) and other (1.6%). Ninety-nine (16.1%) proposals specified additional, pre-placement skills and capabilities: technical (information technologies, software applications; 58.6% of 99 proposals); working independently (49.5%); communications (written, verbal; 45.5%); targeted interest (38.4%) in "informatics and data quality," "quality and safety," "software development," "coding"; organisational and/or time management skills (29.9%); teamwork skills (20.2%); data analysis skills (18.2%); enthusiasm and/or self-motivation (15.2%). Conclusion: The project-based model for the capstone placement is ideal for preparing health information management students for complex, graduate professional work. Agencies' pre-placement expectations of students (knowledge, technical skills, soft skills) are consistent with findings from the WIL literature and align with course curricula and Australia's Health Information Manager (HIM) Profession-entry Competency Standards. Implications: The findings will strengthen the health information management profession's knowledge base of WIL and inform educators, students and agency supervisors.
    Keywords:  allied health occupations; experiential learning; health information management; health information management profession; health information management workforce; health information manager; higher education; preceptorship; professional practice; project-based placements; work-integrated learning
    DOI:  https://doi.org/10.1177/18333583241303771