bims-librar Biomed News
on Biomedical librarianship
Issue of 2024‒07‒14
twenty papers selected by
Thomas Krichel, Open Library Society



  1. Zhonghua Yi Shi Za Zhi. 2024 May 28. 54(3): 170-174
      The current version of Jing Xiao Chan Bao is believed to be the earliest extant medical book on gynaecology in China. It has three problems: missing formulae, a lack of fluency in the text, and consequent difficulties in proofreading and editing. These problems persist because very few versions of Jing Xiao Chan Bao survive in China, which makes further comparative study difficult. The Waseda University Library announced that the version it holds is a handwritten copy, providing a new version for further research on this book. This version is believed to have been compiled and edited by Japanese scholars based on Medical Prescription Analogues (Yi Fang Lei Ju) and therefore appears to be similar to the South Song Dynasty version. Archival research found that in the version at Waseda University Library, the content organisation, the number of formulae, and the use of taboo words differ from those in the current version in China. In this sense, this version is believed to be valuable and meaningful for archival and clinical research in traditional Chinese medicine.
    DOI:  https://doi.org/10.3760/cma.j.cn112155-20240104-00005
  2. Syst Rev. 2024 Jul 08. 13(1): 174
      BACKGROUND: The demand for high-quality systematic literature reviews (SRs) for evidence-based medical decision-making is growing. SRs are costly and require the scarce resource of highly skilled reviewers. Automation technology has been proposed to save workload and expedite the SR workflow. We aimed to provide a comprehensive overview of SR automation studies indexed in PubMed, focusing on the applicability of these technologies in real-world practice.
    METHODS: In November 2022, we extracted, combined, and ran an integrated PubMed search for SRs on SR automation. Full-text English peer-reviewed articles were included if they reported studies on SR automation methods (SSAM) or automated SRs (ASR). Bibliographic analyses and knowledge-discovery studies were excluded. Record screening was performed by single reviewers, and the selection of full-text papers was performed in duplicate. We summarized the publication details, automated review stages, automation goals, applied tools, data sources, methods, results, and Google Scholar citations of SR automation studies.
    RESULTS: From 5321 records screened by title and abstract, we included 123 full-text articles, of which 108 were SSAM and 15 were ASR. Automation was applied for search (19/123, 15.4%), record screening (89/123, 72.4%), full-text selection (6/123, 4.9%), data extraction (13/123, 10.6%), risk of bias assessment (9/123, 7.3%), evidence synthesis (2/123, 1.6%), assessment of evidence quality (2/123, 1.6%), and reporting (2/123, 1.6%). Multiple SR stages were automated by 11 (8.9%) studies. The performance of automated record screening varied widely across SR topics. In published ASR, we found examples of automated search, record screening, full-text selection, and data extraction. In some ASRs, automation fully complemented manual reviews to increase sensitivity rather than to save workload. Reporting of automation details was often incomplete in ASRs.
    CONCLUSIONS: Automation techniques are being developed for all SR stages, but with limited real-world adoption. Most SR automation tools target single SR stages, with modest time savings for the entire SR process and varying sensitivity and specificity across studies. Therefore, the real-world benefits of SR automation remain uncertain. Standardizing the terminology, reporting, and metrics of study reports could enhance the adoption of SR automation techniques in real-world practice.
    Keywords:  Artificial intelligence; Automation; Evidence synthesis; Machine learning; Natural language processing; Systematic literature review; Text mining
    DOI:  https://doi.org/10.1186/s13643-024-02592-3
  3. Syst Rev. 2024 Jul 11. 13(1): 177
      OBJECTIVES: In a time of exponential growth of new evidence supporting clinical decision-making, combined with a labor-intensive process of selecting this evidence, methods are needed to speed up current processes to keep medical guidelines up-to-date. This study evaluated the performance and feasibility of active learning to support the selection of relevant publications within medical guideline development, and studied the role of noisy labels.
    DESIGN: We used a mixed-methods design. The manual literature selection process of two independent clinicians was evaluated for 14 searches. This was followed by a series of simulations investigating the performance of random reading versus screening prioritization based on active learning. We identified hard-to-find papers and checked the labels in a reflective dialogue.
    MAIN OUTCOME MEASURES: Inter-rater reliability was assessed using Cohen's kappa (κ). To evaluate the performance of active learning, we used the Work Saved over Sampling at 95% recall (WSS@95) and the percentage of Relevant Records Found after reading only 10% of the total number of records (RRF@10). We used the average time to discovery (ATD) to detect records with potentially noisy labels. Finally, the accuracy of labeling was discussed in a reflective dialogue with guideline developers.
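    As an aside, both screening-prioritization metrics are simple functions of a model-ranked labeling order. A minimal Python sketch (illustrative only; the function and variable names below are hypothetical, not the authors' code):

      # WSS@95 and RRF@10 from a model-ranked list of 0/1 relevance labels.
      def wss_at_recall(ranked_labels, recall=0.95):
          """Work Saved over Sampling at the given recall level."""
          n = len(ranked_labels)
          n_relevant = sum(ranked_labels)
          found = 0
          for i, label in enumerate(ranked_labels, start=1):
              found += label
              if found >= recall * n_relevant:
                  # Fraction of records left unread, minus the tolerated miss rate.
                  return (n - i) / n - (1 - recall)
          return 0.0

      def rrf_at_fraction(ranked_labels, fraction=0.10):
          """Percentage of relevant records found after reading a fraction of records."""
          n_read = int(len(ranked_labels) * fraction)
          n_relevant = sum(ranked_labels)
          return 100 * sum(ranked_labels[:n_read]) / n_relevant if n_relevant else 0.0

      # Example: 1 = relevant, 0 = irrelevant, in model-ranked order.
      print(wss_at_recall([1, 1, 0, 1, 0, 0, 0, 0, 0, 0]))    # 0.55
      print(rrf_at_fraction([1, 1, 0, 1, 0, 0, 0, 0, 0, 0]))  # 33.3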
    RESULTS: Mean κ for manual title-abstract selection by clinicians was 0.50 and varied between -0.01 and 0.87, based on 5021 abstracts. WSS@95 ranged from 50.15% (SD = 17.7) based on the selection by clinicians, to 69.24% (SD = 11.5) based on the selection by the research methodologist, up to 75.76% (SD = 12.2) based on the final full-text inclusion. A similar pattern was seen for RRF@10, ranging from 48.31% (SD = 23.3) to 62.8% (SD = 21.20) and 65.58% (SD = 23.25). The performance of active learning deteriorates with higher noise. Compared with the final full-text selection, the selections made by clinicians or research methodologists lowered WSS@95 by 25.61% and 6.25%, respectively.
    CONCLUSION: While active machine learning tools can accelerate the process of literature screening within guideline development, they can only work as well as the input given by human raters. Noisy labels make noisy machine learning.
    Keywords:  Active learning; Guideline development; Machine learning; Systematic reviewing
    DOI:  https://doi.org/10.1186/s13643-024-02590-5
  4. Cureus. 2024 Jun;16(6): e61955
      BACKGROUND: In reconstructive plastic surgery, the need for comprehensive research and systematic reviews is apparent due to the field's intricacies, influencing the evidence supporting specific procedures. Although Chat-GPT's knowledge is limited to September 2021, its integration into research proves valuable for efficiently identifying knowledge gaps. Therefore, this tool becomes a potent asset, directing researchers to focus on conducting systematic reviews where they are most necessary.
    METHODS: Chat-GPT 3.5 was prompted to generate 10 unpublished, innovative research topics on breast reconstruction surgery, followed by 10 additional subtopics. Results were filtered for systematic reviews in PubMed, and novel ideas were identified. To evaluate Chat-GPT's power in generating improved responses, two additional searches were conducted using search terms generated by Chat-GPT.
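    The abstract does not say how the prompts were issued, presumably via the ChatGPT web interface. For readers wanting to reproduce this kind of topic-generation step programmatically, a comparable call through the OpenAI Python client (openai >= 1.0) might look like the sketch below, where the prompt wording is an assumption based on the abstract:

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      response = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{
              "role": "user",
              # Hypothetical prompt, paraphrased from the abstract.
              "content": ("Generate 10 innovative, unpublished research "
                          "topics on breast reconstruction surgery."),
          }],
      )
      print(response.choices[0].message.content)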
    RESULTS: Chat-GPT produced 83 novel ideas, an accuracy rate of 83%. The number of novel ideas varied widely across topics: transgender women generated 10 ideas, whereas acellular dermal matrix (ADM) generated five. Chat-GPT increased the total number of manuscripts retrieved by a factor of 2.3, 3.9, and 4.0 in the first, second, and third trials, respectively. While the search results were consistent with our manual searches (95.2% accuracy), the greater number of manuscripts potentially diluted the quality of articles, resulting in fewer novel systematic review ideas.
    CONCLUSION: Chat-GPT proves valuable in identifying gaps in the literature and offering insights into areas lacking research in breast reconstruction surgery. While it displays high sensitivity, refining its specificity is imperative. Prudent practice involves evaluating accomplished work and conducting a comprehensive review of all components involved.
    Keywords:  breast reconstruction; chat-gpt; novel ideas; plastic and reconstructive surgery; reconstructive breast surgery; reconstructive plastic surgery; research; systematic review
    DOI:  https://doi.org/10.7759/cureus.61955
  5. Genet Med. 2024 Jul 04. pii: S1098-3600(24)00143-6. [Epub ahead of print] 101209
      
    DOI:  https://doi.org/10.1016/j.gim.2024.101209
  6. Cancers (Basel). 2024 Jun 25. pii: 2324. [Epub ahead of print] 16(13)
      BRCA genetic testing is available for UK Jewish individuals, but the provision of BRCA information online is unknown. We aimed to evaluate the online provision of BRCA information by UK organisations (UKO), UK Jewish community organisations (JCO), and genetic testing providers (GTP). Google searches for organisations offering BRCA information were performed using relevant sets of keywords. The first 100 website links were categorised into UKOs/JCOs/GTPs; additional JCOs were supplemented through community experts. Websites were reviewed using customised questionnaires for BRCA information. Information provision was assessed for five domains: accessibility, scope, depth, accuracy, and quality. These domains were combined to provide a composite score (maximum score = 5). Search results were screened (n = 6856), and 45 UKOs, 16 JCOs, and 18 GTPs provided BRCA information. Accessibility was high (84%, 66/79). Scope was lacking, with 35% (28/79) addressing >50% of items. Most (82%, 65/79) described BRCA-associated cancers: breast and/or ovarian cancer was mentioned by 78% (62/79), but only 34% (27/79) mentioned at least one of pancreatic cancer, prostate cancer, or melanoma. Few websites provided carrier frequencies in the general (24%, 19/79) and Jewish (20%, 16/79) populations. Only 15% (12/79) had quality information with some/minimal shortcomings. Overall information provision was low to moderate: median scores were UKO = 2.1 (IQR = 1), JCO = 1.6 (IQR = 0.9), and GTP = 2.3 (IQR = 1), out of a maximum of 5. There is a scarcity of high-quality BRCA information online. These findings have implications for UK Jewish BRCA programmes and those considering BRCA testing.
    Keywords:  BRCA; Jewish; genetic testing; online information
    DOI:  https://doi.org/10.3390/cancers16132324
  7. NPJ Digit Med. 2024 Jul 08. 7(1): 183
      With the introduction of ChatGPT, Large Language Models (LLMs) have received enormous attention in healthcare. Despite potential benefits, researchers have underscored various ethical implications. While individual instances have garnered attention, a systematic and comprehensive overview of the practical applications currently being researched and the ethical issues connected to them is lacking. Against this background, this work maps the ethical landscape surrounding the current deployment of LLMs in medicine and healthcare through a systematic review. Electronic databases and preprint servers were queried using a comprehensive search strategy which generated 796 records. Studies were screened and extracted following a modified rapid review approach. Methodological quality was assessed using a hybrid approach. For 53 records, a meta-aggregative synthesis was performed. Four general fields of application emerged, showcasing a dynamic exploration phase. Advantages of using LLMs are attributed to their capacity for data analysis, information provision, support in decision-making, mitigating information loss, and enhancing information accessibility. However, our study also identifies recurrent ethical concerns connected to fairness, bias, non-maleficence, transparency, and privacy. A distinctive concern is the tendency to produce harmful or convincing but inaccurate content. Calls for ethical guidance and human oversight are recurrent. We suggest that the ethical guidance debate should be reframed to focus on defining what constitutes acceptable human oversight across the spectrum of applications. This involves considering the diversity of settings, varying potentials for harm, and different acceptable thresholds for performance and certainty in healthcare. Additionally, critical inquiry is needed to evaluate the necessity and justification of LLMs' current experimental use.
    DOI:  https://doi.org/10.1038/s41746-024-01157-x
  8. Tremor Other Hyperkinet Mov (N Y). 2024 ;14 33
      Background: Large language models (LLMs) driven by artificial intelligence allow people to engage in direct conversations about their health. The accuracy and readability of the answers provided by ChatGPT, the most famous LLM, about essential tremor (ET), one of the commonest movement disorders, have not yet been evaluated.
    Methods: Answers given by ChatGPT to 10 questions about ET were evaluated by 5 professionals and 15 laypeople with a score ranging from 1 (poor) to 5 (excellent) in terms of clarity, relevance, accuracy (only for professionals), comprehensiveness, and overall value of the response. We further calculated the readability of the answers.
    Results: ChatGPT answers received relatively positive evaluations, with median scores ranging between 4 and 5, by both groups and independently from the type of question. However, there was only moderate agreement between raters, especially in the group of professionals. Moreover, readability levels were poor for all examined answers.
    Discussion: ChatGPT provided relatively accurate and relevant answers, with some variability in the professionals' judgements, suggesting that the degree of literacy about ET influenced the ratings and, indirectly, that the quality of information provided in clinical practice is also variable. Moreover, the readability of the answers provided by ChatGPT was found to be poor. LLMs will likely play a significant role in the future; therefore, health-related content generated by these tools should be monitored.
    Keywords:  Artificial intelligence; ChatGPT; Essential tremor; Large language Model; Movement disorders
    DOI:  https://doi.org/10.5334/tohm.917
  9. Front Ophthalmol (Lausanne). 2023 ;3 1260415
      Purpose: Our study investigates ChatGPT and its ability to communicate with glaucoma patients.
    Methods: We inputted eight glaucoma-related questions/topics found on the American Academy of Ophthalmology (AAO)'s website into ChatGPT. We used the Flesch-Kincaid test, Gunning Fog Index, SMOG Index, and Dale-Chall readability formula to evaluate the comprehensibility of its responses for patients. ChatGPT's answers were compared with those found on the AAO's website.
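    The study does not name its readability software, but all four of these measures are available in, for example, the textstat Python package; a minimal sketch (the sample text is illustrative, not from the study):

      import textstat

      text = ("Glaucoma is a group of eye conditions that damage the optic "
              "nerve. Early treatment can often prevent vision loss.")

      print(textstat.flesch_kincaid_grade(text))          # Flesch-Kincaid grade level
      print(textstat.gunning_fog(text))                   # Gunning Fog Index
      print(textstat.smog_index(text))                    # SMOG Index
      print(textstat.dale_chall_readability_score(text))  # Dale-Chall score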
    Results: ChatGPT's responses required reading comprehension of a higher grade level (average grade = 12.5 ± 1.6) than that of the text on the AAO's website (average grade = 9.4 ± 3.5) (p = 0.0384). For the eight responses, the key ophthalmic terms appeared 34 out of 86 times in the ChatGPT responses vs. 86 out of 86 times in the text on the AAO's website. The term "eye doctor" appeared once in the ChatGPT text, but the formal term "ophthalmologist" did not appear; "ophthalmologist" appears 26 times on the AAO's website. The word counts of the answers produced by ChatGPT and those on the AAO's website were similar (p = 0.571), with phrases of a homogeneous length.
    Conclusion: ChatGPT trains on the texts, phrases, and algorithms inputted by software engineers. As ophthalmologists, through our websites and journals, we should consider encoding the phrase "see an ophthalmologist". Our medical assistants should sit with patients during their appointments to ensure that the text is accurate and that they fully comprehend its meaning. ChatGPT is effective for providing general information such as definitions or potential treatment options for glaucoma. However, ChatGPT has a tendency toward repetitive answers and, due to their elevated readability scores, these could be too difficult for a patient to read.
    Keywords:  ChatGPT; artificial intelligence; glaucoma; ophthalmology; patient education
    DOI:  https://doi.org/10.3389/fopht.2023.1260415
  10. Spec Care Dentist. 2024 Jul 10.
      BACKGROUND: The Internet has become an indispensable source of health-related information. However, several studies have shown a lack of quality control for webpages related to disability. Specifically, the available content concerning Down syndrome (DS) and dentistry is limited and of dubious quality.
    OBJECTIVE: The aim of the present study was to assess the quality of online content in Spanish and Portuguese on dental care for individuals with DS.
    METHODS: A simultaneous search in Google and Bing using the terms "Down syndrome" and "odontology/dentist/dental treatment" in Spanish and Portuguese was conducted in seven Ibero-American countries (Argentina, Brazil, Chile, Colombia, Spain, Mexico, and Portugal). The first 100 consecutive pages of results from the three combinations of terms in each of the search engines were accessed and selected by applying conventional exclusion criteria. The selected pages were classified according to their authorship, specificity and dissemination potential. The quality of the online content was assessed using the DISCERN questionnaire and the Questionnaire to Evaluate Health Web Sites According to European Criteria (QEEC). The presence of the Health On Net (HON) and Accredited Medical Website (AMW) seals was also assessed.
    RESULTS: The mean DISCERN score was 2.51 ± 0.85 and 2.57 ± 0.86 for the Spanish and Portuguese webpages, respectively. The mean readability score was 3.43 ± 1.26 and 3.25 ± 1.08 for the Spanish and Portuguese webpages, respectively. None of the selected webpages presented the HONcode or AMW trust seals.
    CONCLUSIONS: The content available online in Spanish and Portuguese regarding Down syndrome and dentistry is scarce and of highly questionable quality.
    Keywords:  Down syndrome; dental treatment; dentist; disabilities; odontology
    DOI:  https://doi.org/10.1111/scd.13037
  11. BJA Open. 2024 Jun;10 100296
      Background: The expansion of artificial intelligence (AI) within large language models (LLMs) has the potential to streamline healthcare delivery. Despite the increased use of LLMs, disparities in their performance, particularly across languages, remain underexplored. This study examines the quality of ChatGPT responses in English and Japanese, specifically to questions related to anaesthesiology.
    Methods: Anaesthesiologists proficient in both languages were recruited as experts in this study. Ten frequently asked questions in anaesthesia were selected and translated for evaluation. Three non-sequential responses from ChatGPT were assessed for content quality (accuracy, comprehensiveness, and safety) and communication quality (understanding, empathy/tone, and ethics) by expert evaluators.
    Results: Eight anaesthesiologists evaluated English and Japanese LLM responses. Overall quality for all questions combined was higher in English than in Japanese responses. Content and communication quality were significantly higher in English than in Japanese LLM responses (both P<0.001) in all three responses. Comprehensiveness, safety, and understanding scores were higher in English LLM responses. In all three responses, more than half of the evaluators marked the overall English responses as better than the Japanese responses.
    Conclusions: English LLM responses to anaesthesia-related frequently asked questions were superior in quality to Japanese responses when assessed by bilingual anaesthesia experts in this report. This study highlights the potential for language-related disparities in healthcare information and the need to improve the quality of AI responses in underrepresented languages. Future studies are needed to explore these disparities in other commonly spoken languages and to compare the performance of different LLMs.
    Keywords:  ChatGPT; anaesthesia; artificial intelligence; digital health
    DOI:  https://doi.org/10.1016/j.bjao.2024.100296
  12. OTO Open. 2024 Jul-Sep;8(3): e163
      Objective: Evaluate the quality of responses from Chat Generative Pre-Trained Transformer (ChatGPT) models compared to the answers for "Frequently Asked Questions" (FAQs) from the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) Clinical Practice Guidelines (CPG) for Ménière's disease (MD).
    Study Design: Comparative analysis.
    Setting: The AAO-HNS CPG for MD includes FAQs that clinicians can give to patients for MD-related questions. The ability of ChatGPT to properly educate patients regarding MD is unknown.
    Methods: ChatGPT-3.5 and 4.0 were each prompted with 16 questions from the MD FAQs. Each response was rated in terms of (1) comprehensiveness, (2) extensiveness, (3) presence of misleading information, and (4) quality of resources. Readability was assessed using Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRES).
    Results: ChatGPT-3.5 was comprehensive in 5 responses whereas ChatGPT-4.0 was comprehensive in 9 (31.3% vs 56.3%, P = .2852). ChatGPT-3.5 and 4.0 were extensive in all responses (P = 1.0000). ChatGPT-3.5 was misleading in 5 responses whereas ChatGPT-4.0 was misleading in 3 (31.3% vs 18.75%, P = .6851). ChatGPT-3.5 had quality resources in 10 responses whereas ChatGPT-4.0 had quality resources in 16 (62.5% vs 100%, P = .0177). AAO-HNS CPG FRES (62.4 ± 16.6) demonstrated an appropriate readability score of at least 60, while both ChatGPT-3.5 (39.1 ± 7.3) and 4.0 (42.8 ± 8.5) failed to meet this standard. All platforms had FKGL means that exceeded the recommended level of 6 or lower.
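    For reference, the two readability measures applied above are standard functions of word, sentence, and syllable counts, conventionally written as:

      \mathrm{FRES} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}}

      \mathrm{FKGL} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59

    Higher FRES means easier text (hence the 60-point benchmark above), while higher FKGL corresponds to a higher school grade level (hence the grade-6 target).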
    Conclusion: While ChatGPT-4.0 had significantly better resource reporting, both models have room for improvement in being more comprehensive, more readable, and less misleading for patients.
    Keywords:  Clinical Practice Guidelines; Ménière's disease; artificial intelligence; patient education
    DOI:  https://doi.org/10.1002/oto2.163
  13. Hand Surg Rehabil. 2024 Jul 04. pii: S2468-1229(24)00163-4. [Epub ahead of print] 101748
      
    Keywords:  ChatGPT; Hand surgery; Patient education websites; Quality; Readability
    DOI:  https://doi.org/10.1016/j.hansur.2024.101748
  14. Dermatol Surg. 2024 Jul 10.
      BACKGROUND: The Internet has become the primary information source for patients, with most turning to online resources before seeking medical advice.
    OBJECTIVE: The aim of this study is to evaluate the quality of online information on hidradenitis suppurativa available to patients.
    METHODS: The authors performed an Internet search using the search terms "hidradenitis suppurativa," "hidradenitis suppurativa treatment," "hidradenitis suppurativa surgery," and "acne inversa." They identified the initial 100 websites from Google, Yahoo, and Bing. Websites were evaluated based on the modified Ensuring Quality Information for Patients instrument.
    RESULTS: Of the 300 websites, 95 (31.7%) were included after applying the exclusion criteria: duplicate entries, websites not pertinent to the subject matter, websites inaccessible due to location restrictions or requiring user accounts for access, websites in languages other than English, and websites originating from scientific publications directed at a scientific audience rather than the general population. Ensuring Quality Information for Patients scores ranged from 5 to 30 out of a possible 36, with a median of 17.
    CONCLUSION: This analysis unveils a diverse array of websites that could confound patients navigating toward high-caliber resources. These barriers may hinder the access to top-tier online patient information and magnify disparities in referral rates, patient engagement, treatment satisfaction, and quality of life.
    DOI:  https://doi.org/10.1097/DSS.0000000000004323
  15. J Neurooncol. 2024 Jul 11.
      PURPOSE: Our study aims to discover the leading topics within glioblastoma (GB) research and to examine whether these topics show "hot" or "cold" trends. Additionally, we aim to showcase the potential of natural language processing (NLP) in facilitating research syntheses, offering an efficient strategy to dissect the landscape of academic literature in the realm of GB research.
    METHODS: The Scopus database was queried using "glioblastoma" as the search term in the "TITLE" and "KEY" fields. BERTopic, an NLP-based topic modeling (TM) method, was used for probabilistic TM. We specified a minimum topic size of 300 documents and a 5% probability cutoff for outlier detection. We labeled topics based on keywords and representative documents and visualized them with word clouds. Linear regression models were used to identify "hot" and "cold" topic trends per decade.
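    The described pipeline maps closely onto the bertopic Python package. A minimal sketch (the exact configuration is an assumption inferred from the abstract, and load_scopus_abstracts is a hypothetical loader for the Scopus export):

      from bertopic import BERTopic

      abstracts = load_scopus_abstracts()  # hypothetical: the 43,329 Scopus records

      topic_model = BERTopic(min_topic_size=300, calculate_probabilities=True)
      topics, probs = topic_model.fit_transform(abstracts)

      # 5% probability cutoff: records whose best topic probability falls below
      # 0.05 are treated as outliers, per the abstract's description.
      outliers = [i for i, p in enumerate(probs) if p.max() < 0.05]

      print(topic_model.get_topic_info())  # top keywords per topic, for labeling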
    RESULTS: Our TM analysis categorized 43,329 articles into 15 distinct topics. The most common topics were Genomics, Survival, Drug Delivery, and Imaging, while the least common topics were Surgical Resection, MGMT Methylation, and Exosomes. The hottest topics over the 2020s were Viruses and Oncolytic Therapy, Anticancer Compounds, and Exosomes, while the cold topics were Surgical Resection, Angiogenesis, and Tumor Metabolism.
    CONCLUSION: Our NLP methodology provided an extensive analysis of GB literature, revealing valuable insights about historical and contemporary patterns difficult to discern with traditional techniques. The outcomes offer guidance for research directions, policy, and identifying emerging trends. Our approach could be applied across research disciplines to summarize and examine scholarly literature, guiding future exploration.
    Keywords:  Glioblastoma; Hot topic; Natural language processing; Research trends; Topic modeling
    DOI:  https://doi.org/10.1007/s11060-024-04762-8
  16. Musculoskeletal Care. 2024 Sep;22(3): e1916
      OBJECTIVE: The Internet has transformed how patients access health information. We examined Google search engine data to understand which aspects of health are most often searched for in combination with inflammatory arthritis (IA).
    METHODS: Using Google Trends data (2011-2022), we determined the relative popularity of searches for 'patient symptoms' (pain, fatigue, stiffness, mood, work) and 'treat-to-target' (disease-modifying drugs, steroids, swelling, inflammation) health domains made with rheumatoid arthritis (RA), psoriatic arthritis (PsA), and axial spondyloarthritis (AxSpA) in the UK/USA. Google Trends normalises searches by popularity over time and region, generating relative search volumes on a 0-100 scale (RSV; 100 represents the time point with the most searches). Up to five search term combinations can be compared.
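    The paper does not state its retrieval tooling, but the same comparisons can be pulled programmatically with the pytrends Python package, which wraps the Google Trends interface and shares its five-term limit per comparison. An illustrative sketch (the term list is an assumption, not the study's search set):

      from pytrends.request import TrendReq

      pytrends = TrendReq(hl="en-GB")
      pytrends.build_payload(
          kw_list=["rheumatoid arthritis pain", "rheumatoid arthritis fatigue",
                   "rheumatoid arthritis stiffness", "rheumatoid arthritis mood",
                   "rheumatoid arthritis methotrexate"],
          timeframe="2011-01-01 2022-12-31",
          geo="GB",
      )
      rsv = pytrends.interest_over_time()  # 0-100 RSV per term over time
      print(rsv.mean(numeric_only=True))   # mean RSV per term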
    RESULTS: In all IA forms, pain was the most popular patient symptom domain. UK/USA searches for pain gave mean RSVs of 58/79, 34/51, and 39/63 with RA, PsA, and AxSpA, respectively; mean UK/USA RSVs for the other patient symptom domains ranged from 2-7/2-8. Methotrexate was the most popular treat-to-target search term with RA/PsA in the UK (mean 28/21) and USA (mean 63/33). For AxSpA, inflammation was most popular (mean UK/USA 9/34). Searches for pain were substantially more popular than searches for methotrexate in RA and PsA, and than searches for inflammation in AxSpA. Searches increased over time.
    CONCLUSIONS: Pain is the most popular search term used with IA in Google searches in the UK/USA, supporting surveys/qualitative studies highlighting the importance of improving pain to patients with IA. Routine pain assessments should be embedded within treat-to-target strategies to ensure patient perspectives are considered.
    Keywords:  Internet; axial spondyloarthritis; pain; psoriatic arthritis; rheumatoid arthritis
    DOI:  https://doi.org/10.1002/msc.1916
  17. J Med Internet Res. 2024 Jul 11. 26 e57842
      BACKGROUND: During the COVID-19 pandemic, much misinformation and disinformation emerged and spread rapidly via the internet, posing a severe public health challenge. While the need for eHealth literacy (eHL) has been emphasized, few studies have compared the difficulties involved in seeking and using COVID-19 information between adult internet users with low or high eHL.
    OBJECTIVE: This study examines the association between eHL and web-based health information-seeking behaviors among adult Japanese internet users. Moreover, it qualitatively sheds light on the difficulties encountered in seeking and using this information and examines their relationship with eHL.
    METHODS: This cross-sectional internet-based survey (October 2021) collected data from 6000 adult internet users who were equally divided into sample groups by gender, age, and income. We used the Japanese version of the eHL Scale (eHEALS), as well as a Digital Health Literacy Instrument (DHLI) adapted to the COVID-19 pandemic, which we translated into Japanese, to assess eHL. Web-based health information-seeking behaviors were assessed using a 10-item list of web sources and by evaluating 10 topics participants searched for regarding COVID-19. Sociodemographic and other factors (eg, health-related behavior) were selected as covariates. Furthermore, we qualitatively explored the difficulties encountered in seeking and using information: the descriptive responses regarding difficulties in seeking and using COVID-19 information were analyzed using an inductive qualitative content analysis approach.
    RESULTS: Participants with high eHEALS and DHLI scores on information searching, adding self-generated information, evaluating reliability, determining relevance, and operational skills were more likely to use all web sources of information about COVID-19 than those with low scores. However, there were negative associations between navigation skills and privacy protection scores and the use of several information sources, such as YouTube (Google LLC), to search for COVID-19 information. While half of the participants reported no difficulty seeking and using COVID-19 information, participants who reported any difficulties, including information discernment, incomprehensible information, information overload, and disinformation, had lower DHLI scores. Participants expressed significant concerns regarding "information quality and credibility," "abundance and shortage of relevant information," "public trust and skepticism," and "credibility of COVID-19-related information." Additionally, they disclosed more specific concerns, including "privacy and security concerns," "information retrieval challenges," "anxieties and panic," and "movement restriction."
    CONCLUSIONS: Although Japanese internet users with higher eHEALS and total DHLI scores were more actively using various web sources for COVID-19 information, those with high navigation skills and privacy protection used web-based information about COVID-19 cautiously compared with those with lower proficiency. The study also highlighted an increased need for information discernment when using social networking sites in the "Health 2.0" era. The identified categories and themes from the qualitative content analysis, such as "information quality and credibility," suggest a framework for addressing the myriad challenges anticipated in future infodemics.
    Keywords:  Asia; Asian; COVID-19; DHLI; Japan; Japanese; SARS-COV-2; adult population; cross sectional; digital health literacy; eHEALS; eHealth; eHealth literacy; health communication; health literacy; infectious; information behavior; information seeking; internet; mixed methods study; public health; questionnaire; questionnaires; respiratory; survey; surveys; web-based information
    DOI:  https://doi.org/10.2196/57842