bims-arines Biomed News
on AI in evidence synthesis
Issue of 2025–02–16
three papers selected by
Farhad Shokraneh



  1. Front Big Data. 2024;7:1505284
      The rise of Large Language Models (LLMs), such as LLaMA and ChatGPT, has opened new opportunities for enhancing recommender systems through improved explainability. This paper provides a systematic literature review focused on leveraging LLMs to generate explanations for recommendations, a critical aspect of fostering transparency and user trust. We conducted a comprehensive search within the ACM Guide to Computing Literature, covering publications from the launch of ChatGPT (November 2022) to the present (November 2024). Our search yielded 232 articles, but after applying inclusion criteria, only six were identified as directly addressing the use of LLMs in explaining recommendations. This scarcity highlights that, despite the rise of LLMs, their application in explainable recommender systems is still at an early stage. We analyze these selected studies to understand current methodologies, identify challenges, and suggest directions for future research. Our findings underscore the potential of LLMs to improve the explanations produced by recommender systems and encourage the development of more transparent and user-centric recommendation explanation solutions.
    Keywords:  LLMs; explainable AI; explainable recommendation; explanations; large language models; recommender systems
    DOI:  https://doi.org/10.3389/fdata.2024.1505284
  2. JACC Adv. 2025 Feb 08. pii: S2772-963X(25)00010-9. [Epub ahead of print] 4(3): 101593
      To explore threats and opportunities and to chart a path for safely navigating the rapid changes that generative artificial intelligence (AI) will bring to clinical research, the Duke Clinical Research Institute convened a multidisciplinary think tank in January 2024. Leading experts from academia, industry, nonprofits, and government agencies highlighted the potential opportunities of generative AI in automation of documentation, strengthening of participant and community engagement, and improvement of trial accuracy and efficiency. Challenges include technical hurdles, ethical dilemmas, and regulatory uncertainties. Success is expected to require establishing rigorous data management and security protocols, fostering integrity and trust among stakeholders, and sharing information about the safety and effectiveness of AI applications. Meeting insights point towards a future where, through collaboration and transparency, generative AI will help to shorten the translational pipeline and increase the inclusivity and equitability of clinical research.
    Keywords:  artificial intelligence; clinical research; generative AI; participant engagement; research ethics
    DOI:  https://doi.org/10.1016/j.jacadv.2025.101593
  3. JAMIA Open. 2025 Feb;8(1): ooaf003
       Objective: To enhance the accuracy of information retrieval from pharmacovigilance (PV) databases by employing Large Language Models (LLMs) to convert natural language queries (NLQs) into Structured Query Language (SQL) queries, leveraging a business context document.
    Materials and Methods: We utilized OpenAI's GPT-4 model within a retrieval-augmented generation (RAG) framework, enriched with a business context document, to transform NLQs into executable SQL queries. Each NLQ was presented to the LLM randomly and independently to prevent memorization. The study was conducted in 3 phases of varying query complexity, assessing the LLM's performance both with and without the business context document.
    Results: Our approach significantly improved NLQ-to-SQL accuracy, increasing from 8.3% with the database schema alone to 78.3% with the business context document. This enhancement was consistent across low, medium, and high complexity queries, indicating the critical role of contextual knowledge in query generation.
    Discussion: The integration of a business context document markedly improved the LLM's ability to generate accurate SQL queries (ie, both executable and returning semantically appropriate results). Performance reached a maximum of 85% when high-complexity queries were excluded, suggesting promise for routine deployment.
    Conclusion: This study presents a novel approach to employing LLMs for safety data retrieval and analysis, demonstrating significant advancements in query generation accuracy. The methodology offers a framework applicable to various data-intensive domains, enhancing the accessibility of information retrieval for non-technical users.
    Keywords:  drug safety; information retrieval; large language models (LLMs); natural language processing (NLP); pharmacovigilance
    DOI:  https://doi.org/10.1093/jamiaopen/ooaf003
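
    Editor's sketch: the short Python example below illustrates the general NLQ-to-SQL pattern described in item 3, assuming the standard OpenAI chat-completions client. The schema excerpt, business-context notes, prompt wording, and table names are hypothetical placeholders for illustration only, not the authors' actual pipeline or database.

# Illustrative sketch of retrieval-augmented NLQ-to-SQL generation.
# All schema and business-context content here is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical excerpt of a pharmacovigilance schema (placeholder, not the study's database).
SCHEMA = """
TABLE cases (case_id INT, received_date DATE, seriousness VARCHAR);
TABLE case_drugs (case_id INT, drug_name VARCHAR, role VARCHAR);
TABLE case_events (case_id INT, event_term VARCHAR);
"""

# Hypothetical business-context notes of the kind the study found decisive:
# definitions and coding conventions that the schema alone does not convey.
BUSINESS_CONTEXT = """
- 'Serious' cases are rows in cases where seriousness = 'S'.
- The suspect drug for a case is the case_drugs row with role = 'SUSPECT'.
- Adverse events are stored as MedDRA preferred terms in case_events.event_term.
"""

def nlq_to_sql(question: str) -> str:
    """Translate a natural-language query into a single SQL statement,
    grounding the model in both the schema and the business-context document."""
    prompt = (
        "You translate questions about a pharmacovigilance database into SQL.\n"
        f"Database schema:\n{SCHEMA}\n"
        f"Business context:\n{BUSINESS_CONTEXT}\n"
        "Return only one executable SQL query, with no explanation.\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # the study used GPT-4; any capable chat model fits this sketch
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output makes the generated SQL easier to review
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(nlq_to_sql("How many serious cases with suspect drug 'DrugX' were received in 2024?"))

    In this pattern, the business context document supplies the definitions and coding conventions that the schema alone cannot express, which is the mechanism the study credits for the improvement from 8.3% to 78.3% accuracy.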