Implement Sci. 2026 Apr 14.
Guillaume Fontaine, Susan Michie, Rinad S Beidas, Elvin Geng, Christine Fahim, Byron J Powell, Vivian Welch, James Thomas, Jeffery Chan, Samira Abbasgholizadeh-Rahimi, France Légaré, Janna Hastings, Sylvie D Lambert, Justin Presseau, Sharon E Straus, Ruopeng An, Ashrita Saran, Natalie Taylor.
BACKGROUND: Artificial intelligence (AI), including machine learning, natural language processing, and large language models, may support implementation practice and research in tasks such as evidence synthesis, determinant assessment, strategy selection, monitoring, adaptation, and theory development. However, these uses of AI do not form a single, uniform category: they span a continuum from practice-facing applications that support local implementation work to research- and methods-facing applications that support evidence generation and synthesis. Guidance on how to classify, evaluate, and report these uses of AI remains limited. The AI Methods for Implementation Science (AIM-IS) program aims to develop, validate, and maintain a suite of products to guide the responsible use of AI across implementation practice, implementation research, and bridging use cases.
METHODS: AIM-IS is a multi-phase, multi-method methodological development program. The unit of analysis is the AI-for-implementation use case: a specific AI capability supporting a defined implementation practice or research task within a workflow, decision point, and governance context. Phase 1 is a living scoping review mapping published AI use cases in implementation science, including how they are evaluated and what risks they raise. Phase 2 is a qualitative interview study with implementation researchers, practitioners, AI experts, community members, and data infrastructure and governance experts to refine use cases and identify feasibility constraints, outcome priorities, and reporting needs. Phase 3 will integrate findings from Phases 1 and 2 to develop the draft AIM-IS products, including a framework, a taxonomy of use cases, guardrails for responsible use, a practical guide, outcome domains, and reporting items. Phase 4 will use an eDelphi process and consensus meeting to refine and finalize these products. Phase 5 will conduct usability testing to improve clarity and ease of use, resulting in the finalized AIM-IS products. AIM-IS is informed by implementation science, sociotechnical systems, equity, and responsible AI frameworks, and includes a living-update approach to support ongoing refinement.
DISCUSSION: The AIM-IS program will deliver a suite of products, including a framework, a toolkit, and a reporting standard, to support the specification, governance, evaluation, and reporting of AI in implementation science. Together, these products aim to strengthen transparency, comparability, accountability, and attention to equity in how AI is used by implementation practitioners and researchers over time.
REGISTRATION: Open Science Framework, March 15, 2026: https://doi.org/10.17605/OSF.IO/BX35K.
Keywords: Artificial intelligence; Generative AI; Implementation practice; Implementation research; Large language models; Machine learning; Methodology; Reporting guideline