J Med Internet Res. 2026 Jan 21.
BACKGROUND: Artificial intelligence (AI) tools are widely and freely available for clinical use. Understanding hospitalists' real-world adoption patterns in the absence of organizational endorsement is essential for healthcare institutions to develop governance frameworks and optimize AI integration.
OBJECTIVE: The objective of this study was to investigate hospitalists' use of AI, examining which AI platforms are used, how frequently, and in which clinical contexts. We hypothesized that AI use is more common among younger, less experienced hospitalists, albeit at an overall low frequency.
METHODS: An anonymous online survey was distributed via email to all 70 hospitalists (physicians, nurse practitioners, and physician assistants) providing direct patient care at a large urban academic tertiary care hospital. Data were collected on demographics, the AI platform used (if any), the purpose(s) of AI use, and the frequency of use. The CHERRIES checklist was followed in creating, testing, administering, and reporting the results of the survey. The chi-square test was used where possible; when expected cell counts were low, the Fisher exact test was used instead. The Friedman test and pairwise Wilcoxon signed-rank tests were used to analyze differences in the frequency of AI use across tasks. Likert-scale responses to frequency questions (Never, Rarely, Sometimes, Often, Always) were converted to ordinal values (1-5, respectively) to facilitate analysis.
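For illustration only (the abstract does not include analysis code), the Friedman omnibus test and a pairwise Wilcoxon signed-rank follow-up on ordinal-coded Likert responses could be run as sketched below; the variable names and data values are hypothetical, not study data.

```python
# Minimal sketch of the described analysis; all data values are hypothetical.
from scipy.stats import friedmanchisquare, wilcoxon

# Likert responses (Never..Always) coded 1-5, one list per clinical task,
# paired by respondent (same respondent order in every list).
misc_questions = [4, 3, 5, 2, 4, 3]
differential   = [3, 3, 4, 2, 3, 3]
management     = [3, 2, 4, 2, 3, 2]
confirm_dx     = [2, 2, 3, 1, 2, 2]
patient_educ   = [1, 2, 2, 1, 2, 1]

# Omnibus test for differences in use frequency across the five paired tasks
stat, p = friedmanchisquare(misc_questions, differential, management,
                            confirm_dx, patient_educ)
print(f"Friedman chi-square = {stat:.3f}, P = {p:.4f}")

# Example pairwise follow-up (multiplicity correction would be needed in practice)
w_stat, w_p = wilcoxon(misc_questions, confirm_dx)
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, P = {w_p:.4f}")
```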
RESULTS: Of 70 providers, 54 (77.1%) responded to the survey. Contrary to our hypothesis, no significant differences in AI use were observed across shift type, years of practice, time allocated to hospitalist duties, sex, age, or provider designation. Overall, 36 of 54 respondents (66.7%, 95% CI 53.4%-77.8%) reported using AI in clinical practice. OpenEvidence was the most used platform (28/54, 51.9%), far exceeding general-purpose tools such as ChatGPT (4/54, 7.4%), suggesting a preference for medical-specific platforms. Among nonusers, the primary concerns were AI accuracy and a preference for established resources. The most common applications were answering miscellaneous clinical questions (32/36, 88.9%), generating differential diagnoses (31/36, 86.1%), and determining management options (31/36, 86.1%); use for generating patient education materials was much lower (16/36, 44.4%). The frequency of AI use differed significantly across these clinical scenarios (Friedman chi-square statistic 37.596, df 4, P<.001). Pairwise Wilcoxon signed-rank tests revealed significant differences between use for answering miscellaneous questions and use for confirming a suspected diagnosis (P=.003) and for generating patient education materials (P=.004). Most respondents reported using AI in fewer than 25% of clinical encounters across all use cases.
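The reported 95% CI for the 36/54 adoption proportion (53.4%-77.8%) is numerically consistent with a Wilson score interval; the abstract does not state which interval method was used, so the following is a sketch under that assumption.

```python
# Sketch only: Wilson score 95% CI for 36 of 54 respondents.
# The abstract does not specify the interval method; this assumption
# happens to reproduce the reported bounds.
from math import sqrt

count, n, z = 36, 54, 1.96
p_hat = count / n
denom = 1 + z**2 / n
center = (p_hat + z**2 / (2 * n)) / denom
half_width = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
print(f"{100*(center - half_width):.1f}% - {100*(center + half_width):.1f}%")
# -> 53.4% - 77.8%
```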
CONCLUSIONS: Two-thirds of hospitalists adopted AI organically in the absence of institutional oversight. AI is used predominantly as a supplementary decision support tool, with a preference for a medical-specific platform. Healthcare institutions must develop governance frameworks, validation protocols, and educational initiatives to ensure safe and effective AI deployment in clinical practice.