bims-aukdir Biomed News
on Automated knowledge discovery in diabetes research
Issue of 2026-03-29
24 papers selected by
Mott Given



  1. Sci Rep. 2026 Mar 23;16(1):9825. [Epub ahead of print]
      Diabetic Retinopathy (DR) remains a leading cause of vision loss among diabetic patients, underscoring the importance of early detection through reliable retinal imaging analysis. Retinal fundus images are inherently physics-driven, capturing the interactions of light with retinal tissue, including absorption, reflection, and scattering phenomena, which define the intensity and structural patterns critical for diagnosis. However, existing machine learning and optimization approaches for DR screening face challenges in handling the high-dimensional, heterogeneous, and complex physical characteristics of these images. Conventional methods often suffer from suboptimal feature selection, limited generalization, and reduced classification accuracy due to their inability to adaptively exploit image-specific patterns. To address these challenges, this study introduces a Dynamic Grasshopper Optimization Algorithm (DGOA) for feature selection, leveraging its dynamic adaptation capabilities to explore and exploit the physically meaningful feature space effectively. By incorporating adaptive parameter control, DGOA mitigates premature convergence and ensures the selection of the most discriminative features, enhancing model robustness. To further improve classification reliability, an ensemble learning classifier is integrated, combining multiple base models to leverage complementary strengths, reduce overfitting, and maximize predictive performance. The proposed physics-aware AI framework was validated on the EyePACS Retinal Fundus Images dataset, a large and diverse collection of high-resolution images reflecting variations in illumination, contrast, and tissue properties. Comparative experiments with EfficientNetV2S, MGA-CSG, and BWO-DL highlight the advantages of our approach in balancing computational efficiency, generalization, and physically informed feature extraction. 
The DGOA-Ensemble model achieved an accuracy of 94.6%, F1-score of 0.94, and AUC-ROC of 0.96, demonstrating its effectiveness as a robust, interpretable, and generalizable framework that bridges the gap between physics-based retinal imaging and AI-driven automated DR detection.
    Keywords:  Diabetic Retinopathy; Dynamic Adaptation; Ensemble Learning; Feature Selection; Optimization
    DOI:  https://doi.org/10.1038/s41598-026-41998-y
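The paper's DGOA is not publicly specified in detail here, but its core idea, a wrapper feature search whose exploration parameter decays adaptively over iterations to avoid premature convergence, can be illustrated with a deliberately simplified toy sketch. Everything below (the fitness function, the decay schedule, the `informative` set) is a hypothetical stand-in, not the authors' algorithm:

```python
import random

def fitness(mask, informative):
    """Toy fitness: reward overlap with the truly informative features,
    penalise subset size (mimicking accuracy-vs-compactness trade-offs)."""
    hits = sum(1 for i in informative if mask[i])
    return hits - 0.1 * sum(mask)

def adaptive_search(n_features, informative, iters=200, seed=0):
    """Hill-climbing over binary feature masks with a decaying flip rate,
    loosely analogous to the shrinking comfort-zone coefficient in
    grasshopper-optimization variants (illustrative only)."""
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_features)]
    best_fit = fitness(best, informative)
    for t in range(iters):
        # Adaptive control parameter: high exploration early, fine-tuning late.
        rate = max(0.02, 0.5 * (1 - t / iters))
        cand = [b if rng.random() > rate else not b for b in best]
        f = fitness(cand, informative)
        if f > best_fit:
            best, best_fit = cand, f
    return best, best_fit

mask, score = adaptive_search(20, informative={1, 5, 9})
```

A real swarm-based selector would evaluate a population of masks against a classifier's cross-validated accuracy rather than this synthetic fitness; the decay schedule is the part that corresponds to DGOA's "adaptive parameter control".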
  2. Photodiagnosis Photodyn Ther. 2026 Mar 25. pii: S1572-1000(26)00122-5. [Epub ahead of print] 105455
       OBJECTIVE: To develop and evaluate a transformer-enabled multi-task framework for automated diabetic retinopathy (DR) analysis, including lesion-level segmentation and detection, and to compare end-to-end vision transformers with radiomics-based classification for DR severity grading across multi-center datasets.
    MATERIALS AND METHODS: A total of 987 fundus images from two clinical centers were used for lesion segmentation and detection, and 6,852 images were used for four-class DR severity classification, with rigorous inclusion criteria and expert-verified annotations. Preprocessing included CLAHE normalization, artifact filtering, and standardized retinal masking. Four segmentation models (UNet++, CE-Net, Swin-UNet, SegFormer) and four detection models (RetinaNet, YOLOv11, DETR, Deformable DETR) were trained under harmonized settings. Classification was performed using two strategies: (1) an end-to-end Vision Transformer (ViT), and (2) a radiomics-based pipeline incorporating 971 IBSI-compliant radiomic features, ComBat harmonization, and three feature-selection methods (SGR, TES, mRMR) paired with six classifiers (CatBoost, LightGBM, TabPFN, SVM, RF, LR). All models underwent internal cross-validation and external multi-center testing.
    RESULTS: SegFormer achieved the highest segmentation performance, with Dice scores of 0.871-0.963 across lesions and strong external generalization. Deformable DETR achieved the best detection performance, reaching external mAP values up to 0.895. For severity classification, the radiomics-based TES + TabPFN pipeline achieved the best results, reaching an external accuracy of 0.883 and an AUC of 0.947, outperforming the ViT classifier (accuracy 0.838, AUC 0.902). Radiomics models demonstrated superior robustness under domain shift and reduced sensitivity to training-set size compared with end-to-end transformers.
    CONCLUSIONS: Transformer-based lesion analysis combined with radiomics-driven classification provides a robust, generalizable, and clinically meaningful solution for automated DR screening and severity assessment.
    Keywords:  Deep learning; Diabetic retinopathy; Fundus imaging; Multi-lesion segmentation; Severity classification; Transformer models
    DOI:  https://doi.org/10.1016/j.pdpdt.2026.105455
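The Dice scores (0.871-0.963) used to rank the segmentation models above are the standard overlap metric 2|A∩B| / (|A| + |B|) on binary masks. A minimal reference implementation, independent of this paper's code:

```python
def dice(pred, truth):
    """Dice similarity coefficient for binary masks (flattened lists of 0/1):
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2 * inter / size if size else 1.0  # two empty masks agree perfectly

# Example: 3 of 4 predicted-positive pixels overlap the 4 true-positive pixels.
pred  = [1, 1, 1, 1, 0, 0]
truth = [1, 1, 1, 0, 1, 0]
print(dice(pred, truth))  # 2*3 / (4 + 4) = 0.75
```

Unlike plain pixel accuracy, Dice ignores the (usually dominant) true-negative background, which is why it is the conventional choice for small lesions.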
  3. Healthcare (Basel). 2026 Mar 22;14(6):808. [Epub ahead of print]
      Background: Diabetes mellitus is a global health challenge, especially among homeless people. Early prediction of diabetes can reduce treatment costs and improve interventions. This study aimed to identify predictors of diabetes among homeless adults by utilizing artificial intelligence and providing recommendations for diabetes prevention. Methods: A case-control study of 150 homeless adults in Giza, Egypt (99 diabetes cases and 51 controls), analyzed 43 variables collected through interviews and physiological measures, with missing data imputed. Feature selection using recursive feature elimination and univariate and correlation analyses reduced the predictors to 13 variables. The class imbalance was addressed using synthetic minority over-sampling on the training set. Six models and a stacking ensemble with XGBoost as a meta-learner were evaluated using 5-fold cross-validation and performance metrics, including the accuracy, precision, recall, F1-score, and AUC-ROC. Results: The key predictors included BMI, systolic blood pressure, triceps skinfold thickness, waist circumference, lifestyle factors, comorbidities, diastolic blood pressure, age, medication adherence, educational level, marital status, duration of residence, and diabetes knowledge. Individual classifiers achieved moderate performance (accuracy: 56.7-70.0%, F1-score: 0.686-0.781). The stacking ensemble substantially outperformed individual models, achieving 95.45% accuracy, 100% precision, 93.75% recall, a 0.968 F1-score, and a 0.979 AUC-ROC on the test set. Conclusions: Machine learning models can reliably predict diabetes. The proposed hybrid stacking model outperformed conventional classifiers in prediction performance, highlighting the benefits of ensemble learning and sophisticated resampling strategies in dealing with imbalanced medical data. 
It is recommended that healthcare institutions integrate AI-powered diagnostic assistance technology into clinical processes to aid in the early detection and treatment of diabetes.
    Keywords:  artificial intelligence; diabetes prediction; healthcare recommendations; homeless population; machine learning
    DOI:  https://doi.org/10.3390/healthcare14060808
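The "synthetic minority over-sampling" step in this pipeline (SMOTE) creates new minority-class points by interpolating between a minority sample and one of its k nearest minority neighbours. A stripped-down, stdlib-only sketch of that core idea, not the imblearn implementation the authors likely used:

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Generate synthetic minority samples by interpolating each seed point
    toward one of its k nearest minority neighbours (SMOTE's core idea).
    `minority` is a list of equal-length numeric tuples."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours of a within the minority class (excluding a itself)
        neigh = sorted((p for p in minority if p is not a),
                       key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))[:k]
        b = rng.choice(neigh)
        lam = rng.random()  # random point on the segment between a and b
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.2), (1.1, 2.1)]
new_pts = smote_like(minority, n_new=4)
```

Crucially, as the abstract notes, oversampling must be applied to the training folds only; resampling before the train/test split leaks synthetic copies of test-adjacent points into training.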
  4. Medicina (Kaunas). 2026 Mar 09;62(3):502. [Epub ahead of print]
      Background and Objectives: Diabetes mellitus represents one of the most prevalent chronic metabolic disorders worldwide, necessitating precise insulin dose management to prevent both acute and long-term complications. The optimization of insulin dosing remains a significant clinical challenge, as inappropriate dosing can lead to hypoglycemia or hyperglycemia, each carrying substantial morbidity risks. Machine learning approaches have emerged as promising tools for developing clinical decision support systems; however, their practical implementation requires both high predictive accuracy and model interpretability. This study aimed to develop and evaluate an explainable machine learning framework for predicting insulin dose adjustments in diabetic patients. We sought to compare multiple ensemble learning approaches and identify the optimal model configuration that balances predictive performance with clinical interpretability through comprehensive SHAP and LIME analyses. Materials and Methods: A comprehensive dataset comprising 10,000 patient records with 12 clinical and demographic features was utilized. We implemented and compared nine machine learning models, including gradient boosting variants (XGBoost, LightGBM, CatBoost, GradientBoosting), AdaBoost, and four ensemble strategies (Voting, Stacking, Blending, and Meta-Learning). Model interpretability was achieved through SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) analyses. Performance was evaluated using accuracy, weighted F1-score, area under the receiver operating characteristic curve (AUC-ROC), precision-recall AUC (PR-AUC), sensitivity, specificity, and cross-entropy loss. Results: The Meta-Learning Ensemble achieved superior performance across all evaluation metrics, attaining an accuracy of 81.35%, weighted F1-score of 0.8121, macro-averaged AUC-ROC of 0.9637, and PR-AUC of 0.9317. 
The model demonstrated exceptional sensitivity (86.61%) and specificity (91.79%), with particularly high performance in detecting dose reduction requirements (100% sensitivity for the 'down' class). SHAP analysis revealed insulin sensitivity, previous medications, sleep hours, weight, and body mass index as the most influential predictors across different insulin adjustment categories. The meta-model feature importance analysis indicated that LightGBM probability estimates contributed most significantly to the ensemble predictions. Conclusions: The proposed explainable Meta-Learning Ensemble framework demonstrates robust predictive capability for insulin dose adjustment recommendations while maintaining clinical interpretability. The integration of SHAP-based explanations facilitates clinician understanding of model predictions, supporting transparent and informed decision-making in diabetes management. This approach represents a significant advancement toward the clinical implementation of artificial intelligence in personalized insulin therapy.
    Keywords:  LIME; SHAP; clinical decision support; diabetes mellitus; ensemble methods; explainable artificial intelligence; gradient boosting; insulin dose prediction; machine learning; meta-learning
    DOI:  https://doi.org/10.3390/medicina62030502
  5. Microvasc Res. 2026 Mar 25. pii: S0026-2862(26)00045-2. [Epub ahead of print] 104945
       BACKGROUND: Diabetes mellitus accelerates vascular degeneration and increases the risk of major macrovascular complications, including Peripheral Arterial Disease (PAD) and aortic pathologies, collectively termed Peripheral Arterial and Aortic Diseases (PAAD). These conditions are strongly associated with adverse cardiovascular outcomes but often remain underdiagnosed in diabetic populations due to asymptomatic progression and limited access to early screening.
    OBJECTIVE: This study aims to develop and validate a non-invasive, artificial intelligence (AI)-based screening framework using retinal fundus imaging for early detection of PAAD by exploiting retinal microvascular features as systemic biomarkers.
    METHODS: A hybrid diagnostic pipeline integrated simulated Optical Coherence Tomography (OCT)-like structural features (retinal thickness, texture entropy, vessel density factor, and layer separation index), handcrafted vascular biomarkers, and an attention-enhanced VGG16 backbone with Convolutional Block Attention Modules (VGG16 + CBAM). Multiple Instance Learning (MIL) improved lesion-level discrimination in weakly labeled datasets. Multi-level feature fusion aggregated spatial, physiological, and morphological descriptors. High-resolution fundus datasets from public and clinical cohorts were used for training and validation. Model interpretability was ensured using SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM).
    RESULTS: The framework achieved an accuracy of 94.6%, sensitivity of 90.5%, specificity of 96.2%, and an AUC-ROC of 0.973 on an independent test set. SHAP identified retinal thickness and texture entropy as dominant predictors, while Grad-CAM highlighted vessel bifurcations and arteriolar narrowing, consistent with PAAD pathophysiology. The average inference time was 150 ms per image on GPU, enabling real-time use.
    CONCLUSION: This interpretable AI-based system demonstrates high diagnostic performance for PAAD detection using retinal imaging. It offers a non-invasive, cost-effective, and scalable alternative to conventional vascular assessments and may support earlier diagnosis and improved prevention of cardiovascular and cerebrovascular complications.
    Keywords:  Arteriolar narrowing; Cardiovascular risk prediction; Convolutional Neural Networks (VGG16 + CBAM); Diabetes mellitus; Explainable AI (XAI); Grad-CAM; MIL; Non-invasive vascular screening; OCT simulation; Peripheral Arterial and Aortic Diseases (PAAD); Retinal entropy; SHAP; Vascular biomarkers; Vessel tortuosity
    DOI:  https://doi.org/10.1016/j.mvr.2026.104945
  6. Diabetes Metab Res Rev. 2026 Mar;42(3):e70161
       AIMS: Diabetic retinopathy (DR) is a leading cause of vision loss in individuals with diabetes, highlighting the need for timely screening. In Taiwan, limited ophthalmologic resources, especially in underserved areas, constrain screening coverage. This study evaluated the cost-effectiveness of artificial intelligence (AI)-assisted DR screening as an alternative strategy for early detection and improved resource allocation.
    MATERIALS AND METHODS: A Markov decision-tree model was constructed from the healthcare payer's perspective, using transition probabilities, costs, and quality-adjusted life years (QALYs) derived from domestic and international data. The model applied a 1-year cycle length, a 10-year time horizon, a 3% annual discount rate, and 10,000 Monte Carlo simulations. Incremental cost-effectiveness ratios (ICERs) were calculated for AI-assisted versus ophthalmologist-based screening, with probabilistic and one-way sensitivity analyses conducted to evaluate robustness. Statistical analyses were conducted using SPSS version 23.0, while cost-effectiveness analyses were performed using TreeAge Pro Healthcare 2021.
    RESULTS: AI-assisted screening incurred higher costs ($10,077.34) than ophthalmologist-based screening ($8282.06) but provided greater health benefits (7.60 vs. 6.34 QALYs). The ICER was $1429.19/QALY, well below the willingness-to-pay threshold ($33,983, 2024 Taiwan per capita gross domestic product), demonstrating high cost-effectiveness.
    CONCLUSIONS: AI-assisted DR screening is a cost-effective approach that may enhance access, especially in regions with limited specialist availability. By enabling earlier detection and reducing reliance on ophthalmologists, AI-based screening has the potential to improve both efficiency and equity in healthcare delivery. These findings support its integration into national screening programs and emphasise the importance of local data in informing policy decisions.
    Keywords:  Markov model; artificial intelligence; cost-effectiveness analysis; diabetic retinopathy
    DOI:  https://doi.org/10.1002/dmrr.70161
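The ICER reported above is plain arithmetic: incremental cost divided by incremental QALYs, compared against a willingness-to-pay threshold. Recomputing from the rounded figures in the abstract gives roughly $1,425/QALY rather than the published $1429.19; the small gap presumably reflects rounding of the cost and QALY values, since the model itself used unrounded Monte Carlo outputs:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Figures as reported in the abstract (rounded)
ratio = icer(10077.34, 8282.06, 7.60, 6.34)
print(round(ratio, 2))  # 1424.83 $/QALY from the rounded inputs
wtp = 33983             # willingness-to-pay threshold (2024 Taiwan GDP per capita)
print(ratio < wtp)      # True: well below threshold, hence cost-effective
```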
  7. Front Med (Lausanne). 2026;13:1742345
       Introduction: The application of artificial intelligence (AI) in the analysis of medical images faces significant challenges, chiefly due to the scarcity of well-labeled datasets that are crucial for training sophisticated diagnostic models. To address this issue, we developed three hybrid models that integrate generative components with classification systems. These models differ in their classification architectures to compare the effectiveness of generative data augmentation across various diagnostic applications. By generating high-quality synthetic images of Diabetic Foot Ulcers (DFUs) using advanced network techniques, we ensure both realistic image quality and robust clinical relevance, while abstracting low-level implementation details to focus on the stability and fidelity of the generative process.
    Methods: In our methodology, we introduce temporal dependency modeling within the latent feature space, despite the non-temporal nature of DFU images. The latent representations are systematically organized into ordered sequences, enabling Long Short-Term Memory (LSTM) layers to identify structured spatial relationships among varying wound regions. This sequential processing captures long-range spatial dependencies, thereby modeling consistencies between distant lesion areas and promoting anatomical coherence-challenges that conventional convolutional operations struggle to address. The three hybrid models incorporated in this study feature distinct generator backbones: 1. Baseline CNN-LSTM Architecture - Focused on efficient spatial modelling. 2. EfficientNetV2M-LSTM Model - Emphasizing high-capacity feature extraction. 3. EfficientNetV2S-LSTM Model - Striking a balance between computational efficiency and synthesis quality. Additionally, we employed WGAN-GP + LSTM in one of our models to enhance stable generative training and spatial consistency. This approach utilizes a critic network instead of a traditional discriminator, assessing the discrepancies between real and synthetic datasets to promote stable image generation and mitigate mode collapse. The generative models were trained on a carefully curated dataset comprising 5,894 clinically annotated DFU images from Lancashire Teaching Hospital, representing a variety of ulcer types and severities. Annotations were conducted by three seasoned healthcare professionals specializing in diabetic foot care.
    Results: Our findings demonstrate that the implementation of synthetic images significantly enhances disease classification accuracy and boosts the effectiveness of automated diagnostic systems for DFUs. By maintaining clinically relevant variability in ulcer appearances, the generated images contribute to the development of robust models capable of performing effectively under real-world conditions, which is critical for deployment in screening, triage, and remote wound assessment workflows.
    Discussion: The advancements realized through the integration of generative models in medical image analysis pave the way for real-time clinical applications such as early screening, patient prioritization during triage, and telemedicine assessments of wounds. This is especially crucial for healthcare systems in underserved or remote areas. The ability to leverage synthetic data not only supports improved diagnostic capabilities but also ensures that models remain adaptable to the variability present in clinical scenarios, ultimately enhancing patient care and resource allocation in diabetic foot ulcer management.
    Keywords:  CNN-LSTM; EfficientNetV2M-LSTM; EfficientNetV2S-LSTM; LSTM; WGAN-GP; deep learning; diabetic foot ulcer (DFU)
    DOI:  https://doi.org/10.3389/fmed.2026.1742345
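The WGAN-GP component mentioned in the Methods (a critic network with a gradient penalty, used to stabilise training and mitigate mode collapse) follows, in its standard published form rather than anything specific to this paper's architecture, the critic objective

$$\mathcal{L} \;=\; \mathbb{E}_{\tilde{x}\sim P_g}\big[D(\tilde{x})\big] \;-\; \mathbb{E}_{x\sim P_r}\big[D(x)\big] \;+\; \lambda\,\mathbb{E}_{\hat{x}\sim P_{\hat{x}}}\Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\Big]$$

where $P_r$ and $P_g$ are the real and generated distributions, $\hat{x}$ is sampled uniformly along straight lines between real/generated pairs, and the penalty weight $\lambda$ is conventionally set to 10. The penalty term softly enforces the 1-Lipschitz constraint that the Wasserstein formulation requires, replacing the hard weight clipping of the original WGAN.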
  8. Nutr Hosp. 2026 Mar 11.
     INTRODUCTION: Diabetes mellitus increases the risk of cognitive impairment, but the role of dietary nutrients remains unclear.
    OBJECTIVES: To develop interpretable machine learning (ML) models to identify associations with cognitive impairment in adults aged 50 years and older with diabetes, and to identify key dietary nutrients associated with cognitive outcomes.
    METHODS: Data from the 2011-2014 National Health and Nutrition Examination Survey were analyzed. Cognitive function was assessed using the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) Word Learning Test, the Animal Fluency Test (AFT), and the Digit Symbol Substitution Test (DSST). A total of 46 dietary nutrients and other covariates were included. Feature selection was performed using the Boruta algorithm. Six ML models were trained with ten-fold cross-validation. SHapley Additive Explanations and Local Interpretable Model-Agnostic Explanations were applied for model interpretation.
    RESULTS: XGBoost achieved the highest performance in the CERAD model (AUC = 0.982), whereas Random Forest outperformed other models in the AFT and DSST models (AUC = 0.958 and 0.856, respectively). Caffeine emerged as a key protective factor. Copper, zinc, and moisture intake were also associated with reduced risk of cognitive impairment.
    CONCLUSIONS: Interpretable ML models can effectively predict cognitive impairment in older adults with diabetes. Nutritional profiling may support early screening and targeted intervention strategies based on observed associations.
    DOI:  https://doi.org/10.20960/nh.06195
  9. Front Endocrinol (Lausanne). 2026;17:1790356
       Background: Type 2 diabetes mellitus (T2DM) is a prevalent metabolic disorder, and identifying robust biomarkers is crucial for improving diagnosis and understanding its pathogenesis.
    Methods: We analyzed the gene expression dataset GSE250283 from the GEO database to identify differentially expressed genes (DEGs). Functional enrichment analyses (GO and KEGG) were performed. A comprehensive evaluation of 113 machine learning algorithm combinations was conducted to select an optimal model for hub gene identification and diagnostic prediction. The expression of key genes was validated using independent datasets and quantitative real-time PCR (qRT-PCR). Immune infiltration analysis, gene regulatory network prediction, and drug interaction analysis were also carried out.
    Results: A total of 393 DEGs were identified, primarily enriched in immune-related functions and pathways. The LASSO+GBM hybrid model demonstrated superior relative performance among the tested algorithms and pinpointed six hub genes: LY96, CCR1, BLVRB, TCF3, LILRA2, and NCF1. A logistic regression model based on these genes showed promising predictive accuracy (AUC > 0.75) in both training and testing sets. Validation confirmed that BLVRB and NCF1 were significantly dysregulated. Immune infiltration revealed significant alterations in the immune cell landscape of T2DM patients, with BLVRB and NCF1 showing substantial correlations with various immune cells. Regulatory network analysis suggested hsa-miR-127-5p as a potential upstream regulator of BLVRB, and methylene blue was identified as a potential targeting drug.
    Conclusion: This study identifies novel immune-related candidate genes, particularly BLVRB and NCF1, for T2DM. The constructed diagnostic model shows potential for further development and the findings offer new insights into the immune mechanisms and potential therapeutic avenues for T2DM.
    Keywords:  BLVRB; NCF1; T2DM; diagnostic model; machine learning
    DOI:  https://doi.org/10.3389/fendo.2026.1790356
  10. Diabetes Res Clin Pract. 2026 Mar 22;235:113219. pii: S0168-8227(26)00138-5. [Epub ahead of print]
       OBJECTIVE: To identify independent risk factors for diabetic peripheral neuropathic pain (DPNP), construct a nomogram prediction model, and quantify the contribution of predictive factors using SHapley Additive exPlanations (SHAP) values.
    METHODS: This retrospective study included 500 patients with type 2 diabetes, with DPNP diagnosed via the Michigan Neuropathy Screening Instrument and clinical evaluation. Predictors were selected using univariate analysis and LASSO regression, with independent risk factors identified by multivariate logistic regression. Nonlinear relationships were assessed using restricted cubic spline (RCS). The nomogram was evaluated using receiver operating characteristic (ROC) curves, precision-recall (PR) curves, calibration plots, and decision curve analysis (DCA). SHAP quantified factor importance.
    RESULTS: Seven independent risk factors were identified: age, diabetes duration, BMI, smoking history, fasting blood glucose, hyperlipidemia, and AST, highlighting metabolic parameters, especially AST, as key novel contributors. RCS revealed a nonlinear relationship for diabetes duration. The nomogram exhibited strong discrimination (AUCs: 0.863 training, 0.813 validation), good calibration, and strong clinical utility. SHAP confirmed diabetes duration as the most influential predictor.
    CONCLUSIONS: This nomogram provides an interpretable tool for early DPNP risk prediction. By quantifying individual risk, it enables clinicians to identify high-risk patients and implement personalized preventive strategies, potentially improving outcomes.
    Keywords:  Diabetic peripheral neuropathic pain; Machine learning; Prediction model; Risk factors
    DOI:  https://doi.org/10.1016/j.diabres.2026.113219
  11. J Clin Med. 2026 Mar 23;15(6):2461. [Epub ahead of print]
      Background: Gestational diabetes mellitus (GDM) affects many pregnancies worldwide and is associated with adverse maternal and fetal outcomes. Current screening at 24-28 weeks limits opportunities for early intervention. We evaluated whether machine learning (ML) models using first-trimester clinical and dietary data can predict GDM risk before the standard oral glucose tolerance test. Methods: We analyzed data from 797 pregnant women enrolled in the BORN2020 prospective cohort study (Thessaloniki, Greece). Ten ML algorithms were evaluated across five class-imbalance handling strategies using stratified 5-fold cross-validation, with final evaluation on an independent 20% held-out test set. Features included maternal demographics, obstetric history, lifestyle factors, and 22 dietary micronutrient intakes from the pre-pregnancy period assessed by Food Frequency Questionnaire. Results: The best-performing model (Logistic Regression without resampling) achieved an AUC-ROC of 0.664 (95% CI: 0.542-0.777), with sensitivity of 0.783 and NPV of 0.932 at the pre-specified threshold. The high NPV should be interpreted in the context of the low GDM prevalence (14.7%), as NPV is mathematically dependent on disease prevalence. A reduced nine-feature model using only routine clinical and demographic variables achieved a numerically higher AUC of 0.712 (95% CI: 0.589-0.825), with overlapping confidence intervals, indicating that detailed FFQ-derived micronutrient data did not improve prediction. Maternal age and pre-pregnancy BMI were the strongest individual predictors by SHAP analysis. No model reached the AUC >0.80 threshold for good discrimination. Substantial miscalibration was observed (slope: 0.56; intercept: -1.83), limiting use for absolute risk estimation. 
Conclusions: This exploratory study demonstrates that first-trimester ML models achieve modest discriminative ability for early GDM prediction, with routine clinical variables performing comparably to models incorporating detailed dietary assessment. These findings should be interpreted with caution, as no external validation cohort was available and the low events-per-variable ratio (~3.8) constrains the reliability of individual model estimates. Substantial miscalibration further limits use for absolute risk estimation. Accordingly, these models should be regarded as exploratory risk-ranking tools only and require external validation and recalibration before any clinical implementation.
    Keywords:  GDM; ML; SHAP; class imbalance; explainable AI; gestational diabetes mellitus; gradient boosting methods; machine learning; maternal nutrition; micronutrients; prediction
    DOI:  https://doi.org/10.3390/jcm15062461
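The abstract's caveat that NPV is mathematically dependent on prevalence follows directly from Bayes' rule. The sketch below uses the reported sensitivity (0.783) and prevalence (14.7%); specificity is not reported in the abstract, so the 0.50 here is purely an assumed placeholder to show the effect, not a study value:

```python
def npv(sens, spec, prev):
    """Negative predictive value from sensitivity, specificity and prevalence:
    NPV = spec*(1-prev) / (spec*(1-prev) + (1-sens)*prev)  (Bayes' rule)."""
    tn = spec * (1 - prev)      # true-negative mass
    fn = (1 - sens) * prev      # false-negative mass
    return tn / (tn + fn)

# At 14.7% prevalence, even an assumed, modest specificity of 0.50 yields
# an NPV near the reported 0.932 -- the low prevalence does much of the work.
print(round(npv(0.783, 0.50, 0.147), 3))  # 0.93
print(round(npv(0.783, 0.50, 0.40), 3))   # 0.776: NPV falls as prevalence rises
```

This is why a high NPV in a low-prevalence cohort should not be read as evidence of strong discrimination on its own.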
  12. Sci Prog. 2026 Jan-Mar;109(1):368504261436075
      Objective: In older adults with Type 2 diabetes mellitus (T2DM), the risk of delirium is significantly increased, driven by neuropathological alterations stemming from chronic insulin resistance. We utilized artificial intelligence and geriatric electronic health records to create an interpretable online machine-learning algorithm for predicting delirium risk. This tool facilitates prompt identification of high-risk elderly T2DM patients, enabling optimized interventions and improved clinical outcomes.
    Methods: This retrospective cohort study identified older adults with T2DM using International Classification of Diseases (ICD) codes, with delirium defined by the Confusion Assessment Method for the intensive care unit (CAM-ICU). We extracted baseline demographics, vital signs, laboratory measurements, comorbidities and clinical severity scores. Candidate predictors for eight machine-learning algorithms were selected using least absolute shrinkage and selection operator regression and the Boruta method. Discrimination was assessed using accuracy, sensitivity, specificity and the F1 score. The final model was interpreted using SHapley Additive exPlanations (SHAP) and deployed as an online risk calculator.
    Results: Integrating dual feature selection methods identified 14 key predictors, and the gradient boosting machine (GBM) model accurately predicted delirium risk in elderly patients with T2DM, demonstrating strong discriminatory performance with robust calibration in both internal and external validation. SHAP analysis highlighted the Glasgow Coma Scale, ICU length of stay and Sequential Organ Failure Assessment score as the predominant contributors to model predictions. The model was successfully deployed as an accessible online tool, and the accompanying web-based calculator enables rapid, personalized risk assessment to support early intervention in ICU settings.
    Conclusions: The GBM model showed strong performance in predicting delirium risk among elderly patients with T2DM, supporting clinically meaningful risk stratification. The accompanying web-based calculator enables rapid, individualized bedside assessment and may facilitate early identification of high-risk patients and timely intervention in ICU settings.
    Keywords:  Type 2 diabetes mellitus; delirium; gradient boosting algorithm; machine learning; online calculator
    DOI:  https://doi.org/10.1177/00368504261436075
  13. Digit Health. 2026 Jan-Dec;12:20552076261435864
      Diabetic optic neuropathy (DON) is an increasingly recognized, distinct neurodegenerative complication of diabetes and a significant independent cause of vision loss. Its diagnosis is challenging due to clinical heterogeneity, the limitations of current diagnostic tools, and a lack of established biomarkers, leading to underdiagnosis and delayed intervention. This review aims to provide a comprehensive overview of DON, focusing on its pathophysiological mechanisms, current diagnostic challenges, and the emerging role of artificial intelligence (AI) as a transformative tool for enabling earlier detection and personalized management. This review provides a narrative synthesis of the literature on DON, covering clinical manifestations and multifactorial pathophysiology involving metabolic, vascular, inflammatory, and neurodegenerative pathways. Building on the foundational success of AI in diabetic retinopathy (DR), the translational application of machine learning and deep learning algorithms is systematically explored, covering key areas such as optic nerve head segmentation, disease classification, differential diagnosis, predictive analytics, and the discovery of novel imaging biomarkers through radiomics. AI demonstrates significant potential in quantifying subtle structural signs of DON and integrating multimodal data to overcome current diagnostic limitations. The transition from AI models in DR to those for DON represents a shift from detecting microvascular lesions to identifying neurodegenerative changes. Future directions hinge on developing explainable AI for clinical trust and leveraging longitudinal data for predictive modeling of disease progression. The integration of sophisticated AI tools into clinical practice is poised to shift the management of DON from reactive intervention to proactive, precision-based care, ultimately improving visual outcomes for the vast global diabetic population.
    Keywords:  Diabetic optic neuropathy; artificial intelligence; diagnosis; machine learning; prognosis; translational medicine
    DOI:  https://doi.org/10.1177/20552076261435864
  14. Exp Eye Res. 2026 Mar 24. pii: S0014-4835(26)00094-1. [Epub ahead of print]267 110938
      In computer-assisted diagnostics, assessing the quality of retinal images, especially for DR, is vital. While current Image Quality Assessment (IQA) methods lean on Transfer Learning (TL), their adaptability to specific IQA demands, especially for DR images, remains questionable due to the challenges of detecting detailed distortions. In this paper, we propose a novel framework termed Saliency-Aware Mutual Learning for Image Quality Assessment (SAM-IQA). This framework intricately learns the relationship between the representation of salient regions and the overall representation of fundus images. Specifically, we introduce a dual-branch network architecture that simultaneously extracts global features from distorted images and local features from their salient regions. This dual extraction promotes the learning of both coarse and fine-grained feature representations. To further enhance feature extraction, we integrate mutual learning techniques within this dual-branch network, facilitating the capture of high-level content representations and low-level fusion quality features. This integration results in a more holistic quality assessment. Our evaluation on the DeepDRiD dataset demonstrates the efficacy of SAM-IQA. The method achieved an AUC of 81.5%, an improvement of 6.6 percentage points over the previous state-of-the-art (SOTA) of 74.9%, outperforming existing IQA methods.
    Keywords:  Artificial intelligence; Computer-assisted diagnostics; Diabetic retinopathy; Machine learning; Saliency-aware
    DOI:  https://doi.org/10.1016/j.exer.2026.110938
  15. Lancet Diabetes Endocrinol. 2026 Mar 24. pii: S2213-8587(26)00010-0. [Epub ahead of print]
      Artificial intelligence (AI) has the potential to improve primary diabetes care in low-income and middle-income countries (LMICs), where the rising burden of disease contrasts sharply with limited health-care resources. Emerging evidence shows the promise of AI for screening, risk prediction, monitoring, and personalised management of diabetes and its complications. However, substantial barriers remain, including infrastructure deficits, data fragmentation, equity and inclusivity challenges, limited prospective validation, and concerns about the acceptability, sustainability, and regulatory oversight of AI. The effective integration of AI into primary diabetes care will depend on coordinated investment in foundational infrastructure that includes large-scale development and rigorous validation of novel AI models for use by primary care physicians and patients across diverse populations. AI initiatives are also needed to support interdisciplinary and international collaborations spanning clinical, technical, and policy domains to ensure successful implementation. By aligning technological innovation with health care needs, AI could evolve from a proof-of-concept tool to a practical enabler of equitable, scalable, and cost-effective diabetes care in LMICs. In this Personal View, we outline the major opportunities and challenges of applying AI to primary diabetes care in LMICs, and propose directions for future development and implementation.
    DOI:  https://doi.org/10.1016/S2213-8587(26)00010-0
  16. Photodiagnosis Photodyn Ther. 2026 Mar 25. pii: S1572-1000(26)00112-2. [Epub ahead of print] 105445
       OBJECTIVE: To develop and validate machine learning (ML) models using optical coherence tomography (OCT)-derived quantitative relative reflectivity (RR) features to predict short-term response to anti-vascular endothelial growth factor (anti-VEGF) therapy in diabetic macular edema (DME), and to identify non-invasive imaging biomarkers for treatment stratification.
    METHODS: This retrospective study included 345 eyes from 345 patients with DME who received three consecutive monthly intravitreal anti-VEGF injections. Based on 3-month anatomical and functional outcomes, eyes were classified as Non-Persistent DME (NPDME, n=184) or Persistent DME (PDME, n=161). A total of 30 baseline features were extracted, comprising clinical data, OCT morphological characteristics, and fundamental reflectivity measurements. From these, we derived additional quantitative RR features via predefined mathematical transformations. After feature engineering and ensemble feature selection, 25 predictors were retained for model development. Six ML models, including logistic regression (LR), random forest (RF), gradient boosting (GB), multilayer perceptron (MLP), a stacking ensemble, and a voting ensemble, were evaluated on an independent test set (n=69) using area under the curve (AUC), sensitivity, and specificity.
    RESULTS: Key RR features, particularly those describing the largest intraretinal cystoid spaces (LICS), showed significant differences between groups (p<0.001). The stacking ensemble model achieved the highest discriminative ability, with an AUC of 0.934 (95% CI: 0.867-0.987), a sensitivity of 81.08%, and a specificity of 90.62%. After threshold optimization, the GB model demonstrated the highest sensitivity (97.30%) with an AUC of 0.931 (95% CI: 0.865-0.984), while the LR model exhibited the most favorable generalization (lowest overfitting risk).
    CONCLUSIONS: Quantitative OCT-derived RR features, especially those reflecting intraretinal cyst characteristics, are strongly associated with short-term anti-VEGF response in DME. ML models incorporating these features may support individualized treatment assessment, with simpler models offering advantages in robustness and interpretability.
    Keywords:  Anti-VEGF therapy; Diabetic macular edema; Machine learning; Intraretinal cystoid spaces; Optical coherence tomography; Relative reflectivity
    DOI:  https://doi.org/10.1016/j.pdpdt.2026.105445
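The test-set metrics used in the study above (AUC, sensitivity, specificity) can all be computed directly from a classifier's predicted probabilities. A minimal, library-free sketch; the `scores`/`labels` names and the 0.5 threshold are illustrative assumptions, not details taken from the paper:

```python
def auc(scores, labels):
    # Mann-Whitney formulation of AUC: the probability that a randomly
    # chosen positive case outranks a randomly chosen negative case
    # (ties between scores count as 0.5).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(scores, labels, threshold=0.5):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    # after thresholding the continuous scores into hard calls.
    tp = sum(y == 1 and s >= threshold for s, y in zip(scores, labels))
    fn = sum(y == 1 and s < threshold for s, y in zip(scores, labels))
    tn = sum(y == 0 and s < threshold for s, y in zip(scores, labels))
    fp = sum(y == 0 and s >= threshold for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

Threshold optimization, as applied to the GB model in the results, amounts to sweeping `threshold` and picking the operating point that maximizes the metric of interest (here, sensitivity).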
  17. Front Digit Health. 2026;8: 1710829
       Introduction: Type 2 Diabetes Mellitus (T2DM) is a rising global health concern, heavily influenced by modifiable lifestyle and psychosocial factors. However, most predictive tools focus on biomedical markers and rely on real-time data from wearables or electronic health records, limiting their scalability in resource-constrained settings. This study presents a novel digital twin (DT) framework that uses retrospective lifestyle, behavioral, and psychosocial data to forecast T2DM onset and simulate the estimated effects of preventive interventions.
    Methods: Data were drawn from 19,774 participants in the UK Biobank cohort, followed for up to 17 years. A penalized Cox proportional hazards model was employed to estimate individual time-to-event risk trajectories based on 90 candidate predictors. Predictors were selected through univariate screening, multicollinearity assessment, and variance filtering, yielding a final model with 14 significant variables. Causal inference techniques, including directed acyclic graphs (DAGs) and counterfactual simulations, were used to explore intervention effects on disease progression.
    Results: The model demonstrated strong predictive performance (C-index = 0.90, SD = 0.004). Psychosocial stressors such as loneliness, insomnia, and poor mental health emerged as strong independent predictors and were associated with estimated increases in absolute T2DM risk of approximately 35 percentage points individually and nearly 78 percentage points when combined, under the modeled assumptions. These effects were partly reinforced through diet, with high intake of processed meat, salt, and sugary cereals acting as risk amplifiers within the modeled causal pathways. Cheese intake was protective overall, but its estimated benefit was attenuated under psychosocial stress, where reduced consumption produced a small, directionally harmful mediation effect. Counterfactual simulations suggested that improvements in psychosocial conditions could reduce estimated T2DM risk by approximately 11.6 percentage points within the modeled cohort, with protective dietary patterns such as cheese consumption re-emerging as psychosocial stress was alleviated. The model also revealed pronounced ethnic disparities, with South Asian, African, and Caribbean participants exhibiting significantly higher estimated risk than White counterparts within this cohort. These findings highlight the potential of integrated, stress-informed prevention strategies that address both psychosocial and dietary pathways.
    Conclusion: This study introduces a transparent, simulation-enabled DT framework for estimating T2DM risk and exploring behavioral intervention scenarios without reliance on real-time data streams. It enables interpretable, personalized prevention planning and supports exploration of scalable deployment in public health, particularly in underserved or low-infrastructure environments. The integration of psychosocial and lifestyle data represents an important step toward more equitable and behaviorally informed digital health solutions.
    Keywords:  Cox regression; artificial intelligence (AI); causal inference; diabetes prediction; digital twin; machine learning; survival analysis; type 2 diabetes mellitus (T2DM)
    DOI:  https://doi.org/10.3389/fdgth.2026.1710829
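The C-index reported for the penalized Cox model above is Harrell's concordance index: among comparable pairs of subjects, the fraction in which the subject who fails earlier was assigned the higher predicted risk. A small sketch for right-censored data (function and variable names are ours; the study's own implementation is not specified):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index. A pair (i, j) is comparable when subject i has
    an observed event (events[i] == 1) strictly before subject j's
    follow-up time; censored subjects contribute only as the later
    member of a pair. Ties in predicted risk count as 0.5."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.90 indicates strong discrimination of time-to-onset.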
  18. Front Immunol. 2026;17: 1781013
       Background: Diabetic kidney disease (DKD) is a leading cause of end-stage renal disease (ESRD), and its early diagnosis remains a major global challenge because conventional biomarkers lack sensitivity. The East Asian population is characterized by distinct genetic, environmental, and lifestyle factors that may influence the development and progression of DKD, highlighting the importance of population-specific research. The primary objective of this study was to apply a multi-omics strategy, including Mendelian randomization (MR) analysis, within an East Asian cohort to investigate potential causal relationships among microbiota, metabolites, and DKD, with the aim of identifying candidate biomarkers relevant to this population. Secondary objectives included the analysis of clinical samples from East Asian participants to characterize microbiota composition, metabolomic profiles, and tongue image features (TIFs), as well as the development of machine learning (ML) models to distinguish patients with type 2 diabetes mellitus (T2DM) from those with DKD.
    Methods: MR analysis was performed to investigate potential causal associations between more than 190 microbiota taxa and 404 differential metabolites in relation to DKD within the East Asian cohort. Clinical samples (n = 535) were collected from East Asian individuals and analyzed for microbiota composition, metabolomic profiling, and TIFs. Subsequently, ML models were constructed to differentiate patients with T2DM from those with DKD in this cohort.
    Results: MR analysis identified significant associations between specific microbiota taxa (e.g., Haemophilus-A, TM7x, Lachnoanaerobaculum, and Bacteroides) and metabolites (e.g., tyrosine and glutamine) in relation to DKD within the East Asian cohort. However, the causal nature of these associations requires further experimental or longitudinal validation. Clinical analyses revealed microbial dysbiosis in patients with DKD, including a 2.5-fold increase in Klebsiella and a 60% reduction in Faecalibaculum and Dubosiella. Metabolomic profiling demonstrated alterations in branched-chain amino acids (BCAAs) and fatty acids. Integrated multi-omics analysis suggested complex interactions among microbiota and metabolites that may contribute to DKD progression. The ML models achieved an accuracy exceeding 90% in distinguishing T2DM from DKD in the East Asian cohort.
    Conclusion: Multi-omics integration combined with ML may provide candidate biomarkers for the early detection of DKD in the East Asian population. These approaches could improve the accuracy of non-invasive diagnosis and support the development of personalized management strategies. Nevertheless, further studies are required to validate the identified associations and confirm their clinical applicability in real-world East Asian settings.
    Keywords:  East Asian; Mendelian randomization; biomarkers; diabetic kidney disease; early diagnosis; machine learning; metabolites
    DOI:  https://doi.org/10.3389/fimmu.2026.1781013
  19. J Diabetes Res. 2026;2026(1): e7913374
     BACKGROUND: Gestational diabetes mellitus (GDM) is a pregnancy-associated metabolic disorder linked to adverse maternal and fetal outcomes. Mitochondrial dysfunction is a recognized feature of GDM, yet the role of mitophagy, the selective degradation of damaged mitochondria, remains insufficiently understood.
    OBJECTIVE: This study examined the expression and regulatory patterns of mitophagy-related genes (MRGs) in GDM using publicly available transcriptomic datasets.
    METHODS: Transcriptomic datasets available in public repositories were analyzed to explore MRG expression and regulatory dynamics in GDM. RNA-seq data from two datasets, GSE203346 (placental and cord blood samples) and GSE154414 (placental samples), were analyzed to identify differentially expressed mitophagy genes. Additionally, maternal circulating blood RNA-seq data from GSE154377 were included for machine learning analysis. These datasets, which encompassed samples collected across multiple trimesters, facilitated a comparative evaluation of MRG expression dynamics in both placental tissue and maternal blood throughout pregnancy. A curated list of 65 MRGs was evaluated using edgeR and DESeq2 for differential expression (DE). Temporal expression dynamics were modeled with the multiclassPairs package in R using GSE154377.
    RESULTS: Consistent downregulation of four critical MRGs-MUL1, PINK1, TOMM7, and ATF4-was observed in GDM placental tissue (GSE154414) and in both placental tissue and fetal umbilical cord blood (GSE203346) but not in maternal peripheral blood. In healthy pregnancies, these genes exhibited distinct temporal regulation across gestation, a pattern disrupted in GDM. Classifier models based on MRG expression accurately predicted gestational stage in controls (accuracy > 85%) but performed poorly in GDM (accuracy < 50%). Functional enrichment analyses revealed impaired mitochondrial protein import, autophagy, and oxidative stress responses.
    CONCLUSION: These findings suggest that mitophagy dysregulation is an early and persistent defect in GDM, with MUL1, PINK1, TOMM7, and ATF4 emerging as potential biomarkers and therapeutic targets. The results support the hypothesis that mitochondrial quality control failure contributes to the pathogenesis of GDM with similar patterns shown in both placental and cord blood tissues. However, these genes were not significantly altered in plasma, highlighting tissue context as a critical factor in detecting mitophagy-related dysregulation.
    Keywords:  ATF4; GDM; MUL1; PINK1; TOMM7; gene expression; mitophagy; placenta
    DOI:  https://doi.org/10.1155/jdr/7913374
  20. Healthcare (Basel). 2026 Mar 13. pii: 739. [Epub ahead of print]14(6):
      Background/Objectives: Large language models (LLMs) are increasingly used as decision support tools in clinical nutrition, including meal planning for individuals with type 2 diabetes mellitus (T2DM). However, the clinical safety, quantitative accuracy, and guideline adherence of AI-generated dietary plans remain uncertain. This study aimed to evaluate systematic bias and agreement between LLM-generated diets and a guideline-concordant reference diet, and to assess whether current LLMs can function as reliable clinical nutrition decision support tools in T2DM. Methods: Six widely used LLMs generated standardized three-day, 1800 kcal dietary plans for T2DM using an identical prompt. Each day was treated as an independent observation (n = 18). Energy and macronutrient contents were analyzed using professional nutrition software and compared with a dietitian-designed reference diet based on ADA, EASD, IDF, and national guidelines. Agreement was evaluated using Bland-Altman analysis, proportional bias assessment, and intraclass correlation coefficients. Guideline adherence and clinical appropriateness were independently scored by registered dietitians. Results: Most LLM-generated diets systematically deviated from the reference diet, with lower total energy, reduced carbohydrate and fiber content, and variable protein distribution. Bland-Altman analyses demonstrated significant bias and wide limits of agreement for key nutrients, indicating clinically meaningful discrepancies. Guideline adherence scores varied substantially across models, with only one model showing relatively consistent performance. Inter-rater reliability between dietitians was high (ICC = 0.806). Conclusions: Current LLMs exhibit systematic quantitative bias and inconsistent guideline adherence when used for T2DM meal planning. AI-generated dietary plans are not interchangeable with dietitian-guided medical nutrition therapy and may pose clinical risks if used without professional oversight. 
Careful validation, domain-specific fine-tuning, and integration within supervised clinical workflows are required before implementation in diabetes care.
    Keywords:  artificial intelligence; dietary planning; guideline adherence; large language models; medical nutrition therapy; type 2 diabetes
    DOI:  https://doi.org/10.3390/healthcare14060739
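The Bland-Altman analysis used above reduces to a mean bias and 95% limits of agreement (bias ± 1.96 SD) on the paired differences between two measurement methods. A minimal sketch under the usual normality assumption; the example values are illustrative, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for two paired measurement
    series (e.g. LLM-estimated vs. reference kcal per day): returns the
    mean bias and the 95% limits of agreement, bias +/- 1.96 * SD of
    the paired differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A systematic bias, as reported for most models, shows up as a mean difference far from zero; wide limits of agreement indicate that individual days deviate from the reference by clinically meaningful amounts even when the mean bias is modest.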
  21. Sci Rep. 2026 Mar 27.
      Early non-invasive approaches for detecting diabetic peripheral neuropathy (DPN) are crucial to preventing its severe complications. However, these approaches have been limited by insufficient dynamic feature capture, low model efficiency, and poor portability. To improve the non-invasive detection capability for DPN, a novel combined method based on the fusion of PPG and ECG signals is proposed. Firstly, an adaptive denoising method integrating ICEEMDAN-based signal decomposition, wavelet thresholding, and particle swarm optimization is adopted to improve signal quality. Secondly, a combined encoding framework, integrating spatial position encoding, Gramian angular field, and recurrence plot, is employed to transform one-dimensional time-series signal segments into RGB color maps. Finally, an enhanced lightweight network named Afsharid, incorporating multi-branch depthwise convolution and a spatial hybrid self-attention mechanism, is designed to generate fused RGB representations. On the multi-cycle dataset, the proposed model achieved an accuracy of 93.89%, a sensitivity of 93.21%, and a precision of 94.52%. Compared with the best-performing baseline model EfficientNetV2, the accuracy was improved by 6.52%. The results show the feasibility and potential of the combined method as a new solution for early detection and daily monitoring of DPN.
    Keywords:  DPN; ICEEMDAN; Multibranch inception; Multimodal; SHSA
    DOI:  https://doi.org/10.1038/s41598-026-45862-x
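As a rough illustration of one of the signal-to-image encodings mentioned above, the Gramian angular field maps a 1-D segment to a 2-D matrix. A minimal sketch of the summation variant (GASF); the paper's exact normalization and its fusion with position encoding and recurrence plots into RGB channels are not reproduced here:

```python
import math

def gramian_angular_field(series):
    """Gramian Angular Summation Field (GASF): rescale a 1-D series to
    [-1, 1], map each value to a polar angle phi = arccos(x), and form
    the matrix G[i][j] = cos(phi_i + phi_j), which preserves temporal
    correlations as a 2-D texture suitable for CNN input."""
    lo, hi = min(series), max(series)
    scaled = [2.0 * (x - lo) / (hi - lo) - 1.0 for x in series]
    # clamp guards against floating-point drift just outside [-1, 1]
    phi = [math.acos(max(-1.0, min(1.0, x))) for x in scaled]
    return [[math.cos(pi + pj) for pj in phi] for pi in phi]
```

Each of the three encodings would populate one channel of the RGB map before being passed to the classification network.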