Combining federated learning and the travelling model boosts performance and opens opportunities for digital health equity.

Federated learning (FL) and the travelling model (TM) allow privacy-preserving model training across sites without sharing sensitive patient data. While both approaches have shown success, each faces distinct challenges related to distribution shifts between sites. To address this, we propose FedTM, a hybrid framework combining the strengths of FL and TM. FedTM begins with FL warmup training at sites with larger datasets, followed by sequential refinement through TM across all sites. We evaluated FedTM for Parkinson's disease classification using 1817 brain scans from 83 international sites. Model performance, misclassification disparities, and communication costs were computed and compared to standard FL and TM approaches. Our results show that FedTM improves AUROC from 77 ± 0.01% to 82 ± 0.01%, reduces misclassification disparities from 34 ± 0.01% to 26 ± 0.01%, and decreases the training load for smaller sites from 22 to 12 cycles. These advances mark an important step toward promoting global healthcare equity and advancing responsible AI development.
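The two-stage schedule described in the abstract can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the authors' implementation: the "model" is a single parameter, `local_update` is a placeholder for per-site training, and the site-size threshold is assumed for illustration. Stage 1 performs FedAvg-style warmup over the larger sites; stage 2 sends the warmed-up weights on a single sequential travelling-model pass over all sites.

```python
import random

def local_update(weights, site_data, lr=0.1):
    """Toy stand-in for local training: nudge weights toward the site's data mean."""
    site_mean = sum(site_data) / len(site_data)
    return [w + lr * (site_mean - w) for w in weights]

def fedtm(sites, warmup_rounds=5, warmup_min_size=20):
    weights = [0.0]  # toy single-parameter "model"
    large = [s for s in sites if len(s) >= warmup_min_size]

    # Stage 1: federated warmup on the larger sites (FedAvg-style:
    # local updates are averaged, weighted by site size).
    for _ in range(warmup_rounds):
        updates = [(local_update(weights, s), len(s)) for s in large]
        total = sum(n for _, n in updates)
        weights = [sum(u[i] * n for u, n in updates) / total
                   for i in range(len(weights))]

    # Stage 2: travelling model -- the warmed-up weights visit every
    # site once, in sequence, and are refined in place.
    for s in sites:
        weights = local_update(weights, s)
    return weights

random.seed(0)
# Two large sites (50, 40 samples) and two small ones (8, 5 samples).
sites = [[random.gauss(1.0, 0.5) for _ in range(n)] for n in (50, 40, 8, 5)]
print(fedtm(sites))
```

Note how the smaller sites only participate in the single travelling-model pass, which mirrors the reported reduction in training load for small sites compared with running every FL round everywhere.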

Authors

Souza, Stanley, Ohara, Camicioli, Monchi, Ismail, Wilms, Forkert
View on Pubmed