Optimizing sparse ternary compression with thresholds for communication-efficient federated learning

International Journal of Artificial Intelligence

Abstract

Federated learning (FL) enables decentralized model training while preserving client data privacy, yet suffers from significant communication overhead due to frequent parameter exchanges. This study investigates how varying sparse ternary compression (STC) thresholds impact communication efficiency and model accuracy across the CIFAR-10 and MedMNIST datasets. Experiments tested thresholds ranging from 1.0 to 1.9 and batch sizes of 10, 15, and 20. Results demonstrated that selecting thresholds between 1.2 and 1.5 reduced total communication costs by approximately 10–15%, while maintaining acceptable accuracy levels. These findings suggest that careful threshold tuning can achieve substantial communication savings with minimal compromise in model performance, offering practical guidance for improving the efficiency and scalability of FL systems.
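To make the thresholding idea concrete, the following is a minimal sketch of threshold-based sparse ternary compression of a client update. It assumes the threshold (e.g., the paper's 1.0–1.9 range) scales the standard deviation of the update tensor, with surviving entries quantized to a single shared magnitude; the function name and this interpretation are illustrative, not the paper's exact method.

```python
import numpy as np

def stc_compress(update, threshold=1.2):
    """Sketch of sparse ternary compression with a threshold.

    Entries whose magnitude falls below threshold * std(update) are
    zeroed; the survivors are replaced by +/- mu, where mu is the mean
    absolute value of the kept entries. Returns the ternary tensor and
    the fraction of non-zero entries actually transmitted.
    """
    cutoff = threshold * np.std(update)
    mask = np.abs(update) > cutoff
    if not mask.any():
        return np.zeros_like(update), 0.0
    mu = np.abs(update[mask]).mean()            # shared magnitude
    compressed = np.where(mask, np.sign(update) * mu, 0.0)
    density = float(mask.mean())                # communication cost proxy
    return compressed, density
```

A higher threshold drops more entries, lowering the density (and hence the bytes sent per round) at the cost of a coarser update, which is the accuracy/communication trade-off the study tunes.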
