End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things

First SAO seminar presenting internal CSCRC-related research and activities

Date/Time: Thursday 11/3, 15:00-16:00 AEDT


Guest speaker:

Garrison Gao

Yansong (Garrison) Gao received his M.Sc. degree from the University of Electronic Science and Technology of China (UESTC) in 2013 and his Ph.D. degree from the University of Adelaide, Australia, in 2017. He is a postdoctoral research fellow with Data61. His current research interests are AI security and privacy, system security, and hardware security. He has published more than 20 technical papers in international journals and conference proceedings, including Nature Electronics, IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), IEEE Transactions on Information Forensics and Security (IEEE TIFS), IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IEEE TCAD), ACSAC, SRDS, AsiaCCS, and ACNS. He is a frequent reviewer for IEEE TDSC and IEEE TIFS.



This work is the first attempt to evaluate and compare federated learning (FL) and split neural networks (SplitNN) in real-world IoT settings in terms of learning performance and device implementation overhead. We consider a variety of datasets, different model architectures, multiple clients, and various performance metrics. For learning performance, measured by model accuracy and convergence speed, we empirically evaluate both FL and SplitNN under different data distributions, including imbalanced and non-independent and identically distributed (non-IID) data. We show that the learning performance of SplitNN is better than that of FL under an imbalanced data distribution, but worse under an extreme non-IID data distribution. For implementation overhead, we deploy both FL and SplitNN end-to-end on Raspberry Pis and comprehensively evaluate their overheads, including training time, communication overhead in a real LAN setting, power consumption, and memory usage. Our key observation is that in IoT scenarios where communication traffic is the main concern, FL performs better than SplitNN because it has significantly lower communication overhead, which empirically corroborates previous statistical analyses. In addition, we reveal several previously unrecognized limitations of SplitNN, forming the basis for future research. The work is a collaboration between the Cybersecurity CRC, CSIRO's Data61, and Sungkyunkwan University in South Korea.
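The communication-overhead contrast can be illustrated with a back-of-envelope sketch: per round, an FL client exchanges its full model with the server, while a SplitNN client exchanges cut-layer activations and gradients for every training sample. All numbers below (model size, cut-layer activation size, local dataset size) are hypothetical assumptions for illustration, not figures from the talk.

```python
# Rough per-round communication estimates for FL vs. SplitNN.
# All parameter values are hypothetical, chosen only to illustrate
# how SplitNN traffic scales with the local dataset size.

def fl_traffic_per_round(num_params, bytes_per_param=4):
    """FL: each client uploads its full model update and downloads
    the aggregated model once per round."""
    model_bytes = num_params * bytes_per_param
    return 2 * model_bytes  # upload + download

def splitnn_traffic_per_round(num_samples, activation_size,
                              bytes_per_value=4):
    """SplitNN: for each sample, the client sends cut-layer
    activations to the server and receives gradients of the same
    shape back."""
    per_sample_bytes = 2 * activation_size * bytes_per_value
    return num_samples * per_sample_bytes

# Hypothetical settings: a ~1.2M-parameter model, 50,000 local
# samples, and a 4096-value activation at the cut layer.
fl = fl_traffic_per_round(1_200_000)
split = splitnn_traffic_per_round(50_000, 4096)
print(f"FL per round:      {fl / 1e6:.1f} MB")      # 9.6 MB
print(f"SplitNN per round: {split / 1e6:.1f} MB")   # 1638.4 MB
```

Under these assumed settings, FL's per-round traffic is fixed by the model size, whereas SplitNN's grows linearly with the number of local samples, which is consistent with the abstract's observation that FL incurs lower communication overhead in IoT scenarios.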