VerifyNet: Secure and Verifiable Federated Learning

Abstract

As an emerging training paradigm for neural networks, federated learning has received widespread attention due to its ability to update model parameters without collecting users' raw data. However, since adversaries can track the shared gradients and infer participants' private information from them, federated learning is still exposed to various security and privacy threats. In this paper, we consider two major issues in the training process over deep neural networks (DNNs): 1) how to protect users' privacy (i.e., local gradients) during training and 2) how to verify the integrity (or correctness) of the aggregated results returned by the server. Several approaches focusing on secure or privacy-preserving federated learning have been proposed and applied in diverse scenarios, but enabling clients to verify whether the cloud server operates correctly, while preserving users' privacy during training, remains an open problem. To this end, we propose VerifyNet, the first privacy-preserving and verifiable federated learning framework. Specifically, we first propose a double-masking protocol to guarantee the confidentiality of users' local gradients during federated learning. Then, the cloud server is required to provide each user with a Proof of the correctness of its aggregated results. We claim that an adversary cannot deceive users by forging the Proof unless it can solve the NP-hard problem adopted in our model. In addition, VerifyNet supports users dropping out during the training process. Extensive experiments conducted on real-world data demonstrate the practical performance of our proposed scheme.
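
Below is a minimal, self-contained Python sketch of the two ideas the abstract names: a double-masking step, in which pairwise masks cancel in the server's sum and self masks are removed afterwards, and a verification check based on a linearly homomorphic hash. All names (prg_mask, hhash), the modulus R, the prime P, and the toy generators are illustrative assumptions; the actual VerifyNet protocol additionally involves key agreement, secret sharing of seeds for dropout tolerance, and a more elaborate Proof construction.

import numpy as np

R = 2 ** 32          # masked arithmetic is done modulo R so masks cancel exactly
DIM = 4              # toy gradient dimension

def prg_mask(seed, dim=DIM):
    # Expand a shared seed into a pseudorandom mask vector (toy PRG).
    rng = np.random.default_rng(seed)
    return rng.integers(0, R, size=dim, dtype=np.uint64)

def mask_gradient(u, grad, pairwise_seeds, self_seed):
    # User u blinds its gradient with pairwise masks and a self mask.
    # pairwise_seeds maps each other user v to a seed s_uv agreed so
    # that s_uv == s_vu; the sign convention (+ if u < v, - otherwise)
    # makes all pairwise masks cancel in the server's sum.
    masked = (grad.astype(np.uint64) + prg_mask(self_seed)) % R
    for v, s_uv in pairwise_seeds.items():
        if u < v:
            masked = (masked + prg_mask(s_uv)) % R
        else:
            masked = (masked - prg_mask(s_uv)) % R
    return masked

# Toy run with three users and no dropouts.
users = [0, 1, 2]
grads = {u: np.array([u + 1] * DIM) for u in users}   # stand-in gradients
pair_seed = {(u, v): 1000 + 10 * u + v for u in users for v in users if u < v}
self_seed = {u: 500 + u for u in users}

masked = {}
for u in users:
    seeds_u = {v: pair_seed[(min(u, v), max(u, v))] for v in users if v != u}
    masked[u] = mask_gradient(u, grads[u], seeds_u, self_seed[u])

# The server sums the masked vectors; the pairwise masks cancel, and
# the self masks are then removed (in the protocol, their seeds are
# reconstructed from secret shares held by surviving users).
agg = sum(masked.values()) % R
for u in users:
    agg = (agg - prg_mask(self_seed[u])) % R
print(agg)   # equals the plain sum of gradients: [6 6 6 6]

# Verification idea, greatly simplified: a linearly homomorphic hash
# H(x) = prod_i g_i^{x_i} mod P satisfies H(x + y) = H(x) * H(y) mod P,
# so a user holding the per-user hashes can check the server's result.
# VerifyNet's actual Proof is a more elaborate construction; P and G
# here are toy public parameters chosen only for illustration.
P = 2 ** 61 - 1                  # a Mersenne prime
G = [3, 5, 7, 11]                # one public generator per dimension

def hhash(vec):
    out = 1
    for g, x in zip(G, vec):
        out = out * pow(g, int(x), P) % P
    return out

expected = 1
for u in users:                  # in the protocol each user publishes hhash(grad_u)
    expected = expected * hhash(grads[u]) % P
assert hhash(agg) == expected    # the aggregate verifies

In the full protocol, secret sharing of the seeds is what lets aggregation complete even when some users drop out mid-round, which is the dropout tolerance the abstract mentions.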

Publication Title

IEEE Transactions on Information Forensics and Security
