Deep Leakage from Gradients
Venue: NeurIPS 2019
Authors: Ligeng Zhu, Zhijian Liu, and Song Han

Introduction. As deep learning is deployed across many fields and at large scale, new problems are emerging. In distributed learning, where sharing training data is not an option, each participant trains a shared global model on its local data and communicates only the gradients to the other participants. The gradients are then averaged and applied to the global model. This setup is closely related to federated learning. A malicious attacker can inspect the shared gradients and try to extract information about the data from which they were produced. This paper proposes an attack, called deep leakage from gradients (DLG), which can recover pixel-level information for image classification tasks and token-level information for language tasks from the gradients alone. They also propose two defense techniques that their attack cannot break.

Deep Leakage ...
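The core idea, optimizing randomly initialized dummy data until its gradients match the leaked ones, can be sketched on a toy model. The sketch below uses a single linear neuron with a bias and plain gradient descent with numerical derivatives; this is a hypothetical stand-in, as the paper attacks deep networks and optimizes the matching objective with L-BFGS.

```python
import numpy as np

# Toy stand-in for the shared model: one linear neuron with a bias,
# loss L = 0.5 * (w @ x + b - y)^2.  (Hypothetical example values,
# not the deep networks used in the paper.)
w = np.array([0.5, -0.2, 0.1, 0.4])
b = 0.3

def gradients(x, y):
    """Gradients of L with respect to the shared parameters (w, b)."""
    r = w @ x + b - y            # residual of the prediction
    return r * x, r              # dL/dw, dL/db

# The victim computes gradients on a private example and shares only them.
x_true = np.array([1.0, -1.0, 2.0, 0.5])
y_true = 1.0
gw_leak, gb_leak = gradients(x_true, y_true)

# The attacker optimizes dummy data (x', y') to minimize the distance
# between its gradients and the leaked gradients.
def grad_distance(z):
    gw, gb = gradients(z[:4], z[4])
    return np.sum((gw - gw_leak) ** 2) + (gb - gb_leak) ** 2

z = np.zeros(5)                  # dummy input and label, initialized at zero
eps, lr = 1e-6, 0.02
for _ in range(20000):
    # Central-difference estimate of the objective's gradient.
    fd = np.array([(grad_distance(z + eps * e) - grad_distance(z - eps * e))
                   / (2 * eps) for e in np.eye(5)])
    z -= lr * fd

x_rec, y_rec = z[:4], z[4]       # converges toward the private (x, y)
```

Because the bias gradient equals the prediction residual in this toy model, a perfect gradient match pins down the private example uniquely, which is why the dummy data converges to the victim's input and label rather than to some other pair with the same gradients.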