S++: A Fast and Deployable Secure-Computation Framework for Privacy-Preserving Neural Network Training



🔘 Paper page: arxiv.org/abs/2101.12078

Abstract

«We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) on private data from multiple sources via secret-shared secure function evaluation. In short, imagine a virtual third party to whom every data holder sends their inputs and which computes the neural network: in our case, this virtual third party is actually a set of servers that individually learn nothing, even in the presence of a malicious (but non-colluding) adversary.
Previous work in this area has been limited to a single activation function, ReLU, rendering the approach impractical for many use cases. For the first time, we provide fast and verifiable protocols for all common activation functions, optimized to run in a secret-shared manner. The ability to quickly, verifiably, and robustly compute exponentiation, softmax, sigmoid, etc., allows us to reuse previously written NNs without modification, vastly reducing developer effort and code complexity. ReLU has been found to converge faster and to be more computationally efficient than saturating functions such as sigmoid or tanh. However, we argue it would be remiss not to extend the mechanism to the logistic sigmoid, tanh, and softmax, which remain fundamental for their ability to express outputs as probabilities and for their universal approximation property. Their role in RNNs and several recent advances make them all the more relevant».
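As a rough intuition for the "virtual third party" above: in additive secret sharing, each input is split into random shares held by different servers, and linear operations are then computed share-wise, so no single server ever sees a plaintext value. The sketch below is a minimal Python illustration of that idea only, not the S++ protocol itself; the field modulus and two-server setup are assumptions for demonstration.

```python
# Minimal sketch of additive secret sharing over a prime field,
# illustrating the "virtual third party" idea from the abstract.
# This is NOT the S++ protocol; modulus and server count are illustrative.
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus (assumption)

def share(x, n_servers=2):
    """Split x into n additive shares; any proper subset is uniformly random."""
    shares = [random.randrange(P) for _ in range(n_servers - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo P."""
    return sum(shares) % P

# Each data holder shares its input; servers add their shares locally,
# so the sum is computed without any server seeing a plaintext value.
a_shares = share(42)
b_shares = share(100)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```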
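Non-linear activations such as sigmoid are the hard part, since shared values natively support only additions and multiplications. One common approach in secure computation (which may well differ from the paper's actual protocols) is to approximate the activation with a low-degree polynomial that an MPC engine can evaluate over shares. The snippet below only shows how closely a degree-3 Taylor approximation of the logistic sigmoid tracks the real function near zero; the share-multiplication machinery (e.g., Beaver triples) is omitted.

```python
# Sketch: why low-degree polynomials matter for secret-shared sigmoid.
# Coefficients are the Taylor expansion of sigmoid around 0,
# chosen for illustration; not the approximation S++ actually uses.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def poly_sigmoid(x):
    # Degree-3 Taylor approximation at 0: 1/2 + x/4 - x^3/48
    return 0.5 + x / 4.0 - x**3 / 48.0

for x in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.4f}  poly={poly_sigmoid(x):.4f}")
```

Near zero the approximation is accurate to about two decimal places; real MPC systems use higher-degree or piecewise approximations to cover a wider input range.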


Authors

Prashanthi Ramachandran, Shivam Agarwal, Arup Mondal, Aastha Shah, Debayan Gupta

