Disentangle Private Information from Edge-Cloud Training

Description

  • Shiyin Wang and Song Han

  • 6.100 Independent Project, HanLab, Massachusetts Institute of Technology

  • Feb 2019 – May 2019


The privacy of deep learning, and how safely our personal data is used, has become a global concern. Cloud servers collect users' data from edge devices to train large neural network models and provide model services in return. Such data transmission leaks private information. We propose an edge-cloud training framework that divides the neural network into a small edge model and a large cloud model, and splits the intermediate layer uploaded to the cloud into private channels and public channels. We propose a discrimination loss and theoretically prove that it bounds the differential privacy; when the discrimination loss is sufficiently small, the mutual information between the public channels and the private attributes is approximately zero. Experiments on the CelebA dataset show that the framework incurs nearly no loss of accuracy while giving person-ID inference attacks essentially no chance of success.
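
Below is a minimal sketch of the channel-splitting idea in PyTorch. The module sizes, the 50/50 channel split, the adversarial training recipe, and all names (EdgeModel, CloudModel, Discriminator, training_step) are illustrative assumptions, not the project's actual implementation: a discriminator tries to recover the private attribute (e.g. person ID) from the public channels, and the edge/cloud models are trained to solve the task while maximizing that discrimination loss so the public channels carry no identity information.

```python
# Illustrative sketch only; sizes, split ratio, and training recipe are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeModel(nn.Module):
    """Small model on the device; produces an intermediate feature map."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        feat = self.features(x)                       # (B, 64, H/4, W/4)
        # Split the uploaded layer into public and private channels (assumed 50/50).
        public, private = torch.chunk(feat, 2, dim=1)
        return public, private


class CloudModel(nn.Module):
    """Large model on the server; here it only consumes the public channels."""
    def __init__(self, in_channels=32, num_attrs=40):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_attrs),
        )

    def forward(self, public):
        return self.head(public)


class Discriminator(nn.Module):
    """Tries to predict the private attribute (person ID) from the public channels."""
    def __init__(self, in_channels=32, num_ids=1000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_ids),
        )

    def forward(self, public):
        return self.net(public)


def training_step(x, task_label, person_id, edge, cloud, disc,
                  opt_model, opt_disc, lam=1.0):
    """One adversarial step: the discriminator minimizes the discrimination loss,
    while edge + cloud minimize the task loss and maximize the discrimination loss."""
    # 1) Update the discriminator on detached public features.
    public, _ = edge(x)
    disc_loss = F.cross_entropy(disc(public.detach()), person_id)
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()

    # 2) Update edge + cloud: task loss minus the (adversarial) discrimination loss.
    public, _ = edge(x)
    task_loss = F.binary_cross_entropy_with_logits(cloud(public), task_label)
    adv_loss = F.cross_entropy(disc(public), person_id)
    total = task_loss - lam * adv_loss
    opt_model.zero_grad(); total.backward(); opt_model.step()
    return task_loss.item(), disc_loss.item()
```

The explicit minus sign on the discrimination loss could equivalently be implemented with a gradient-reversal layer; either way, only the public channels would need to leave the device at inference time under this sketch.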