Journal of University of Science and Technology of China ›› 2020, Vol. 50 ›› Issue (8): 1064-1071. DOI: 10.3969/j.issn.0253-2778.2020.08.004

• Original Paper •

Differential privacy protection method for deep learning based on WGAN feedback

  

  TAO Tao, BAI Jianshu, LIU Heng, HOU Shudong, ZHENG Xiao
  • Received: 2020-06-05  Revised: 2020-08-18  Accepted: 2020-08-18  Online: 2020-08-31  Published: 2020-08-18
  • Contact: TAO Tao
  • About author: TAO Tao (corresponding author), male, born in 1977, PhD, Associate Professor.
  • Supported by:
    The Key Research and Development Program of Anhui Province of China (201904d07020020), the Natural Science Foundation of Anhui Province of China (1908085MF212, 2008085MF190, 1808085QF210), and the Program for Synergy Innovation in the Anhui Higher Education Institutions of China (GXXT-2020-012).

Abstract: To address the problem that attackers may steal sensitive information from a deep learning training dataset through techniques such as the generative adversarial network (GAN), a differential privacy protection method for deep learning based on Wasserstein generative adversarial network (WGAN) feedback parameter tuning is proposed by combining differential privacy theory. The method protects privacy during the optimization of deep learning by stochastic gradient descent with gradient clipping against a preset threshold and noise addition; a WGAN is then used to generate results similar to the original data, and the difference between the generated results and the original data is fed back to tune the parameters. Experimental results show that this method effectively protects sensitive private information in the dataset while retaining good data usability.
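
The training step described in the abstract follows a DP-SGD-style pattern: clip each example's gradient to a preset threshold, add noise, then apply the averaged update (the WGAN feedback loop for parameter tuning is a separate stage and is not shown here). The sketch below is a minimal illustration on a toy linear model; all names (dp_sgd_step, clip_norm, noise_multiplier) and the model itself are illustrative assumptions, not the authors' implementation.

# Minimal sketch of gradient clipping + Gaussian noise in an SGD step.
# Toy linear-regression model; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def per_example_grads(w, X, y):
    """Per-example gradients of squared loss for a linear model (one row per example)."""
    residuals = X @ w - y                      # shape (n,)
    return residuals[:, None] * X              # shape (n, d)

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    grads = per_example_grads(w, X, y)
    # Clip each example's gradient to L2 norm <= clip_norm (the gradient threshold).
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Sum clipped gradients, add Gaussian noise scaled to the threshold, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)

# Usage on synthetic data.
X = rng.normal(size=(64, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=64)
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print("learned weights:", w)

The clipping threshold bounds each example's influence on the update, so the added Gaussian noise can be calibrated to that bound; this is the standard mechanism behind the "gradient clipping of a set threshold plus noise addition" step the abstract refers to.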

Key words: differential privacy protection, deep learning, Wasserstein generative adversarial network(WGAN)
