Although machine learning can learn a generic model from training data, the trained model has been shown to leak information about that data (for example, through membership inference attacks). Differential privacy (DP) training is an effective countermeasure, in which Gaussian noise is added during training. DP training consists of three main parts: a noise-generating mechanism, a DP optimizer, and a DP monitor. We have implemented a novel noise-generating mechanism, the adaptive decay noise mechanism. The DP monitor computes the privacy budget consumed during training.
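To make the interplay of these parts concrete, the sketch below illustrates a single differentially private update in plain NumPy: per-sample gradients are clipped to a norm bound, averaged, and perturbed with Gaussian noise scaled to that bound. The function and parameter names are illustrative assumptions, not the API used by the example scripts.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, norm_bound=1.0, noise_multiplier=1.0):
    """Illustrative DP-SGD update: clip each per-sample gradient to
    `norm_bound`, average, then add Gaussian noise whose scale is
    proportional to `noise_multiplier * norm_bound`."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down if its L2 norm exceeds the bound.
        clipped.append(g * min(1.0, norm_bound / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(
        scale=noise_multiplier * norm_bound / len(per_sample_grads),
        size=mean_grad.shape)
    return mean_grad + noise
```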
With the adaptive decay mechanism, the magnitude of the Gaussian noise decays as the training step grows, which results in more stable convergence.
$ cd examples/privacy/diff_privacy
$ python lenet5_dp_ada_gaussian.py
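As a rough illustration of such a decay schedule, the snippet below shrinks the noise multiplier geometrically with the training step. The geometric policy and the names used here are assumptions for illustration, not necessarily the exact decay policy implemented in `lenet5_dp_ada_gaussian.py`.

```python
def decayed_noise_multiplier(initial_multiplier, decay_rate, step):
    """Noise multiplier shrinks geometrically as training proceeds,
    so later updates are perturbed less and convergence stabilizes."""
    return initial_multiplier * (1.0 - decay_rate) ** step

# Example: with an initial multiplier of 1.5 and decay rate 1e-3,
# the noise scale after 1000 steps is roughly 1.5 * 0.999**1000 ≈ 0.55.
```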
With the adaptive norm clip mechanism, the clipping bound for the gradients is adjusted according to their norms, which controls the ratio of noise to the original gradients.
$ cd examples/privacy/diff_privacy
$ python lenet5_dp.py
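One common way to adapt the clipping bound is to track what fraction of per-sample gradient norms already fall below the bound and nudge the bound toward a target fraction. The sketch below illustrates that idea under this assumption; it is not the exact update rule used by `lenet5_dp.py`.

```python
import numpy as np

def update_norm_bound(norm_bound, grad_norms, target_fraction=0.5,
                      learning_rate=0.01):
    """Shrink the bound when most gradients are already below it,
    and grow it when most gradients are being clipped."""
    below = np.mean(np.asarray(grad_norms) <= norm_bound)
    return norm_bound * np.exp(-learning_rate * (below - target_fraction))
```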
With this evaluation method, we can judge whether a sample belongs to the training dataset or not.
$ cd examples/privacy/membership_inference_attack
$ python train.py --data_path home_path_to_cifar100 --ckpt_path ./
$ python example_vgg_cifar.py --data_path home_path_to_cifar100 --pre_trained 0-100_781.ckpt
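A minimal way to illustrate the idea behind membership inference is a loss-threshold attack: samples the model fits with unusually low loss are guessed to be training members. The snippet below is a generic sketch of that baseline, not the attack implemented in `example_vgg_cifar.py`.

```python
import numpy as np

def loss_threshold_attack(member_losses, nonmember_losses, threshold):
    """Guess 'member' when the per-sample loss is below the threshold,
    and report attack accuracy on samples with known membership."""
    guesses_members = np.asarray(member_losses) < threshold
    guesses_nonmembers = np.asarray(nonmember_losses) >= threshold
    correct = guesses_members.sum() + guesses_nonmembers.sum()
    return correct / (len(member_losses) + len(nonmember_losses))
```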
MindArmour focuses on the security and privacy of AI. It is dedicated to enhancing the security and trustworthiness of models and protecting users' data privacy. It mainly contains three modules: the adversarial example robustness module, the Fuzz Testing module, and the privacy protection and evaluation module. The adversarial example robustness module evaluates a model's robustness against adversarial examples and provides model-enhancement methods that strengthen its resistance to adversarial attacks, improving robustness. It contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.