Merge pull request !422 from 宦晓玲/r1.9
@@ -1,3 +1,3 @@
 mindspore:
-'mindspore/mindspore/version/202205/20220525/master_20220525210238_42306df4865f816c48a720d98e50ba2e586b1f59/'
+'mindspore/mindspore/version/202209/20220923/r1.9_20220923224458_c16390f59ab8dace3bb7e5a6ab4ae4d3bfe74bea/'
@@ -58,7 +58,7 @@ mindarmour.adv_robustness.defenses
 Parameters:
 - **network** (Cell) - The MindSpore network to be defended.
 - **loss_fn** (Union[Loss, None]) - Loss function. Default: None.
-- **optimizer** (Cell): Optimizer used to train the network. Default: None.
+- **optimizer** (Cell) - Optimizer used to train the network. Default: None.
 - **bounds** (tuple) - Upper and lower bounds of the data, in the form of (clip_min, clip_max). Default: (0.0, 1.0).
 - **replace_ratio** (float) - Ratio of original samples to be replaced by adversarial samples. Default: 0.5.
 - **eps** (float) - Step size of the attack method (FGSM). Default: 0.1.
@@ -8,10 +8,10 @@ mindarmour.privacy.diff_privacy
 Generates noise from a Gaussian distribution with :math:`mean=0` and :math:`standard\_deviation = norm\_bound * initial\_noise\_multiplier`.
 Parameters:
-- **norm_bound** (float)- Clipping bound for the l2 norm of the gradients. Default: 1.0.
-- **initial_noise_multiplier** (float)- Ratio of the standard deviation of the Gaussian noise divided by `norm_bound`, which will be used to calculate the privacy budget. Default: 1.0.
-- **seed** (int)- Original random seed. If seed=0, the random normal noise uses a secure random number; if seed!=0, the random normal noise is generated with the given seed. Default: 0.
-- **decay_policy** (str)- Decay policy. Default: None.
+- **norm_bound** (float) - Clipping bound for the l2 norm of the gradients. Default: 1.0.
+- **initial_noise_multiplier** (float) - Ratio of the standard deviation of the Gaussian noise divided by `norm_bound`, which will be used to calculate the privacy budget. Default: 1.0.
+- **seed** (int) - Original random seed. If seed=0, the random normal noise uses a secure random number; if seed!=0, the random normal noise is generated with the given seed. Default: 0.
+- **decay_policy** (str) - Decay policy. Default: None.
 .. py:method:: construct(gradients)
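The noise formula above translates directly into code. A minimal NumPy sketch of the noise generation it describes, assuming the gradients arrive as an array (an illustration only, not the library's implementation):

.. code-block:: python

    import numpy as np

    def gaussian_noise(gradients, norm_bound=1.0, initial_noise_multiplier=1.0, seed=0):
        """Sample noise with mean 0 and std = norm_bound * initial_noise_multiplier."""
        # seed == 0 is treated as "use a fresh, non-deterministic source", mirroring the doc above.
        rng = np.random.default_rng(None if seed == 0 else seed)
        stddev = norm_bound * initial_noise_multiplier
        return rng.normal(loc=0.0, scale=stddev, size=np.shape(gradients))

    noisy_grads = np.array([0.3, -1.2, 0.7]) + gaussian_noise(np.array([0.3, -1.2, 0.7]))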
@@ -26,8 +26,8 @@ mindarmour.privacy.evaluation
 The evaluation metrics should be specified by `metrics`.
 Parameters:
-- **dataset_train** (minspore.dataset) - Training dataset of the target model.
-- **dataset_test** (minspore.dataset) - Test dataset of the target model.
+- **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+- **dataset_test** (mindspore.dataset) - Test dataset of the target model.
 - **metrics** (Union[list, tuple]) - Evaluation metrics. The value of metrics must be in ["precision", "accuracy", "recall"]. Default: ["precision"].
 Returns:
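For orientation, a hedged end-to-end sketch combining this `eval` step with the `train` step documented in the next hunk. The import path is assumed from the module name above; `model` is a trained mindspore Model, the datasets are mindspore.dataset objects, and `attack_config` follows the (elided) format referenced below:

.. code-block:: python

    from mindarmour.privacy.evaluation import MembershipInference  # import path assumed from the module name

    def membership_inference_scores(model, ds_train, ds_test, attack_config):
        """Hedged sketch: fit the attack models, then measure how well members are distinguished."""
        inference = MembershipInference(model)
        inference.train(ds_train, ds_test, attack_config)   # attack_config format: see the code-block below
        return inference.eval(ds_train, ds_test, metrics=["precision", "accuracy", "recall"])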
@@ -38,8 +38,8 @@ mindarmour.privacy.evaluation
 Train the attack models with the input datasets according to the configuration.
 Parameters:
-- **dataset_train** (minspore.dataset) - Training dataset of the target model.
-- **dataset_test** (minspore.dataset) - Test set of the target model.
+- **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+- **dataset_test** (mindspore.dataset) - Test set of the target model.
 - **attack_config** (Union[list, tuple]) - Parameter settings of the attack models. The format is
 .. code-block:: python
@@ -236,8 +236,8 @@ MindArmour is a toolbox for MindSpore to enhance model trustworthiness and protect privacy
 The evaluation metrics should be specified by `metrics`.
 Parameters:
-- **dataset_train** (minspore.dataset) - Training dataset of the target model.
-- **dataset_test** (minspore.dataset) - Test dataset of the target model.
+- **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+- **dataset_test** (mindspore.dataset) - Test dataset of the target model.
 - **metrics** (Union[list, tuple]) - Evaluation metrics. The value of metrics must be in ["precision", "accuracy", "recall"]. Default: ["precision"].
 Returns:
@@ -248,8 +248,8 @@ MindArmour is a toolbox for MindSpore to enhance model trustworthiness and protect privacy
 Train the attack models with the input datasets according to the configuration.
 Parameters:
-- **dataset_train** (minspore.dataset) - Training dataset of the target model.
-- **dataset_test** (minspore.dataset) - Test set of the target model.
+- **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+- **dataset_test** (mindspore.dataset) - Test set of the target model.
 - **attack_config** (Union[list, tuple]) - Parameter settings of the attack models. The format is:
 .. code-block::
@@ -35,14 +35,14 @@ class PointWiseAttack(Attack):
 References: `L. Schott, J. Rauber, M. Bethge, W. Brendel: "Towards the
 first adversarially robust neural network model on MNIST", ICLR (2019)
-<https://arxiv.org/abs/1805.09190>`_
+<https://arxiv.org/abs/1805.09190>`_.
 Args:
 model (BlackModel): Target model.
 max_iter (int): Max rounds of iteration to generate adversarial image. Default: 1000.
 search_iter (int): Max rounds of binary search. Default: 10.
 is_targeted (bool): If True, targeted attack. If False, untargeted attack. Default: False.
-init_attack (Attack): Attack used to find a starting point. Default: None.
+init_attack (Union[Attack, None]): Attack used to find a starting point. Default: None.
 sparse (bool): If True, input labels are sparse-encoded. If False, input labels are one-hot-encoded.
 Default: True.
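A hedged usage sketch for the black-box attack above. The `BlackModel` wrapper and the import paths are assumptions based on this diff and the MindArmour package layout, and the exact return tuple of `generate` is left opaque here:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindarmour import BlackModel                               # assumed top-level export
    from mindarmour.adv_robustness.attacks import PointWiseAttack   # assumed import path

    class QueryOnlyModel(BlackModel):
        """Wrap a trained MindSpore network so the attack can only query predictions."""
        def __init__(self, network):
            super().__init__()
            self._network = network

        def predict(self, inputs):
            return self._network(ms.Tensor(inputs, ms.float32)).asnumpy()

    def run_pointwise(trained_net, images, labels):
        attack = PointWiseAttack(QueryOnlyModel(trained_net), max_iter=1000,
                                 is_targeted=False, sparse=True)
        # See the class Examples for the exact layout of the returned values.
        return attack.generate(images.astype(np.float32), labels)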
@@ -96,7 +96,7 @@ class DeepFool(Attack):
 sample to the nearest classification boundary and crossing the boundary.
 Reference: `DeepFool: a simple and accurate method to fool deep neural
-networks <https://arxiv.org/abs/1511.04599>`_
+networks <https://arxiv.org/abs/1511.04599>`_.
 Args:
 network (Cell): Target model.
@@ -109,7 +109,7 @@ class DeepFool(Attack):
 max_iters (int): Max iterations, which should be
 greater than zero. Default: 50.
 overshoot (float): Overshoot parameter. Default: 0.02.
-norm_level (Union[int, str]): Order of the vector norm. Possible values: np.inf
+norm_level (Union[int, str, numpy.inf]): Order of the vector norm. Possible values: np.inf
 or 2. Default: 2.
 bounds (Union[tuple, list]): Upper and lower bounds of data range. In form of (clip_min,
 clip_max). Default: None.
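A hedged usage sketch of the white-box attack above. The `num_classes` argument is assumed from the part of the signature elided by this diff, and the import path and bounds are illustrative:

.. code-block:: python

    from mindarmour.adv_robustness.attacks import DeepFool  # assumed import path

    def run_deepfool(network, images, labels, num_classes=10):
        """Hedged sketch: push each sample just across its nearest decision boundary."""
        attack = DeepFool(network, num_classes,          # num_classes is assumed; not shown in this hunk
                          max_iters=50, overshoot=0.02, norm_level=2, bounds=(0.0, 1.0))
        return attack.generate(images, labels)           # numpy array of adversarial examples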
@@ -130,21 +130,21 @@ class FastGradientMethod(GradientMethod):
 References: `I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining
 and harnessing adversarial examples," in ICLR, 2015.
-<https://arxiv.org/abs/1412.6572>`_
+<https://arxiv.org/abs/1412.6572>`_.
 Args:
 network (Cell): Target model.
 eps (float): Proportion of single-step adversarial perturbation generated
 by the attack to data range. Default: 0.07.
-alpha (float): Proportion of single-step random perturbation to data range.
+alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
 Default: None.
 bounds (tuple): Upper and lower bounds of data, indicating the data range.
 In form of (clip_min, clip_max). Default: (0.0, 1.0).
-norm_level (Union[int, numpy.inf]): Order of the norm.
+norm_level (Union[int, str, numpy.inf]): Order of the norm.
 Possible values: np.inf, 1 or 2. Default: 2.
 is_targeted (bool): If True, targeted attack. If False, untargeted
 attack. Default: False.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Examples:
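A hedged usage sketch for the class above; the loss function choice and import path are assumptions, and the network is expected to output logits:

.. code-block:: python

    import mindspore.nn as nn
    from mindarmour.adv_robustness.attacks import FastGradientMethod  # assumed import path

    def run_fgm(network, images, labels):
        """Hedged sketch: one-step gradient attack with an explicit loss function."""
        attack = FastGradientMethod(network, eps=0.07, bounds=(0.0, 1.0), norm_level=2,
                                    loss_fn=nn.SoftmaxCrossEntropyWithLogits(sparse=False))
        return attack.generate(images, labels)  # labels encoded to match the chosen loss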
@@ -207,7 +207,7 @@ class RandomFastGradientMethod(FastGradientMethod):
 References: `Florian Tramer, Alexey Kurakin, Nicolas Papernot, "Ensemble
 adversarial training: Attacks and defenses" in ICLR, 2018
-<https://arxiv.org/abs/1705.07204>`_
+<https://arxiv.org/abs/1705.07204>`_.
 Args:
 network (Cell): Target model.
@@ -217,11 +217,11 @@ class RandomFastGradientMethod(FastGradientMethod):
 Default: 0.035.
 bounds (tuple): Upper and lower bounds of data, indicating the data range.
 In form of (clip_min, clip_max). Default: (0.0, 1.0).
-norm_level (Union[int, numpy.inf]): Order of the norm.
+norm_level (Union[int, str, numpy.inf]): Order of the norm.
 Possible values: np.inf, 1 or 2. Default: 2.
 is_targeted (bool): If True, targeted attack. If False, untargeted
 attack. Default: False.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Raises:
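The `alpha` parameter above scales a random perturbation applied before the gradient step. A hedged NumPy illustration of one common way such a step is realized; the exact distribution the library samples from may differ:

.. code-block:: python

    import numpy as np

    def random_step(x, alpha=0.035, bounds=(0.0, 1.0), rng=None):
        """Apply a single random perturbation scaled to alpha times the data range."""
        rng = rng or np.random.default_rng()
        clip_min, clip_max = bounds
        noise = rng.uniform(-1.0, 1.0, size=np.shape(x)) * alpha * (clip_max - clip_min)
        return np.clip(x + noise, clip_min, clip_max)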
@@ -264,19 +264,19 @@ class FastGradientSignMethod(GradientMethod):
 References: `Ian J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining
 and harnessing adversarial examples," in ICLR, 2015
-<https://arxiv.org/abs/1412.6572>`_
+<https://arxiv.org/abs/1412.6572>`_.
 Args:
 network (Cell): Target model.
 eps (float): Proportion of single-step adversarial perturbation generated
 by the attack to data range. Default: 0.07.
-alpha (float): Proportion of single-step random perturbation to data range.
+alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
 Default: None.
 bounds (tuple): Upper and lower bounds of data, indicating the data range.
 In form of (clip_min, clip_max). Default: (0.0, 1.0).
 is_targeted (bool): If True, targeted attack. If False, untargeted
 attack. Default: False.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Examples:
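Because `eps` is documented as a proportion of the data range, the underlying sign step is easy to state exactly. A minimal NumPy sketch of the untargeted FGSM update, assuming the gradient of the loss with respect to the input is already available (illustrative, not the library internals):

.. code-block:: python

    import numpy as np

    def fgsm_step(x, grad, eps=0.07, bounds=(0.0, 1.0)):
        """One untargeted FGSM step: move along the sign of the input gradient."""
        clip_min, clip_max = bounds
        perturbation = eps * (clip_max - clip_min) * np.sign(grad)  # eps scales with the data range
        return np.clip(x + perturbation, clip_min, clip_max)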
@@ -338,7 +338,7 @@ class RandomFastGradientSignMethod(FastGradientSignMethod):
 to create adversarial noises.
 References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
+and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.
 Args:
 network (Cell): Target model.
@@ -350,7 +350,7 @@ class RandomFastGradientSignMethod(FastGradientSignMethod):
 In form of (clip_min, clip_max). Default: (0.0, 1.0).
 is_targeted (bool): True: targeted attack. False: untargeted attack.
 Default: False.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Raises:
@@ -391,17 +391,17 @@ class LeastLikelyClassMethod(FastGradientSignMethod):
 least-likely class to generate the adversarial examples.
 References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
+and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.
 Args:
 network (Cell): Target model.
 eps (float): Proportion of single-step adversarial perturbation generated
 by the attack to data range. Default: 0.07.
-alpha (float): Proportion of single-step random perturbation to data range.
+alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
 Default: None.
 bounds (tuple): Upper and lower bounds of data, indicating the data range.
 In form of (clip_min, clip_max). Default: (0.0, 1.0).
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Examples:
@@ -439,7 +439,7 @@ class RandomLeastLikelyClassMethod(FastGradientSignMethod):
 targets the least-likely class to generate the adversarial examples.
 References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
+and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.
 Args:
 network (Cell): Target model.
@@ -449,7 +449,7 @@ class RandomLeastLikelyClassMethod(FastGradientSignMethod):
 Default: 0.035.
 bounds (tuple): Upper and lower bounds of data, indicating the data range.
 In form of (clip_min, clip_max). Default: (0.0, 1.0).
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Raises:
@@ -115,7 +115,7 @@ class IterativeGradientMethod(Attack):
 bounds (tuple): Upper and lower bounds of data, indicating the data range.
 In form of (clip_min, clip_max). Default: (0.0, 1.0).
 nb_iter (int): Number of iteration. Default: 5.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 """
 def __init__(self, network, eps=0.3, eps_iter=0.1, bounds=(0.0, 1.0), nb_iter=5,
@@ -162,7 +162,7 @@ class BasicIterativeMethod(IterativeGradientMethod):
 adversarial examples.
 References: `A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples
-in the physical world," in ICLR, 2017 <https://arxiv.org/abs/1607.02533>`_
+in the physical world," in ICLR, 2017 <https://arxiv.org/abs/1607.02533>`_.
 Args:
 network (Cell): Target model.
@@ -175,7 +175,7 @@ class BasicIterativeMethod(IterativeGradientMethod):
 is_targeted (bool): If True, targeted attack. If False, untargeted
 attack. Default: False.
 nb_iter (int): Number of iteration. Default: 5.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Examples:
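For readers new to the method, a generic NumPy sketch of iterative FGSM, the technique BIM names. `grad_fn` stands for a user-supplied gradient of the loss with respect to the input, and the per-step clipping to an `eps` neighborhood follows the cited paper rather than the library's exact code:

.. code-block:: python

    import numpy as np

    def bim(x, label, grad_fn, eps=0.3, eps_iter=0.1, nb_iter=5, bounds=(0.0, 1.0)):
        """Repeat small FGSM steps while staying inside an eps-ball around the original input."""
        clip_min, clip_max = bounds
        scale = clip_max - clip_min
        x_adv = np.array(x, dtype=np.float32)
        for _ in range(nb_iter):
            step = eps_iter * scale * np.sign(grad_fn(x_adv, label))
            x_adv = np.clip(x_adv + step, x - eps * scale, x + eps * scale)  # stay near the original sample
            x_adv = np.clip(x_adv, clip_min, clip_max)                       # stay in the valid data range
        return x_adv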
@@ -263,7 +263,7 @@ class MomentumIterativeMethod(IterativeGradientMethod):
 References: `Y. Dong, et al., "Boosting adversarial attacks with
-momentum," arXiv:1710.06081, 2017 <https://arxiv.org/abs/1710.06081>`_
+momentum," arXiv:1710.06081, 2017 <https://arxiv.org/abs/1710.06081>`_.
 Args:
 network (Cell): Target model.
@@ -277,9 +277,9 @@ class MomentumIterativeMethod(IterativeGradientMethod):
 attack. Default: False.
 nb_iter (int): Number of iteration. Default: 5.
 decay_factor (float): Decay factor in iterations. Default: 1.0.
-norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
 np.inf, 1 or 2. Default: 'inf'.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Examples:
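The role of `decay_factor` is easiest to see in the update rule from Dong et al.; a hedged NumPy sketch of one momentum step, illustrative of the cited method rather than the library's internal code:

.. code-block:: python

    import numpy as np

    def momentum_step(x_adv, grad, velocity, eps_iter=0.1, decay_factor=1.0, bounds=(0.0, 1.0)):
        """Accumulate l1-normalized gradients, then move by the sign of the accumulator."""
        clip_min, clip_max = bounds
        velocity = decay_factor * velocity + grad / (np.sum(np.abs(grad)) + 1e-12)
        x_adv = np.clip(x_adv + eps_iter * (clip_max - clip_min) * np.sign(velocity),
                        clip_min, clip_max)
        return x_adv, velocity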
@@ -407,7 +407,7 @@ class ProjectedGradientDescent(BasicIterativeMethod):
 the attack proposed by Madry et al. for adversarial training.
 References: `A. Madry, et al., "Towards deep learning models resistant to
-adversarial attacks," in ICLR, 2018 <https://arxiv.org/abs/1706.06083>`_
+adversarial attacks," in ICLR, 2018 <https://arxiv.org/abs/1706.06083>`_.
 Args:
 network (Cell): Target model.
@@ -420,9 +420,9 @@ class ProjectedGradientDescent(BasicIterativeMethod):
 is_targeted (bool): If True, targeted attack. If False, untargeted
 attack. Default: False.
 nb_iter (int): Number of iteration. Default: 5.
-norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
 np.inf, 1 or 2. Default: 'inf'.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Examples:
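`norm_level` decides which ball the perturbation is projected back onto; a small NumPy sketch of that projection for the values listed above (a generic illustration, not the library's implementation):

.. code-block:: python

    import numpy as np

    def project(delta, eps, norm_level='inf'):
        """Project a perturbation delta back onto the eps-ball of the chosen norm."""
        if norm_level in ('inf', np.inf):
            return np.clip(delta, -eps, eps)
        norm = np.linalg.norm(delta.ravel(), ord=norm_level)  # norm_level is 1 or 2 here
        return delta if norm <= eps else delta * (eps / norm)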
@@ -503,7 +503,7 @@ class DiverseInputIterativeMethod(BasicIterativeMethod):
 on the input data could improve the transferability of the adversarial examples.
 References: `Xie, Cihang and Zhang, et al., "Improving Transferability of
-Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_
+Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_.
 Args:
 network (Cell): Target model.
@@ -514,7 +514,7 @@ class DiverseInputIterativeMethod(BasicIterativeMethod):
 is_targeted (bool): If True, targeted attack. If False, untargeted
 attack. Default: False.
 prob (float): Transformation probability. Default: 0.5.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Examples:
@@ -558,7 +558,7 @@ class MomentumDiverseInputIterativeMethod(MomentumIterativeMethod):
 References: `Xie, Cihang and Zhang, et al., "Improving Transferability of
-Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_
+Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_.
 Args:
 network (Cell): Target model.
@@ -568,10 +568,10 @@ class MomentumDiverseInputIterativeMethod(MomentumIterativeMethod):
 In form of (clip_min, clip_max). Default: (0.0, 1.0).
 is_targeted (bool): If True, targeted attack. If False, untargeted
 attack. Default: False.
-norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
 np.inf, 1 or 2. Default: 'l1'.
 prob (float): Transformation probability. Default: 0.5.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
 is already equipped with loss function. Default: None.
 Examples:
@@ -32,7 +32,7 @@ class AdversarialDefense(Defense):
 Args:
 network (Cell): A MindSpore network to be defensed.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
 optimizer (Cell): Optimizer used to train the network. Default: None.
 Examples:
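A hedged construction sketch for the defense class above; the optimizer and loss choices are illustrative, and the import path is assumed from the module name in this diff:

.. code-block:: python

    import mindspore.nn as nn
    from mindarmour.adv_robustness.defenses import AdversarialDefense  # assumed import path

    def build_defense(network):
        """Hedged sketch: wrap a network with a loss and an optimizer for defensive training."""
        loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
        optimizer = nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9)
        return AdversarialDefense(network, loss_fn=loss_fn, optimizer=optimizer)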
@@ -105,7 +105,7 @@ class AdversarialDefenseWithAttacks(AdversarialDefense):
 Args:
 network (Cell): A MindSpore network to be defensed.
 attacks (list[Attack]): List of attack method.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
 optimizer (Cell): Optimizer used to train the network. Default: None.
 bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
 clip_max). Default: (0.0, 1.0).
@@ -204,7 +204,7 @@ class EnsembleAdversarialDefense(AdversarialDefenseWithAttacks):
 Args:
 network (Cell): A MindSpore network to be defensed.
 attacks (list[Attack]): List of attack method.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
 optimizer (Cell): Optimizer used to train the network. Default: None.
 bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
 clip_max). Default: (0.0, 1.0).
@@ -23,11 +23,11 @@ class NaturalAdversarialDefense(AdversarialDefenseWithAttacks):
 Adversarial training based on FGSM.
 Reference: `A. Kurakin, et al., "Adversarial machine learning at scale," in
-ICLR, 2017. <https://arxiv.org/abs/1611.01236>`_
+ICLR, 2017. <https://arxiv.org/abs/1611.01236>`_.
 Args:
 network (Cell): A MindSpore network to be defensed.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
 optimizer (Cell): Optimizer used to train the network. Default: None.
 bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
 clip_max). Default: (0.0, 1.0).
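To connect this class to the training loop it is meant for, a hedged sketch of one epoch of FGSM-based adversarial training; the per-batch `defense` entry point and the dataset iteration style are assumptions based on this documentation and common MindSpore usage:

.. code-block:: python

    import mindspore.nn as nn
    from mindarmour.adv_robustness.defenses import NaturalAdversarialDefense  # assumed import path

    def adversarial_train_one_epoch(network, dataset):
        """Hedged sketch: train on batches where part of the samples are replaced by FGSM examples."""
        defense = NaturalAdversarialDefense(
            network,
            loss_fn=nn.SoftmaxCrossEntropyWithLogits(sparse=True),
            optimizer=nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9),
            bounds=(0.0, 1.0))
        for images, labels in dataset.create_tuple_iterator(output_numpy=True):
            defense.defense(images, labels)  # assumed per-batch training entry point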
@@ -23,11 +23,11 @@ class ProjectedAdversarialDefense(AdversarialDefenseWithAttacks):
 Adversarial training based on PGD.
 Reference: `A. Madry, et al., "Towards deep learning models resistant to
-adversarial attacks," in ICLR, 2018. <https://arxiv.org/abs/1611.01236>`_
+adversarial attacks," in ICLR, 2018. <https://arxiv.org/abs/1611.01236>`_.
 Args:
 network (Cell): A MindSpore network to be defensed.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
 optimizer (Cell): Optimizer used to train the network. Default: None.
 bounds (tuple): Upper and lower bounds of input data. In form of
 (clip_min, clip_max). Default: (0.0, 1.0).
@@ -103,7 +103,7 @@ class MembershipInference:
 References: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov.
 Membership Inference Attacks against Machine Learning Models. 2017.
-<https://arxiv.org/abs/1610.05820v2>`_
+<https://arxiv.org/abs/1610.05820v2>`_.
 Args:
 model (Model): Target model.
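Since the class takes a `Model` rather than a bare `Cell`, a hedged sketch of how the target is typically wrapped before being handed to the attacker; the loss and metric choices are illustrative, and the import path is assumed:

.. code-block:: python

    import mindspore.nn as nn
    from mindspore import Model
    from mindarmour.privacy.evaluation import MembershipInference  # assumed import path

    def build_attacker(network):
        """Hedged sketch: wrap a trained Cell into a Model, then build the membership-inference attacker."""
        model = Model(network,
                      loss_fn=nn.SoftmaxCrossEntropyWithLogits(sparse=True),
                      metrics={'acc'})
        return MembershipInference(model)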