31 Commits (e497117b741624f21eb83539b956e55a1072c775)

Author SHA1 Message Date
  wangnan39@huawei.com 082433183d uniform learning_rate behavior of optimizers 5 years ago
  wangnan39@huawei.com 86889c59cb optimizer adapt IndexedSlices 5 years ago
  panyifeng 44e74ad5aa Apply indexed_slices 5 years ago
  panyifeng d6635bbbe2 Add IndexedSlices 5 years ago
  wangnan39@huawei.com 172728a6a6 support weight decay for sparse optimizer 5 years ago
  Ziyan 41ddc153a6 modify lars interface 5 years ago
  panyifeng 3c2057297e support multi param for tuple grad 5 years ago
  lilei 497067d7b2 add sparse proximal ada grad optimizer 5 years ago
  mindspore-ci-bot 2d84011504 !2071 optimizer support loss scale for sparse situation 5 years ago
  wangnan39@huawei.com d4e3d69f37 support loss scale for sparse situation 5 years ago
  kingfo 9708e58259 fix TupleToArray & Cast operator issue 5 years ago
  mindspore-ci-bot 3536185f5b !2007 add lazy adam optimizer and support sparse adam&ftrl for cpu backend 5 years ago
  wangnan39@huawei.com 4042f16ce4 add lazy adam optim and support sparse adam & ftrl for cpu backend 5 years ago
  mindspore-ci-bot f859dfecc8 !1920 SupportPynativeIndexing 5 years ago
  huangdongrun 9522f59b87 support for tensor indexing in pynative 5 years ago
  mindspore-ci-bot 9dfb1011fe !1854 add SparseApplyAdam and SparseApplyLazyAdam ops 5 years ago
  wangnan39@huawei.com de21dbdaef add ops SparseApplyAdam and SparseApplyLazyAdam 5 years ago
  mindspore-ci-bot 3b8edd5a5b !1918 sparse grad for gatherv2 5 years ago
  panyifeng acaa66a738 sparse grad for gatherv2 5 years ago
  lilei 36d9e353a5 add proximal_ada_grad optimizer 5 years ago
  wangnan39@huawei.com c9b7d95c2c fix lr check bug in AdamWeightDecayDynamicLR 5 years ago
  wangnan39@huawei.com 810ccf80d8 fix_bug_in_check_lamb_warmup_step 5 years ago
  liuxiao ebbccca78b pylint clean 5 years ago
  guohongzilong 2d2f9ba8fd fix group parameter code for check 5 years ago
  jinyaohui 5a914994ba clean pylint 5 years ago
  jinyaohui 26fd75895d pylint warning clean 5 years ago
  guohongzilong 824bc30a94 learning rate and weight decay support group mode 5 years ago
  wangnan39@huawei.com 7f602016f4 add parameter verification for rmsprop, and modify default value in annotation 5 years ago
  root 7d700295f8 add dynamic lr and enhance optim 5 years ago
  Ziyan 4cbcd8e907 enable use float type learning rate in lars optimizer 5 years ago
  zhunaipan 930a1fb0a8 initial version 5 years ago