84 Commits (e497117b741624f21eb83539b956e55a1072c775)

Author SHA1 Message Date
  He Wei 1f6771256d Fix a bug for Parameter 5 years ago
  wangnan39@huawei.com 082433183d uniform learning_rate behavior of optimizers 5 years ago
  Wei Luning 88e864a4a3 remove loop can unroll flag, clean some python usage 5 years ago
  wangnan39@huawei.com 86889c59cb optimizer adapt IndexedSlices 5 years ago
  panyifeng 44e74ad5aa Apply indexed_slices 5 years ago
  leilei_snow 9b21420b3e update SSIM loss, add MSSSIM loss feature; add their ut testcases. 5 years ago
  peixu_ren bef1fc7f19 add sample functions in normal and Bernoulli distributions 5 years ago
  Xun Deng 0aa26c1815 add high level abstract class Distribution and two example class: 5 years ago
  panyifeng d6635bbbe2 Add IndexedSlices 5 years ago
  wangnan39@huawei.com 172728a6a6 support weight decay for sparse optimizer 5 years ago
  zhaozhenlong 71d33b087e limit ssim input img type to fp32 and fp16 5 years ago
  Ziyan 41ddc153a6 modify lars interface 5 years ago
  panyifeng 3c2057297e support multi param for tuple grad 5 years ago
  lilei 497067d7b2 add sparse proximal ada grad optimizer 5 years ago
  simson ca988e9e69 fix the condition when activation name is 0 5 years ago
  liuxiao aa73abc2f7 Add image.CentralCrop 5 years ago
  mindspore-ci-bot 2d84011504 !2071 optimizer support loss scale for sparse situation 5 years ago
  wangnan39@huawei.com d4e3d69f37 support loss scale for sparse situation 5 years ago
  kingfo 9708e58259 fix TupleToArray & Cast operator issue 5 years ago
  mindspore-ci-bot 3536185f5b !2007 add lazy adam optimizer and support sparse adam&ftrl for cpu backend 5 years ago
  wangnan39@huawei.com 4042f16ce4 add lazy adam optim and support sparse adam & ftrl for cpu backend 5 years ago
  mindspore-ci-bot 5c7cb7bd71 !2023 add op CosineEmbeddingLoss 5 years ago
  zhaozhenlong 19c5921c06 composed op CosineEmbeddingLoss 5 years ago
  mindspore-ci-bot f859dfecc8 !1920 SupportPynativeIndexing 5 years ago
  huangdongrun 9522f59b87 support for tensor indexing in pynative 5 years ago
  mindspore-ci-bot 9dfb1011fe !1854 add SparseApplyAdam and SparseApplyLazyAdam ops 5 years ago
  wangnan39@huawei.com de21dbdaef add ops SparseApplyAdam and SparseApplyLazyAdam 5 years ago
  mindspore-ci-bot 3b8edd5a5b !1918 sparse grad for gatherv2 5 years ago
  panyifeng acaa66a738 sparse grad for gatherv2 5 years ago
  lilei 36d9e353a5 add proximal_ada_grad optimizer 5 years ago
  mindspore-ci-bot c82a8bf483 !1678 modify print 5 years ago
  jinyaohui 5e43edc474 clean pylint 5 years ago
  wangnan39@huawei.com c9b7d95c2c fix lr check bug in AdamWeightDecayDynamicLR 5 years ago
  chenhaozhe 435fc12e28 optimize clip_norm 5 years ago
  jinyaohui 86d197dfeb clean pylint 5 years ago
  mindspore-ci-bot 19ce0c372a !1257 Implicit type conversion 5 years ago
  wangnan39@huawei.com 810ccf80d8 fix_bug_in_check_lamb_warmup_step 5 years ago
  candanzg 2429da19fb implicit type conversion 5 years ago
  chenhaozhe b6aceddeab update bert scripts according to rules of modelzoo 5 years ago
  liuxiao ebbccca78b pylint clean 5 years ago
  liuxiao f4542f810b pylint clean 5 years ago
  liuxiao 382a0124c3 pylint clean 5 years ago
  mindspore-ci-bot 62c716b68e !1349 Fix some functions in group parameters and optimizer 5 years ago
  jinyaohui fbdba6e4da clean pylint 5 years ago
  guohongzilong 2d2f9ba8fd fix group parameter code for check 5 years ago
  jinyaohui 5a914994ba clean pylint 5 years ago
  jiangjinsheng e45532b78c fixed transpose 5 years ago
  candanzg 2cc85bdc93 Support weight compile according to shape 5 years ago
  jinyaohui 26fd75895d pylint waring clean 5 years ago
  mindspore-ci-bot 66667d727e !1034 Gpu Support Dropout operator 5 years ago