ChaiBapchya opened a new pull request #17444: [Large Tensor] Add LT support for NN optimizers and 1 activation function
URL: https://github.com/apache/incubator-mxnet/pull/17444

## Description ##
Add large tensor support to the following optimizers and one activation function:
- hard_sigmoid
- adam_update
- ftml_update
- mp_sgd_mom_update
- mp_sgd_update
- rmsprop_update
- rmspropalex_update
- sgd_mom_update
- sgd_update
- signsgd_update
- signum_update
- nagmom
- mp_nagmom
- lamb
- mp_lamb
- ftrl
- adagrad

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage
- [ ] Code is well-documented
- [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###
- [ ] modified: src/operator/optimizer_op-inl.h
- [ ] modified: src/operator/tensor/elemwise_unary_op.h

## Comments ##
Tested hard_sigmoid with a large tensor input: pass
```
>>> import mxnet as mx
>>> mx.nd.hard_sigmoid(data=mx.nd.random_normal(shape=(1, 2**32 + 1)))

[[0.9424413 0.6548008 0.7086881 ... 0.53579605 0.37985992 0.20645571]]
<NDArray 1x4294967297 @cpu(0)>
```
The remaining *_update functions could not be verified numerically with random_normal inputs, since they return NaNs even for shapes smaller than 2**32; hence they were not tested this way. They do, however, no longer produce the segmentation fault that previously occurred due to the lack of large tensor support.
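As a possible workaround for the NaN issue, the updaters could in principle be exercised with constant inputs instead of random_normal. The following is a minimal sketch for sgd_update only; the constant-ones inputs, the chosen lr value, and the assumption of a large-memory host with an int64-tensor-enabled MXNet build are illustrative, not part of this PR:
```
>>> import mxnet as mx
>>> # Hypothetical check: constant inputs avoid the NaNs seen with random_normal.
>>> # Each tensor of shape (1, 2**32 + 1) is ~17 GB of float32, so this needs a
>>> # large-memory host and MXNet built with large tensor support.
>>> shape = (1, 2**32 + 1)
>>> weight = mx.nd.ones(shape)
>>> grad = mx.nd.ones(shape)
>>> out = mx.nd.sgd_update(weight, grad, lr=0.1)
>>> out.shape
```
A similar pattern could be repeated for the other *_update operators listed above, adding state tensors (e.g. momentum) where the operator requires them.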
