sxjscience commented on issue #7942: Adam optimizer consistent with paper
URL: https://github.com/apache/incubator-mxnet/pull/7942#issuecomment-331651717
 
 
   @formath I feel that `rho` has the effect of gradually transforming the 
estimated gradient from a biased estimate into an unbiased one, which may be 
advantageous in online learning settings where the data distribution changes 
over time [1]. However, I've checked the Adam paper and haven't found a `rho` 
hyper-parameter. Could you point out the relevant section in the paper? Also, 
we should document its usage more thoroughly.
   
   [1] Shuai Zheng and James T. Kwok. Follow the Moving Leader in Deep 
Learning. ICML 2017.
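   
   For concreteness, here is a minimal pure-Python sketch of what such a 
biased-to-unbiased transition could look like. Only the `beta1` EMA and the 
`1 - beta1**t` correction come from the Adam paper; the `rho`-weighted blend 
is my own assumption about the intended behavior, not code from this PR, 
FTML, or the Adam paper:
   
   ```python
   def moment_estimates(grads, beta1=0.9, rho=0.95):
       """Track Adam's first moment m_t and its bias-corrected version.

       The beta1 EMA and the 1 - beta1**t correction follow the Adam
       paper; the rho-weighted blend is a HYPOTHETICAL illustration of
       an estimate that starts biased and becomes unbiased over time.
       """
       m = 0.0
       for t, g in enumerate(grads, start=1):
           m = beta1 * m + (1.0 - beta1) * g    # biased EMA of the gradient
           m_hat = m / (1.0 - beta1 ** t)       # Adam's bias-corrected estimate
           w = rho ** t                         # blend weight, decays to 0
           yield t, m, m_hat, w * m + (1.0 - w) * m_hat

   # With a constant gradient of 1.0, m_hat equals 1.0 from the first step,
   # while the raw EMA m only approaches 1.0; the blend moves from near m
   # toward m_hat as rho**t vanishes.
   for t, m, m_hat, blended in moment_estimates([1.0] * 5):
       print(t, round(m, 4), round(m_hat, 4), round(blended, 4))
   ```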
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
