[GitHub] sxjscience commented on issue #7942: Adam optimizer consistent with paper
sxjscience commented on issue #7942: Adam optimizer consistent with paper
URL: https://github.com/apache/incubator-mxnet/pull/7942#issuecomment-331819232

@formath I see. I've checked different versions of the Adam paper again and found the rho parameter in the v2 and v3 versions: https://arxiv.org/pdf/1412.6980v2.pdf, https://arxiv.org/pdf/1412.6980v3.pdf. However, it has been removed in the latest arXiv version (v9): https://arxiv.org/pdf/1412.6980v9.pdf. Do other packages support this parameter? @piiswrong, do you think we still need to add it? Another option is to add the FTML optimizer, which should work better than Adam in this scenario.
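For reference, here is a minimal NumPy sketch of the FTML update as I read it from the Zheng & Kwok ICML 2017 paper (my own illustration with toy hyper-parameters, not MXNet's implementation):

```python
# Sketch of one FTML (Follow The Moving Leader) step, following Zheng & Kwok,
# "Follow the Moving Leader in Deep Learning", ICML 2017. Names (theta, v, z, d)
# follow the paper; hyper-parameters below are toy values for illustration only.
import numpy as np

def ftml_step(theta, grad, state, t, lr=0.05, beta1=0.6, beta2=0.999, eps=1e-8):
    """One FTML update; `state` carries the running quantities (v, z, d_prev)."""
    v, z, d_prev = state
    v = beta2 * v + (1.0 - beta2) * grad ** 2                      # second-moment estimate
    d = (1.0 - beta1 ** t) / lr * (np.sqrt(v / (1.0 - beta2 ** t)) + eps)
    sigma = d - beta1 * d_prev
    z = beta1 * z + (1.0 - beta1) * grad - sigma * theta           # theta here is theta_{t-1}
    theta = -z / d                                                 # follow the moving leader
    return theta, (v, z, d)

# Toy usage: minimize f(x) = x^2 starting from x = 5.
theta = np.array(5.0)
state = (np.zeros_like(theta), np.zeros_like(theta), np.zeros_like(theta))
for t in range(1, 501):
    theta, state = ftml_step(theta, 2.0 * theta, state, t)
print(theta)  # should end up close to the minimum at 0
```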
[GitHub] sxjscience commented on issue #7942: Adam optimizer consistent with paper
sxjscience commented on issue #7942: Adam optimizer consistent with paper
URL: https://github.com/apache/incubator-mxnet/pull/7942#issuecomment-331651717

@formath I feel that setting `rho` to a value smaller than 1 gradually transforms the estimated gradient from a biased estimate into an unbiased one, which may be helpful in scenarios where the data distribution is changing (as in the online learning setting) [1]. However, I've checked the Adam paper and haven't found the rho hyper-parameter. Could you point out the relevant section in the paper? We also need to document its usage better.

[1] Shuai Zheng and James T. Kwok. Follow the Moving Leader in Deep Learning. ICML 2017.
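To make the intended effect concrete, here is a toy NumPy sketch (my own illustration, not the code in this PR) of an Adam step where an extra factor `rho` decays beta1 over time, so the first-moment estimate gradually shifts from a momentum-smoothed (biased) estimate toward the raw (unbiased) gradient:

```python
# Hypothetical sketch, not the PR's code: Adam with beta1 decayed by rho each step,
# i.e. beta1_t = beta1 * rho**(t-1), as in the decay schedule of the earlier arXiv
# versions of the Adam paper. As t grows, beta1_t -> 0 and the first-moment
# estimate m approaches the raw gradient.
import numpy as np

def adam_rho_step(theta, grad, m, v, beta1_prod, t, lr=1e-3,
                  beta1=0.9, beta2=0.999, rho=0.999, eps=1e-8):
    beta1_t = beta1 * rho ** (t - 1)           # decayed first-moment coefficient
    m = beta1_t * m + (1.0 - beta1_t) * grad   # first-moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2  # second-moment estimate
    beta1_prod *= beta1_t                      # running product of decayed coefficients
    m_hat = m / (1.0 - beta1_prod)             # bias correction for the first moment
    v_hat = v / (1.0 - beta2 ** t)             # standard bias correction for v
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v, beta1_prod

# Initialize with m = 0, v = 0, beta1_prod = 1.0 and call once per step t = 1, 2, ...
# With rho = 1 this reduces to the standard Adam update.
```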