This is an automated email from the ASF dual-hosted git repository.

cjolivier01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
     new 81dc02e  add missed optimizer docs (#9779)
81dc02e is described below

commit 81dc02e517e0cd2be3148237e5bbb0482dc1f09f
Author: Sheng Zha <s...@users.noreply.github.com>
AuthorDate: Tue Feb 13 08:01:14 2018 -0800

    add missed optimizer docs (#9779)
---
 docs/api/python/optimization/optimization.md | 6 ++++++
 python/mxnet/optimizer.py                    | 4 ++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/docs/api/python/optimization/optimization.md b/docs/api/python/optimization/optimization.md
index e333b00..7d6276d 100644
--- a/docs/api/python/optimization/optimization.md
+++ b/docs/api/python/optimization/optimization.md
@@ -96,8 +96,14 @@ implements one weight updating function.
     Adam
     AdaGrad
     AdaDelta
+    Adamax
+    Nadam
     DCASGD
     SGLD
+    Signum
+    FTML
+    LBSGD
+    Ftrl
 ```
 
 ## The ``mxnet.lr_scheduler`` package
diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
index 0652772..065c08c 100644
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -68,8 +68,8 @@ class Optimizer(object):
        Flag to control the internal precision of the optimizer.
        ``False`` results in using the same precision as the weights (default),
        ``True`` makes internal 32-bit copy of the weights and applies gradients
-                in 32-bit precision even if actual weights used in the model have lower precision.
-                Turning this on can improve convergence and accuracy when training with float16.
+       in 32-bit precision even if actual weights used in the model have lower precision.
+       Turning this on can improve convergence and accuracy when training with float16.
 
     Properties
     ----------

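The docstring fixed above describes the `multi_precision` flag: a float32 master copy of the float16 weights is kept, and gradients are applied in 32-bit precision. A minimal numpy sketch of why that helps (the `sgd_step` helper is hypothetical, for illustration only, not MXNet's actual update kernel):

```python
import numpy as np

def sgd_step(weight16, grad16, lr, master32=None):
    """One SGD step on a float16 weight; optionally via a float32 master copy."""
    if master32 is None:
        # Plain float16 update: a tiny step can be rounded away entirely,
        # because the spacing between float16 values near 1.0 is ~1e-3.
        return (weight16 - np.float16(lr) * grad16).astype(np.float16), None
    # multi_precision-style update: accumulate in float32,
    # hand a float16 copy back to the model.
    master32 = master32 - np.float32(lr) * grad16.astype(np.float32)
    return master32.astype(np.float16), master32

w_low = np.float16(1.0)                          # float16-only weight
w_hi, master = np.float16(1.0), np.float32(1.0)  # weight + float32 master copy
g = np.float16(1e-4)
for _ in range(100):
    w_low, _ = sgd_step(w_low, g, 0.1)
    w_hi, master = sgd_step(w_hi, g, 0.1, master)
# w_low is stuck at 1.0 (every update underflowed the float16 spacing),
# while w_hi has moved, thanks to the float32 master copy.
```

With real MXNet (assuming it is installed), the same effect is requested via the documented flag, e.g. `mx.optimizer.SGD(learning_rate=0.1, multi_precision=True)`.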
-- 
To stop receiving notification emails like this one, please contact
cjolivie...@apache.org.
