[GitHub] szha commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
szha commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345617102
 
 
   For the text part, you're using the end-of-sentence token as the padding, and 
are unrolling all the way even over the padding. Usually you would want to 1) use a 
separate token for padding, and 2) minimize the impact of padding by including 
as little of it as possible. Long runs of padding like these can hurt your text 
model's performance.
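
   As an illustration of those two points, here is a minimal sketch (mine, not from 
the thread): a dedicated padding id distinct from <eos>, plus length-sorted 
bucketing so each batch carries as little padding as possible. The `PAD` id and 
`pad_batch` helper are made-up names.

```python
import numpy as np

PAD = 0  # hypothetical id reserved for a dedicated <pad> token, distinct from <eos>

def pad_batch(token_id_seqs):
    """Pad variable-length id sequences only up to the longest sequence in the batch."""
    max_len = max(len(s) for s in token_id_seqs)
    batch = np.full((len(token_id_seqs), max_len), PAD, dtype='int32')
    for i, seq in enumerate(token_id_seqs):
        batch[i, :len(seq)] = seq
    return batch

# Sorting (or bucketing) by length keeps similarly sized sequences together,
# so unrolling never runs far past the real tokens.
seqs = sorted([[5, 8, 2], [7, 2], [9, 4, 6, 3, 2]], key=len)
print(pad_batch(seqs[:2]))  # short bucket: almost no padding
print(pad_batch(seqs[2:]))  # long bucket: no padding at all here
```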




[GitHub] szha commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
szha commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345616379
 
 
   From the screenshot it looks like the indexing of the text is correct. Do 
the questions match the input images?




[GitHub] szha commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
szha commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345615703
 
 
   I'm asking you to try to overfit a small dataset first as a sanity check. If 
your model can't overfit a small subset of the data, then your model is wrong. 
This is the first step.
   
   Unfortunately, I can't read the paper and reproduce this for you right now.
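
   For reference, the sanity check could look roughly like the generic sketch below 
(my own illustration, not code from this issue; `net` and the 32 random samples 
stand in for the real model and a tiny, fixed slice of the real data).

```python
import numpy as np
import mxnet as mx
from mxnet import autograd, gluon

# Stand-ins for the real model and a tiny, fixed subset of the data.
net = gluon.nn.Dense(10)
net.initialize()
small_X = mx.nd.random.uniform(shape=(32, 100))
small_y = mx.nd.array(np.random.randint(0, 10, size=(32,)))

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 1e-3})

for epoch in range(200):           # keep iterating on the same few samples
    with autograd.record():
        loss = loss_fn(net(small_X), small_y)
    loss.backward()
    trainer.step(small_X.shape[0])

# If the model is wired correctly, this should end up close to zero.
print(loss.mean().asscalar())
```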




[GitHub] liuzhi136 commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
liuzhi136 commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345613723
 
 
   I still wonder whether the way I pad the end character is correct.




[GitHub] liuzhi136 commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
liuzhi136 commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345611895
 
 
   I just examined the data batches I provide to the model. One of the batches is 
shown below, and the next image is its corresponding text version. I can assure you 
that my data pipeline is working correctly. The dataset I use to train the 
model is the same as the one used in this paper, and I just want to reproduce the 
experimental result to see whether it really works. I don't think it is overfitting, 
because the training loss and validation loss are both high; I think it's still 
underfitting. Could you read this paper to check whether my model structure is 
correct? It is not too complicated and shouldn't take you too much time, 
maybe about ten minutes at most.
   
   
![image](https://user-images.githubusercontent.com/13534043/33006930-47302874-ce07-11e7-95a0-f63d539f30b9.png)
   
   
![image](https://user-images.githubusercontent.com/13534043/33006123-19511100-ce04-11e7-9867-7e856d59255c.png)
   




[GitHub] liuzhi136 commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
liuzhi136 commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345611895
 
 
   I just examined the data batches I provide to the model. One of the batches is 
shown below, and the next image is its corresponding text version. I can assure you 
that my data pipeline is working correctly. The dataset I use to train the 
model is the same as the one used in this paper, and I just want to reproduce the 
experimental result to see whether it really works. I don't think it is overfitting, 
because the training loss and validation loss are both high; I think it's still 
underfitting. Could you read this paper to check whether my model structure is 
correct? It is not too complicated and shouldn't take you too much time, 
maybe about ten minutes at most.
   
   
![image](https://user-images.githubusercontent.com/13534043/33006782-a80a5e0e-ce06-11e7-8af3-e6cf6934d987.png)
   
   
![image](https://user-images.githubusercontent.com/13534043/33006123-19511100-ce04-11e7-9867-7e856d59255c.png)
   




[GitHub] MDR-EX1000 closed issue #8357: USE_CUDNN=1

2017-11-19 Thread GitBox
MDR-EX1000 closed issue #8357:  USE_CUDNN=1
URL: https://github.com/apache/incubator-mxnet/issues/8357
 
 
   




[GitHub] MDR-EX1000 closed issue #8188: Periodic Loss Value when training with "step" learning rate policy

2017-11-19 Thread GitBox
MDR-EX1000 closed issue #8188: Periodic Loss Value when training with "step" 
learning rate policy
URL: https://github.com/apache/incubator-mxnet/issues/8188
 
 
   




[GitHub] szha commented on issue #8689: Mark tests that should only be run nightly.

2017-11-19 Thread GitBox
szha commented on issue #8689: Mark tests that should only be run nightly.
URL: https://github.com/apache/incubator-mxnet/pull/8689#issuecomment-345609106
 
 
   @KellenSunderland please rebase and resolve the conflict.




[GitHub] anirudh2290 commented on issue #8721: fix custom op for backward compatibility

2017-11-19 Thread GitBox
anirudh2290 commented on issue #8721: fix custom op for backward compatibility
URL: https://github.com/apache/incubator-mxnet/pull/8721#issuecomment-345591574
 
 
   Thank you for fixing this and adding the tests! Yes, the output stypes, which 
are the in_grads, depend on the number of arguments and not the number of outputs. 
Can you please let me know which LR example you are referring to?




[GitHub] ZiyueHuang commented on issue #8721: fix custom op for backward compatibility

2017-11-19 Thread GitBox
ZiyueHuang commented on issue #8721: fix custom op for backward compatibility
URL: https://github.com/apache/incubator-mxnet/pull/8721#issuecomment-345605234
 
 
   @anirudh2290 Thanks for your comments. The LR example is 
`linear_classification.py` in example/sparse.




[GitHub] szha commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
szha commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345602238
 
 
   I'm not familiar with this model, so I'm trying to help by offering the 
perspective of how I would approach it.
   
   Have you tried anything to find which parts of the whole pipeline might have 
problems? For example, are you sure your data pipeline is working correctly? 
Have you examined samples from your iterator to see whether they look correct? If 
you use just a small subset as the training set, does your model overfit to it?
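
   As a concrete illustration of the "examine samples from your iterator" step, here 
is a sketch with toy stand-in data (not your pipeline); in the real setup the arrays 
would come from the actual question/answer data.

```python
import numpy as np
import mxnet as mx

# Toy stand-ins for the real question ids and answer labels.
questions = np.random.randint(1, 100, size=(8, 20))   # token ids
answers = np.random.randint(0, 10, size=(8,))
train_iter = mx.io.NDArrayIter(data=questions, label=answers, batch_size=4)

batch = next(iter(train_iter))
data, label = batch.data[0], batch.label[0]
print(data.shape, label.shape)        # do the shapes match what the model expects?
print(data[0].asnumpy())              # decode these ids back to words and eyeball them
print('label:', label[0].asscalar())  # does the answer belong to this question/image?
```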




[GitHub] szha commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
szha commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345602238
 
 
   I'm not familiar with this model, so I'm trying to help by offering the 
perspective of how I would approach it.
   
   Have you tried anything to find which parts of the whole pipeline might have 
problems? For example, are you sure your data pipeline is working correctly? 
Have you examined samples from your iterator to see whether they look correct? If 
you use just a small subset as the training set, does your model overfit to it?




[GitHub] dwSun commented on issue #8693: mx.image.ImageIter failed on lst files failed with " src/io/image_io.cc:168: Check failed: static_cast<void*>(dst.ptr()) == out->data().dptr"

2017-11-19 Thread GitBox
dwSun commented on issue #8693: mx.image.ImageIter failed on lst files failed 
with " src/io/image_io.cc:168: Check failed: static_cast<void*>(dst.ptr()) == 
out->data().dptr"
URL: 
https://github.com/apache/incubator-mxnet/issues/8693#issuecomment-345602144
 
 
   I am a beginner with MXNet and have no idea what I should do when faced with 
this kind of problem.
   According to the documentation of **mxnet.image.ImageIter**:
   >To load input images from .rec files, use path_imgrec parameter and to load 
from raw image files, use path_imglist and path_root parameters.
   
   I can load raw image files directly without rec files, which will save me 
some time when debugging my model.
   
   > the imdecoding is always performed on cpu at this time.
   
   Is this documented somewhere? I can't find it.
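
   For what it's worth, the raw-image route described in the quoted documentation 
looks roughly like the sketch below (placeholder paths; not a tested fix for the 
check failure above).

```python
import mxnet as mx

# Placeholders: an existing .lst file (index \t label \t relative-path per line)
# and the directory those relative paths are resolved against.
train_iter = mx.image.ImageIter(
    batch_size=32,
    data_shape=(3, 224, 224),
    path_imglist='train.lst',
    path_root='images/',
)
train_iter.reset()
batch = train_iter.next()
print(batch.data[0].shape, batch.label[0].shape)
```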
   




[GitHub] KeyKy commented on issue #8708: dataloader._batchify function is slow

2017-11-19 Thread GitBox
KeyKy commented on issue #8708: dataloader._batchify function is slow
URL: 
https://github.com/apache/incubator-mxnet/issues/8708#issuecomment-345599857
 
 
   Line #  Hits   Time    Per Hit  % Time  Line Contents
   37      118    5568    47.2     3.2     data = np.asarray(data)
   38      118    39818   337.4    23.0    return nd.array(data, dtype=data.dtype)
   
   In my project, I also find that converting a NumPy array to an NDArray is slow. 
To my knowledge, passing a Python NumPy array to C++ is zero-cost.
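
   A quick way to measure that conversion cost in isolation (my own sketch, not 
taken from the profile above):

```python
import time
import numpy as np
import mxnet as mx

data = [np.random.rand(3, 224, 224).astype('float32') for _ in range(64)]

start = time.time()
stacked = np.asarray(data)        # list of arrays -> one contiguous ndarray
t_np = time.time() - start

start = time.time()
nd_batch = mx.nd.array(stacked)   # copies host memory into an NDArray, so not free
t_nd = time.time() - start

print('np.asarray: %.4f s, nd.array: %.4f s' % (t_np, t_nd))
```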




[GitHub] KeyKy closed issue #8538: Can i fix bias to 0 and gbias also 0 in BatchNorm?

2017-11-19 Thread GitBox
KeyKy closed issue #8538: Can i fix bias to 0 and gbias also 0 in BatchNorm?
URL: https://github.com/apache/incubator-mxnet/issues/8538
 
 
   




[GitHub] szha commented on a change in pull request #8704: Initial Prep for 1.0: bump up version and add 0.12.1 changes to master

2017-11-19 Thread GitBox
szha commented on a change in pull request #8704: Initial Prep for 1.0: bump up 
version and add 0.12.1 changes to master
URL: https://github.com/apache/incubator-mxnet/pull/8704#discussion_r151907672
 
 

 ##
 File path: docs/build_version_doc/build_all_version.sh
 ##
 @@ -21,7 +21,7 @@
 # Built files are stored in $built
 # Version numbers are stored in $tag_list.
 # Version numbers are ordered from latest to old and final one is master.
-tag_list="0.12.0 0.11.0 master"
+tag_list="1.0.0 0.12.0 0.11.0 master"
 
 Review comment:
   1.0.0 is not available yet. 0.12.1 should be added instead.




[GitHub] szha commented on a change in pull request #8704: Initial Prep for 1.0: bump up version and add 0.12.1 changes to master

2017-11-19 Thread GitBox
szha commented on a change in pull request #8704: Initial Prep for 1.0: bump up 
version and add 0.12.1 changes to master
URL: https://github.com/apache/incubator-mxnet/pull/8704#discussion_r151907710
 
 

 ##
 File path: setup-utils/install-mxnet-osx-python.sh
 ##
 @@ -33,7 +33,7 @@ then
# TODO: Change this to latest tag
#   to avoid updating this value for every release
#
-   export MXNET_TAG="0.12.0"
+   export MXNET_TAG="1.0.0"
 
 Review comment:
   0.12.1




[GitHub] piiswrong closed pull request #8718: Doc src and fix

2017-11-19 Thread GitBox
piiswrong closed pull request #8718: Doc src and fix
URL: https://github.com/apache/incubator-mxnet/pull/8718
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/api/python/gluon/model_zoo.md 
b/docs/api/python/gluon/model_zoo.md
index 18d9ae2cb8..727aeb7fd8 100644
--- a/docs/api/python/gluon/model_zoo.md
+++ b/docs/api/python/gluon/model_zoo.md
@@ -13,6 +13,7 @@ This document lists the model APIs in Gluon:
 :nosignatures:
 
 mxnet.gluon.model_zoo
+mxnet.gluon.model_zoo.vision
 ```
 
 The `Gluon Model Zoo` API, defined in the `gluon.model_zoo` package, provides 
pre-defined
@@ -186,6 +187,8 @@ In the rest of this document, we list routines provided by 
the `gluon.model_zoo`
 
 ```eval_rst
 
+.. automodule:: mxnet.gluon.model_zoo
+
 .. automodule:: mxnet.gluon.model_zoo.vision
 :members:
 :imported-members:
diff --git a/docs/api/python/ndarray/ndarray.md 
b/docs/api/python/ndarray/ndarray.md
index 68f8333ada..59ca4a612e 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -559,13 +559,13 @@ The `ndarray` package provides several classes:
 .. autosummary::
 :nosignatures:
 
-mxnet.nd.random.uniform
-mxnet.nd.random.normal
-mxnet.nd.random.gamma
-mxnet.nd.random.exponential
-mxnet.nd.random.poisson
-mxnet.nd.random.negative_binomial
-mxnet.nd.random.generalized_negative_binomial
+mxnet.ndarray.random.uniform
+mxnet.ndarray.random.normal
+mxnet.ndarray.random.gamma
+mxnet.ndarray.random.exponential
+mxnet.ndarray.random.poisson
+mxnet.ndarray.random.negative_binomial
+mxnet.ndarray.random.generalized_negative_binomial
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index 8d83086fde..e383597236 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -558,13 +558,13 @@ Composite multiple symbols into a new one by an operator.
 .. autosummary::
 :nosignatures:
 
-mxnet.sym.random.uniform
-mxnet.sym.random.normal
-mxnet.sym.random.gamma
-mxnet.sym.random.exponential
-mxnet.sym.random.poisson
-mxnet.sym.random.negative_binomial
-mxnet.sym.random.generalized_negative_binomial
+mxnet.symbol.random.uniform
+mxnet.symbol.random.normal
+mxnet.symbol.random.gamma
+mxnet.symbol.random.exponential
+mxnet.symbol.random.poisson
+mxnet.symbol.random.negative_binomial
+mxnet.symbol.random.generalized_negative_binomial
 mxnet.random.seed
 ```
 
diff --git a/docs/conf.py b/docs/conf.py
index ad51323f01..d018408d45 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -59,6 +59,7 @@
 'sphinx.ext.autosummary',
 'sphinx.ext.napoleon',
 'sphinx.ext.mathjax',
+'sphinx.ext.viewcode',
 'breathe',
 'mxdoc'
 ]
diff --git a/docs/mxdoc.py b/docs/mxdoc.py
index 26e4c9e265..caf135680d 100644
--- a/docs/mxdoc.py
+++ b/docs/mxdoc.py
@@ -62,8 +62,12 @@ def generate_doxygen(app):
 
 def build_mxnet(app):
 """Build mxnet .so lib"""
-_run_cmd("cd %s/.. && cp make/config.mk config.mk && make -j$(nproc) 
DEBUG=1" %
-app.builder.srcdir)
+if not os.path.exists(os.path.join(app.builder.srcdir, '..', 'config.mk')):
+_run_cmd("cd %s/.. && cp make/config.mk config.mk && make -j$(nproc) 
DEBUG=1" %
+app.builder.srcdir)
+else:
+_run_cmd("cd %s/.. && make -j$(nproc) DEBUG=1" %
+app.builder.srcdir)
 
 def build_r_docs(app):
 """build r pdf"""
diff --git a/python/mxnet/gluon/model_zoo/vision/__init__.py 
b/python/mxnet/gluon/model_zoo/vision/__init__.py
index a9a539bf20..619711e71d 100644
--- a/python/mxnet/gluon/model_zoo/vision/__init__.py
+++ b/python/mxnet/gluon/model_zoo/vision/__init__.py
@@ -30,6 +30,7 @@
 -  `MobileNet`_
 
 You can construct a model with random weights by calling its constructor:
+
 .. code::
 
 from mxnet.gluon.model_zoo import vision
@@ -39,8 +40,8 @@
 densenet = vision.densenet_161()
 
 We provide pre-trained models for all the models except ResNet V2.
-These can constructed by passing
-``pretrained=True``:
+These can constructed by passing ``pretrained=True``:
+
 .. code::
 
 from mxnet.gluon.model_zoo import vision


 




[incubator-mxnet] branch master updated: Doc src and fix (#8718)

2017-11-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new fd45517  Doc src and fix (#8718)
fd45517 is described below

commit fd45517614842bfa1d32d1ba54a200eb4a0dd377
Author: Sheng Zha 
AuthorDate: Sun Nov 19 21:57:15 2017 -0800

Doc src and fix (#8718)

* add viewcode

* fix formatting
---
 docs/api/python/gluon/model_zoo.md  |  3 +++
 docs/api/python/ndarray/ndarray.md  | 14 +++---
 docs/api/python/symbol/symbol.md| 14 +++---
 docs/conf.py|  1 +
 docs/mxdoc.py   |  8 ++--
 python/mxnet/gluon/model_zoo/vision/__init__.py |  5 +++--
 6 files changed, 27 insertions(+), 18 deletions(-)

diff --git a/docs/api/python/gluon/model_zoo.md 
b/docs/api/python/gluon/model_zoo.md
index a22b437..8310461 100644
--- a/docs/api/python/gluon/model_zoo.md
+++ b/docs/api/python/gluon/model_zoo.md
@@ -13,6 +13,7 @@ This document lists the model APIs in Gluon:
 :nosignatures:
 
 mxnet.gluon.model_zoo
+mxnet.gluon.model_zoo.vision
 ```
 
 The `Gluon Model Zoo` API, defined in the `gluon.model_zoo` package, provides 
pre-defined
@@ -182,6 +183,8 @@ In the rest of this document, we list routines provided by 
the `gluon.model_zoo`
 
 ```eval_rst
 
+.. automodule:: mxnet.gluon.model_zoo
+
 .. automodule:: mxnet.gluon.model_zoo.vision
 :members:
 :imported-members:
diff --git a/docs/api/python/ndarray/ndarray.md 
b/docs/api/python/ndarray/ndarray.md
index 68f8333..59ca4a6 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -559,13 +559,13 @@ The `ndarray` package provides several classes:
 .. autosummary::
 :nosignatures:
 
-mxnet.nd.random.uniform
-mxnet.nd.random.normal
-mxnet.nd.random.gamma
-mxnet.nd.random.exponential
-mxnet.nd.random.poisson
-mxnet.nd.random.negative_binomial
-mxnet.nd.random.generalized_negative_binomial
+mxnet.ndarray.random.uniform
+mxnet.ndarray.random.normal
+mxnet.ndarray.random.gamma
+mxnet.ndarray.random.exponential
+mxnet.ndarray.random.poisson
+mxnet.ndarray.random.negative_binomial
+mxnet.ndarray.random.generalized_negative_binomial
 mxnet.random.seed
 ```
 
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index 8d83086..e383597 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -558,13 +558,13 @@ Composite multiple symbols into a new one by an operator.
 .. autosummary::
 :nosignatures:
 
-mxnet.sym.random.uniform
-mxnet.sym.random.normal
-mxnet.sym.random.gamma
-mxnet.sym.random.exponential
-mxnet.sym.random.poisson
-mxnet.sym.random.negative_binomial
-mxnet.sym.random.generalized_negative_binomial
+mxnet.symbol.random.uniform
+mxnet.symbol.random.normal
+mxnet.symbol.random.gamma
+mxnet.symbol.random.exponential
+mxnet.symbol.random.poisson
+mxnet.symbol.random.negative_binomial
+mxnet.symbol.random.generalized_negative_binomial
 mxnet.random.seed
 ```
 
diff --git a/docs/conf.py b/docs/conf.py
index ad51323..d018408 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -59,6 +59,7 @@ extensions = [
 'sphinx.ext.autosummary',
 'sphinx.ext.napoleon',
 'sphinx.ext.mathjax',
+'sphinx.ext.viewcode',
 'breathe',
 'mxdoc'
 ]
diff --git a/docs/mxdoc.py b/docs/mxdoc.py
index 26e4c9e..caf1356 100644
--- a/docs/mxdoc.py
+++ b/docs/mxdoc.py
@@ -62,8 +62,12 @@ def generate_doxygen(app):
 
 def build_mxnet(app):
 """Build mxnet .so lib"""
-_run_cmd("cd %s/.. && cp make/config.mk config.mk && make -j$(nproc) 
DEBUG=1" %
-app.builder.srcdir)
+if not os.path.exists(os.path.join(app.builder.srcdir, '..', 'config.mk')):
+_run_cmd("cd %s/.. && cp make/config.mk config.mk && make -j$(nproc) 
DEBUG=1" %
+app.builder.srcdir)
+else:
+_run_cmd("cd %s/.. && make -j$(nproc) DEBUG=1" %
+app.builder.srcdir)
 
 def build_r_docs(app):
 """build r pdf"""
diff --git a/python/mxnet/gluon/model_zoo/vision/__init__.py 
b/python/mxnet/gluon/model_zoo/vision/__init__.py
index a9a539b..619711e 100644
--- a/python/mxnet/gluon/model_zoo/vision/__init__.py
+++ b/python/mxnet/gluon/model_zoo/vision/__init__.py
@@ -30,6 +30,7 @@ This module contains definitions for the following model 
architectures:
 -  `MobileNet`_
 
 You can construct a model with random weights by calling its constructor:
+
 .. code::
 
 from mxnet.gluon.model_zoo import vision
@@ -39,8 +40,8 @@ You can construct a model with random weights by calling its 
constructor:
 densenet = vision.densenet_161()
 
 We provide pre-trained models for 

[GitHub] piiswrong closed pull request #8710: Remove experimental warning on Gluon and add Gluon tutorials

2017-11-19 Thread GitBox
piiswrong closed pull request #8710: Remove experimental warning on Gluon and 
add Gluon tutorials
URL: https://github.com/apache/incubator-mxnet/pull/8710
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/api/python/autograd/autograd.md 
b/docs/api/python/autograd/autograd.md
index de8188446b..410d6a94e2 100644
--- a/docs/api/python/autograd/autograd.md
+++ b/docs/api/python/autograd/autograd.md
@@ -1,14 +1,9 @@
 # Autograd Package
 
-
 ```eval_rst
 .. currentmodule:: mxnet.autograd
 ```
 
-```eval_rst
-.. warning:: This package is currently experimental and may change in the near 
future.
-```
-
 ## Overview
 
 The `autograd` package enables automatic
diff --git a/docs/api/python/gluon/data.md b/docs/api/python/gluon/data.md
index f72f3cd03f..0b5f959e32 100644
--- a/docs/api/python/gluon/data.md
+++ b/docs/api/python/gluon/data.md
@@ -15,10 +15,6 @@ This document lists the data APIs in Gluon:
 The `Gluon Data` API, defined in the `gluon.data` package, provides useful 
dataset loading
 and processing tools, as well as common public datasets.
 
-```eval_rst
-.. warning:: This package contains experimental APIs and may change in the 
near future.
-```
-
 In the rest of this document, we list routines provided by the `gluon.data` 
package.
 
 ## Data
diff --git a/docs/api/python/gluon/gluon.md b/docs/api/python/gluon/gluon.md
index 0ef6dbed0e..2ae766fdcb 100644
--- a/docs/api/python/gluon/gluon.md
+++ b/docs/api/python/gluon/gluon.md
@@ -5,10 +5,6 @@
 .. currentmodule:: mxnet.gluon
 ```
 
-```eval_rst
-.. warning:: This package is currently experimental and may change in the near 
future.
-```
-
 
 
 ## Overview
diff --git a/docs/api/python/gluon/model_zoo.md 
b/docs/api/python/gluon/model_zoo.md
index 18d9ae2cb8..a22b437640 100644
--- a/docs/api/python/gluon/model_zoo.md
+++ b/docs/api/python/gluon/model_zoo.md
@@ -18,10 +18,6 @@ This document lists the model APIs in Gluon:
 The `Gluon Model Zoo` API, defined in the `gluon.model_zoo` package, provides 
pre-defined
 and pre-trained models to help bootstrap machine learning applications.
 
-```eval_rst
-.. warning:: This package contains experimental APIs and may change in the 
near future.
-```
-
 In the rest of this document, we list routines provided by the 
`gluon.model_zoo` package.
 
 ### Vision
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 40c381..6429dfb31b 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -2,13 +2,37 @@
 
 These tutorials introduce a few fundamental concepts in deep learning and how 
to implement them in _MXNet_. The _Basics_ section contains tutorials on 
manipulating arrays, building networks, loading/preprocessing data, etc. The 
_Training and Inference_ section talks about implementing Linear Regression, 
training a Handwritten digit classifier using MLP and CNN, running inferences 
using a pre-trained model, and lastly, efficiently training a large scale image 
classifier.
 
-```eval_rst
-.. Note:: We are working on a set of tutorials for the new imperative 
interface called Gluon. A preview version is hosted at http://gluon.mxnet.io.
-```
 
-## Python
+## Gluon
+
+Gluon is the high-level interface for MXNet. It is more intuitive and easier 
to use than the lower level interface.
+Gluon supports dynamic (define-by-run) graphs with JIT-compilation to achieve 
both flexibility and efficiency.
+This is a selected subset of Gluon tutorials. For the comprehensive tutorial 
on Gluon,
+please see [gluon.mxnet.io](http://gluon.mxnet.io).
+
+### Basics
+
+- [Manipulate data the MXNet way with 
ndarray](http://gluon.mxnet.io/chapter01_crashcourse/ndarray.html)
+- [Automatic differentiation with 
autograd](http://gluon.mxnet.io/chapter01_crashcourse/autograd.html)
+- [Linear regression with 
gluon](http://gluon.mxnet.io/chapter02_supervised-learning/linear-regression-gluon.html)
+- [Serialization - saving, loading and 
checkpointing](http://gluon.mxnet.io/chapter03_deep-neural-networks/serialization.html)
+
+### Neural Networks
+
+- [Multilayer perceptrons in 
gluon](http://gluon.mxnet.io/chapter03_deep-neural-networks/mlp-gluon.html)
+- [Convolutional Neural Networks in 
gluon](http://gluon.mxnet.io/chapter04_convolutional-neural-networks/cnn-gluon.html)
+- [Recurrent Neural Networks with 
gluon](http://gluon.mxnet.io/chapter05_recurrent-neural-networks/rnns-gluon.html)
+
+### Advanced
+
+- [Plumbing: A look under the hood of 
gluon](http://gluon.mxnet.io/chapter03_deep-neural-networks/plumbing.html)
+- [Designing a custom layer with 
gluon](http://gluon.mxnet.io/chapter03_deep-neural-networks/custom-layer.html)
+- [Fast, portable neural networks with Gluon 
HybridBlocks](http://gluon.mxnet.io/chapter07_distributed-learning/hybridize.html)
+- 

[incubator-mxnet] branch master updated: Remove experimental warning on Gluon and add Gluon tutorials (#8710)

2017-11-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new defd3c3  Remove experimental warning on Gluon and add Gluon tutorials 
(#8710)
defd3c3 is described below

commit defd3c396831aace17bdfea0fd8913a02d0d5d1c
Author: Eric Junyuan Xie 
AuthorDate: Sun Nov 19 21:55:59 2017 -0800

Remove experimental warning on Gluon and add Gluon tutorials (#8710)

* Remove experimental warning on Gluon and add Gluon tutorials

* fix

* fix

* Update autograd.md
---
 docs/api/python/autograd/autograd.md |  5 -
 docs/api/python/gluon/data.md|  4 
 docs/api/python/gluon/gluon.md   |  4 
 docs/api/python/gluon/model_zoo.md   |  4 
 docs/tutorials/index.md  | 34 +-
 5 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/docs/api/python/autograd/autograd.md 
b/docs/api/python/autograd/autograd.md
index de81884..410d6a9 100644
--- a/docs/api/python/autograd/autograd.md
+++ b/docs/api/python/autograd/autograd.md
@@ -1,14 +1,9 @@
 # Autograd Package
 
-
 ```eval_rst
 .. currentmodule:: mxnet.autograd
 ```
 
-```eval_rst
-.. warning:: This package is currently experimental and may change in the near 
future.
-```
-
 ## Overview
 
 The `autograd` package enables automatic
diff --git a/docs/api/python/gluon/data.md b/docs/api/python/gluon/data.md
index f72f3cd..0b5f959 100644
--- a/docs/api/python/gluon/data.md
+++ b/docs/api/python/gluon/data.md
@@ -15,10 +15,6 @@ This document lists the data APIs in Gluon:
 The `Gluon Data` API, defined in the `gluon.data` package, provides useful 
dataset loading
 and processing tools, as well as common public datasets.
 
-```eval_rst
-.. warning:: This package contains experimental APIs and may change in the 
near future.
-```
-
 In the rest of this document, we list routines provided by the `gluon.data` 
package.
 
 ## Data
diff --git a/docs/api/python/gluon/gluon.md b/docs/api/python/gluon/gluon.md
index 0ef6dbe..2ae766f 100644
--- a/docs/api/python/gluon/gluon.md
+++ b/docs/api/python/gluon/gluon.md
@@ -5,10 +5,6 @@
 .. currentmodule:: mxnet.gluon
 ```
 
-```eval_rst
-.. warning:: This package is currently experimental and may change in the near 
future.
-```
-
 
 
 ## Overview
diff --git a/docs/api/python/gluon/model_zoo.md 
b/docs/api/python/gluon/model_zoo.md
index 18d9ae2..a22b437 100644
--- a/docs/api/python/gluon/model_zoo.md
+++ b/docs/api/python/gluon/model_zoo.md
@@ -18,10 +18,6 @@ This document lists the model APIs in Gluon:
 The `Gluon Model Zoo` API, defined in the `gluon.model_zoo` package, provides 
pre-defined
 and pre-trained models to help bootstrap machine learning applications.
 
-```eval_rst
-.. warning:: This package contains experimental APIs and may change in the 
near future.
-```
-
 In the rest of this document, we list routines provided by the 
`gluon.model_zoo` package.
 
 ### Vision
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 40c..6429dfb 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -2,13 +2,37 @@
 
 These tutorials introduce a few fundamental concepts in deep learning and how 
to implement them in _MXNet_. The _Basics_ section contains tutorials on 
manipulating arrays, building networks, loading/preprocessing data, etc. The 
_Training and Inference_ section talks about implementing Linear Regression, 
training a Handwritten digit classifier using MLP and CNN, running inferences 
using a pre-trained model, and lastly, efficiently training a large scale image 
classifier.
 
-```eval_rst
-.. Note:: We are working on a set of tutorials for the new imperative 
interface called Gluon. A preview version is hosted at http://gluon.mxnet.io.
-```
 
-## Python
+## Gluon
+
+Gluon is the high-level interface for MXNet. It is more intuitive and easier 
to use than the lower level interface.
+Gluon supports dynamic (define-by-run) graphs with JIT-compilation to achieve 
both flexibility and efficiency.
+This is a selected subset of Gluon tutorials. For the comprehensive tutorial 
on Gluon,
+please see [gluon.mxnet.io](http://gluon.mxnet.io).
+
+### Basics
+
+- [Manipulate data the MXNet way with 
ndarray](http://gluon.mxnet.io/chapter01_crashcourse/ndarray.html)
+- [Automatic differentiation with 
autograd](http://gluon.mxnet.io/chapter01_crashcourse/autograd.html)
+- [Linear regression with 
gluon](http://gluon.mxnet.io/chapter02_supervised-learning/linear-regression-gluon.html)
+- [Serialization - saving, loading and 
checkpointing](http://gluon.mxnet.io/chapter03_deep-neural-networks/serialization.html)
+
+### Neural Networks
+
+- [Multilayer perceptrons in 
gluon](http://gluon.mxnet.io/chapter03_deep-neural-networks/mlp-gluon.html)
+- [Convolutional Neural Networks in 

[GitHub] liuzhi136 commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
liuzhi136 commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345598672
 
 
   @szha I do not understand the meaning of "isolate the problem". Can you 
explain it? And is it OK with you if I get your WeChat or email?




[incubator-mxnet] branch master updated: fix group2ctx with null reqs (#8717)

2017-11-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 7fc0396  fix group2ctx with null reqs (#8717)
7fc0396 is described below

commit 7fc039639b288f80fa7fe6482de1a25e04261e5e
Author: Haibin Lin 
AuthorDate: Sun Nov 19 21:55:31 2017 -0800

fix group2ctx with null reqs (#8717)
---
 src/executor/graph_executor.cc  | 18 +
 tests/python/unittest/test_multi_device_exec.py | 26 +
 2 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/src/executor/graph_executor.cc b/src/executor/graph_executor.cc
index ade8e83..01484da 100644
--- a/src/executor/graph_executor.cc
+++ b/src/executor/graph_executor.cc
@@ -321,6 +321,7 @@ Graph AssignContext(Graph g,
 const std::vector& in_arg_ctxes,
 const std::vector& arg_grad_ctxes,
 const std::vector& aux_state_ctxes,
+const std::vector& grad_req_types,
 size_t num_forward_inputs,
 size_t num_forward_outputs) {
   const auto& idx = g.indexed_graph();
@@ -385,9 +386,15 @@ Graph AssignContext(Graph g,
 
   // loop through backward input nodes and populate maps and lists
   // the backward input nodes is the gradient of the loss wrt the output
-  for (size_t i = num_forward_outputs; i < g.outputs.size(); ++i) {
+  size_t arg_grad_offset = 0;
+  // keep an offset into the arg_grad_ctxes vector,
+  // since g.outputs exclude arg_grad whose req == null
+  CHECK_GE(grad_req_types.size(), g.outputs.size() - num_forward_outputs)
+   << "insufficient number of grad_reqs";
+  for (size_t i = num_forward_outputs; i < g.outputs.size(); ++i, 
++arg_grad_offset) {
+while (grad_req_types[arg_grad_offset] == kNullOp) ++arg_grad_offset;
 const uint32_t nid = idx.outputs()[i].node_id;
-Context ctx = arg_grad_ctxes[i - num_forward_outputs];
+Context ctx = arg_grad_ctxes[arg_grad_offset];
 if (ctx2id.count(ctx) == 0) {
   ctx2id[ctx] = static_cast(ctx_list.size());
   ctx_list.push_back(ctx);
@@ -417,9 +424,11 @@ Graph AssignContext(Graph g,
   // if the assigned device of gradient node
   // corresponds to storage of grads
   auto _idx = g.indexed_graph();
-  for (size_t i = num_forward_outputs; i < g.outputs.size(); ++i) {
+  arg_grad_offset = 0;
+  for (size_t i = num_forward_outputs; i < g.outputs.size(); ++i, 
++arg_grad_offset) {
+while (grad_req_types[arg_grad_offset] == kNullOp) ++arg_grad_offset;
 const uint32_t nid = new_idx.outputs()[i].node_id;
-Context ctx = arg_grad_ctxes[i - num_forward_outputs];
+Context ctx = arg_grad_ctxes[arg_grad_offset];
 CHECK(ctx == vcontext[nid])
   << "Trying to save gradient to " << ctx
   << " while its source node \"" << new_idx[nid].source->attrs.name
@@ -1055,6 +1064,7 @@ Graph GraphExecutor::InitGraph(nnvm::Symbol symbol,
 in_arg_ctxes,
 arg_grad_ctxes,
 aux_state_ctxes,
+grad_req_types,
 num_forward_inputs_,
 num_forward_outputs_);
 
diff --git a/tests/python/unittest/test_multi_device_exec.py 
b/tests/python/unittest/test_multi_device_exec.py
index 0a2739d..aa279b1 100644
--- a/tests/python/unittest/test_multi_device_exec.py
+++ b/tests/python/unittest/test_multi_device_exec.py
@@ -20,6 +20,17 @@ import numpy as np
 import mxnet as mx
 
 def test_ctx_group():
+def check_ctx_group(group2ctx, grad_req, mlp, set_stage1):
+texec = mlp.simple_bind(mx.cpu(0),
+group2ctx=group2ctx,
+data=(1,200), grad_req=grad_req)
+
+for arr, name in zip(texec.arg_arrays, mlp.list_arguments()):
+if name in set_stage1:
+assert arr.context == group2ctx['stage1']
+else:
+assert arr.context == group2ctx['stage2']
+
 with mx.AttrScope(ctx_group='stage1'):
 data = mx.symbol.Variable('data')
 fc1  = mx.symbol.FullyConnected(data = data, name='fc1', 
num_hidden=128)
@@ -40,15 +51,14 @@ def test_ctx_group():
 'stage2' : mx.cpu(2)
 }
 
-texec = mlp.simple_bind(mx.cpu(0),
-group2ctx=group2ctx,
-data=(1,200))
+# generate reqs with null
+grad_req_with_null = {}
+for arg in mlp.list_arguments():
+grad_req_with_null[arg] = 'null' if arg == 'data' else 'write'
 
-for arr, name in zip(texec.arg_arrays, mlp.list_arguments()):
-if name in set_stage1:
-assert arr.context == group2ctx['stage1']
-else:
-assert arr.context == group2ctx['stage2']
+grad_reqs = ['write', grad_req_with_null]
+for grad_req 

[GitHub] piiswrong closed pull request #8717: fix group2ctx with null reqs

2017-11-19 Thread GitBox
piiswrong closed pull request #8717: fix group2ctx with null reqs
URL: https://github.com/apache/incubator-mxnet/pull/8717
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/executor/graph_executor.cc b/src/executor/graph_executor.cc
index ade8e838fd..01484dac29 100644
--- a/src/executor/graph_executor.cc
+++ b/src/executor/graph_executor.cc
@@ -321,6 +321,7 @@ Graph AssignContext(Graph g,
 const std::vector& in_arg_ctxes,
 const std::vector& arg_grad_ctxes,
 const std::vector& aux_state_ctxes,
+const std::vector& grad_req_types,
 size_t num_forward_inputs,
 size_t num_forward_outputs) {
   const auto& idx = g.indexed_graph();
@@ -385,9 +386,15 @@ Graph AssignContext(Graph g,
 
   // loop through backward input nodes and populate maps and lists
   // the backward input nodes is the gradient of the loss wrt the output
-  for (size_t i = num_forward_outputs; i < g.outputs.size(); ++i) {
+  size_t arg_grad_offset = 0;
+  // keep an offset into the arg_grad_ctxes vector,
+  // since g.outputs exclude arg_grad whose req == null
+  CHECK_GE(grad_req_types.size(), g.outputs.size() - num_forward_outputs)
+   << "insufficient number of grad_reqs";
+  for (size_t i = num_forward_outputs; i < g.outputs.size(); ++i, 
++arg_grad_offset) {
+while (grad_req_types[arg_grad_offset] == kNullOp) ++arg_grad_offset;
 const uint32_t nid = idx.outputs()[i].node_id;
-Context ctx = arg_grad_ctxes[i - num_forward_outputs];
+Context ctx = arg_grad_ctxes[arg_grad_offset];
 if (ctx2id.count(ctx) == 0) {
   ctx2id[ctx] = static_cast(ctx_list.size());
   ctx_list.push_back(ctx);
@@ -417,9 +424,11 @@ Graph AssignContext(Graph g,
   // if the assigned device of gradient node
   // corresponds to storage of grads
   auto _idx = g.indexed_graph();
-  for (size_t i = num_forward_outputs; i < g.outputs.size(); ++i) {
+  arg_grad_offset = 0;
+  for (size_t i = num_forward_outputs; i < g.outputs.size(); ++i, 
++arg_grad_offset) {
+while (grad_req_types[arg_grad_offset] == kNullOp) ++arg_grad_offset;
 const uint32_t nid = new_idx.outputs()[i].node_id;
-Context ctx = arg_grad_ctxes[i - num_forward_outputs];
+Context ctx = arg_grad_ctxes[arg_grad_offset];
 CHECK(ctx == vcontext[nid])
   << "Trying to save gradient to " << ctx
   << " while its source node \"" << new_idx[nid].source->attrs.name
@@ -1055,6 +1064,7 @@ Graph GraphExecutor::InitGraph(nnvm::Symbol symbol,
 in_arg_ctxes,
 arg_grad_ctxes,
 aux_state_ctxes,
+grad_req_types,
 num_forward_inputs_,
 num_forward_outputs_);
 
diff --git a/tests/python/unittest/test_multi_device_exec.py 
b/tests/python/unittest/test_multi_device_exec.py
index 0a2739d9bb..aa279b1837 100644
--- a/tests/python/unittest/test_multi_device_exec.py
+++ b/tests/python/unittest/test_multi_device_exec.py
@@ -20,6 +20,17 @@
 import mxnet as mx
 
 def test_ctx_group():
+def check_ctx_group(group2ctx, grad_req, mlp, set_stage1):
+texec = mlp.simple_bind(mx.cpu(0),
+group2ctx=group2ctx,
+data=(1,200), grad_req=grad_req)
+
+for arr, name in zip(texec.arg_arrays, mlp.list_arguments()):
+if name in set_stage1:
+assert arr.context == group2ctx['stage1']
+else:
+assert arr.context == group2ctx['stage2']
+
 with mx.AttrScope(ctx_group='stage1'):
 data = mx.symbol.Variable('data')
 fc1  = mx.symbol.FullyConnected(data = data, name='fc1', 
num_hidden=128)
@@ -40,15 +51,14 @@ def test_ctx_group():
 'stage2' : mx.cpu(2)
 }
 
-texec = mlp.simple_bind(mx.cpu(0),
-group2ctx=group2ctx,
-data=(1,200))
+# generate reqs with null
+grad_req_with_null = {}
+for arg in mlp.list_arguments():
+grad_req_with_null[arg] = 'null' if arg == 'data' else 'write'
 
-for arr, name in zip(texec.arg_arrays, mlp.list_arguments()):
-if name in set_stage1:
-assert arr.context == group2ctx['stage1']
-else:
-assert arr.context == group2ctx['stage2']
+grad_reqs = ['write', grad_req_with_null]
+for grad_req in grad_reqs:
+check_ctx_group(group2ctx, grad_req, mlp, set_stage1)
 
 def test_ctx_group_sparse():
 with mx.AttrScope(ctx_group='stage1'):


 


[GitHub] leezu commented on issue #7931: MKL-DNN integration: request for reviews

2017-11-19 Thread GitBox
leezu commented on issue #7931: MKL-DNN integration: request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-345598574
 
 
   @sbodenstein MKL with v0.11 is quite buggy. I often got `inf` values during 
training for no obvious reason (i.e. training is stable without mkl / on GPU).
   I'm not sure about v0.12 due to 
https://github.com/apache/incubator-mxnet/issues/8280 .
   
   (i.e. even without MKL-DNN, MKL is buggy)




[GitHub] leezu commented on issue #7931: MKL-DNN integration: request for reviews

2017-11-19 Thread GitBox
leezu commented on issue #7931: MKL-DNN integration: request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-345598574
 
 
   @sbodenstein MKL with v0.11 is quite buggy. I often got `inf` values during 
training for no obvious reason (i.e. training is stable without mkl / on GPU).
   I'm not sure about v0.12 due to 
https://github.com/apache/incubator-mxnet/issues/8280 .
   




[GitHub] Jerryzcn closed issue #8723: nn.block.save_params not working properly.

2017-11-19 Thread GitBox
Jerryzcn closed issue #8723: nn.block.save_params not working properly.
URL: https://github.com/apache/incubator-mxnet/issues/8723
 
 
   




[GitHub] Jerryzcn commented on issue #8723: nn.block.save_params not working properly.

2017-11-19 Thread GitBox
Jerryzcn commented on issue #8723: nn.block.save_params not working properly.
URL: 
https://github.com/apache/incubator-mxnet/issues/8723#issuecomment-345597125
 
 
   I see, thanks! Closing the issue.




[GitHub] szha commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
szha commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345595431
 
 
   What have you tried to isolate the problem?




[GitHub] szha commented on issue #8723: nn.block.save_params not working properly.

2017-11-19 Thread GitBox
szha commented on issue #8723: nn.block.save_params not working properly.
URL: 
https://github.com/apache/incubator-mxnet/issues/8723#issuecomment-345593756
 
 
   I see. `block.save_params` has an option to strip away the network prefix, 
which is on by default; turning it off should fix the problem. Alternatively, 
you can explicitly set the prefix of the block to be empty.
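
   A small sketch of the second suggestion (an explicit empty prefix so the saved 
and loaded parameter names line up); the layers and the file name are arbitrary.

```python
import mxnet as mx
from mxnet.gluon import nn

def build_net():
    # An explicit empty prefix keeps parameter names independent of how many
    # blocks were created earlier in the process (no net0_/net1_ counters).
    net = nn.Sequential(prefix='')
    with net.name_scope():
        net.add(nn.Conv2D(16, kernel_size=3), nn.Dense(10))
    return net

net = build_net()
net.initialize()
net(mx.nd.zeros((1, 3, 32, 32)))         # run once so deferred shapes are known
net.save_params('latest.params')

net2 = build_net()                        # same prefix -> same parameter names
net2.load_params('latest.params', ctx=mx.cpu())
```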




[GitHub] anirudh2290 commented on a change in pull request #8721: fix custom op for backward compatibility

2017-11-19 Thread GitBox
anirudh2290 commented on a change in pull request #8721: fix custom op for 
backward compatibility
URL: https://github.com/apache/incubator-mxnet/pull/8721#discussion_r151902607
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -3652,6 +3652,42 @@ def create_operator(self, ctx, shapes, dtypes):
 assert (x.grad.stype == 'csr')
 assert (y.stype == 'csr')
 assert (aux.stype == 'csr')
+
+# test for backward compatibility, i.e. the correctness of default 
implementation of 
 
 Review comment:
   Please remove the extra whitespace here and above.




[GitHub] anirudh2290 commented on issue #8721: fix custom op for backward compatibility

2017-11-19 Thread GitBox
anirudh2290 commented on issue #8721: fix custom op for backward compatibility
URL: https://github.com/apache/incubator-mxnet/pull/8721#issuecomment-345591574
 
 
   Thank you for fixing this and adding the tests! Yes, the output stypes, which 
are the in_grads, depend on the number of arguments and not the number of outputs. 
Can you please let me know which LR example you are referring to?




[GitHub] eric-haibin-lin commented on issue #8721: fix custom op for backward compatibility

2017-11-19 Thread GitBox
eric-haibin-lin commented on issue #8721: fix custom op for backward 
compatibility
URL: https://github.com/apache/incubator-mxnet/pull/8721#issuecomment-345590950
 
 
   @anirudh2290 




[GitHub] mbaijal commented on issue #8175: WIP: Julia CI build

2017-11-19 Thread GitBox
mbaijal commented on issue #8175: WIP: Julia CI build
URL: https://github.com/apache/incubator-mxnet/pull/8175#issuecomment-345590057
 
 
   Hi @iblis17 I should be able to help you with this. 
   You can ping me on Slack at mbaijal.
   




[GitHub] Jerryzcn commented on issue #8723: nn.block.save_params not working properly.

2017-11-19 Thread GitBox
Jerryzcn commented on issue #8723: nn.block.save_params not working properly.
URL: 
https://github.com/apache/incubator-mxnet/issues/8723#issuecomment-345589651
 
 
   AssertionError: Parameter net0_conv0_weight is missing in file 
checkpoint\latest-net-0-0.0.params. Basically, the weights are saved under a 
different name than the one the load function expects.




[GitHub] zhreshold commented on issue #8716: ImageDetIter uses type list as inside making as_in_context break

2017-11-19 Thread GitBox
zhreshold commented on issue #8716: ImageDetIter uses type list as inside 
making as_in_context break
URL: 
https://github.com/apache/incubator-mxnet/issues/8716#issuecomment-345589253
 
 
   `as_in_context` doesn't support a list of contexts.
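
   For illustration (not from this issue): `as_in_context` takes a single `Context`, 
and `gluon.utils.split_and_load` is the usual way to spread one batch over several 
devices.

```python
import mxnet as mx
from mxnet import gluon

x = mx.nd.ones((4, 3))

# A single context works:
x_cpu = x.as_in_context(mx.cpu(0))

# To target several devices, split the batch instead of passing a list:
ctxs = [mx.cpu(0), mx.cpu(1)]
parts = gluon.utils.split_and_load(x, ctx_list=ctxs)
print([p.context for p in parts])
```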




[GitHub] zhreshold commented on issue #8715: Documentation for MultiBoxPrior is not substantial

2017-11-19 Thread GitBox
zhreshold commented on issue #8715: Documentation for MultiBoxPrior is not 
substantial 
URL: 
https://github.com/apache/incubator-mxnet/issues/8715#issuecomment-345589157
 
 
   You can try help(mx.nd.contrib.MultiBoxPrior) for help info in Python. 
However, I have no idea why the generated function signatures are really bad. 
   @szha Any clue?
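
   In the meantime, a minimal usage sketch as I understand the operator (the 
feature-map shape and the sizes/ratios values below are arbitrary):

```python
import mxnet as mx

# Dummy feature map: batch=1, 3 channels, 4x4 spatial.
x = mx.nd.zeros((1, 3, 4, 4))

# One set of anchor boxes is generated per spatial location.
anchors = mx.nd.contrib.MultiBoxPrior(x, sizes=(0.5, 0.25), ratios=(1, 2, 0.5))
print(anchors.shape)   # (1, num_anchors, 4) corner coordinates scaled to [0, 1]
```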




[GitHub] jiajiechen commented on issue #8684: [BugFix][CoreML Converter] Dense layers w/o bias.

2017-11-19 Thread GitBox
jiajiechen commented on issue #8684: [BugFix][CoreML Converter] Dense layers 
w/o bias.
URL: https://github.com/apache/incubator-mxnet/pull/8684#issuecomment-345588252
 
 
   This looks good to me. 
   nitpick: I would suggest 
   ```python
   has_bias = not param.pop('no_bias', False)
   ```




[GitHub] szha commented on issue #8723: nn.block.save_params not working properly.

2017-11-19 Thread GitBox
szha commented on issue #8723: nn.block.save_params not working properly.
URL: 
https://github.com/apache/incubator-mxnet/issues/8723#issuecomment-345587528
 
 
   It should work both ways. What's the error that you saw?




[GitHub] liuzhi136 commented on issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
liuzhi136 commented on issue #8720: Implementation Help!!!
URL: 
https://github.com/apache/incubator-mxnet/issues/8720#issuecomment-345585456
 
 
   @mli @piiswrong @szha 




[GitHub] iblis17 commented on issue #8175: WIP: Julia CI build

2017-11-19 Thread GitBox
iblis17 commented on issue #8175: WIP: Julia CI build
URL: https://github.com/apache/incubator-mxnet/pull/8175#issuecomment-345583668
 
 
   bump?




[GitHub] liumilan commented on issue #8500: program crash when run sparse model predict

2017-11-19 Thread GitBox
liumilan commented on issue #8500: program crash when run sparse model predict
URL: 
https://github.com/apache/incubator-mxnet/issues/8500#issuecomment-345583241
 
 
   @eric-haibin-lin OK, I will try later.




[GitHub] YujiOshima opened a new pull request #8722: Profiler: set cpu/gpu num during execution

2017-11-19 Thread GitBox
YujiOshima opened a new pull request #8722: Profiler: set cpu/gpu num during 
execution
URL: https://github.com/apache/incubator-mxnet/pull/8722
 
 
   Signed-off-by: YujiOshima 
   
   ## Description ##
   In the Profiler, set the CPU and GPU counts dynamically. 
   This makes the profiler usable in environments with a large number of CPUs or GPUs.
   
   @ZihengJiang 
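   For reviewers, a minimal usage sketch of the profiler being changed,
assuming the pre-1.0 Python API names (`mx.profiler.profiler_set_config` /
`profiler_set_state`); the file name is arbitrary:
   ```python
   import mxnet as mx

   mx.profiler.profiler_set_config(mode='all', filename='profile_output.json')
   mx.profiler.profiler_set_state('run')     # start collecting events

   a = mx.nd.random_normal(shape=(1024, 1024))
   b = mx.nd.dot(a, a)
   b.wait_to_read()                          # make sure the work actually runs

   mx.profiler.profiler_set_state('stop')    # flush events to the output file
   ```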
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated. For new 
C++ functions in header files, their functionalities and arguments are 
well-documented. 
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] atticcas commented on issue #8308: Error with Python custom operator in distributed training

2017-11-19 Thread GitBox
atticcas commented on issue #8308: Error with Python custom operator in 
distributed training
URL: 
https://github.com/apache/incubator-mxnet/issues/8308#issuecomment-345576806
 
 
   Ran into the same problem.
   Did some digging into the code, and it seems related to how MXNet handles 
distributed computation. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ZiyueHuang commented on issue #8500: program crash when run sparse model predict

2017-11-19 Thread GitBox
ZiyueHuang commented on issue #8500: program crash when run sparse model predict
URL: 
https://github.com/apache/incubator-mxnet/issues/8500#issuecomment-345576013
 
 
   The current LR example is broken due to the custom op. This is fixed in 
https://github.com/apache/incubator-mxnet/pull/8721.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ZiyueHuang opened a new pull request #8721: fix custom op for backward compatibility

2017-11-19 Thread GitBox
ZiyueHuang opened a new pull request #8721: fix custom op for backward 
compatibility
URL: https://github.com/apache/incubator-mxnet/pull/8721
 
 
   ## Description ##
   Before this PR, the LR sparse example is broken due to the custom op, and 
the added test throws the error message below:
   ```
   Error in mult.infer_type: Traceback (most recent call last):
 File "/home/hanfeng/zyh/mxnet/python/mxnet/operator.py", line 737, in 
infer_storage_type_backward_entry
   "stypes, got %d."%(total_outputs, len(ostype))
   AssertionError: InferStorageTypeBackward Error: expecting 2 entries in 
returned output stypes, got 1.
   
   [10:27:12] /home/hanfeng/zyh/mxnet/dmlc-core/include/dmlc/./logging.h:308: 
[10:27:12] src/operator/custom/custom.cc:383: Check failed: 
reinterpret_cast( 
params.info->callbacks[kCustomOpPropBackwardInferStorageType])( stypes.size(), 
stypes.data(), params.info->contexts[kCustomOpPropBackwardInferStorageType])
   ```
   cc @eric-haibin-lin @anirudh2290
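   For readers hitting the same assertion: the wrapper checks that the list
returned by the backward storage-type inference hook has exactly one entry per
expected output. A plain-Python sketch of that check (names are illustrative,
not the actual `mxnet.operator` internals):
   ```python
   def check_backward_stypes(returned_stypes, total_outputs):
       """Mimic the assertion quoted above: the hook must return one storage
       type per expected output, otherwise raise."""
       assert len(returned_stypes) == total_outputs, \
           "InferStorageTypeBackward Error: expecting %d entries in returned " \
           "output stypes, got %d." % (total_outputs, len(returned_stypes))

   check_backward_stypes(['default', 'default'], total_outputs=2)  # passes
   # check_backward_stypes(['default'], total_outputs=2)           # raises AssertionError
   ```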
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated. For new 
C++ functions in header files, their functionalities and arguments are 
well-documented. 
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] solin319 commented on issue #8107: "add warmup lr_scheduler" create a new pr

2017-11-19 Thread GitBox
solin319 commented on issue #8107:  "add warmup lr_scheduler" create a new pr
URL: https://github.com/apache/incubator-mxnet/pull/8107#issuecomment-345572425
 
 
   I have updated the parameter names and descriptions.
   @piiswrong 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yajiedesign commented on issue #8703: [DISCUSSION] (when) Should we deprecate support for python2?

2017-11-19 Thread GitBox
yajiedesign commented on issue #8703: [DISCUSSION] (when) Should we deprecate 
support for python2?
URL: 
https://github.com/apache/incubator-mxnet/issues/8703#issuecomment-345569605
 
 
   Synchronize with numpy: stop adding new features on 2019.1.1.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yajiedesign commented on issue #8703: [DISCUSSION] (when) Should we deprecate support for python2?

2017-11-19 Thread GitBox
yajiedesign commented on issue #8703: [DISCUSSION] (when) Should we deprecate 
support for python2?
URL: 
https://github.com/apache/incubator-mxnet/issues/8703#issuecomment-345569605
 
 
   Like numpy: stop adding new features on 2019.1.1.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yajiedesign commented on issue #8714: win10 cuda8.0 vs2013 compile problem

2017-11-19 Thread GitBox
yajiedesign commented on issue #8714: win10 cuda8.0 vs2013 compile problem
URL: 
https://github.com/apache/incubator-mxnet/issues/8714#issuecomment-345569063
 
 
   Did you include the submodules?
   Try using VS2015.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yajiedesign commented on issue #8702: [DISCUSSION] Should we deprecate Makefile and only use CMake?

2017-11-19 Thread GitBox
yajiedesign commented on issue #8702: [DISCUSSION] Should we deprecate Makefile 
and only use CMake?
URL: 
https://github.com/apache/incubator-mxnet/issues/8702#issuecomment-345568868
 
 
   @cjolivier01 yes, lint is out of date, but fixing it is not difficult.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] liuzhi136 opened a new issue #8720: Implementation Help!!!

2017-11-19 Thread GitBox
liuzhi136 opened a new issue #8720: Implementation Help!!!
URL: https://github.com/apache/incubator-mxnet/issues/8720
 
 
   I'm trying to implement the SAN model proposed in "Stacked Attention 
Networks for Image Question Answering" in MXNet; the paper can be downloaded 
below. 
   
   
[Yang_Stacked_Attention_Networks_CVPR_2016_paper.pdf](https://github.com/apache/incubator-mxnet/files/1486170/Yang_Stacked_Attention_Networks_CVPR_2016_paper.pdf)
   
   My implementation looks like the following:
   
[san_vqamodel_with_gluon.txt](https://github.com/apache/incubator-mxnet/files/1486193/san_vqamodel_with_gluon.txt)
   
   
   I downloaded the DAQUAR dataset and used the reduced version to reproduce 
the results of this paper. I think the model is described clearly enough in the 
paper. However, the model I implemented doesn't converge at all and always 
fluctuates after several epochs, and its training and validation error are less 
than 3. I've tried various hyperparameter sets to adjust the training 
procedure, but it does not work out. This is an urgent problem for me, and I 
really need help to solve it as soon as possible. I really want to know whether 
my implementation is correct. Could anyone help me? It's really urgent. 
   Any help will be appreciated! 
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] BiranLi closed issue #7665: Request on supporting Pooling symbol with NHWC.

2017-11-19 Thread GitBox
BiranLi closed issue #7665: Request on supporting Pooling symbol with NHWC.
URL: https://github.com/apache/incubator-mxnet/issues/7665
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345564863
 
 
   I expect that there are many MXNet users looking for an implementation of 
CapsNet. We're in the middle of implementing data augmentation and will make 
an effort to close the accuracy gap soon.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345564863
 
 
   I expect that there are many MXNet users looking for an implementation of 
CapsNet. We're in the middle of implementing data augmentation and will make 
an effort to close the accuracy gap soon.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] burness commented on issue #8621: fix row_sparse demo tutorials doc

2017-11-19 Thread GitBox
burness commented on issue #8621: fix row_sparse demo tutorials doc
URL: https://github.com/apache/incubator-mxnet/pull/8621#issuecomment-345564109
 
 
   Ok! I will try @eric-haibin-lin 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 opened a new pull request #8719: POC Tune without Launch specialization macros (WIP for post 1.0 rc0)

2017-11-19 Thread GitBox
cjolivier01 opened a new pull request #8719: POC Tune without Launch 
specialization macros (WIP for post 1.0 rc0)
URL: https://github.com/apache/incubator-mxnet/pull/8719
 
 
   @piiswrong 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated. For new 
C++ functions in header files, their functionalities and arguments are 
well-documented. 
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Revert "2bit gradient compression (#8662)" (#8711)

2017-11-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 504f42f  Revert "2bit gradient compression (#8662)" (#8711)
504f42f is described below

commit 504f42ffc5ae26980d39da199516c37d7f9fc56c
Author: Sheng Zha 
AuthorDate: Sun Nov 19 15:51:42 2017 -0800

Revert "2bit gradient compression (#8662)" (#8711)

This reverts commit a499f892c9ee6f59ccfb57c9e431c91014078891.
---
 example/image-classification/common/fit.py |  44 ++--
 example/rnn/lstm_bucketing.py  |   1 +
 include/mxnet/c_api.h  |  13 -
 include/mxnet/kvstore.h|  15 --
 python/mxnet/gluon/trainer.py  |  12 +-
 python/mxnet/kvstore.py|  62 -
 python/mxnet/module/bucketing_module.py|  17 +-
 python/mxnet/module/module.py  |  11 +-
 src/c_api/c_api.cc |  14 --
 src/kvstore/comm.h |  87 +--
 src/kvstore/gradient_compression-inl.h | 155 
 src/kvstore/gradient_compression.cc| 193 --
 src/kvstore/gradient_compression.cu|  40 ---
 src/kvstore/gradient_compression.h | 138 --
 src/kvstore/kvstore.cc |   2 +-
 src/kvstore/kvstore_dist.h | 388 -
 src/kvstore/kvstore_dist_server.h  | 143 ++-
 src/kvstore/kvstore_local.h|   7 -
 tests/nightly/dist_sync_kvstore.py | 120 +
 tests/nightly/test_kvstore.py  | 200 ++-
 tools/bandwidth/measure.py |   6 +-
 21 files changed, 167 insertions(+), 1501 deletions(-)

diff --git a/example/image-classification/common/fit.py 
b/example/image-classification/common/fit.py
index 2b002c7..51a1abe 100755
--- a/example/image-classification/common/fit.py
+++ b/example/image-classification/common/fit.py
@@ -103,11 +103,6 @@ def add_fit_args(parser):
help='1 means test reading speed without training')
 train.add_argument('--dtype', type=str, default='float32',
help='precision: float32 or float16')
-train.add_argument('--gc-type', type=str, default='none',
-   help='type of gradient compression to use, \
- takes `2bit` or `none` for now')
-train.add_argument('--gc-threshold', type=float, default=0.5,
-   help='threshold for 2bit gradient compression')
 return train
 
 def fit(args, network, data_loader, **kwargs):
@@ -119,9 +114,6 @@ def fit(args, network, data_loader, **kwargs):
 """
 # kvstore
 kv = mx.kvstore.create(args.kv_store)
-if args.gc_type != 'none':
-kv.set_gradient_compression({'type': args.gc_type,
- 'threshold': args.gc_threshold})
 
 # logging
 head = '%(asctime)-15s Node[' + str(kv.rank) + '] %(message)s'
@@ -170,10 +162,10 @@ def fit(args, network, data_loader, **kwargs):
 
 lr_scheduler  = lr_scheduler
 optimizer_params = {
-'learning_rate': lr,
-'wd' : args.wd,
-'lr_scheduler': lr_scheduler,
-'multi_precision': True}
+'learning_rate': lr,
+'wd' : args.wd,
+'lr_scheduler': lr_scheduler,
+'multi_precision': True}
 
 # Only a limited number of optimizers have 'momentum' property
 has_momentum = {'sgd', 'dcasgd', 'nag'}
@@ -203,17 +195,17 @@ def fit(args, network, data_loader, **kwargs):
 
 # run
 model.fit(train,
-  begin_epoch= args.load_epoch if args.load_epoch else 0,
-  num_epoch  = args.num_epochs,
-  eval_data  = val,
-  eval_metric= eval_metrics,
-  kvstore= kv,
-  optimizer  = args.optimizer,
-  optimizer_params   = optimizer_params,
-  initializer= initializer,
-  arg_params = arg_params,
-  aux_params = aux_params,
-  batch_end_callback = batch_end_callbacks,
-  epoch_end_callback = checkpoint,
-  allow_missing  = True,
-  monitor= monitor)
+begin_epoch= args.load_epoch if args.load_epoch else 0,
+num_epoch  = args.num_epochs,
+eval_data  = val,
+eval_metric= eval_metrics,
+kvstore= kv,
+optimizer  = args.optimizer,
+optimizer_params   = optimizer_params,
+initializer= initializer,
+arg_params = arg_params,
+aux_params = aux_params,
+batch_end_callback = batch_end_callbacks,
+

[GitHub] szha closed pull request #8711: Revert "2bit gradient compression"

2017-11-19 Thread GitBox
szha closed pull request #8711: Revert "2bit gradient compression"
URL: https://github.com/apache/incubator-mxnet/pull/8711
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #8707: Fail to build amalgamation for Android in latest version

2017-11-19 Thread GitBox
eric-haibin-lin commented on issue #8707: Fail to build amalgamation for 
Android in latest version
URL: 
https://github.com/apache/incubator-mxnet/issues/8707#issuecomment-345557830
 
 
   @arank could you help diagnose the issue?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #8500: program crash when run sparse model predict

2017-11-19 Thread GitBox
eric-haibin-lin commented on issue #8500: program crash when run sparse model 
predict
URL: 
https://github.com/apache/incubator-mxnet/issues/8500#issuecomment-345557450
 
 
   The CPU _backward_dot operator was improved by at least 3x in #8611. 
   Do you want to sync with master and run it again?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345554566
 
 
   The original paper's result was 0.25%, and we expect that close accuracy 
(0.34%) can be achieved by adding data augmentation (scale shift), as in 
implementations on other platforms (Keras, TensorFlow) that achieved a 0.34% 
(average) result. The 0.25% result is hard to achieve, but we expect close 
enough accuracy can be obtained soon.
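   For reference, the scale/shift augmentation mentioned above is typically a
small random translation of the MNIST digits; a minimal numpy sketch (a
hypothetical helper for illustration, not part of this PR):
   ```python
   import numpy as np

   def random_shift(img, max_shift=2):
       """Randomly translate a 28x28 image by up to max_shift pixels in each
       direction, padding the exposed border with zeros."""
       dx, dy = np.random.randint(-max_shift, max_shift + 1, size=2)
       shifted = np.zeros_like(img)
       src_x = slice(max(0, -dx), img.shape[0] - max(0, dx))
       dst_x = slice(max(0, dx), img.shape[0] - max(0, -dx))
       src_y = slice(max(0, -dy), img.shape[1] - max(0, dy))
       dst_y = slice(max(0, dy), img.shape[1] - max(0, -dy))
       shifted[dst_x, dst_y] = img[src_x, src_y]
       return shifted

   augmented = random_shift(np.random.rand(28, 28))
   ```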


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345554566
 
 
   The original paper's result was 0.25%, and we expect it can be achieved by 
adding data augmentation (scale shift), as in implementations on other 
platforms (Keras, TensorFlow) that achieved a 0.34% (average) result. The 0.25% 
result is hard to achieve, but we expect close enough accuracy can be obtained 
soon.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mbaijal commented on issue #8401: Make make lint compatible with python3 (don't call python2 explicitly)

2017-11-19 Thread GitBox
mbaijal commented on issue #8401: Make make lint compatible with python3 (don't 
call python2 explicitly)
URL: https://github.com/apache/incubator-mxnet/pull/8401#issuecomment-34127
 
 
   Hi @larroy, I think you need to rebase this PR onto the latest master to 
make it pass the unit tests.
   Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345554566
 
 
   The original paper's result was 0.25%, and we expect it can be achieved by 
adding data augmentation (scale shift), as in implementations on other 
platforms (Keras, TensorFlow) that achieved a 0.34% (average) result.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345554566
 
 
   The original paper's result was 0.25%, and we expect it can be achieved by 
adding data augmentation (scale shift), as in implementations on other 
platforms (Keras, TensorFlow) that achieved the same result as the paper.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345554566
 
 
   The original paper's result was 0.39%, and we expect it can be achieved by 
adding data augmentation (scale shift), as in implementations on other 
platforms (Keras, TensorFlow) that achieved the same result as the paper.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345554566
 
 
   The original paper's result was 0.39%, and we expect it can be achieved by 
adding data augmentation (scale shift), as in implementations on other 
platforms that achieved the same result as the paper.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345554566
 
 
   The original paper's result was 0.39%, and we expect it can be achieved by 
adding data augmentation (scale shift), as in implementations on other 
platforms.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Soonhwan-Kwon commented on a change in pull request #8674: ADD CapsNet example

2017-11-19 Thread GitBox
Soonhwan-Kwon commented on a change in pull request #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#discussion_r151879998
 
 

 ##
 File path: example/capsnet/README.md
 ##
 @@ -0,0 +1,32 @@
+**CapsNet-MXNet**
+=
+
+This example is MXNet implementation of 
[CapsNet](https://arxiv.org/abs/1710.09829):  
+Sara Sabour, Nicholas Frosst, Geoffrey E Hinton. Dynamic Routing Between 
Capsules. NIPS 2017
+- The Current best test error is 0.5%  
+
+Due to the permission issue, this example is maintained in this 
[repository](https://github.com/samsungsds-rnd/capsnet.mxnet) separately.
 
 Review comment:
   If you mean the last line: a pull request can take days to be accepted; 
meanwhile, some follow-up changes can be pushed to our temporary repository 
https://github.com/samsungsds-rnd/capsnet.mxnet. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha closed issue #8319: [WIP-NewFeature] ONNX support for MXNet

2017-11-19 Thread GitBox
szha closed issue #8319: [WIP-NewFeature] ONNX support for MXNet
URL: https://github.com/apache/incubator-mxnet/issues/8319
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha opened a new pull request #8718: Doc src and fix

2017-11-19 Thread GitBox
szha opened a new pull request #8718: Doc src and fix
URL: https://github.com/apache/incubator-mxnet/pull/8718
 
 
   ## Description ##
   Add a link for viewing source code in the API doc. The updated doc can be 
found at 
http://mxnet-doc.s3-accelerate.dualstack.amazonaws.com/api/python/index.html
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] For user-facing API changes, API doc string has been updated. For new 
C++ functions in header files, their functionalities and arguments are 
well-documented. 
   - [x] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] add `sphinx.ext.viewcode`
   - [x] fix links
   
   ## Comments ##
   - Code that is generated at run-time isn't available for viewing (e.g. 
frontend functions for operators)
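   For reference, enabling the extension is a one-line `conf.py` change; the
sketch below is a generic Sphinx configuration, not the actual MXNet docs
`conf.py`:
   ```python
   # Minimal Sphinx conf.py fragment: sphinx.ext.viewcode adds "[source]" links
   # that point at highlighted copies of the documented Python modules.
   extensions = [
       'sphinx.ext.autodoc',    # pull docstrings from the package
       'sphinx.ext.viewcode',   # the extension added in this PR
   ]
   ```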


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #8107: "add warmup lr_scheduler" create a new pr

2017-11-19 Thread GitBox
piiswrong commented on a change in pull request #8107:  "add warmup 
lr_scheduler" create a new pr
URL: https://github.com/apache/incubator-mxnet/pull/8107#discussion_r151878202
 
 

 ##
 File path: python/mxnet/lr_scheduler.py
 ##
 @@ -100,16 +100,27 @@ class MultiFactorScheduler(LRScheduler):
 
 Then calculate the new learning rate by::
 
-   base_lr * pow(factor, k+1)
+base_lr * pow(factor, k+1)
+
+When warmup_step>1, warmup the learning rate by a const value for first 
warmup_step steps.
+It returns a new learning rate by::
+
+begin_lr + (num_update - 1) * const_update
 
 Parameters
 --
 step: list of int
-The list of steps to schedule a change
+The list of steps to schedule a change.
 factor: float
 The factor to change the learning rate.
+warmup_step : int
+Changes the learning rate for first 'warmup_step' updates.
+begin_lr : float, optional
+The learning rate at begin.
+stop_lr : float, optional
+Stop updating the learning rate if it is less than this value.
 
 Review comment:
   begin_lr and stop_lr are confusing names, and the doc doesn't explain 
exactly what they mean.
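   For readers following the thread, the warmup rule quoted in the diff is easy
to state in plain Python; this is only an illustrative sketch of that formula,
not the PR's actual MultiFactorScheduler code:
   ```python
   def warmup_lr(num_update, begin_lr, const_update, warmup_step, base_lr):
       """For the first `warmup_step` updates the rate grows by a constant
       increment per update; afterwards the main schedule (here just base_lr)
       takes over. Names are illustrative, not the PR's API."""
       if num_update <= warmup_step:
           return begin_lr + (num_update - 1) * const_update
       return base_lr

   print([round(warmup_lr(i, 0.001, 0.002, 5, 0.01), 4) for i in range(1, 8)])
   # [0.001, 0.003, 0.005, 0.007, 0.009, 0.01, 0.01]
   ```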


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: replace `has_key` by `in` (#8317)

2017-11-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 7f7e13d  replace `has_key` by `in` (#8317)
7f7e13d is described below

commit 7f7e13dee74ceda384b6201e71beccb1e71a8c31
Author: Chih-Ming Chen 
AuthorDate: Mon Nov 20 05:07:48 2017 +0800

replace `has_key` by `in` (#8317)

* replace `has_key` by `in`

* Update utils.py
---
 python/mxnet/notebook/callback.py  | 2 +-
 tests/nightly/TestDoc/doc_spell_checker.py | 2 +-
 tools/accnn/rank_selection.py  | 2 +-
 tools/accnn/utils.py   | 5 +++--
 4 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/python/mxnet/notebook/callback.py 
b/python/mxnet/notebook/callback.py
index 56321b7..776900f 100644
--- a/python/mxnet/notebook/callback.py
+++ b/python/mxnet/notebook/callback.py
@@ -367,7 +367,7 @@ class LiveLearningCurve(LiveBokehChart):
 metrics = {}
 metrics['elapsed'] = datetime.datetime.now() - self.start_time
 for key, value in metrics.items():
-if not self._data[df_name].has_key(key):
+if key not in self._data[df_name]:
 self._data[df_name][key] = []
 self._data[df_name][key].append(value)
 
diff --git a/tests/nightly/TestDoc/doc_spell_checker.py 
b/tests/nightly/TestDoc/doc_spell_checker.py
index a7b8b25..a33807e 100644
--- a/tests/nightly/TestDoc/doc_spell_checker.py
+++ b/tests/nightly/TestDoc/doc_spell_checker.py
@@ -92,7 +92,7 @@ def check_doc(file_content, spell_checker, spell_check_ret):
 """
 spell_checker.set_text(file_content)
 for error in spell_checker:
-if spell_check_ret.has_key(error.word):
+if error.word in spell_check_ret:
 spell_check_ret[error.word] += 1
 else:
 spell_check_ret[error.word] = 1
diff --git a/tools/accnn/rank_selection.py b/tools/accnn/rank_selection.py
index 66937b2..c5c0261 100644
--- a/tools/accnn/rank_selection.py
+++ b/tools/accnn/rank_selection.py
@@ -81,7 +81,7 @@ def get_ranksel(model, ratio):
 if nxt_c > EC:
   continue
 nxt_v = dp[now][now_c] + math.log(S[i][d])
-if dp[nxt].has_key(nxt_c):
+if nxt_c in dp[nxt]:
   if nxt_v > dp[nxt][nxt_c]:
 dp[nxt][nxt_c] = nxt_v
 dpc[i][nxt_c] = (d,now_c)
diff --git a/tools/accnn/utils.py b/tools/accnn/utils.py
index 25fb188..2795f85 100644
--- a/tools/accnn/utils.py
+++ b/tools/accnn/utils.py
@@ -20,6 +20,7 @@ import copy
 import json
 import ast
 
+
 def load_model(args):
   devs = mx.cpu() if args.gpus == None else [mx.gpu(int(i)) for i in 
args.gpus.split(',')]
   return mx.model.FeedForward.load(args.model, args.load_epoch, ctx=devs)
@@ -29,7 +30,7 @@ def topsort(nodes):
   deg = [0]*n
   g = [[] for _ in xrange(n)]
   for i,node in enumerate(nodes):
-if node.has_key('inputs'):
+if 'inputs' in node:
   for j in node['inputs']:
 deg[i] += 1
 g[j[0]].append(i)
@@ -45,7 +46,7 @@ def topsort(nodes):
 q.append(j)
   new_ids=dict([(node['name'],i) for i,node in enumerate(res)])
   for node in res:
-if node.has_key('inputs'):
+if 'inputs' in node:
   for j in node['inputs']:
 j[0]=new_ids[nodes[j[0]]['name']]
   return res
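
A minimal illustration of the pattern changed in this commit: `dict.has_key()`
only exists in Python 2, while the `in` operator works on both Python 2 and 3.

```python
node = {'name': 'fc1', 'inputs': [[0, 0, 0]]}

if 'inputs' in node:        # portable membership test (Python 2 and 3)
    print(len(node['inputs']))

# node.has_key('inputs')    # AttributeError on Python 3: dicts no longer have has_key
```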

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[GitHub] piiswrong closed pull request #8317: replace `has_key` by `in`

2017-11-19 Thread GitBox
piiswrong closed pull request #8317: replace `has_key` by `in`
URL: https://github.com/apache/incubator-mxnet/pull/8317
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/notebook/callback.py 
b/python/mxnet/notebook/callback.py
index 56321b715b..776900fe59 100644
--- a/python/mxnet/notebook/callback.py
+++ b/python/mxnet/notebook/callback.py
@@ -367,7 +367,7 @@ def _process_batch(self, param, df_name):
 metrics = {}
 metrics['elapsed'] = datetime.datetime.now() - self.start_time
 for key, value in metrics.items():
-if not self._data[df_name].has_key(key):
+if key not in self._data[df_name]:
 self._data[df_name][key] = []
 self._data[df_name][key].append(value)
 
diff --git a/tests/nightly/TestDoc/doc_spell_checker.py 
b/tests/nightly/TestDoc/doc_spell_checker.py
index a7b8b250c9..a33807e3d5 100644
--- a/tests/nightly/TestDoc/doc_spell_checker.py
+++ b/tests/nightly/TestDoc/doc_spell_checker.py
@@ -92,7 +92,7 @@ def check_doc(file_content, spell_checker, spell_check_ret):
 """
 spell_checker.set_text(file_content)
 for error in spell_checker:
-if spell_check_ret.has_key(error.word):
+if error.word in spell_check_ret:
 spell_check_ret[error.word] += 1
 else:
 spell_check_ret[error.word] = 1
diff --git a/tools/accnn/rank_selection.py b/tools/accnn/rank_selection.py
index 66937b2859..c5c026114a 100644
--- a/tools/accnn/rank_selection.py
+++ b/tools/accnn/rank_selection.py
@@ -81,7 +81,7 @@ def get_ranksel(model, ratio):
 if nxt_c > EC:
   continue
 nxt_v = dp[now][now_c] + math.log(S[i][d])
-if dp[nxt].has_key(nxt_c):
+if nxt_c in dp[nxt]:
   if nxt_v > dp[nxt][nxt_c]:
 dp[nxt][nxt_c] = nxt_v
 dpc[i][nxt_c] = (d,now_c)
diff --git a/tools/accnn/utils.py b/tools/accnn/utils.py
index 25fb188956..2795f8558f 100644
--- a/tools/accnn/utils.py
+++ b/tools/accnn/utils.py
@@ -20,6 +20,7 @@
 import json
 import ast
 
+
 def load_model(args):
   devs = mx.cpu() if args.gpus == None else [mx.gpu(int(i)) for i in 
args.gpus.split(',')]
   return mx.model.FeedForward.load(args.model, args.load_epoch, ctx=devs)
@@ -29,7 +30,7 @@ def topsort(nodes):
   deg = [0]*n
   g = [[] for _ in xrange(n)]
   for i,node in enumerate(nodes):
-if node.has_key('inputs'):
+if 'inputs' in node:
   for j in node['inputs']:
 deg[i] += 1
 g[j[0]].append(i)
@@ -45,7 +46,7 @@ def topsort(nodes):
 q.append(j)
   new_ids=dict([(node['name'],i) for i,node in enumerate(res)])
   for node in res:
-if node.has_key('inputs'):
+if 'inputs' in node:
   for j in node['inputs']:
 j[0]=new_ids[nodes[j[0]]['name']]
   return res


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong closed pull request #8611: optimization for dot(csr.T, dense) = rsp

2017-11-19 Thread GitBox
piiswrong closed pull request #8611: optimization for dot(csr.T, dense) = rsp
URL: https://github.com/apache/incubator-mxnet/pull/8611
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/dot-inl.h b/src/operator/tensor/dot-inl.h
index 7ab4710090..2432703291 100644
--- a/src/operator/tensor/dot-inl.h
+++ b/src/operator/tensor/dot-inl.h
@@ -30,9 +30,10 @@
 #include 
 #include 
 #include 
-#include "./init_op.h"
+#include "./util/tensor_util-inl.h"
 #include "../mshadow_op.h"
 #include "../elemwise_op_common.h"
+#include "./init_op.h"
 #include "../mxnet_op.h"
 #ifdef __CUDACC__
 #include "./dot-inl.cuh"
@@ -364,19 +365,17 @@ struct DotCsrTransDnsDnsByRowBlocks {
 
 /*!
  * \brief CPU Kernel of dot(csr.T(), dns) = rsp
- * Parallelization by row blocks.
- * This kernel fills up the row_idx array of the rsp
- * with 1 for nonzero rows and 0 for zero rows.
- * The matrix will be compacted after this kernel call.
+ * Parallelization by row blocks which evenly partition the non-zero rows.
  */
 struct DotCsrTransDnsRspByRowBlocks {
   /*!
* \brief
* \param i the i-th thread
*/
-  template
+  template
   MSHADOW_CINLINE static void Map(int i,
   DType* out,
+  nnvm::dim_t* row_flg_sum,
   RType* row_idx,
   const DType* data_l,
   const IType* indptr_l,
@@ -384,21 +383,25 @@ struct DotCsrTransDnsRspByRowBlocks {
   const DType* data_r,
   const nnvm::dim_t seg_len,
   const nnvm::dim_t num_rows_l,
-  const nnvm::dim_t num_rows,
+  const nnvm::dim_t nnr,
   const nnvm::dim_t num_cols) {
 using nnvm::dim_t;
 const dim_t seg_start = i * seg_len;
-if (seg_start >= num_rows) return;
+if (seg_start >= nnr) return;
 const dim_t seg_end = (i + 1) * seg_len;
+const dim_t col_start = row_idx[seg_start];
+const dim_t col_end = seg_end >= nnr ? (row_idx[nnr-1] + 1) : 
row_idx[seg_end];
 for (dim_t j = 0; j < num_rows_l; ++j) {
   if (indptr_l[j] == indptr_l[j+1]) continue;
   const dim_t offset_r = j * num_cols;
   for (IType k = indptr_l[j]; k < indptr_l[j+1]; ++k) {
 const CType col_idx = col_idx_l[k];
-if (col_idx < seg_start || col_idx >= seg_end) continue;
-const dim_t offset_out = col_idx * num_cols;
-row_idx[col_idx] = 1;
+if (col_idx < col_start || col_idx >= col_end) continue;
+
+const nnvm::dim_t rsp_row = row_flg_sum[col_idx] - 1;
+const nnvm::dim_t offset_out = rsp_row * num_cols;
 const DType val = data_l[k];
+
 for (dim_t l = 0; l < num_cols; ++l) {
   out[offset_out+l] += data_r[offset_r+l] * val;
 }
@@ -605,43 +608,51 @@ inline void DotCsrDnsRspImpl(const OpContext& ctx,
   const TBlob col_idx_l = lhs.aux_data(csr::kIdx);
   const TBlob& data_r = rhs;
 
-  // pre-allocate spaces for ret using the dense dimension size
-  ret->CheckAndAlloc({mshadow::Shape1(lhs.shape()[1])});
-  const TBlob data_out = ret->data();
-  const TBlob row_idx_out = ret->aux_data(rowsparse::kIdx);
-
   MSHADOW_SGL_DBL_TYPE_SWITCH(data_l.type_flag_, DType, {  // data type
 MSHADOW_IDX_TYPE_SWITCH(indptr_l.type_flag_, IType, {  // indptr type
   MSHADOW_IDX_TYPE_SWITCH(col_idx_l.type_flag_, CType, {  // col idx type
-MSHADOW_IDX_TYPE_SWITCH(row_idx_out.type_flag_, RType, {  // row idx 
type
+MSHADOW_IDX_TYPE_SWITCH(ret->aux_type(rowsparse::kIdx), RType, {  // 
row idx type
+  const dim_t num_rows = lhs.shape()[1];
+  size_t workspace_size = 2 * (num_rows * sizeof(dim_t));
+  mshadow::Tensor workspace =
+ctx.requested[0].get_space_typed(
+mshadow::Shape1(workspace_size), s);
+  dim_t* row_flg = reinterpret_cast(workspace.dptr_);
+  dim_t* prefix_sum = row_flg + num_rows;
+
+  Fill(s, TBlob(row_flg, mshadow::Shape1(num_rows), 
cpu::kDevMask), kWriteTo, 0);
+  mxnet_op::Kernel::Launch(s, 
lhs.aux_shape(csr::kIdx)[0], row_flg,
+col_idx_l.dptr());
+
+  prefix_sum[0] = row_flg[0];
+  for (nnvm::dim_t i = 1; i < num_rows; i++) {
+prefix_sum[i] = prefix_sum[i - 1] + row_flg[i];
+  }
+  dim_t nnr = prefix_sum[num_rows - 1];
+
+  if (nnr == 0) {
+FillZerosRspImpl(s, *ret);
+return;
+  }
+
+  ret->CheckAndAlloc({mshadow::Shape1(nnr)});
+  
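
The kernel change above replaces the old "flag every non-zero row, compact
later" scheme with a row-presence flag plus a prefix sum that maps an original
row index straight to its compacted row-sparse slot. A small numpy sketch of
that mapping (illustrative only, not the MXNet kernel):

```python
import numpy as np

# Column indices of the CSR lhs; after the transpose these become the rows of
# the row-sparse output. Values are made up for illustration.
col_idx = np.array([0, 3, 3, 7])
num_rows = 9

row_flg = np.zeros(num_rows, dtype=np.int64)
row_flg[col_idx] = 1                 # mark rows that will be non-zero
prefix_sum = np.cumsum(row_flg)      # same role as the loop in the diff
nnr = prefix_sum[-1]                 # only nnr rows need to be allocated

# Original row index -> compacted storage row (the `row_flg_sum[col_idx] - 1` step)
for r in np.flatnonzero(row_flg):
    print(r, '->', prefix_sum[r] - 1)
# 0 -> 0, 3 -> 1, 7 -> 2
```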

[incubator-mxnet] branch master updated: optimization for dot(csr.T, dense) = rsp (#8611)

2017-11-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new f79d22d  optimization for dot(csr.T, dense) = rsp (#8611)
f79d22d is described below

commit f79d22db25847453a9a286eb19e9064c246a82d4
Author: Ziyue Huang 
AuthorDate: Mon Nov 20 05:07:10 2017 +0800

optimization for dot(csr.T, dense) = rsp (#8611)

* optimization for dot(csr.T, dense) = rsp

* remove unneccessary headers

* load balance

* minor fix and update comments

* resolve

* trigger

* trigger
---
 src/operator/tensor/dot-inl.h | 93 ---
 1 file changed, 52 insertions(+), 41 deletions(-)

diff --git a/src/operator/tensor/dot-inl.h b/src/operator/tensor/dot-inl.h
index 7ab4710..2432703 100644
--- a/src/operator/tensor/dot-inl.h
+++ b/src/operator/tensor/dot-inl.h
@@ -30,9 +30,10 @@
 #include 
 #include 
 #include 
-#include "./init_op.h"
+#include "./util/tensor_util-inl.h"
 #include "../mshadow_op.h"
 #include "../elemwise_op_common.h"
+#include "./init_op.h"
 #include "../mxnet_op.h"
 #ifdef __CUDACC__
 #include "./dot-inl.cuh"
@@ -364,19 +365,17 @@ struct DotCsrTransDnsDnsByRowBlocks {
 
 /*!
  * \brief CPU Kernel of dot(csr.T(), dns) = rsp
- * Parallelization by row blocks.
- * This kernel fills up the row_idx array of the rsp
- * with 1 for nonzero rows and 0 for zero rows.
- * The matrix will be compacted after this kernel call.
+ * Parallelization by row blocks which evenly partition the non-zero rows.
  */
 struct DotCsrTransDnsRspByRowBlocks {
   /*!
* \brief
* \param i the i-th thread
*/
-  template
+  template
   MSHADOW_CINLINE static void Map(int i,
   DType* out,
+  nnvm::dim_t* row_flg_sum,
   RType* row_idx,
   const DType* data_l,
   const IType* indptr_l,
@@ -384,21 +383,25 @@ struct DotCsrTransDnsRspByRowBlocks {
   const DType* data_r,
   const nnvm::dim_t seg_len,
   const nnvm::dim_t num_rows_l,
-  const nnvm::dim_t num_rows,
+  const nnvm::dim_t nnr,
   const nnvm::dim_t num_cols) {
 using nnvm::dim_t;
 const dim_t seg_start = i * seg_len;
-if (seg_start >= num_rows) return;
+if (seg_start >= nnr) return;
 const dim_t seg_end = (i + 1) * seg_len;
+const dim_t col_start = row_idx[seg_start];
+const dim_t col_end = seg_end >= nnr ? (row_idx[nnr-1] + 1) : 
row_idx[seg_end];
 for (dim_t j = 0; j < num_rows_l; ++j) {
   if (indptr_l[j] == indptr_l[j+1]) continue;
   const dim_t offset_r = j * num_cols;
   for (IType k = indptr_l[j]; k < indptr_l[j+1]; ++k) {
 const CType col_idx = col_idx_l[k];
-if (col_idx < seg_start || col_idx >= seg_end) continue;
-const dim_t offset_out = col_idx * num_cols;
-row_idx[col_idx] = 1;
+if (col_idx < col_start || col_idx >= col_end) continue;
+
+const nnvm::dim_t rsp_row = row_flg_sum[col_idx] - 1;
+const nnvm::dim_t offset_out = rsp_row * num_cols;
 const DType val = data_l[k];
+
 for (dim_t l = 0; l < num_cols; ++l) {
   out[offset_out+l] += data_r[offset_r+l] * val;
 }
@@ -605,43 +608,51 @@ inline void DotCsrDnsRspImpl(const OpContext& ctx,
   const TBlob col_idx_l = lhs.aux_data(csr::kIdx);
   const TBlob& data_r = rhs;
 
-  // pre-allocate spaces for ret using the dense dimension size
-  ret->CheckAndAlloc({mshadow::Shape1(lhs.shape()[1])});
-  const TBlob data_out = ret->data();
-  const TBlob row_idx_out = ret->aux_data(rowsparse::kIdx);
-
   MSHADOW_SGL_DBL_TYPE_SWITCH(data_l.type_flag_, DType, {  // data type
 MSHADOW_IDX_TYPE_SWITCH(indptr_l.type_flag_, IType, {  // indptr type
   MSHADOW_IDX_TYPE_SWITCH(col_idx_l.type_flag_, CType, {  // col idx type
-MSHADOW_IDX_TYPE_SWITCH(row_idx_out.type_flag_, RType, {  // row idx 
type
+MSHADOW_IDX_TYPE_SWITCH(ret->aux_type(rowsparse::kIdx), RType, {  // 
row idx type
+  const dim_t num_rows = lhs.shape()[1];
+  size_t workspace_size = 2 * (num_rows * sizeof(dim_t));
+  mshadow::Tensor workspace =
+ctx.requested[0].get_space_typed(
+mshadow::Shape1(workspace_size), s);
+  dim_t* row_flg = reinterpret_cast(workspace.dptr_);
+  dim_t* prefix_sum = row_flg + num_rows;
+
+  Fill(s, TBlob(row_flg, mshadow::Shape1(num_rows), 
cpu::kDevMask), kWriteTo, 0);
+  mxnet_op::Kernel::Launch(s, 

[GitHub] piiswrong commented on issue #8674: ADD CapsNet example

2017-11-19 Thread GitBox
piiswrong commented on issue #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#issuecomment-345549454
 
 
   Do the results match the original paper?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #8674: ADD CapsNet example

2017-11-19 Thread GitBox
piiswrong commented on a change in pull request #8674: ADD CapsNet example
URL: https://github.com/apache/incubator-mxnet/pull/8674#discussion_r151877975
 
 

 ##
 File path: example/capsnet/README.md
 ##
 @@ -0,0 +1,32 @@
+**CapsNet-MXNet**
+=
+
+This example is MXNet implementation of 
[CapsNet](https://arxiv.org/abs/1710.09829):  
+Sara Sabour, Nicholas Frosst, Geoffrey E Hinton. Dynamic Routing Between 
Capsules. NIPS 2017
+- The Current best test error is 0.5%  
+
+Due to the permission issue, this example is maintained in this 
[repository](https://github.com/samsungsds-rnd/capsnet.mxnet) separately.
 
 Review comment:
   what does this mean?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Fixing the monitor callback of the bucketing module. (#8696)

2017-11-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9dcdf57  Fixing the monitor callback of the bucketing module. (#8696)
9dcdf57 is described below

commit 9dcdf575a92aa710e800f36ea070cc910ea44ee7
Author: Tobias Domhan 
AuthorDate: Mon Nov 20 02:32:35 2017 +0530

Fixing the monitor callback of the bucketing module. (#8696)
---
 python/mxnet/module/bucketing_module.py | 4 
 1 file changed, 4 insertions(+)

diff --git a/python/mxnet/module/bucketing_module.py 
b/python/mxnet/module/bucketing_module.py
index 4a5330e..0bea260 100644
--- a/python/mxnet/module/bucketing_module.py
+++ b/python/mxnet/module/bucketing_module.py
@@ -92,6 +92,7 @@ class BucketingModule(BaseModule):
 self._curr_module = None
 self._curr_bucket_key = None
 self._params_dirty = False
+self._monitor = None
 
 def _reset_bind(self):
 """Internal utility function to reset binding."""
@@ -367,6 +368,8 @@ class BucketingModule(BaseModule):
 module.bind(data_shapes, label_shapes, 
self._curr_module.for_training,
 self._curr_module.inputs_need_grad,
 force_rebind=False, 
shared_module=self._buckets[self._default_bucket_key])
+if self._monitor is not None:
+module.install_monitor(self._monitor)
 self._buckets[bucket_key] = module
 
 self._curr_module = self._buckets[bucket_key]
@@ -510,5 +513,6 @@ class BucketingModule(BaseModule):
 def install_monitor(self, mon):
 """Installs monitor on all executors """
 assert self.binded
+self._monitor = mon
 for mod in self._buckets.values():
 mod.install_monitor(mon)

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" '].


[GitHub] piiswrong closed pull request #8696: Fixing the monitor callback of the bucketing module.

2017-11-19 Thread GitBox
piiswrong closed pull request #8696: Fixing the monitor callback of the 
bucketing module.
URL: https://github.com/apache/incubator-mxnet/pull/8696
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/module/bucketing_module.py 
b/python/mxnet/module/bucketing_module.py
index dd6cafb277..fa92c5d1a1 100644
--- a/python/mxnet/module/bucketing_module.py
+++ b/python/mxnet/module/bucketing_module.py
@@ -85,6 +85,7 @@ def __init__(self, sym_gen, default_bucket_key=None, 
logger=logging,
 self._curr_module = None
 self._curr_bucket_key = None
 self._params_dirty = False
+self._monitor = None
 
 def _reset_bind(self):
 """Internal utility function to reset binding."""
@@ -356,6 +357,8 @@ def switch_bucket(self, bucket_key, data_shapes, 
label_shapes=None):
 module.bind(data_shapes, label_shapes, 
self._curr_module.for_training,
 self._curr_module.inputs_need_grad,
 force_rebind=False, 
shared_module=self._buckets[self._default_bucket_key])
+if self._monitor is not None:
+module.install_monitor(self._monitor)
 self._buckets[bucket_key] = module
 
 self._curr_module = self._buckets[bucket_key]
@@ -499,5 +502,6 @@ def symbol(self):
 def install_monitor(self, mon):
 """Installs monitor on all executors """
 assert self.binded
+self._monitor = mon
 for mod in self._buckets.values():
 mod.install_monitor(mon)


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong closed pull request #8698: [COREML] Update the json getter

2017-11-19 Thread GitBox
piiswrong closed pull request #8698: [COREML] Update the json getter
URL: https://github.com/apache/incubator-mxnet/pull/8698
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tools/coreml/converter/_layers.py 
b/tools/coreml/converter/_layers.py
index fe00232828..4c5ebc6fb0 100644
--- a/tools/coreml/converter/_layers.py
+++ b/tools/coreml/converter/_layers.py
@@ -38,6 +38,30 @@ def _get_node_name(net, node_id):
 def _get_node_shape(net, node_id):
 return net['nodes'][node_id]['shape']
 
+def _get_attrs(node):
+"""get attribute dict from node
+
+This functions keeps backward compatibility
+for both attr and attrs key in the json field.
+
+Parameters
+--
+node : dict
+   The json graph Node
+
+Returns
+---
+attrs : dict
+   The attr dict, returns empty dict if
+   the field do not exist.
+"""
+if 'attrs' in node:
+return node['attrs']
+elif 'attr' in node:
+return node['attr']
+else:
+return {}
+
 
 # TODO These operators still need to be converted (listing in order of 
priority):
 # High priority:
@@ -108,7 +132,7 @@ def convert_transpose(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-param = node['attr']
+param = _get_attrs(node)
 
 axes = literal_eval(param['axes'])
 builder.add_permute(name, axes, input_name, output_name)
@@ -180,7 +204,7 @@ def convert_activation(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-mx_non_linearity = node['attr']['act_type']
+mx_non_linearity = _get_attrs(node)['act_type']
 #TODO add SCALED_TANH, SOFTPLUS, SOFTSIGN, SIGMOID_HARD, LEAKYRELU, PRELU, 
ELU, PARAMETRICSOFTPLUS, THRESHOLDEDRELU, LINEAR
 if mx_non_linearity == 'relu':
 non_linearity = 'RELU'
@@ -281,7 +305,7 @@ def convert_convolution(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-param = node['attr']
+param = _get_attrs(node)
 inputs = node['inputs']
 args, _ = module.get_params()
 
@@ -361,7 +385,7 @@ def convert_pooling(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-param = node['attr']
+param = _get_attrs(node)
 
 layer_type_mx = param['pool_type']
 if layer_type_mx == 'max':
@@ -445,9 +469,9 @@ def convert_batchnorm(net, node, module, builder):
 
 eps = 1e-3 # Default value of eps for MXNet.
 use_global_stats = False # Default value of use_global_stats for MXNet.
-if 'attr' in node:
-if 'eps' in node['attr']:
-eps = literal_eval(node['attr']['eps'])
+attrs = _get_attrs(node)
+if 'eps' in attrs:
+eps = literal_eval(attrs['eps'])
 
 args, aux = module.get_params()
 gamma = args[_get_node_name(net, inputs[1][0])].asnumpy()
@@ -511,7 +535,7 @@ def convert_deconvolution(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-param = node['attr']
+param = _get_attrs(node)
 inputs = node['inputs']
 args, _ = module.get_params()
 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: [COREML] Update the json getter (#8698)

2017-11-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 38a032c  [COREML] Update the json getter (#8698)
38a032c is described below

commit 38a032c886f56f94cdad004c89fd4e1926f85ba6
Author: Tianqi Chen 
AuthorDate: Sun Nov 19 12:53:19 2017 -0800

[COREML] Update the json getter (#8698)

* [COREML] Update the json getter

* add docstring
---
 tools/coreml/converter/_layers.py | 40 +++
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/tools/coreml/converter/_layers.py 
b/tools/coreml/converter/_layers.py
index fe00232..4c5ebc6 100644
--- a/tools/coreml/converter/_layers.py
+++ b/tools/coreml/converter/_layers.py
@@ -38,6 +38,30 @@ def _get_node_name(net, node_id):
 def _get_node_shape(net, node_id):
 return net['nodes'][node_id]['shape']
 
+def _get_attrs(node):
+"""get attribute dict from node
+
+This functions keeps backward compatibility
+for both attr and attrs key in the json field.
+
+Parameters
+--
+node : dict
+   The json graph Node
+
+Returns
+---
+attrs : dict
+   The attr dict, returns empty dict if
+   the field do not exist.
+"""
+if 'attrs' in node:
+return node['attrs']
+elif 'attr' in node:
+return node['attr']
+else:
+return {}
+
 
 # TODO These operators still need to be converted (listing in order of 
priority):
 # High priority:
@@ -108,7 +132,7 @@ def convert_transpose(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-param = node['attr']
+param = _get_attrs(node)
 
 axes = literal_eval(param['axes'])
 builder.add_permute(name, axes, input_name, output_name)
@@ -180,7 +204,7 @@ def convert_activation(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-mx_non_linearity = node['attr']['act_type']
+mx_non_linearity = _get_attrs(node)['act_type']
 #TODO add SCALED_TANH, SOFTPLUS, SOFTSIGN, SIGMOID_HARD, LEAKYRELU, PRELU, 
ELU, PARAMETRICSOFTPLUS, THRESHOLDEDRELU, LINEAR
 if mx_non_linearity == 'relu':
 non_linearity = 'RELU'
@@ -281,7 +305,7 @@ def convert_convolution(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-param = node['attr']
+param = _get_attrs(node)
 inputs = node['inputs']
 args, _ = module.get_params()
 
@@ -361,7 +385,7 @@ def convert_pooling(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-param = node['attr']
+param = _get_attrs(node)
 
 layer_type_mx = param['pool_type']
 if layer_type_mx == 'max':
@@ -445,9 +469,9 @@ def convert_batchnorm(net, node, module, builder):
 
 eps = 1e-3 # Default value of eps for MXNet.
 use_global_stats = False # Default value of use_global_stats for MXNet.
-if 'attr' in node:
-if 'eps' in node['attr']:
-eps = literal_eval(node['attr']['eps'])
+attrs = _get_attrs(node)
+if 'eps' in attrs:
+eps = literal_eval(attrs['eps'])
 
 args, aux = module.get_params()
 gamma = args[_get_node_name(net, inputs[1][0])].asnumpy()
@@ -511,7 +535,7 @@ def convert_deconvolution(net, node, module, builder):
 """
 input_name, output_name = _get_input_output_name(net, node)
 name = node['name']
-param = node['attr']
+param = _get_attrs(node)
 inputs = node['inputs']
 args, _ = module.get_params()
 



[GitHub] pracheer commented on issue #8698: [COREML] Update the json getter

2017-11-19 Thread GitBox
pracheer commented on issue #8698: [COREML] Update the json getter
URL: https://github.com/apache/incubator-mxnet/pull/8698#issuecomment-345548076
 
 
   Yes please!




[GitHub] tqchen commented on issue #8698: [COREML] Update the json getter

2017-11-19 Thread GitBox
tqchen commented on issue #8698: [COREML] Update the json getter
URL: https://github.com/apache/incubator-mxnet/pull/8698#issuecomment-345547195
 
 
   Merge this?




[GitHub] pracheer commented on issue #8684: [BugFix][CoreML Converter] Dense layers w/o bias.

2017-11-19 Thread GitBox
pracheer commented on issue #8684: [BugFix][CoreML Converter] Dense layers w/o 
bias.
URL: https://github.com/apache/incubator-mxnet/pull/8684#issuecomment-345546919
 
 
   @jiajiechen @srikris 




[GitHub] pracheer commented on issue #8703: [DISCUSSION] (when) Should we deprecate support for python2?

2017-11-19 Thread GitBox
pracheer commented on issue #8703: [DISCUSSION] (when) Should we deprecate 
support for python2?
URL: 
https://github.com/apache/incubator-mxnet/issues/8703#issuecomment-345546205
 
 
   I'm assuming there are still a lot of production customers using Python 2 who 
will continue to do so for the foreseeable future. These customers may be 
reluctant to move to Python 3 until they are really forced to (either by lack of 
support for Python 2 or by new features in Python 3). If we deprecate Python 2 
too "soon", the chances are we are going to face the wrath of such customers, 
and it may not make for a happy customer experience (unless there is a specific 
feature that can't be handled with Python 2).
   
   We can let customers/developers (and the rest of the Python ecosystem?) move to 
Python 3 on their own while we continue to support Python 2. Probably around 
2019 or early 2020 we'll see a lot of people moving to Python 3, and it may then 
be a good time to seriously consider deprecating Python 2.




[incubator-mxnet] 01/01: Revert "2bit gradient compression (#8662)"

2017-11-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch revert-8662-gc-pr
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 2e58c0e162e081f7240db24f251d65f1d60b5f86
Author: Sheng Zha 
AuthorDate: Sat Nov 18 22:59:38 2017 -0800

Revert "2bit gradient compression (#8662)"

This reverts commit a499f892c9ee6f59ccfb57c9e431c91014078891.
---
 example/image-classification/common/fit.py |  44 ++--
 example/rnn/lstm_bucketing.py  |   1 +
 include/mxnet/c_api.h  |  13 -
 include/mxnet/kvstore.h|  15 --
 python/mxnet/gluon/trainer.py  |  12 +-
 python/mxnet/kvstore.py|  62 -
 python/mxnet/module/bucketing_module.py|  17 +-
 python/mxnet/module/module.py  |  11 +-
 src/c_api/c_api.cc |  14 --
 src/kvstore/comm.h |  87 +--
 src/kvstore/gradient_compression-inl.h | 155 
 src/kvstore/gradient_compression.cc| 193 --
 src/kvstore/gradient_compression.cu|  40 ---
 src/kvstore/gradient_compression.h | 138 --
 src/kvstore/kvstore.cc |   2 +-
 src/kvstore/kvstore_dist.h | 388 -
 src/kvstore/kvstore_dist_server.h  | 143 ++-
 src/kvstore/kvstore_local.h|   7 -
 tests/nightly/dist_sync_kvstore.py | 120 +
 tests/nightly/test_kvstore.py  | 200 ++-
 tools/bandwidth/measure.py |   6 +-
 21 files changed, 167 insertions(+), 1501 deletions(-)

diff --git a/example/image-classification/common/fit.py 
b/example/image-classification/common/fit.py
index 2b002c7..51a1abe 100755
--- a/example/image-classification/common/fit.py
+++ b/example/image-classification/common/fit.py
@@ -103,11 +103,6 @@ def add_fit_args(parser):
help='1 means test reading speed without training')
 train.add_argument('--dtype', type=str, default='float32',
help='precision: float32 or float16')
-train.add_argument('--gc-type', type=str, default='none',
-   help='type of gradient compression to use, \
- takes `2bit` or `none` for now')
-train.add_argument('--gc-threshold', type=float, default=0.5,
-   help='threshold for 2bit gradient compression')
 return train
 
 def fit(args, network, data_loader, **kwargs):
@@ -119,9 +114,6 @@ def fit(args, network, data_loader, **kwargs):
 """
 # kvstore
 kv = mx.kvstore.create(args.kv_store)
-if args.gc_type != 'none':
-kv.set_gradient_compression({'type': args.gc_type,
- 'threshold': args.gc_threshold})
 
 # logging
 head = '%(asctime)-15s Node[' + str(kv.rank) + '] %(message)s'
@@ -170,10 +162,10 @@ def fit(args, network, data_loader, **kwargs):
 
 lr_scheduler  = lr_scheduler
 optimizer_params = {
-'learning_rate': lr,
-'wd' : args.wd,
-'lr_scheduler': lr_scheduler,
-'multi_precision': True}
+'learning_rate': lr,
+'wd' : args.wd,
+'lr_scheduler': lr_scheduler,
+'multi_precision': True}
 
 # Only a limited number of optimizers have 'momentum' property
 has_momentum = {'sgd', 'dcasgd', 'nag'}
@@ -203,17 +195,17 @@ def fit(args, network, data_loader, **kwargs):
 
 # run
 model.fit(train,
-  begin_epoch= args.load_epoch if args.load_epoch else 0,
-  num_epoch  = args.num_epochs,
-  eval_data  = val,
-  eval_metric= eval_metrics,
-  kvstore= kv,
-  optimizer  = args.optimizer,
-  optimizer_params   = optimizer_params,
-  initializer= initializer,
-  arg_params = arg_params,
-  aux_params = aux_params,
-  batch_end_callback = batch_end_callbacks,
-  epoch_end_callback = checkpoint,
-  allow_missing  = True,
-  monitor= monitor)
+begin_epoch= args.load_epoch if args.load_epoch else 0,
+num_epoch  = args.num_epochs,
+eval_data  = val,
+eval_metric= eval_metrics,
+kvstore= kv,
+optimizer  = args.optimizer,
+optimizer_params   = optimizer_params,
+initializer= initializer,
+arg_params = arg_params,
+aux_params = aux_params,
+batch_end_callback = batch_end_callbacks,
+epoch_end_callback = checkpoint,
+allow_missing  = True,
+monitor= monitor)
diff --git a/example/rnn/lstm_bucketing.py 

[incubator-mxnet] branch revert-8662-gc-pr updated (eff6bb6 -> 2e58c0e)

2017-11-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch revert-8662-gc-pr
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


omit eff6bb6  Revert "2bit gradient compression (#8662)"
omit ff1af09  Revert "Restored some copyright attribution that were 
accidentally removed. (#8688)"
 add d2a856a  Restored some copyright attribution that were accidentally 
removed. (#8688)
 add bf7a0ff  Invert environment check (#8706)
 add 6ca944b  disable test causing mklml-cpu to fail (#8713)
 new 2e58c0e  Revert "2bit gradient compression (#8662)"

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (eff6bb6)
\
 N -- N -- N   refs/heads/revert-8662-gc-pr (2e58c0e)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 amalgamation/dmlc-minimum0.cc|  1 +
 cmake/Modules/FindJeMalloc.cmake | 10 +-
 cpp-package/include/mxnet-cpp/MxNetCpp.h |  1 +
 cpp-package/include/mxnet-cpp/base.h |  1 +
 cpp-package/include/mxnet-cpp/executor.h |  1 +
 cpp-package/include/mxnet-cpp/initializer.h  |  1 +
 cpp-package/include/mxnet-cpp/io.h   |  1 +
 cpp-package/include/mxnet-cpp/kvstore.h  |  1 +
 cpp-package/include/mxnet-cpp/lr_scheduler.h |  1 +
 cpp-package/include/mxnet-cpp/metric.h   |  1 +
 cpp-package/include/mxnet-cpp/model.h|  1 +
 cpp-package/include/mxnet-cpp/monitor.h  |  1 +
 cpp-package/include/mxnet-cpp/ndarray.h  |  1 +
 cpp-package/include/mxnet-cpp/op_map.h   |  1 +
 cpp-package/include/mxnet-cpp/op_suppl.h |  1 +
 cpp-package/include/mxnet-cpp/op_util.h  |  1 +
 cpp-package/include/mxnet-cpp/operator.h |  1 +
 cpp-package/include/mxnet-cpp/optimizer.h|  1 +
 cpp-package/include/mxnet-cpp/shape.h|  1 +
 cpp-package/include/mxnet-cpp/symbol.h   |  1 +
 docker/install/scala.sh  |  2 +-
 example/image-classification/symbols/vgg.py  | 16 
 include/mxnet/base.h |  1 +
 include/mxnet/c_api.h|  1 +
 include/mxnet/c_predict_api.h|  1 +
 include/mxnet/engine.h   |  1 +
 include/mxnet/executor.h |  1 +
 include/mxnet/io.h   |  1 +
 include/mxnet/kvstore.h  |  1 +
 include/mxnet/ndarray.h  |  1 +
 include/mxnet/op_attr_types.h|  1 +
 include/mxnet/operator.h |  1 +
 include/mxnet/operator_util.h|  1 +
 include/mxnet/resource.h |  1 +
 include/mxnet/storage.h  |  1 +
 include/mxnet/tensor_blob.h  |  1 +
 perl-package/AI-MXNet/lib/AI/MXNet/Types.pm  |  2 +-
 plugin/caffe/caffe_blob.cc   |  1 +
 plugin/caffe/caffe_blob.h|  1 +
 plugin/caffe/caffe_common.cc |  1 +
 plugin/caffe/caffe_common.h  |  1 +
 plugin/caffe/caffe_data_iter.cc  |  1 +
 plugin/caffe/caffe_fieldentry.h  |  1 +
 plugin/caffe/caffe_loss-inl.h|  1 +
 plugin/caffe/caffe_loss.cc   |  1 +
 plugin/caffe/caffe_loss.cu   |  1 +
 plugin/caffe/caffe_op-inl.h  |  1 +
 plugin/caffe/caffe_op.cc |  1 +
 plugin/caffe/caffe_op.cu |  1 +
 plugin/caffe/caffe_stream.cc |  1 +
 plugin/caffe/caffe_stream.h  |  1 +
 plugin/opencv/cv_api.cc  |  1 +
 

[incubator-mxnet] branch master updated: disable test causing mklml-cpu to fail (#8713)

2017-11-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 6ca944b  disable test causing mklml-cpu to fail (#8713)
6ca944b is described below

commit 6ca944baa8b10214990a9db17cac1c34fb4a927d
Author: mbaijal <30911248+mbai...@users.noreply.github.com>
AuthorDate: Sun Nov 19 12:06:51 2017 -0800

disable test causing mklml-cpu to fail (#8713)
---
 tests/python/unittest/test_operator.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index 18544ba..55a3a57 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -1043,6 +1043,7 @@ def test_convolution_grouping():
 np.testing.assert_allclose(arr1.asnumpy(), arr2.asnumpy(), rtol=1e-3, 
atol=1e-4)
 
 
+@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/incubator-mxnet/issues/8712")
 def test_depthwise_convolution():
 for num_base in [1, 4, 16, 32, 64]:
 for kernel in [(3,3), (5,5)]:



[GitHub] szha closed pull request #8713: Disable 'test_operator.test_depthwise_convolution' which fails in [Python2: MKLML-CPU] and [Python3: MKLML-CPU]

2017-11-19 Thread GitBox
szha closed pull request #8713: Disable 
'test_operator.test_depthwise_convolution' which fails in [Python2: MKLML-CPU] 
and [Python3: MKLML-CPU]
URL: https://github.com/apache/incubator-mxnet/pull/8713
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index 18544bade5..55a3a57218 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -1043,6 +1043,7 @@ def test_convolution_grouping():
 np.testing.assert_allclose(arr1.asnumpy(), arr2.asnumpy(), rtol=1e-3, 
atol=1e-4)
 
 
+@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/incubator-mxnet/issues/8712")
 def test_depthwise_convolution():
 for num_base in [1, 4, 16, 32, 64]:
 for kernel in [(3,3), (5,5)]:


 




[GitHub] larroy commented on issue #8702: [DISCUSSION] Should we deprecate Makefile and only use CMake?

2017-11-19 Thread GitBox
larroy commented on issue #8702: [DISCUSSION] Should we deprecate Makefile and 
only use CMake?
URL: 
https://github.com/apache/incubator-mxnet/issues/8702#issuecomment-345544511
 
 
   @piiswrong CMake has become the de-facto standard for building C/C++ projects. 
Installing it on those platforms seems to be just `make && make install`. It is 
indeed an additional dependency, but Make is a separate program as well. Why do 
you say it's painful? The CMake installation doesn't seem too complex.




[GitHub] larroy commented on issue #8702: [DISCUSSION] Should we deprecate Makefile and only use CMake?

2017-11-19 Thread GitBox
larroy commented on issue #8702: [DISCUSSION] Should we deprecate Makefile and 
only use CMake?
URL: 
https://github.com/apache/incubator-mxnet/issues/8702#issuecomment-345544304
 
 
   +1 for CMake




[GitHub] cjolivier01 commented on issue #8702: [DISCUSSION] Should we deprecate Makefile and only use CMake?

2017-11-19 Thread GitBox
cjolivier01 commented on issue #8702: [DISCUSSION] Should we deprecate Makefile 
and only use CMake?
URL: 
https://github.com/apache/incubator-mxnet/issues/8702#issuecomment-345541401
 
 
   By the way, CMake already has the lint stuff, or more likely a partial/outdated 
version of it.
   Targets are mxnet_lint, dmlc_lint, mshadow_lint.
   
   I also have CMake packages in another project I've made for gcov analysis that 
I could pull in, and maybe help the local infra people add them to this project.
   




[GitHub] eric-haibin-lin opened a new pull request #8717: fix group2ctx with null reqs

2017-11-19 Thread GitBox
eric-haibin-lin opened a new pull request #8717: fix group2ctx with null reqs
URL: https://github.com/apache/incubator-mxnet/pull/8717
 
 
   ## Description ##
   It looks like the index into `arg_grad_ctxes` was not correct, because 
`g.outputs` doesn't include the arg_grads whose grad_req is null (see the 
sketch after this PR body).
   
   @reminisce @piiswrong @tqchen @ZiyueHuang 
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] For user-facing API changes, API doc string has been updated. For new 
C++ functions in header files, their functionalities and arguments are 
well-documented. 
   - [x] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
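   
   To make the description concrete, here is a minimal sketch of the kind of setup 
the fix targets; this is not the regression test from the PR, and the shapes, 
group names, and contexts below are invented:
   
   ```
import mxnet as mx

# Two variables placed in different ctx groups; a gradient is requested for
# only one of them, so the other's grad_req is 'null'.
with mx.AttrScope(ctx_group='dev1'):
    a = mx.sym.Variable('a')
with mx.AttrScope(ctx_group='dev2'):
    b = mx.sym.Variable('b')
c = a + b

exe = c.simple_bind(ctx=mx.cpu(),
                    a=(2, 2), b=(2, 2),                       # input shapes
                    group2ctx={'dev1': mx.cpu(0), 'dev2': mx.cpu(1)},
                    grad_req={'a': 'null', 'b': 'write'})

# 'a' gets no gradient array (None in grad_arrays); the fix ensures the
# remaining gradients still land on the contexts given by group2ctx.
print(exe.grad_arrays)
   ```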
   




[GitHub] sbodenstein commented on issue #7931: MKL-DNN integration: request for reviews

2017-11-19 Thread GitBox
sbodenstein commented on issue #7931: MKL-DNN integration: request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-345538097
 
 
   @ykim362: do you know whether bugs like the ResNet convergence bug are still 
unsolved with v0.11 MKL-DNN?




[GitHub] tqchen commented on issue #8702: [DISCUSSION] Should we deprecate Makefile and only use CMake?

2017-11-19 Thread GitBox
tqchen commented on issue #8702: [DISCUSSION] Should we deprecate Makefile and 
only use CMake?
URL: 
https://github.com/apache/incubator-mxnet/issues/8702#issuecomment-345537323
 
 
   CMake is great, and we can still use the configuration-based tricks in 
https://github.com/dmlc/tvm/blob/master/CMakeLists.txt#L10 to rely on a 
config.cmake to customize the build options.
   
   We might want to leave simple rules like lint in the Makefile (or as a Python 
script), since it does not really help to bring them into CMake. The only problem 
we might see is that it increases the burden of building on embedded systems like 
the Raspberry Pi, where the available CMake version is not great. That could be 
resolved by adding clear instructions for building CMake from source.
   
   






[GitHub] ZiyueHuang commented on issue #8611: optimization for dot(csr.T, dense) = rsp

2017-11-19 Thread GitBox
ZiyueHuang commented on issue #8611: optimization for dot(csr.T, dense) = rsp
URL: https://github.com/apache/incubator-mxnet/pull/8611#issuecomment-345533714
 
 
   Added a benchmark for the `n=2` case.
   
   Before,
   
   ```
   python dot.py --num-omp-threads 16
   
 mxnet sparse dot benchmark: dot(csr, default) = default
 (matrix multiplication: (m x k)^T * (k x n) = m x n)
   
lhs_density(%)  rhs_density(%)contextmkn  
t_sparse(ms)   t_dense(ms)  speedup
   1.0   100.0 cpu(0)  256  1002
 41.61 36.01 0.87
   ```
   
   After,
   
   ```
   python dot.py --num-omp-threads 16
   
 mxnet sparse dot benchmark: dot(csr, default) = default
 (matrix multiplication: (m x k)^T * (k x n) = m x n)
   
lhs_density(%)  rhs_density(%)contextmkn  
t_sparse(ms)   t_dense(ms)  speedup
   1.0   100.0 cpu(0)  256  1002
 14.44 32.46 2.25
   ```
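   
   For anyone who wants to poke at this locally, a rough sketch of timing the same 
kind of operation through the sparse ndarray API follows; this is not the dot.py 
benchmark script from the repo, and the shapes, density, and iteration count 
below are made up:
   
   ```
import time
import numpy as np
import scipy.sparse as sp
import mxnet as mx

# dot(csr.T, dense): lhs is a (k x m) CSR matrix, rhs a (k x n) dense matrix,
# so the transposed product is (m x n). Values here are illustrative only.
k, m, n, density = 100, 25600, 2, 0.01
lhs_sp = sp.random(k, m, density=density, format='csr', dtype=np.float32)
lhs = mx.nd.sparse.csr_matrix((lhs_sp.data, lhs_sp.indices, lhs_sp.indptr),
                              shape=(k, m))
rhs = mx.nd.random.uniform(shape=(k, n))

mx.nd.waitall()                  # finish setup before starting the timer
start = time.time()
for _ in range(100):
    out = mx.nd.sparse.dot(lhs, rhs, transpose_a=True)
mx.nd.waitall()                  # wait for MXNet's async engine to finish
print('avg time per call: %.3f ms' % ((time.time() - start) / 100 * 1000))
   ```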
   
   



