[GitHub] indhub closed pull request #8327: update ps lite

2017-10-18 Thread git
indhub closed pull request #8327: update ps lite
URL: https://github.com/apache/incubator-mxnet/pull/8327
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/ps-lite b/ps-lite
index acdb698fa3..bdd4c67e9e 160000
--- a/ps-lite
+++ b/ps-lite
@@ -1 +1 @@
-Subproject commit acdb698fa3bb80929ef83bb37c705f025e119b82
+Subproject commit bdd4c67e9e34dc0b8350ce306b0caa737eb31c83


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha closed pull request #8329: fluent methods for missed ops

2017-10-17 Thread git
szha closed pull request #8329: fluent methods for missed ops
URL: https://github.com/apache/incubator-mxnet/pull/8329
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/api/python/ndarray/ndarray.md b/docs/api/python/ndarray/ndarray.md
index 615b9dc5a7..09564c2f20 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -125,6 +125,7 @@ The `ndarray` package provides several classes:
 
 NDArray.T
 NDArray.reshape
+NDArray.reshape_like
 NDArray.flatten
 NDArray.expand_dims
 NDArray.split
@@ -194,6 +195,7 @@ The `ndarray` package provides several classes:
 NDArray.topk
 NDArray.argmax
 NDArray.argmin
+NDArray.argmax_channel
 ```
 
 ### Arithmetic operations
@@ -266,7 +268,22 @@ The `ndarray` package provides several classes:
 
 NDArray.sqrt
 NDArray.rsqrt
+NDArray.cbrt
+NDArray.rcbrt
 NDArray.square
+NDArray.reciprocal
+```
+
+## Basic neural network functions
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+NDArray.relu
+NDArray.sigmoid
+NDArray.softmax
+NDArray.log_softmax
 ```
 
 ### In-place arithmetic operations
@@ -358,6 +375,7 @@ The `ndarray` package provides several classes:
 
 cast
 reshape
+reshape_like
 flatten
 expand_dims
 ```
@@ -394,6 +412,7 @@ The `ndarray` package provides several classes:
 
 concat
 split
+stack
 ```
 
 ### Indexing routines
@@ -514,11 +533,13 @@ The `ndarray` package provides several classes:
 power
 sqrt
 rsqrt
+cbrt
+rcbrt
 square
 reciprocal
 ```
 
-### Logic functions
+### Comparison
 
 ```eval_rst
 .. autosummary::
@@ -559,6 +580,18 @@ The `ndarray` package provides several classes:
 argsort
 argmax
 argmin
+argmax_channel
+```
+
+### Sequence operation
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+SequenceLast
+SequenceMask
+SequenceReverse
 ```
 
 ### Miscellaneous
@@ -592,6 +625,8 @@ The `ndarray` package provides several classes:
 SoftmaxOutput
 softmax
 log_softmax
+relu
+sigmoid
 ```
 
 ### More
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index 7570e18ba7..e93976d603 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -143,9 +143,23 @@ Composite multiple symbols into a new one by an operator.
 
 Symbol.sqrt
 Symbol.rsqrt
+Symbol.cbrt
+Symbol.rcbrt
 Symbol.square
 ```
 
+## Basic neural network functions
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+Symbol.relu
+Symbol.sigmoid
+Symbol.softmax
+Symbol.log_softmax
+```
+
 ### Comparison operators
 
 ```eval_rst
@@ -178,6 +192,7 @@ Composite multiple symbols into a new one by an operator.
 
 Symbol.astype
 Symbol.reshape
+Symbol.reshape_like
 Symbol.flatten
 Symbol.expand_dims
 ```
@@ -246,6 +261,7 @@ Composite multiple symbols into a new one by an operator.
 Symbol.topk
 Symbol.argmax
 Symbol.argmin
+Symbol.argmax_channel
 ```
 
 ### Query information
@@ -355,6 +371,7 @@ Composite multiple symbols into a new one by an operator.
 
 cast
 reshape
+reshape_like
 flatten
 expand_dims
 ```
@@ -391,6 +408,7 @@ Composite multiple symbols into a new one by an operator.
 
 concat
 split
+stack
 ```
 
 ### Indexing routines
@@ -424,7 +442,6 @@ Composite multiple symbols into a new one by an operator.
 broadcast_div
 broadcast_mod
 negative
-reciprocal
 dot
 batch_dot
 add_n
@@ -492,7 +509,6 @@ Composite multiple symbols into a new one by an operator.
 trunc
 ```
 
-
 ### Exponents and logarithms
 
 ```eval_rst
@@ -519,9 +535,10 @@ Composite multiple symbols into a new one by an operator.
 cbrt
 rcbrt
 square
+reciprocal
 ```
 
-### Logic functions
+### Comparison
 
 ```eval_rst
 .. autosummary::
@@ -534,6 +551,7 @@ Composite multiple symbols into a new one by an operator.
 broadcast_lesser
 broadcast_lesser_equal
 ```
+
 ### Random sampling
 
 ```eval_rst
@@ -561,6 +579,18 @@ Composite multiple symbols into a new one by an operator.
 argsort
 argmax
 argmin
+argmax_channel
+```
+
+### Sequence operation
+
+```eval_rst
+.. autosummary::
+:nosignatures:
+
+SequenceLast
+SequenceMask
+SequenceReverse
 ```
 
 ### Miscellaneous
@@ -596,6 +626,8 @@ Composite multiple symbols into a new one by an operator.
 SoftmaxOutput
 softmax
 log_softmax
+relu
+sigmoid
 ```
 
 ### More
diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py
index 2f9972b21b..1cd9f40e52 100644
--- a/python/mxnet/ndarray/ndarray.py
+++ b/python/mxnet/ndarray/ndarr
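For readers skimming the newly documented operators, `cbrt` and `rcbrt` are the elementwise cube root and its reciprocal; a quick NumPy sketch of the semantics (MXNet itself is not required for this illustration):

```python
import numpy as np

x = np.array([1.0, 8.0, 27.0])

cbrt = np.cbrt(x)         # elementwise cube root
rcbrt = 1.0 / np.cbrt(x)  # reciprocal cube root

print(cbrt)   # [1. 2. 3.]
print(rcbrt)  # 1, 0.5, 0.333...
```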

[GitHub] szha closed pull request #8320: Update rnn.md

2017-10-17 Thread git
szha closed pull request #8320: Update rnn.md
URL: https://github.com/apache/incubator-mxnet/pull/8320
 
 
   




[GitHub] indhub closed pull request #8300: fixed broken links. https was pointing to http for mxnet.io

2017-10-17 Thread git
indhub closed pull request #8300: fixed broken links. https was pointing to 
http for mxnet.io
URL: https://github.com/apache/incubator-mxnet/pull/8300
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/tutorials/r/symbol.md b/docs/tutorials/r/symbol.md
index 63f3a53bca..6ab4dc2d3d 100644
--- a/docs/tutorials/r/symbol.md
+++ b/docs/tutorials/r/symbol.md
@@ -104,7 +104,7 @@ In the example, *net* is used as a function to apply to an 
existing symbol
 
 ## Training a Neural Net
 
-The [model API](../../../R-package/R/model.R) is a thin wrapper around the symbolic executors to support neural net training.
+The [model API](https://github.com/apache/incubator-mxnet/blob/master/R-package/R/model.R) is a thin wrapper around the symbolic executors to support neural net training.
 
 We encourage you to read [Symbolic Configuration and Execution in Pictures for python package](../../api/python/symbol_in_pictures/symbol_in_pictures.md)for a detailed explanation of concepts in pictures.
 
diff --git a/docs/tutorials/sparse/row_sparse.md b/docs/tutorials/sparse/row_sparse.md
index e2f0a12c0f..6a69341da9 100644
--- a/docs/tutorials/sparse/row_sparse.md
+++ b/docs/tutorials/sparse/row_sparse.md
@@ -271,7 +271,7 @@ rsp_retained = mx.nd.sparse.retain(rsp, mx.nd.array([0, 1]))
 
 ## Sparse Operators and Storage Type Inference
 
-Operators that have specialized implementation for sparse arrays can be accessed in ``mx.nd.sparse``. You can read the [mxnet.ndarray.sparse API documentation](https://mxnet.io/versions/master/api/python/ndarray/sparse.html) to find what sparse operators are available.
+Operators that have specialized implementation for sparse arrays can be accessed in ``mx.nd.sparse``. You can read the [mxnet.ndarray.sparse API documentation](http://mxnet.io/versions/master/api/python/ndarray/sparse.html) to find what sparse operators are available.
 
 
 ```python
diff --git a/docs/tutorials/sparse/train.md b/docs/tutorials/sparse/train.md
index d6e3f4e82a..22ce039ee7 100644
--- a/docs/tutorials/sparse/train.md
+++ b/docs/tutorials/sparse/train.md
@@ -99,7 +99,7 @@ f = mx.sym.sparse.elemwise_add(c, c)
 ### Storage Type Inference
 
 What will be the output storage types of sparse symbols? In MXNet, for any sparse symbol, the result storage types are inferred based on storage types of inputs.
-You can read the [Sparse Symbol API](https://mxnet.io/versions/master/api/python/symbol/sparse.html) documentation to find what output storage types are. In the example below we will try out the storage types introduced in the Row Sparse and Compressed Sparse Row tutorials: `default` (dense), `csr`, and `row_sparse`.
+You can read the [Sparse Symbol API](http://mxnet.io/versions/master/api/python/symbol/sparse.html) documentation to find what output storage types are. In the example below we will try out the storage types introduced in the Row Sparse and Compressed Sparse Row tutorials: `default` (dense), `csr`, and `row_sparse`.
 
 
 ```python


 




[GitHub] astonzhang closed pull request #8324: Fix typo in Gluon L1loss

2017-10-17 Thread git
astonzhang closed pull request #8324: Fix typo in Gluon L1loss
URL: https://github.com/apache/incubator-mxnet/pull/8324
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/python/mxnet/gluon/loss.py b/python/mxnet/gluon/loss.py
index c8cdbede3d..8d4f151f23 100644
--- a/python/mxnet/gluon/loss.py
+++ b/python/mxnet/gluon/loss.py
@@ -128,7 +128,7 @@ class L1Loss(Loss):
 """Calculates the mean absolute error between output and label:
 
 .. math::
-L = \\frac{1}{2}\\sum_i \\vert {output}_i - {label}_i \\vert.
+L = \\sum_i \\vert {output}_i - {label}_i \\vert.
 
 Output and label must have the same shape.
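The corrected formula is easy to sanity-check numerically; a small NumPy sketch (illustrative only, not the Gluon implementation, which additionally supports sample weighting and batch averaging):

```python
import numpy as np

output = np.array([1.0, 2.0, 3.0])
label = np.array([1.5, 2.0, 1.0])

# L1 loss per the corrected docstring: sum of absolute differences,
# with no 1/2 factor in front
loss = np.sum(np.abs(output - label))
print(loss)  # 2.5
```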
 


 




[GitHub] mli closed issue #8319: [WIP-NewFeature] ONNX support for MXNet

2017-10-17 Thread git
mli closed issue #8319: [WIP-NewFeature] ONNX support for MXNet
URL: https://github.com/apache/incubator-mxnet/issues/8319
 
 
   




[GitHub] cjolivier01 closed pull request #8311: Revert "[CMAKE] Fix windows cmake build"

2017-10-17 Thread git
cjolivier01 closed pull request #8311: Revert "[CMAKE] Fix windows cmake build"
URL: https://github.com/apache/incubator-mxnet/pull/8311
 
 
   




[GitHub] KellenSunderland commented on issue #8125: Enable smoothing in softmax operator

2017-10-17 Thread git
KellenSunderland commented on issue #8125: Enable smoothing in softmax operator
URL: https://github.com/apache/incubator-mxnet/pull/8125#issuecomment-337267387
 
 
   Thanks for getting this one in, big impact for our team.  
 



[GitHub] cjolivier01 opened a new pull request #8316: Fix unused type warning

2017-10-17 Thread git
cjolivier01 opened a new pull request #8316: Fix unused type warning
URL: https://github.com/apache/incubator-mxnet/pull/8316
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated.
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   
 



[GitHub] cjolivier01 closed pull request #8277: v0.12 regression: Fix registration of children for Block

2017-10-17 Thread git
cjolivier01 closed pull request #8277: v0.12 regression: Fix registration of 
children for Block
URL: https://github.com/apache/incubator-mxnet/pull/8277
 
 
   
 



[GitHub] cjolivier01 closed pull request #8125: Enable smoothing in softmax operator

2017-10-17 Thread git
cjolivier01 closed pull request #8125: Enable smoothing in softmax operator
URL: https://github.com/apache/incubator-mxnet/pull/8125
 
 
   
 



[GitHub] cjolivier01 closed pull request #8301: Preparing for 0.12.0.rc0: Final changes before RC

2017-10-17 Thread git
cjolivier01 closed pull request #8301: Preparing for 0.12.0.rc0: Final changes 
before RC
URL: https://github.com/apache/incubator-mxnet/pull/8301
 
 
   
 



[GitHub] cjolivier01 commented on issue #8311: Revert "[CMAKE] Fix windows cmake build"

2017-10-17 Thread git
cjolivier01 commented on issue #8311: Revert "[CMAKE] Fix windows cmake build"
URL: https://github.com/apache/incubator-mxnet/pull/8311#issuecomment-337251807
 
 
   Not yet
 



[GitHub] ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-17 Thread git
ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, 
mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145037719
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out,
+                                  const DType* in, const nnvm::dim_t ncols) {
+    if (in[i] >= static_cast<DType>(ncols)) out[0] = 1;
+  }
+};
+
+template<typename xpu>
+void CheckFormatWrapper(const RunContext &rctx, const NDArray *input,
+                        NDArray *cpu_ret, const bool &full_check);
+
+template<typename xpu>
+void CheckFormatImpl(const RunContext &rctx, const NDArray *input,
+                     NDArray *cpu_ret, const bool &full_check) {
 
 Review comment:
   I think `stream->wait` can be called at bottom only once, since `Eval` and 
`Copy` use the same stream.
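For context, the kernels in this diff flag a CSR array as malformed when its `indptr`/`idx` invariants are violated; a rough NumPy sketch of the same conditions (function and variable names here are illustrative, not MXNet API):

```python
import numpy as np

def csr_is_valid(indptr, indices, num_cols):
    """Mirror the indptr_check / idx_check conditions from the diff above."""
    indptr = np.asarray(indptr)
    indices = np.asarray(indices)
    # indptr must start at 0, be non-decreasing, and end at len(indices)
    if indptr[0] != 0 or indptr[-1] != len(indices):
        return False
    if np.any(np.diff(indptr) < 0):
        return False
    # every column index must be strictly less than the number of columns
    if indices.size and indices.max() >= num_cols:
        return False
    return True

print(csr_is_valid([0, 2, 3], [0, 2, 1], 3))  # True
print(csr_is_valid([0, 1, 3], [0, 5, 1], 3))  # False: column index out of range
```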
 


[GitHub] larroy commented on issue #8292: mx.nd.array indexing broken in armv7 / raspberrypi / jessie 8.0 (5 dimensional tensor)

2017-10-17 Thread git
larroy commented on issue #8292: mx.nd.array indexing broken in armv7 / 
raspberrypi / jessie 8.0 (5 dimensional tensor)
URL: https://github.com/apache/incubator-mxnet/issues/8292#issuecomment-337184148
 
 
   I'm looking at this issue.
 



[GitHub] larroy commented on a change in pull request #8313: Ci test randomness

2017-10-17 Thread git
larroy commented on a change in pull request #8313: Ci test randomness
URL: https://github.com/apache/incubator-mxnet/pull/8313#discussion_r145079970
 
 

 ##
 File path: python/mxnet/test_utils.py
 ##
 @@ -1538,3 +1546,53 @@ def discard_stderr():
 finally:
 os.dup2(old_stderr, stderr_fileno)
 bit_bucket.close()
+
+@contextmanager
+def np_random_seed(seed=None):
+    """
+    Runs a code block with a new state of np.random.
+    To impose rng determinism, invoke e.g. as in:
+
+    with np_random_seed(1234):
+        ...
+
+    To impose rng non-determinism, invoke as in:
+
+    with np_random_seed():
+        ...
+
+    """
+    try:
+        saved_rng_state = np.random.get_state()
+        np.random.seed(seed)
+        yield
+    finally:
+        # Reinstate prior state of np.random
+        np.random.set_state(saved_rng_state)
+
+# Set seed and output to stderr (to avoid default nosetests filtering of stdout)
+def set_np_random_seed(seed=None, ostream=sys.stderr):
+    """Set the np.random seed and announce the value to an output stream
+
+    Parameters
+    ----------
+    seed: int
+        Seed to pass to np.random.seed().  Should be None to set and output
+        a randomly chosen value.
+    ostream :
+        Stream to announce the new seed value to.
+
+    The expected use of this function is to set the seed globally before
+    a suite of tests and output the set value to help reproduce any failures
+    that are dependent on the random data.  To fix the seed for a single test
+    without modifying the randomness of subsequent tests in the same file,
+    use 'np_random_seed'.
+    """
+    if seed is None:
+        seed = np.random.randint(0, np.iinfo(np.uint32).max)
+    if ostream is not None:
+        ostream.write('Setting np.random seed to %s.\n' % seed)
 
 Review comment:
   Why not use logging? I think it is better than writing to stdout or stderr in general.
 


[GitHub] larroy commented on issue #8313: Ci test randomness

2017-10-17 Thread git
larroy commented on issue #8313: Ci test randomness
URL: https://github.com/apache/incubator-mxnet/pull/8313#issuecomment-337179592
 
 
   LGTM, modulo the comments below.
 



[GitHub] larroy commented on a change in pull request #8313: Ci test randomness

2017-10-17 Thread git
larroy commented on a change in pull request #8313: Ci test randomness
URL: https://github.com/apache/incubator-mxnet/pull/8313#discussion_r145079333
 
 

 ##
 File path: python/mxnet/test_utils.py
 ##
 @@ -830,6 +834,8 @@ def check_numeric_gradient(sym, location, aux_states=None, 
numeric_eps=1e-3, rto
 if ctx is None:
 ctx = default_context()
 
+_rng = get_rng()
 
 Review comment:
   Why the `_` prefix? It is not a member, is it?
 



[GitHub] larroy commented on issue #8313: Ci test randomness

2017-10-17 Thread git
larroy commented on issue #8313: Ci test randomness
URL: https://github.com/apache/incubator-mxnet/pull/8313#issuecomment-337179592
 
 
   LGTM, modulo the comments above.
 



[GitHub] miraclewkf closed issue #8315: There is a bug in metric.py

2017-10-17 Thread git
miraclewkf closed issue #8315: There is a bug in metric.py
URL: https://github.com/apache/incubator-mxnet/issues/8315
 
 
   
 



[GitHub] miraclewkf opened a new issue #8315: There is a bug in metric.py

2017-10-17 Thread git
miraclewkf opened a new issue #8315: There is a bug in metric.py
URL: https://github.com/apache/incubator-mxnet/issues/8315
 
 
   In the base class `EvalMetric(object)`, there is a bug in the `update_dict` function:
   ```
   def update_dict(self, label, pred):
       """Update the internal evaluation with named label and pred

       Parameters
       ----------
       labels : OrderedDict of str -> NDArray
           name to array mapping for labels.
       preds : list of NDArray
           name to array mapping of predicted outputs.
       """
       if self.output_names is not None:
           **pred = [pred[name] for name in self.output_names]**
       else:
           pred = list(pred.values())

       if self.label_names is not None:
           **label = [label[name] for name in self.label_names]**
       else:
           label = list(label.values())

       self.update(label, pred)
   ```
   The bug: if `self.output_names` is not None, `pred = [pred[name] for name
   in self.output_names]` fails, because `self.output_names` is an `int` object
   (for example 2), and an `int` is not iterable.
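A minimal standalone reproduction of the failure mode described (names hypothetical, independent of MXNet):

```python
pred = {'out1': [1, 2], 'out2': [3, 4]}
output_names = 2  # an int, as in the report, rather than a list of names

try:
    selected = [pred[name] for name in output_names]
except TypeError as err:
    print(err)  # 'int' object is not iterable

# With a proper list of names the comprehension works as intended:
selected = [pred[name] for name in ['out1', 'out2']]
print(selected)  # [[1, 2], [3, 4]]
```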
 


[GitHub] piiswrong commented on issue #8311: Revert "[CMAKE] Fix windows cmake build"

2017-10-17 Thread git
piiswrong commented on issue #8311: Revert "[CMAKE] Fix windows cmake build"
URL: https://github.com/apache/incubator-mxnet/pull/8311#issuecomment-337143272
 
 
   So are we merging this or not?
 



[GitHub] edmBernard commented on issue #8291: Import error in SSD example

2017-10-17 Thread git
edmBernard commented on issue #8291: Import error in SSD example
URL: https://github.com/apache/incubator-mxnet/issues/8291#issuecomment-337142762
 
 
   @agataradys it seems the SSD example has been ported to Python 3;
   your error comes from the difference in import behaviour between Python 2 and
   Python 3. Either switch to Python 3, or try adding this at the
   beginning of the file: `from __future__ import absolute_import`
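To illustrate the behaviour difference (a hedged sketch; the package layout below is invented for demonstration): under Python 2's implicit relative imports, a module named like a stdlib module inside the same package shadows the stdlib one, which `from __future__ import absolute_import` prevents:

```python
import os
import sys
import tempfile
import textwrap

# Create a throwaway package containing a module named `json.py` that would
# shadow the stdlib `json` under Python 2's implicit relative imports.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, 'pkg')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'json.py'), 'w') as f:
    f.write('VALUE = "local shadow"\n')
with open(os.path.join(pkg, 'mod.py'), 'w') as f:
    f.write(textwrap.dedent('''\
        from __future__ import absolute_import  # on Py2, forces the stdlib json
        import json
        RESOLVED = getattr(json, 'VALUE', 'stdlib json')
    '''))

sys.path.insert(0, tmp)
from pkg import mod
print(mod.RESOLVED)  # 'stdlib json' -- the local shadow is ignored
```

On Python 3 absolute imports are already the default, which is why porting the example (or adding the future import on Python 2) resolves the error.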
 



[GitHub] DickJC123 commented on issue #8313: Ci test randomness

2017-10-17 Thread git
DickJC123 commented on issue #8313: Ci test randomness
URL: https://github.com/apache/incubator-mxnet/pull/8313#issuecomment-337142795
 
 
   Tests that are downstream from a test that previously set a seed will now 
see random data, where before they would not have.  I also took out a global 
seed(42) call in test_operator_gpu.py, so some of those tests will see 
randomness for the first time.  
   
   It's not too hard to make these tests robust. Takes me less than an hour per
   failure to understand the reason and correct it. With this PR, we'll be able
   to reinvoke any failure and add instrumentation to understand it.
   
   
   > On Oct 16, 2017, at 10:28 PM, Chris Olivier  
wrote:
   > 
   > Does this change the random behavior of tests which don't call one of your new functions?
   > 
   > ?
   > You are receiving this because you authored the thread.
   > Reply to this email directly, view it on GitHub, or mute the thread.
   > 
   
 



[GitHub] ZiyueHuang commented on issue #8225: eye for dense and sparse

2017-10-17 Thread git
ZiyueHuang commented on issue #8225: eye for dense and sparse
URL: https://github.com/apache/incubator-mxnet/pull/8225#issuecomment-337139330
 
 
   Why does the build fail here? Do you have any idea? @eric-haibin-lin @piiswrong
   
   This code compiles successfully and passes the unit tests on my machine.
 



[GitHub] chenxu31 commented on issue #7590: Gradient function not returning enough gradient

2017-10-16 Thread git
chenxu31 commented on issue #7590: Gradient function not returning enough 
gradient
URL: https://github.com/apache/incubator-mxnet/issues/7590#issuecomment-337135670
 
 
   I have had the same problem, any solution?
 



[GitHub] ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, 
mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145037719
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out,
+                                  const DType* in, const nnvm::dim_t ncols) {
+    if (in[i] >= static_cast<DType>(ncols)) out[0] = 1;
+  }
+};
+
+template<typename xpu>
+void CheckFormatWrapper(const RunContext &rctx, const NDArray *input,
+                        NDArray *cpu_ret, const bool &full_check);
+
+template<typename xpu>
+void CheckFormatImpl(const RunContext &rctx, const NDArray *input,
+                     NDArray *cpu_ret, const bool &full_check) {
 
 Review comment:
   `stream->wait` is used for `Eval` and `Copy` with `input` and `xpu_ret`. I
   think these `wait` are necessary for the corresponding computations. Please
   correct me if I'm wrong.
   
   But I think `stream->wait` can be called at bottom only once, since `Eval`
   and `Copy` use the same stream. Is this safe?
 


[GitHub] agataradys commented on issue #8291: Import error in SSD example

2017-10-16 Thread git
agataradys commented on issue #8291: Import error in SSD example
URL: https://github.com/apache/incubator-mxnet/issues/8291#issuecomment-337134052
 
 
   @edmBernard I use python2
 



[GitHub] ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, 
mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145037719
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out,
+                                  const DType* in, const nnvm::dim_t ncols) {
+    if (in[i] >= static_cast<DType>(ncols)) out[0] = 1;
+  }
+};
+
+template<typename xpu>
+void CheckFormatWrapper(const RunContext &rctx, const NDArray *input,
+                        NDArray *cpu_ret, const bool &full_check);
+
+template<typename xpu>
+void CheckFormatImpl(const RunContext &rctx, const NDArray *input,
+                     NDArray *cpu_ret, const bool &full_check) {
 
 Review comment:
   `stream->wait` is used for `Eval` and `Copy` with `input` and `xpu_ret`. I 
think these `wait` calls are necessary for the corresponding computations. Please 
correct me if I'm wrong.
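For readers following the review, the invariants that the `indptr_check` and `idx_check` kernels enforce can be restated in plain Python. This is an illustrative sketch only, not the MXNet implementation; the helper name is invented:

```python
def csr_format_errors(indptr, indices, data, shape):
    """Return a list of CSR format violations (empty list means valid).

    Mirrors the checks in the PR: indptr starts at 0, is non-decreasing,
    and ends at the number of stored values; column indices stay in range.
    """
    nrows, ncols = shape
    errors = []
    if len(indptr) != nrows + 1:
        errors.append("indptr length must be nrows + 1")
    if len(indices) != len(data):
        errors.append("indices and data must have equal length")
    if indptr[0] != 0:
        errors.append("indptr[0] must be 0")
    if any(indptr[i + 1] < indptr[i] for i in range(len(indptr) - 1)):
        errors.append("indptr must be non-decreasing")
    if indptr[-1] != len(indices):
        errors.append("indptr[-1] must equal the number of stored values")
    if any(j < 0 or j >= ncols for j in indices):
        errors.append("column indices must lie in [0, ncols)")
    return errors

# valid 2x3 CSR matrix [[1, 0, 2], [0, 0, 3]]
print(csr_format_errors([0, 2, 3], [0, 2, 2], [1.0, 2.0, 3.0], (2, 3)))  # []
```

A corrupted array (e.g. a column index of 5 in a 3-column matrix) would return a non-empty error list instead.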
 


[GitHub] szha commented on issue #8309: asnumpy is slowly ,how can I speed up it?

2017-10-16 Thread git
szha commented on issue #8309: asnumpy is slowly ,how can I speed up it?
URL: 
https://github.com/apache/incubator-mxnet/issues/8309#issuecomment-337097361
 
 
   The operations are lazy-evaluated, so it's expected for the collection 
method to take most of the time. You can use the profiler to get the actual 
time for each backend operation. 
https://mxnet.incubator.apache.org/versions/master/how_to/perf.html?highlight=profile#profiler
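The lazy-evaluation point above can be reproduced with a toy asynchronous engine (a plain-Python sketch with invented names, not MXNet's engine): operators return immediately, and the whole cost surfaces at the synchronization call, just as it does at `asnumpy`.

```python
import queue
import threading
import time

class LazyEngine:
    """Toy async engine: ops are queued and executed on a worker thread."""
    def __init__(self):
        self._tasks = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            fn = self._tasks.get()
            fn()
            self._tasks.task_done()

    def push(self, fn):
        # returns immediately, like an mx.nd operator call
        self._tasks.put(fn)

    def wait_all(self):
        # like asnumpy()/wait_to_read(): blocks until queued work finishes
        self._tasks.join()

engine = LazyEngine()
start = time.time()
for _ in range(5):
    engine.push(lambda: time.sleep(0.05))  # each "operator" returns instantly
queued_in = time.time() - start            # tiny: nothing has run yet
engine.wait_all()                          # all the cost shows up here
total = time.time() - start
print(f"queuing took {queued_in:.3f}s, sync took {total:.3f}s")
```

Timing the sync call therefore measures all previously queued operators, which is why the profiler (rather than wall-clocking `asnumpy`) is the right tool for per-operator numbers.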
 



[GitHub] ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, 
mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145034987
 
 

 ##
 File path: src/ndarray/ndarray.cc
 ##
 @@ -1214,6 +1214,31 @@ void NDArray::SyncCopyToCPU(void *data, size_t size) 
const {
   }
 }
 
+void NDArray::CheckFormat(const bool full_check) const {
+  NDArray cpu_ret = NDArray(mshadow::Shape1(1), Context::CPU());
+  auto err = cpu_ret.data().dptr<mshadow::default_real_t>();
+  *err = 0;
 
 Review comment:
   Yes, this line should be removed.
 



[GitHub] ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, 
mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145035545
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out,
+                                  const DType* in, const nnvm::dim_t ncols) {
+    if (in[i] >= static_cast<DType>(ncols)) out[0] = 1;
+  }
+};
+
+template<typename xpu>
+void CheckFormatWrapper(const RunContext &rctx, const NDArray *input,
+                        NDArray *cpu_ret, const bool &full_check);
+
+template<typename xpu>
+void CheckFormatImpl(const RunContext &rctx, const NDArray *input,
+                     NDArray *cpu_ret, const bool &full_check) {
 
 Review comment:
   `xpu_ret` is used for the kernel launch with `input`, which can be on cpu/gpu. 
The `err` number is then copied to `cpu_ret` for checking outside of the engine, 
mainly for inspecting data on cpu.
   ```
   auto err = cpu_ret.data().dptr<mshadow::default_real_t>();
   CHECK_EQ(*err, 0) << "Check validity of the CSRNDArray";
   ```
   If the data is on gpu, it can't be inspected like this directly.
 


[GitHub] ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
ZiyueHuang commented on a change in pull request #8259: check_format of ndrray, 
mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145035132
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out,
+                                  const DType* in, const nnvm::dim_t ncols) {
+    if (in[i] >= static_cast<DType>(ncols)) out[0] = 1;
+  }
+};
+
+template<typename xpu>
+void CheckFormatWrapper(const RunContext &rctx, const NDArray *input,
+                        NDArray *cpu_ret, const bool &full_check);
+
+template<typename xpu>
+void CheckFormatImpl(const RunContext &rctx, const NDArray *input,
+                     NDArray *cpu_ret, const bool &full_check) {
 
 Review comment:
   `xpu_ret` is on the same context as `input`. `cpu_ret` here is used for 
inspecting the data (the err number) on cpu.
 


[GitHub] edmBernard commented on issue #8309: asnumpy is slowly ,how can I speed up it?

2017-10-16 Thread git
edmBernard commented on issue #8309: asnumpy is slowly ,how can I speed up it?
URL: 
https://github.com/apache/incubator-mxnet/issues/8309#issuecomment-337128366
 
 
   explanation : https://github.com/apache/incubator-mxnet/issues/6974
 



[GitHub] eric-haibin-lin closed pull request #8314: fix wrong documentation for make_loss

2017-10-16 Thread git
eric-haibin-lin closed pull request #8314: fix wrong documentation for make_loss
URL: https://github.com/apache/incubator-mxnet/pull/8314
 
 
   
 



[GitHub] cjolivier01 commented on issue #8313: Ci test randomness

2017-10-16 Thread git
cjolivier01 commented on issue #8313: Ci test randomness
URL: https://github.com/apache/incubator-mxnet/pull/8313#issuecomment-337121770
 
 
   Does this change the random behavior of tests which don't call one of your 
new functions?
 



[GitHub] eric-haibin-lin opened a new pull request #8314: fix wrong documentation for make_loss

2017-10-16 Thread git
eric-haibin-lin opened a new pull request #8314: fix wrong documentation for 
make_loss
URL: https://github.com/apache/incubator-mxnet/pull/8314
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] For user-facing API changes, API doc string has been updated.
   - [ ] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   
 



[GitHub] piiswrong closed pull request #8241: Negative begin and end support for csr slice

2017-10-16 Thread git
piiswrong closed pull request #8241: Negative begin and end support for csr 
slice
URL: https://github.com/apache/incubator-mxnet/pull/8241
 
 
   
 



[GitHub] zhreshold commented on issue #8309: asnumpy is slowly ,how can I speed up it?

2017-10-16 Thread git
zhreshold commented on issue #8309: asnumpy is slowly ,how can I speed up it?
URL: 
https://github.com/apache/incubator-mxnet/issues/8309#issuecomment-337119518
 
 
   USE_PROFILER is switched off in pre-built packages.
   You will need to build from source with USE_PROFILER = 1 to make it work.
   
 



[GitHub] zhangqianghd commented on issue #8309: asnumpy is slowly ,how can I speed up it?

2017-10-16 Thread git
zhangqianghd commented on issue #8309: asnumpy is slowly ,how can I speed up it?
URL: 
https://github.com/apache/incubator-mxnet/issues/8309#issuecomment-337119049
 
 
   My platform is mac os and I installed mxnet by pip.
   
   When I add the mx profiler into the code and run, I get the following error:
   
   MXNetError: [13:00:02] src/c_api/c_api.cc:104: Need to compile with 
USE_PROFILER=1 for MXNet Profiler
   
   Stack trace returned 4 entries:
   [bt] (0) 0   libmxnet.so 0x00010f365228 
_ZN4dmlc15LogMessageFatalD2Ev + 40
   [bt] (1) 1   libmxnet.so 0x0001101ce396 
MXSetProfilerConfig + 86
   [bt] (2) 2   libffi.6.dylib  0x00010cef8884 
ffi_call_unix64 + 76
   [bt] (3) 3   ??? 0x7ffee4735730 0x0 + 
140732731184944
   
   so I need to reinstall mxnet from source and compile with USE_PROFILER=1
 



[GitHub] eric-haibin-lin commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8259: check_format of 
ndrray, mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145027741
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out,
+                                  const DType* in, const nnvm::dim_t ncols) {
+    if (in[i] >= static_cast<DType>(ncols)) out[0] = 1;
+  }
+};
+
+template<typename xpu>
+void CheckFormatWrapper(const RunContext &rctx, const NDArray *input,
+                        NDArray *cpu_ret, const bool &full_check);
+
+template<typename xpu>
+void CheckFormatImpl(const RunContext &rctx, const NDArray *input,
+                     NDArray *cpu_ret, const bool &full_check) {
+  using namespace op::mxnet_op;
+  auto stype = input->storage_type();
+  auto err = cpu_ret->data().dptr<mshadow::default_real_t>();
+  *err = 0;
+  if (stype == kCSRStorage) {
+    const TShape shape = input->shape();
+    const TShape idx_shape = input->aux_shape(csr::kIdx);
+    const TShape indptr_shape = input->aux_shape(csr::kIndPtr);
+    const TShape storage_shape = input->storage_shape();
+    if ((shape.ndim() != 2) ||
+        (idx_shape.ndim() != 1 || indptr_shape.ndim() != 1 || storage_shape.ndim() != 1) ||
+        (indptr_shape[0] != shape[0] + 1) ||
+        (idx_shape[0] != storage_shape[0])) {
+      *err = 1;
+      return;
+    }
+    if (full_check) {
+      NDArray xpu_ret = NDArray(mshadow::Shape1(1), rctx.get_ctx());
+      TBlob xpu_tmp = xpu_ret.data();
+      ndarray::Eval<xpu>(0, &xpu_tmp, rctx);
+      rctx.get_stream<xpu>()->Wait();
+      auto indptr_type = input->aux_type(csr::kIndPtr);
 
 Review comment:
   let's reduce the usage of auto for simple types
 


[GitHub] eric-haibin-lin commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8259: check_format of 
ndrray, mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145027051
 
 

 ##
 File path: python/mxnet/ndarray/ndarray.py
 ##
 @@ -1285,6 +1285,16 @@ def broadcast_to(self, shape):
 return op.broadcast_to(self, shape=tuple(shape))
 # pylint: enable= undefined-variable
 
+def check_format(self, full_check=True):
 
 Review comment:
   Not sure about adding this to `NDArray` class. I suggest `BaseSparseNDArray` 
in `sparse.py` instead since for dense there's nothing to check. 
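The suggested placement can be sketched as follows (hypothetical class and method bodies for illustration only, not the final MXNet API; the real classes live in `python/mxnet/ndarray/`):

```python
class NDArray:
    """Dense array: nothing structural to validate."""
    pass

class BaseSparseNDArray(NDArray):
    def check_format(self, full_check=True):
        """Validate the array's storage format; raise on corruption."""
        raise NotImplementedError

class CSRNDArray(BaseSparseNDArray):
    def __init__(self, indptr, indices, shape):
        self.indptr, self.indices, self.shape = indptr, indices, shape

    def check_format(self, full_check=True):
        nrows, ncols = self.shape
        # cheap structural checks always run
        ok = len(self.indptr) == nrows + 1 and self.indptr[0] == 0
        if full_check:
            # full data inspection, analogous to the kernel-based checks
            ok = ok and all(self.indptr[i] <= self.indptr[i + 1]
                            for i in range(nrows))
            ok = ok and all(0 <= j < ncols for j in self.indices)
        if not ok:
            raise ValueError("Check validity of the CSRNDArray")

CSRNDArray([0, 1, 2], [0, 1], (2, 2)).check_format()  # valid: no error
```

Placing the method on the sparse base class keeps the dense `NDArray` surface unchanged, which is the point of the review comment.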
 



[GitHub] eric-haibin-lin commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8259: check_format of 
ndrray, mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145026908
 
 

 ##
 File path: src/ndarray/ndarray.cc
 ##
 @@ -1214,6 +1214,31 @@ void NDArray::SyncCopyToCPU(void *data, size_t size) 
const {
   }
 }
 
+void NDArray::CheckFormat(const bool full_check) const {
+  NDArray cpu_ret = NDArray(mshadow::Shape1(1), Context::CPU());
+  auto err = cpu_ret.data().dptr<mshadow::default_real_t>();
+  *err = 0;
 
 Review comment:
   Is `err` already set in `CheckFormatImpl`? Does `CheckFormatImpl` rely on 
the input value being initialized to a default value?
 



[GitHub] eric-haibin-lin commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8259: check_format of 
ndrray, mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145027224
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
 
 Review comment:
   Please add brief comment about what is checked
 



[GitHub] eric-haibin-lin commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8259: check_format of 
ndrray, mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145027240
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
 
 Review comment:
   Same here: please add brief comment about what is checked
 



[GitHub] eric-haibin-lin commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8259: check_format of 
ndrray, mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145027549
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out,
+                                  const DType* in, const nnvm::dim_t ncols) {
+    if (in[i] >= static_cast<DType>(ncols)) out[0] = 1;
 
 Review comment:
   We should allow different error values to indicate what doesn't pass the 
check - whether it's indptr, or indices, etc. The current err message is not 
very informative.
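One way to follow this suggestion (a hypothetical sketch, not the code that was eventually merged) is to reserve a distinct error value per invariant, so the front end can report exactly which check failed instead of a single generic message:

```python
# hypothetical error codes for a CheckFormat-style validation
ERR_NONE, ERR_SHAPE, ERR_INDPTR, ERR_IDX = 0, 1, 2, 3

ERR_MESSAGES = {
    ERR_SHAPE: "inconsistent shapes for data/indices/indptr",
    ERR_INDPTR: "indptr is not a valid non-decreasing offset array",
    ERR_IDX: "column index out of range",
}

def explain(err):
    """Map an error flag produced by the check kernels to a message."""
    if err == ERR_NONE:
        return "CSRNDArray format is valid"
    return "CSRNDArray check failed: " + ERR_MESSAGES.get(err, "unknown error")

print(explain(ERR_IDX))
```

Since the kernels already write a single flag value into `out[0]`, writing a per-invariant code costs nothing extra on the device side.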
 


[GitHub] eric-haibin-lin commented on a change in pull request #8259: check_format of ndrray, mainly for csr

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8259: check_format of 
ndrray, mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145028053
 
 

 ##
 File path: src/common/utils.h
 ##
 @@ -43,9 +43,84 @@
 #include 
 #include 
 
+#include "../operator/mxnet_op.h"
+#include "../ndarray/ndarray_function.h"
+
 namespace mxnet {
 namespace common {
 
+struct indptr_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out, const DType* in,
+                                  const nnvm::dim_t end, const nnvm::dim_t idx_size) {
+    if ((in[i+1] < in[i]) || (i == 0 && in[i] != static_cast<DType>(0)) ||
+        (i == end && in[i] < static_cast<DType>(idx_size))) out[0] = 1;
+  }
+};
+
+struct idx_check {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(int i, mshadow::default_real_t* out,
+                                  const DType* in, const nnvm::dim_t ncols) {
+    if (in[i] >= static_cast<DType>(ncols)) out[0] = 1;
+  }
+};
+
+template<typename xpu>
+void CheckFormatWrapper(const RunContext &rctx, const NDArray *input,
+                        NDArray *cpu_ret, const bool &full_check);
+
+template<typename xpu>
+void CheckFormatImpl(const RunContext &rctx, const NDArray *input,
+                     NDArray *cpu_ret, const bool &full_check) {
 
 Review comment:
   Is it possible to pass `ret` on the same ctx as `input` and avoid all these 
`stream->wait` calls in this function?
 


[GitHub] zhreshold commented on issue #8272: Gluon mnist loader not working

2017-10-16 Thread git
zhreshold commented on issue #8272: Gluon mnist loader not working
URL: 
https://github.com/apache/incubator-mxnet/issues/8272#issuecomment-336667196
 
 
   The 0.11.0 version pip wheel didn't include requests as a dependency; we have 
fixed it in the recent nightly builds.
   You have two options
   
   - Install request manually: `sudo -H pip install requests`
   - Install the latest build by `sudo -H pip install -U mxnet --pre`
 



[GitHub] cjolivier01 closed pull request #8304: remove usage of install command from code gen

2017-10-16 Thread git
cjolivier01 closed pull request #8304: remove usage of install command from 
code gen
URL: https://github.com/apache/incubator-mxnet/pull/8304
 
 
   
 



[GitHub] DickJC123 opened a new pull request #8313: Ci test randomness

2017-10-16 Thread git
DickJC123 opened a new pull request #8313: Ci test randomness
URL: https://github.com/apache/incubator-mxnet/pull/8313
 
 
   This PR proposes two new features to how CI uses random data for testing:
   
   1. A 'with np_random_seed(NNN)' syntax to allow a single test to run with a 
set seed without forcing determinism on other tests in the same file, and
   2. A set_np_random_seed() util function to set the seed randomly and output 
that setting to the nosetests log file.  This permits a failure seen in a log 
file to be reproduced exactly.
   
   Numerous individual tests have also been made more robust.  Most often the 
failures were the result of the finite difference method gradient not matching 
the symbol's gradient at a point when the gradient was either large (e.g. 
cube-root) or discontinuous (e.g. max).  In other cases, tolerances needed to 
be added or increased.
   
   The proposed new approach is demonstrated on a simple test file of three 
tests.  Assuming that the second test needs a set seed for robustness, the file 
might appear as:
   
   ```python
   def test_op1():
       ...
   
   def test_op2():
       np.random.seed(1234)
       ...
   
   def test_op3():
       ...
   ```
   Even though test_op3() is OK with nondeterministic data, it will have only a 
single dataset because it is run after test_op2, which sets the seed.  Also, if 
test_op1() were to fail, there would be no way to reproduce the failure, except 
for running the test individually to produce a new and hopefully similar 
failure.
   
   With the proposed approach, the test file becomes:
   
   ```python
   set_np_random_seed()
   
   def test_op1():
       ...
   
   def test_op2():
       with np_random_seed(1234):
           ...
   
   def test_op3():
       ...
   ```
   
   set_np_random_seed() is in the global region of the test file and will set 
the seed differently for each run.  The value set will appear in the nosetests 
output.  The body of the test_op2 test now appears after the 'with 
np_random_seed(1234):' statement.  This has the effect of setting the seed 
before the test runs, then reinstating the rng state after the test is over 
(failed or passed).  Thus test_op3 will run with non-deterministic data.  
Finally, if say test_op1() fails, the seed is known so the failure can be 
reproduced.  Further use of np.random.get_state() and np.random.set_state() can 
result in a modified test file that can be invoked as 'nosetests --verbose -s 
test_file.py:test_op1'. 
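
A minimal sketch of the two proposed helpers, based only on the description above (hypothetical implementation; the PR's actual code may differ in names and detail):

```python
import contextlib

import numpy as np


@contextlib.contextmanager
def np_random_seed(seed):
    # Save the global NumPy RNG state, seed it for the enclosed block,
    # then reinstate the saved state whether the test passed or failed.
    state = np.random.get_state()
    np.random.seed(seed)
    try:
        yield
    finally:
        np.random.set_state(state)


def set_np_random_seed():
    # Pick a seed nondeterministically, log it so a failure seen in the
    # nosetests output can be reproduced exactly, and apply it.
    seed = np.random.randint(0, 2**31 - 1)
    print('Using np random seed:', seed)
    np.random.seed(seed)
    return seed
```

With this sketch, `test_op3` in the example above would indeed see the RNG state it would have had if `test_op2` had never seeded.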
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [N/A] For user-facing API changes, API doc string has been updated.
   - [x] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   
 



[GitHub] zhreshold commented on issue #8310: Bug in ./example/

2017-10-16 Thread git
zhreshold commented on issue #8310: Bug in ./example/
URL: 
https://github.com/apache/incubator-mxnet/issues/8310#issuecomment-337112594
 
 
   Thanks for reporting.
 



[GitHub] eric-haibin-lin commented on a change in pull request #8294: NCCL integration

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8294: NCCL integration
URL: https://github.com/apache/incubator-mxnet/pull/8294#discussion_r145023005
 
 

 ##
 File path: python/mxnet/model.py
 ##
 @@ -104,15 +105,18 @@ def _initialize_kvstore(kvstore, param_arrays, 
arg_params, param_names, update_o
 
 def _update_params_on_kvstore(param_arrays, grad_arrays, kvstore, param_names):
 """Perform update of param_arrays from grad_arrays on kvstore."""
-for index, pair in enumerate(zip(param_arrays, grad_arrays)):
-arg_list, grad_list = pair
-if grad_list[0] is None:
-continue
-name = param_names[index]
+size = len(grad_arrays)
+start = 0
+# Use aggregation by default only with NCCL
+default_batch = 16 if 'nccl' in kvstore.type else 1
 
 Review comment:
   where does the magic number `16` come from? 
 



[GitHub] eric-haibin-lin commented on a change in pull request #8294: NCCL integration

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8294: NCCL integration
URL: https://github.com/apache/incubator-mxnet/pull/8294#discussion_r145023209
 
 

 ##
 File path: src/kvstore/comm.h
 ##
 @@ -58,7 +76,10 @@ class Comm {
*/
   virtual void Broadcast(
   int key, const NDArray& src,
-  const std::vector dst, int priority) = 0;
+  const std::vector dst, int priority) = 0;
+
 
 Review comment:
   Could you add brief comments for these two methods? Are they only for NCCL? 
Do we want to declare them only when MXNET_USE_NCCL is set?
 



[GitHub] eric-haibin-lin commented on a change in pull request #8294: NCCL integration

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8294: NCCL integration
URL: https://github.com/apache/incubator-mxnet/pull/8294#discussion_r145023103
 
 

 ##
 File path: python/mxnet/model.py
 ##
 @@ -104,15 +105,18 @@ def _initialize_kvstore(kvstore, param_arrays, 
arg_params, param_names, update_o
 
 def _update_params_on_kvstore(param_arrays, grad_arrays, kvstore, param_names):
 """Perform update of param_arrays from grad_arrays on kvstore."""
-for index, pair in enumerate(zip(param_arrays, grad_arrays)):
-arg_list, grad_list = pair
-if grad_list[0] is None:
-continue
-name = param_names[index]
+size = len(grad_arrays)
+start = 0
+# Use aggregation by default only with NCCL
+default_batch = 16 if 'nccl' in kvstore.type else 1
+batch = int(os.getenv('MXNET_UPDATE_AGGREGATION_SIZE', default_batch))
+while(start < size):
+end = start + batch if start + batch < size else size
 # push gradient, priority is negative index
-kvstore.push(name, grad_list, priority=-index)
+kvstore.push(param_names[start:end], grad_arrays[start:end], 
priority=-start)
 # pull back the weights
-kvstore.pull(name, arg_list, priority=-index)
+kvstore.pull(param_names[start:end], param_arrays[start:end], 
priority=-start)
+start = end
 
 def _update_params(param_arrays, grad_arrays, updater, num_device,
kvstore=None, param_names=None):
 
 Review comment:
   Is this function not updated with batch aggregation?
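
The batched-update logic in the diff above can be sketched in isolation (plain Python; `push`/`pull` are stubs standing in for `kvstore.push`/`kvstore.pull`, and the function name is illustrative, not MXNet's API):

```python
import os


def update_params_in_batches(param_names, param_arrays, grad_arrays,
                             push, pull, kvstore_type='nccl'):
    # Aggregate 16 keys per call by default only for NCCL; one key per
    # call otherwise, unless MXNET_UPDATE_AGGREGATION_SIZE overrides it.
    default_batch = 16 if 'nccl' in kvstore_type else 1
    batch = int(os.getenv('MXNET_UPDATE_AGGREGATION_SIZE', default_batch))
    size = len(grad_arrays)
    start = 0
    while start < size:
        end = min(start + batch, size)
        # Push gradients, then pull updated weights; priority is the
        # negative index of the first key in the slice.
        push(param_names[start:end], grad_arrays[start:end], priority=-start)
        pull(param_names[start:end], param_arrays[start:end], priority=-start)
        start = end
```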
 


[GitHub] eric-haibin-lin commented on a change in pull request #8294: NCCL integration

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8294: NCCL integration
URL: https://github.com/apache/incubator-mxnet/pull/8294#discussion_r145023564
 
 

 ##
 File path: src/kvstore/kvstore_local.h
 ##
 @@ -61,7 +61,10 @@ class KVStoreLocal : public KVStore {
   }
 
   virtual ~KVStoreLocal() {
-delete comm_;
 
 Review comment:
   I think `delete nullptr` is safe
 



[GitHub] eric-haibin-lin commented on a change in pull request #8294: NCCL integration

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8294: NCCL integration
URL: https://github.com/apache/incubator-mxnet/pull/8294#discussion_r145024038
 
 

 ##
 File path: include/mxnet/storage.h
 ##
 @@ -78,6 +78,16 @@ class Storage {
*/
   virtual ~Storage() {}
   /*!
+   * \brief Returns mutex used by storage manager
+   */
+  std::mutex& GetMutex(Context::DeviceType dev) {
 
 Review comment:
   Could you add a brief description of when the mutex is required?
 



[GitHub] eric-haibin-lin commented on a change in pull request #8294: NCCL integration

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8294: NCCL integration
URL: https://github.com/apache/incubator-mxnet/pull/8294#discussion_r145022944
 
 

 ##
 File path: python/mxnet/model.py
 ##
 @@ -104,15 +105,18 @@ def _initialize_kvstore(kvstore, param_arrays, 
arg_params, param_names, update_o
 
 def _update_params_on_kvstore(param_arrays, grad_arrays, kvstore, param_names):
 """Perform update of param_arrays from grad_arrays on kvstore."""
-for index, pair in enumerate(zip(param_arrays, grad_arrays)):
-arg_list, grad_list = pair
-if grad_list[0] is None:
-continue
-name = param_names[index]
+size = len(grad_arrays)
+start = 0
+# Use aggregation by default only with NCCL
+default_batch = 16 if 'nccl' in kvstore.type else 1
+batch = int(os.getenv('MXNET_UPDATE_AGGREGATION_SIZE', default_batch))
+while(start < size):
 
 Review comment:
   nit: ` while start < size:`
   
 


[GitHub] eric-haibin-lin commented on a change in pull request #8294: NCCL integration

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8294: NCCL integration
URL: https://github.com/apache/incubator-mxnet/pull/8294#discussion_r145022588
 
 

 ##
 File path: include/mxnet/kvstore.h
 ##
 @@ -162,7 +162,7 @@ class KVStore {
* \param priority Priority of the action.
*/
   virtual void Pull(const std::vector& keys,
-const std::vector& values,
+const std::vector& values,
 
 Review comment:
   Is it really necessary to change the interface here? Was this causing memory 
issues in pool_storage_manager?
 



[GitHub] eric-haibin-lin commented on a change in pull request #8294: NCCL integration

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8294: NCCL integration
URL: https://github.com/apache/incubator-mxnet/pull/8294#discussion_r145023673
 
 

 ##
 File path: src/kvstore/kvstore_local.h
 ##
 @@ -256,14 +260,14 @@ class KVStoreLocal : public KVStore {
   /**
* \brief group values on keys for pull
*/
-  void GroupKVPairsPull(const std::vector& keys,
-const std::vector& values,
-std::vector *uniq_keys,
-std::vector> *grouped_vals) {
+  virtual void GroupKVPairsPull(const std::vector& keys,
 
 Review comment:
   Why is `virtual` added here? 
 



[GitHub] eric-haibin-lin commented on issue #8282: Error: dot?gemm: matrix shape mismatch

2017-10-16 Thread git
eric-haibin-lin commented on issue #8282: Error: dot?gemm: matrix shape mismatch
URL: 
https://github.com/apache/incubator-mxnet/issues/8282#issuecomment-337111593
 
 
   Do you mind posting a code snippet here so that others can reproduce the 
issue?
 



[GitHub] jeremiedb commented on issue #8306: [R] Bug in mx.nd.one.hot or gan code in example

2017-10-16 Thread git
jeremiedb commented on issue #8306: [R] Bug in mx.nd.one.hot or gan code in 
example
URL: 
https://github.com/apache/incubator-mxnet/issues/8306#issuecomment-337110960
 
 
   What version of the MXNet package and R are you using? 
   I didn't have issues with the following commands in either 0.10.1 (Windows) 
or 0.11.1 (Ubuntu): 
   
   ```
   > library(mxnet)
   > digit <- mx.nd.array(rep(1, times=5))
   > data <- mx.nd.one.hot(indices = digit, depth = 3)
   > data
        [,1] [,2] [,3] [,4] [,5]
   [1,]    0    0    0    0    0
   [2,]    1    1    1    1    1
   [3,]    0    0    0    0    0
   > class(digit)
   [1] "MXNDArray"
   ```
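
For reference, the semantics of `one_hot` can be illustrated in NumPy (an analogy, not MXNet code; note that R prints the resulting `(5, 3)` array transposed in the output above):

```python
import numpy as np


def one_hot(indices, depth):
    # Row i of the result is the one-hot encoding of indices[i]:
    # a length-`depth` vector with a 1 at position indices[i].
    indices = np.asarray(indices, dtype=int)
    out = np.zeros((indices.size, depth))
    out[np.arange(indices.size), indices] = 1.0
    return out
```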
 



[GitHub] eric-haibin-lin commented on a change in pull request #8264: Operators for mean(csr, axis=0) and mean(csr, axis=1)

2017-10-16 Thread git
eric-haibin-lin commented on a change in pull request #8264: Operators for 
mean(csr, axis=0) and mean(csr, axis=1)
URL: https://github.com/apache/incubator-mxnet/pull/8264#discussion_r145021948
 
 

 ##
 File path: src/operator/tensor/broadcast_reduce_op.h
 ##
 @@ -566,7 +566,7 @@ struct SumCsrKernel {
   }
 };
 
-template 
+template 
 
 Review comment:
   Please add a brief comment explaining what `normalize` is for
 



[GitHub] cjolivier01 commented on issue #8311: Revert "[CMAKE] Fix windows cmake build"

2017-10-16 Thread git
cjolivier01 commented on issue #8311: Revert "[CMAKE] Fix windows cmake build"
URL: https://github.com/apache/incubator-mxnet/pull/8311#issuecomment-337107057
 
 
   There is a theory that this PR is breaking the build, although that theory 
has yet to be proven and there is some doubt...
 





[GitHub] akturtle opened a new issue #8312: Gradient function not returning enough gradients

2017-10-16 Thread git
akturtle opened a new issue #8312: Gradient function not returning enough 
gradients
URL: https://github.com/apache/incubator-mxnet/issues/8312
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues.
   
   ## Description
   Simple_bind error with custom OP with auxiliary_states
   
   Package used (Python/R/Scala/Julia):
   I'm using Python
   
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.)
   a5edbf94094581ee27157eae4f2113115a3994e7
   
   
   
   ## Error Message:
   mxnet/dmlc-core/include/dmlc/./logging.h:308: [10:44:55] 
src/pass/gradient.cc:159: Check failed: (*rit)->inputs.size() == 
input_grads.size() (4 vs. 2) Gradient function not returning enough gradient
   
   Stack trace returned 10 entries:
   [bt] (0) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4dmlc15LogMessageFatalD1Ev+0x29)
 [0x7f359e70b199]
   [bt] (1) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(+0x26dcf8f) 
[0x7f35a06a9f8f]
   [bt] (2) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZNSt17_Function_handlerIFN4nnvm5GraphES1_EPS2_E9_M_invokeERKSt9_Any_dataS1_+0x11f)
 [0x7f359f39ba9f]
   [bt] (3) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4nnvm11ApplyPassesENS_5GraphERKSt6vectorISsSaISsEE+0x501)
 [0x7f35a06cdfc1]
   [bt] (4) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4nnvm9ApplyPassENS_5GraphERKSs+0x8e)
 [0x7f359f6b62ae]
   [bt] (5) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4nnvm4pass8GradientENS_5GraphESt6vectorINS_9NodeEntryESaIS3_EES5_S5_St8functionIFS3_OS5_EES6_IFiRKNS_4NodeEEES6_IFS3_RKS3_SG_EES2_IPKNS_2OpESaISL_EESs+0x865)
 [0x7f359f711b95]
   [bt] (6) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet4exec13GraphExecutor13InitFullGraphEN4nnvm6SymbolERKSt6vectorINS_9OpReqTypeESaIS5_EE+0x81e)
 [0x7f359f701a6e]
   [bt] (7) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet4exec13GraphExecutor9InitGraphEN4nnvm6SymbolERKNS_7ContextERKSt3mapISsS4_St4lessISsESaISt4pairIKSsS4_EEERKSt6vectorIS4_SaIS4_EESL_SL_RKSH_INS_9OpReqTypeESaISM_EE+0x4f)
 [0x7f359f7023ef]
   [bt] (8) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet4exec13GraphExecutor4InitEN4nnvm6SymbolERKNS_7ContextERKSt3mapISsS4_St4lessISsESaISt4pairIKSsS4_EEERKSt6vectorIS4_SaIS4_EESL_SL_RKSt13unordered_mapISsNS2_6TShapeESt4hashISsESt8equal_toISsESaISA_ISB_SN_EEERKSM_ISsiSP_SR_SaISA_ISB_iEEERKSH_INS_9OpReqTypeESaIS12_EERKSt13unordered_setISsSP_SR_SaISsEEPSH_INS_7NDArrayESaIS1C_EES1F_S1F_PSM_ISsS1C_SP_SR_SaISA_ISB_S1C_EEEPNS_8ExecutorERKSM_INS2_9NodeEntryES1C_NS2_13NodeEntryHashENS2_14NodeEntryEqualESaISA_IKS1M_S1C_EEE+0xa0)
 [0x7f359f704070]
   [bt] (9) 
/home/xfz/tools/mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet8Executor10SimpleBindEN4nnvm6SymbolERKNS_7ContextERKSt3mapISsS3_St4lessISsESaISt4pairIKSsS3_EEERKSt6vectorIS3_SaIS3_EESK_SK_RKSt13unordered_mapISsNS1_6TShapeESt4hashISsESt8equal_toISsESaIS9_ISA_SM_EEERKSL_ISsiSO_SQ_SaIS9_ISA_iEEERKSG_INS_9OpReqTypeESaIS11_EERKSt13unordered_setISsSO_SQ_SaISsEEPSG_INS_7NDArrayESaIS1B_EES1E_S1E_PSL_ISsS1B_SO_SQ_SaIS9_ISA_S1B_EEEPS0_+0x194)
 [0x7f359f704d74]
   
   
   ## Minimum reproducible example
   create a custom test op:
   ```python
   import mxnet as mx
   
   class GradientError(mx.operator.CustomOp):
       def forward(self, is_train, req, in_data, out_data, aux):
           pass
   
       def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
           pass
   
   @mx.operator.register('GradientError')
   class GradientErrorProp(mx.operator.CustomOpProp):
       def list_arguments(self):
           return ['data', 'label']
   
       def list_outputs(self):
           return ['output']
   
       def list_auxiliary_states(self):
           # call them 'bias' for zero initialization
           return ['aux_bias1', 'aux_weight2']
   
       def infer_shape(self, in_shape):
           data_shape = in_shape[0]
           label_shape = (in_shape[0][0],)
           out_shape = in_shape[0]
           aux_bias1_shape = (in_shape[0][0],)
           aux_weight2_shape = (in_shape[0][0],)
           return [data_shape, label_shape], [out_shape], \
                  [aux_bias1_shape, aux_weight2_shape]
   
       def create_operator(self, ctx, shapes, dtypes):
           return GradientError()
   ```
   simple_bind the custom op; this produces the error above:
   ```python
   import mxnet as mx
   from gradientError import *
   
   data = mx.sym.Variable('data')
   label = mx.sym.Variable('label')
   net = mx.sym.Custom(data=data, label=label, op_type='GradientError')
   input_shapes = {'data': (2, 3, 32, 32), 'label': (2,)}
   
   xpu = mx.cpu()
   exe = net.simple_bind(ctx=xpu, **input_shapes)
   ```

[GitHub] cjolivier01 opened a new pull request #8311: Revert "[CMAKE] Fix windows cmake build"

2017-10-16 Thread git
cjolivier01 opened a new pull request #8311: Revert "[CMAKE] Fix windows cmake 
build"
URL: https://github.com/apache/incubator-mxnet/pull/8311
 
 
   Reverts apache/incubator-mxnet#8227
 



[GitHub] squirrel16 opened a new issue #8310: Bugs in /example/

2017-10-16 Thread git
squirrel16 opened a new issue #8310: Bugs in /example/
URL: https://github.com/apache/incubator-mxnet/issues/8310
 
 
   1. In `/example/ssd/evaluate_net.py`, line 84's DetRecordIter uses the 
default RGB means, not the parameters set in `/example/ssd/evaluate.py`.
   2. In `/example/rcnn/rcnn/symbol/symbol_vgg.py` and `symbol_resnet.py`, all 
`mx.symbol.contrib.Proposal` calls need to be updated to 
`mx.contrib.symbol.Proposal`.
   
 



[GitHub] szha commented on issue #8309: asnumpy is slowly ,how can I speed up it?

2017-10-16 Thread git
szha commented on issue #8309: asnumpy is slowly ,how can I speed up it?
URL: 
https://github.com/apache/incubator-mxnet/issues/8309#issuecomment-337097361
 
 
   The operations are lazily evaluated, so it's expected that the collection 
method takes most of the time. You can use the profiler to get the actual time 
for each backend operation. 
https://mxnet.incubator.apache.org/versions/master/how_to/perf.html?highlight=profile#profiler
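
The timing pattern reported in the issue can be reproduced with a toy deferred-execution analogy (plain Python futures, not MXNet code): enqueueing work returns immediately, and the first call that needs the result, like `NDArray.asnumpy()`, blocks until the work finishes.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# A single worker thread stands in for MXNet's backend engine.
pool = ThreadPoolExecutor(max_workers=1)

def forward():
    def compute():
        time.sleep(0.2)      # stands in for the actual backend kernels
        return 42
    return pool.submit(compute)  # only enqueues work, like mod.forward()

start = time.time()
future = forward()               # returns almost immediately
enqueue_time = time.time() - start
result = future.result()         # blocks here, like asnumpy()
total_time = time.time() - start
pool.shutdown()
```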
 



[GitHub] zhangqianghd opened a new issue #8309: asnumpy is slowly ,how can I speed up it?

2017-10-16 Thread git
zhangqianghd opened a new issue #8309: asnumpy is slowly ,how can I speed up it?
URL: https://github.com/apache/incubator-mxnet/issues/8309
 
 
   I use a pretrained mod for prediction. I found that asnumpy was slow and 
took up most of the time.
   
   ```
   import time
   # compute the predict probabilities
   now=time.time()
   mod.forward(Batch([mx.nd.array(data)]))
   mod_output=mod.get_outputs()
   print(time.time()-now)
   
   rlt=mod_output[0]
   print(time.time()-now)
   
   prob = rlt.asnumpy()
   print(time.time()-now)
   ```
   >0.009676933288574219
   >0.010257959365844727
   >1.285344123840332
   
 



[GitHub] pqviet opened a new issue #8308: Error with Python custom operator in distributed learning

2017-10-16 Thread git
pqviet opened a new issue #8308: Error with Python custom operator in 
distributed learning
URL: https://github.com/apache/incubator-mxnet/issues/8308
 
 
   I got the same error "Cannot find custom operator type *"
   when running Faster R-CNN-like methods with distributed training on 
multiple machines:
   + R-FCN, Deformable-ConvNets
   + MXNet v0.9.5, v0.11.0
   + Ubuntu
   The error did not appear when running on a single machine, even with 
multiple GPUs.
   The above methods use some Python custom operators like proposal, 
proposal_target, ohem...
   Rewriting them in C++ or CUDA may solve the problem, but we still do not 
understand why distributed training cannot deal with the operator registry.
   
   
 



[GitHub] eric-haibin-lin opened a new pull request #8307: [sparse] Remove usage of arange in FillDnsZerosRspImpl

2017-10-16 Thread git
eric-haibin-lin opened a new pull request #8307: [sparse] Remove usage of 
arange in FillDnsZerosRspImpl
URL: https://github.com/apache/incubator-mxnet/pull/8307
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] For user-facing API changes, API doc string has been updated.
   - [x] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   related issue: https://github.com/apache/incubator-mxnet/issues/8303 
   
 



[GitHub] q2516581 opened a new issue #8306: [R] Bug in mx.nd.one.hot or gan code in example

2017-10-16 Thread git
q2516581 opened a new issue #8306: [R] Bug in mx.nd.one.hot or gan code in 
example
URL: https://github.com/apache/incubator-mxnet/issues/8306
 
 
   I'm an R user.
   I'm testing the GAN net using the R code from the MXNet example, but it 
gives an error.
   incubator-mxnet/example/gan/CGAN_mnist_R/CGAN_train.R
   When I test
   
   ```r
   digit <- mx.nd.array(rep(9, times=batch_size))
   data <- mx.nd.one.hot(indices = digit, depth = 10)
   #data <- mx.nd.reshape(data = data, shape = c(1,1,-1, batch_size))
   ```
   it returns:
   ```
   Error in mx.nd.one.hot(indices = digit, depth = 10) : 
     ./base.h:291: Unsupported parameter type object type for argument 
indices, expect integer, logical, or string.
   ```
   
   I have tried changing digit to a numeric or an array, but then data is a 
null list.
   Has anyone else tested this example? Please help me.
   
 



[GitHub] ptrendx commented on issue #8305: MXNet Build Failure with DEV=1

2017-10-16 Thread git
ptrendx commented on issue #8305: MXNet Build Failure with DEV=1
URL: 
https://github.com/apache/incubator-mxnet/issues/8305#issuecomment-337082130
 
 
   Will look into it.
 



[GitHub] eric-haibin-lin opened a new issue #8305: MXNet Build Failure with DEV=1

2017-10-16 Thread git
eric-haibin-lin opened a new issue #8305: MXNet Build Failure with DEV=1
URL: https://github.com/apache/incubator-mxnet/issues/8305
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues.
   
   If the issue is non-technical, feel free to present the information in what 
you believe is the best form.
   
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   
   ## Environment info (Required)
   
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   
   Package used (Python/R/Scala/Julia): fg
   
   (I'm using ...)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.) 
65b258700dda06b0c9d1913ff5aa525beb88438b 
   
   Build config:
   (Paste the content of config.mk, or the build command.) 
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   ```
    src/io/iter_image_recordio_2.cc: In function ??:
    src/io/iter_image_recordio_2.cc:575:9: error: 'contrast_scaled' may be used 
uninitialized in this function [-Werror=maybe-uninitialized]
      ProcessImage<4>(res, &data, is_mirrored, contrast_scaled, 
illumination_scaled);
      ^
    src/io/iter_image_recordio_2.cc:558:13: note: 'contrast_scaled' was declared 
here
      float contrast_scaled;
      ^
    src/io/iter_image_recordio_2.cc:575:9: error: 'illumination_scaled' may be 
used uninitialized in this function [-Werror=maybe-uninitialized]
      ProcessImage<4>(res, &data, is_mirrored, contrast_scaled, 
illumination_scaled);
      ^
    src/io/iter_image_recordio_2.cc:559:13: note: 'illumination_scaled' was 
declared here
      float illumination_scaled;
    
    src/io/iter_image_recordio_2.cc: In member function 'void 
mxnet::io::ImageRecordIOParser2::ProcessImage(const cv::Mat&, 
mshadow::Tensor*, bool, float,
float) [with int n_channels = 1; DType = float]':
    src/io/iter_image_recordio_2.cc:409:13: error: 'RGBA_MEAN[0]' may be used 
uninitialized in this function [-Werror=maybe-uninitialized]
      RGBA[k] = (RGBA[k] - RGBA_MEAN[k]) * RGBA_MULT[k] + 
RGBA_BIAS[k];
   ```
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1. build with DEV=1 in config.mk
   2.
   
   ## What have you tried to solve it?
   
   1. build with DEV=1 passes for commit 
ffa6e45aad4eeca8e6d27764789cd615d132fcb9 
   @ptrendx seems this is introduced by #7152 
   2.
   
 



[GitHub] ptrendx commented on issue #7152: [WIP] New faster version of the RecordIO iterator

2017-10-16 Thread git
ptrendx commented on issue #7152: [WIP] New faster version of the RecordIO 
iterator
URL: https://github.com/apache/incubator-mxnet/pull/7152#issuecomment-337078389
 
 
   Are you talking about the comment that shuffle does not work yet? This was 
when I was still working on it and is no longer true.
 



[GitHub] zheng-da commented on a change in pull request #8302: Refactor operators

2017-10-16 Thread git
ape[4] ?
+  (AddPad(dshape[4], param_.pad[2]) - dilated_ksize_x) / param_.stride[2] 
+ 1 : 0;
+SHAPE_ASSIGN_CHECK(*out_shape, 0, ConvertLayout(oshape, kNCDHW, 
param_.layout.value()));
+// Perform incomplete shape inference. Fill in the missing values in data 
shape.
+// 1) We can always fill in the batch_size.
+// 2) We can back-calculate the input depth/height/width if the 
corresponding stride is 1.
+oshape = ConvertLayout((*out_shape)[0].get<5>(), param_.layout.value(), 
kNCDHW);
+dshape[0] = oshape[0];
+if (oshape[2] && param_.stride[0] == 1) {
+  dshape[2] = oshape[2] + dilated_ksize_d - 1 - 2 * param_.pad[0];
+}
+if (oshape[3] && param_.stride[1] == 1) {
+  dshape[3] = oshape[3] + dilated_ksize_y - 1 - 2 * param_.pad[1];
+}
+if (oshape[4] && param_.stride[2] == 1) {
+  dshape[4] = oshape[4] + dilated_ksize_x - 1 - 2 * param_.pad[2];
+}
+SHAPE_ASSIGN_CHECK(*in_shape, conv::kData,
+ConvertLayout(dshape, kNCDHW, param_.layout.value()));
+// Check whether the kernel sizes are valid
+if (dshape[2] != 0) {
+  CHECK_LE(dilated_ksize_d, AddPad(dshape[2], param_.pad[0])) << "kernel 
size exceed input";
+}
+if (dshape[3] != 0) {
+  CHECK_LE(dilated_ksize_y, AddPad(dshape[3], param_.pad[1])) << "kernel 
size exceed input";
+}
+if (dshape[4] != 0) {
+  CHECK_LE(dilated_ksize_x, AddPad(dshape[4], param_.pad[2])) << "kernel 
size exceed input";
+}
+return true;
+  } else {
+LOG(FATAL) << "Unknown convolution type";
+return false;
+  }
+}
+
+static bool ConvolutionType(const nnvm::NodeAttrs& attrs,
 
 Review comment:
   I guess this is just personal preference. I usually try to expose as few 
symbols as possible to avoid symbol collision.
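
   The "back-calculate" step in the hunk above inverts the standard convolution output-size formula, which is only exact when the stride is 1 (otherwise the integer division discards a remainder). A self-contained sketch of the two directions, under the same dilated-kernel convention as the hunk:

   ```cpp
   #include <cassert>

   // Convolution output size, as in the shape-inference hunk above:
   //   out = (in + 2*pad - dilated_ksize) / stride + 1
   int ConvOutDim(int in, int pad, int dilated_ksize, int stride) {
     return (in + 2 * pad - dilated_ksize) / stride + 1;
   }

   // Inverse used for incomplete shape inference; exact only for stride == 1.
   int ConvInDim(int out, int pad, int dilated_ksize) {
     return out + dilated_ksize - 1 - 2 * pad;
   }

   int main() {
     // Round-trips exactly for stride 1 across a range of settings.
     for (int in = 5; in <= 32; ++in) {
       for (int k = 1; k <= 5; k += 2) {
         int pad = k / 2;
         assert(ConvInDim(ConvOutDim(in, pad, k, 1), pad, k) == in);
       }
     }
     return 0;
   }
   ```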
 



[GitHub] mbaijal commented on issue #8304: remove usage of install command from code gen

2017-10-16 Thread git
mbaijal commented on issue #8304: remove usage of install command from code gen
URL: https://github.com/apache/incubator-mxnet/pull/8304#issuecomment-337071022
 
 
   does this fix the error here - 
http://jenkins-master-elb-1979848568.us-east-1.elb.amazonaws.com/view/Prod/job/NightlyTutorialUbuntu/367/consoleFull
 



[GitHub] eric-haibin-lin commented on issue #7152: [WIP] New faster version of the RecordIO iterator

2017-10-16 Thread git
eric-haibin-lin commented on issue #7152: [WIP] New faster version of the 
RecordIO iterator
URL: https://github.com/apache/incubator-mxnet/pull/7152#issuecomment-337070802
 
 
   Does it print warnings when shuffle is set to True? 
 



[GitHub] sebouh commented on a change in pull request #8301: Preparing for 0.12.0.rc0: Final changes before RC

2017-10-16 Thread git
sebouh commented on a change in pull request #8301: Preparing for 0.12.0.rc0: 
Final changes before RC
URL: https://github.com/apache/incubator-mxnet/pull/8301#discussion_r144992148
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,34 +1,46 @@
 MXNet Change Log
 
 ## 0.12.0
-### New Features - Sparse Tensor Support
-  - Added limited cpu support for two sparse formats for `Symbol` and 
`NDArray` - `CSRNDArray` and `RowSparseNDArray`
-  - Added a sparse dot product operator and many element-wise sparse operators
-  - Added a data iterator for sparse data input - `LibSVMIter`
-  - Added three optimizers for sparse gradient updates: `Ftrl`, `SGD` and 
`Adam`
-  - Added `push` and `row_sparse_pull` with `RowSparseNDArray` in distributed 
kvstore
-### New Features - Autograd and Gluon
-  - New loss functions added - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, 
`HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`
+### Performance
+  - Added full support for NVIDIA Volta GPU Architecture and CUDA 9. Training 
is up to 3.5x faster than Pascal when using float16.
+  - Enabled JIT compilation. Autograd and Gluon hybridize now use less memory 
and run faster, with performance close to that of the old symbolic-style code.
+  - Improved ImageRecordIO image loading performance and added indexed 
RecordIO support.
+  - Added better OpenMP thread management to improve CPU performance.
+### New Features - Gluon
+  - Added enhancements to the Gluon package, a high-level interface designed 
to be easy to use while keeping most of the flexibility of low level API. Gluon 
supports both imperative and symbolic programming, making it easy to train 
complex models imperatively with minimal impact on performance. Neural networks 
(and other machine learning models) can be defined and trained with `gluon.nn` 
and `gluon.rnn` packages. 
+  - Added new loss functions - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, 
`HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`.
   - `gluon.Trainer` now allows reading and setting learning rate with 
`trainer.learning_rate` property.
-  - Added `mx.autograd.grad` and experimental second order gradient support 
(though most operators don't support second order gradient yet)
-  - Added `ConvLSTM` etc to `gluon.contrib`
+  - Added API `HybridBlock.export` for exporting gluon models to MXNet format.
+  - Added `ConvLSTM` to gluon.contrib.
 
 Review comment:
   what about VariationalDropout? 
 





[GitHub] cjolivier01 closed pull request #8293: Added my code signing key

2017-10-16 Thread git
cjolivier01 closed pull request #8293: Added my code signing key
URL: https://github.com/apache/incubator-mxnet/pull/8293
 
 
   
 



[GitHub] szha opened a new pull request #8304: remove usage of install command from code gen

2017-10-16 Thread git
szha opened a new pull request #8304: remove usage of install command from code 
gen
URL: https://github.com/apache/incubator-mxnet/pull/8304
 
 
   ## Description ##
   This change removes the usage of the install command to avoid the side effect of inconsistent behavior compared to before.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] To my best knowledge, examples are either not affected by this change, 
or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Revert setup.py and add the code gen logic.
   
   ## Comments ##
   - This change makes the additional assumption that `libmxnet.so` can be loaded on the current system. Before the change, the assumption was only on its existence.
   
 



[GitHub] eric-haibin-lin commented on issue #8303: mshadow::range returns wrong result

2017-10-16 Thread git
eric-haibin-lin commented on issue #8303: mshadow::range returns wrong result
URL: 
https://github.com/apache/incubator-mxnet/issues/8303#issuecomment-337063409
 
 
   The arange op is fixed by #8268, but other operators that use mshadow::range 
still have this issue
 



[GitHub] eric-haibin-lin opened a new issue #8303: mshadow::range returns wrong result

2017-10-16 Thread git
eric-haibin-lin opened a new issue #8303: mshadow::range returns wrong result
URL: https://github.com/apache/incubator-mxnet/issues/8303
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues.
   
   If the issue is non-technical, feel free to present the information in what 
you believe is the best form.
   
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   
   ## Environment info (Required)
   
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   
   Package used (Python/R/Scala/Julia): python2
   (I'm using ...)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash: 43234d0f0a9e8dbed81cf0298fe8f5a33f3a552f
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   ```
   [22:41:18] /home/ubuntu/haibin-mxnet/dmlc-core/include/dmlc/./logging.h:308: 
[22:41:18] /home/ubuntu/haibin-mxnet/mshadow/mshadow/./tensor_cpu-inl.h:195: 
Check failed: eshape[0] ==
   0 || eshape == dshape Assignment: Shape of Tensors are not consistent with 
target, eshape: (54686456,) dshape:(54686454,)
   
   Stack trace returned 10 entries:
   [bt] (0) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZN4dmlc15LogMessageFatalD1Ev+0x3f)
 [0x7f717cd11b4f]
   [bt] (1) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZN7mshadow6MapExpINS_2sv6savetoENS_6TensorINS_3cpuELi1EiEELi1EiNS_4expr8RangeExpIiEELi1EEEvPNS_7TRValueIT0_S4$
   XT1_ET2_EERKNS6_3ExpIT3_SB_XT4_EEE+0x1ab) [0x7f717dc00e92]
   [bt] (2) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZN7mshadow4expr9ExpEngineINS_2sv6savetoENS_6TensorINS_3cpuELi1EiEEiE4EvalINS0_8RangeExpIivPS6_RKNS0_3ExpI$
   _iLi1EEE+0x23) [0x7f717dbfee0d]
   [bt] (3) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZN7mshadow4expr9RValueExpINS_6TensorINS_3cpuELi1EiEEiE8__assignINS0_8RangeExpIiEELi1EEERS4_RKNS0_3ExpIT_iXT0_$
   EE+0x37) [0x7f717dbfe4e9]
   [bt] (4) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZN7mshadow6TensorINS_3cpuELi1EiEaSINS_4expr8RangeExpIiEELi1EEERS2_RKNS4_3ExpIT_iXT0_EEE+0x23)
 [0x7f717dbfa5c3$
   [bt] (5) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet2op8TopKImplIN7mshadow3cpuEEEvNS_10RunContextENS_8ResourceERKNS_5TBlobERKSt6vectorIS6_SaIS6_EERKNS0_9$
   opKParamE+0xcb5) [0x7f717dc174dc]
   [bt] (6) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZN5mxnet2op4TopKIN7mshadow3cpuEEEvRKN4nnvm9NodeAttrsERKNS_9OpContextERKSt6vectorINS_5TBlobESaISC_EERKSB_INS_9$
   pReqTypeESaISH_EESG_+0x18b) [0x7f717dc13412]
   [bt] (7) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZNSt17_Function_handlerIFvRKN4nnvm9NodeAttrsERKN5mxnet9OpContextERKSt6vectorINS4_5TBlobESaIS9_EERKS8_INS4_9Op$
   eqTypeESaISE_EESD_EPSJ_E9_M_invokeERKSt9_Any_dataS3_S7_SD_SI_SD_+0x91) 
[0x7f717cd26994]
   [bt] (8) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZNKSt8functionIFvRKN4nnvm9NodeAttrsERKN5mxnet9OpContextERKSt6vectorINS4_5TBlobESaIS9_EERKS8_INS4_9OpReqTypeES$
   ISE_EESD_EEclES3_S7_SD_SI_SD_+0xa6) [0x7f717e31ba06]
   [bt] (9) 
/home/ubuntu/haibin-mxnet/python/mxnet/../../lib/libmxnet.so(_ZZN5mxnet10imperative12PushFComputeERKSt8functionIFvRKN4nnvm9NodeAttrsERKNS_9OpContextERKSt6vectorINS_5TBlobE$
   
aISA_EERKS9_INS_9OpReqTypeESaISF_EESE_EEPKNS2_2OpES5_RKNS_7ContextERKS9_IPNS_6engine3VarESaISW_EES10_RKS9_INS_8ResourceESaIS11_EERKS9_IPNS_7NDArrayESaIS17_EES1B_RKS9_IjSaIjEESJ_ENK$
   lNS_10RunContextENSU_18CallbackOnCompleteEE_clES1G_S1H_+0x1f2) 
[0x7f717e315a74]
   ```
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   ```
   >>> a = mx.nd.arange(0, 54686454, step=1, repeat=1)
   >>> a.shape
   (54686454L,)
   >>> a.topk(k=54686454)
   (error)
   ```
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1. 
   2.
   
   ## What have you tried to solve it?
   
   1.
   2.
   
 



[GitHub] eric-haibin-lin commented on issue #8303: arange op returns wrong result

2017-10-16 Thread git
eric-haibin-lin commented on issue #8303: arange op returns wrong result
URL: 
https://github.com/apache/incubator-mxnet/issues/8303#issuecomment-337063409
 
 
   Fixed by #8268
 



[GitHub] eric-haibin-lin closed issue #8303: arange op returns wrong result

2017-10-16 Thread git
eric-haibin-lin closed issue #8303: arange op returns wrong result
URL: https://github.com/apache/incubator-mxnet/issues/8303
 
 
   
 



[GitHub] szha commented on a change in pull request #8301: Preparing for 0.12.0.rc0: Final changes before RC

2017-10-16 Thread git
szha commented on a change in pull request #8301: Preparing for 0.12.0.rc0: 
Final changes before RC
URL: https://github.com/apache/incubator-mxnet/pull/8301#discussion_r144986593
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,34 +1,46 @@
 MXNet Change Log
 
 ## 0.12.0
-### New Features - Sparse Tensor Support
-  - Added limited cpu support for two sparse formats for `Symbol` and 
`NDArray` - `CSRNDArray` and `RowSparseNDArray`
-  - Added a sparse dot product operator and many element-wise sparse operators
-  - Added a data iterator for sparse data input - `LibSVMIter`
-  - Added three optimizers for sparse gradient updates: `Ftrl`, `SGD` and 
`Adam`
-  - Added `push` and `row_sparse_pull` with `RowSparseNDArray` in distributed 
kvstore
-### New Features - Autograd and Gluon
-  - New loss functions added - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, 
`HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`
+### Performance
+  - Added full support for NVIDIA Volta GPU Architecture and CUDA 9. Training 
is up to 3.5x faster than Pascal when using float16.
+  - Enabled JIT compilation. Autograd and Gluon hybridize now use less memory 
and run faster, with performance close to that of the old symbolic-style code.
+  - Improved ImageRecordIO image loading performance and added indexed 
RecordIO support.
+  - Added better OpenMP thread management to improve CPU performance.
+### New Features - Gluon
+  - Added enhancements to the Gluon package, a high-level interface designed 
to be easy to use while keeping most of the flexibility of low level API. Gluon 
supports both imperative and symbolic programming, making it easy to train 
complex models imperatively with minimal impact on performance. Neural networks 
(and other machine learning models) can be defined and trained with `gluon.nn` 
and `gluon.rnn` packages. 
+  - Added new loss functions - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, 
`HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`.
   - `gluon.Trainer` now allows reading and setting learning rate with 
`trainer.learning_rate` property.
-  - Added `mx.autograd.grad` and experimental second order gradient support 
(though most operators don't support second order gradient yet)
-  - Added `ConvLSTM` etc to `gluon.contrib`
+  - Added API `HybridBlock.export` for exporting gluon models to MXNet format.
+  - Added `ConvLSTM` to gluon.contrib.
 
 Review comment:
   OK. Commented.
 



[GitHub] thinksanky commented on a change in pull request #8300: fixed broken links. https was pointing to http for mxnet.io

2017-10-16 Thread git
thinksanky commented on a change in pull request #8300: fixed broken links. 
https was pointing to http for mxnet.io
URL: https://github.com/apache/incubator-mxnet/pull/8300#discussion_r144985863
 
 

 ##
 File path: docs/tutorials/sparse/train.md
 ##
 @@ -99,7 +99,7 @@ f = mx.sym.sparse.elemwise_add(c, c)
 ### Storage Type Inference
 
 What will be the output storage types of sparse symbols? In MXNet, for any 
sparse symbol, the result storage types are inferred based on storage types of 
inputs.
-You can read the [Sparse Symbol 
API](https://mxnet.io/versions/master/api/python/symbol/sparse.html) 
documentation to find what output storage types are. In the example below we 
will try out the storage types introduced in the Row Sparse and Compressed 
Sparse Row tutorials: `default` (dense), `csr`, and `row_sparse`.
+You can read the [Sparse Symbol 
API](http://mxnet.io/versions/master/api/python/symbol/sparse.html) 
documentation to find what output storage types are. In the example below we 
will try out the storage types introduced in the Row Sparse and Compressed 
Sparse Row tutorials: `default` (dense), `csr`, and `row_sparse`.
 
 Review comment:
   It redirects to https://apache.incubator.org... (I tested this)
 



[GitHub] mbaijal commented on a change in pull request #8301: Preparing for 0.12.0.rc0: Final changes before RC

2017-10-16 Thread git
mbaijal commented on a change in pull request #8301: Preparing for 0.12.0.rc0: 
Final changes before RC
URL: https://github.com/apache/incubator-mxnet/pull/8301#discussion_r144984936
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,34 +1,46 @@
 MXNet Change Log
 
 ## 0.12.0
-### New Features - Sparse Tensor Support
-  - Added limited cpu support for two sparse formats for `Symbol` and 
`NDArray` - `CSRNDArray` and `RowSparseNDArray`
-  - Added a sparse dot product operator and many element-wise sparse operators
-  - Added a data iterator for sparse data input - `LibSVMIter`
-  - Added three optimizers for sparse gradient updates: `Ftrl`, `SGD` and 
`Adam`
-  - Added `push` and `row_sparse_pull` with `RowSparseNDArray` in distributed 
kvstore
-### New Features - Autograd and Gluon
-  - New loss functions added - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, 
`HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`
+### Performance
+  - Added full support for NVIDIA Volta GPU Architecture and CUDA 9. Training 
is up to 3.5x faster than Pascal when using float16.
+  - Enabled JIT compilation. Autograd and Gluon hybridize now use less memory 
and run faster, with performance close to that of the old symbolic-style code.
+  - Improved ImageRecordIO image loading performance and added indexed 
RecordIO support.
+  - Added better OpenMP thread management to improve CPU performance.
+### New Features - Gluon
+  - Added enhancements to the Gluon package, a high-level interface designed 
to be easy to use while keeping most of the flexibility of low level API. Gluon 
supports both imperative and symbolic programming, making it easy to train 
complex models imperatively with minimal impact on performance. Neural networks 
(and other machine learning models) can be defined and trained with `gluon.nn` 
and `gluon.rnn` packages. 
+  - Added new loss functions - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, 
`HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`.
   - `gluon.Trainer` now allows reading and setting learning rate with 
`trainer.learning_rate` property.
-  - Added `mx.autograd.grad` and experimental second order gradient support 
(though most operators don't support second order gradient yet)
-  - Added `ConvLSTM` etc to `gluon.contrib`
+  - Added API `HybridBlock.export` for exporting gluon models to MXNet format.
+  - Added `ConvLSTM` to gluon.contrib.
 
 Review comment:
   @szha Can you please review this - 
https://cwiki.apache.org/confluence/display/MXNET/MXNet+0.12.0+Release+Notes
   
   I am copying changes from here into NEWS.md. Once release notes are reviewed 
I will update the PR. I do not want to trigger multiple PR builds right now
 



[GitHub] szha commented on a change in pull request #8301: Preparing for 0.12.0.rc0: Final changes before RC

2017-10-16 Thread git
szha commented on a change in pull request #8301: Preparing for 0.12.0.rc0: 
Final changes before RC
URL: https://github.com/apache/incubator-mxnet/pull/8301#discussion_r144983456
 
 

 ##
 File path: NEWS.md
 ##
 @@ -1,34 +1,46 @@
 MXNet Change Log
 
 ## 0.12.0
-### New Features - Sparse Tensor Support
-  - Added limited cpu support for two sparse formats for `Symbol` and 
`NDArray` - `CSRNDArray` and `RowSparseNDArray`
-  - Added a sparse dot product operator and many element-wise sparse operators
-  - Added a data iterator for sparse data input - `LibSVMIter`
-  - Added three optimizers for sparse gradient updates: `Ftrl`, `SGD` and 
`Adam`
-  - Added `push` and `row_sparse_pull` with `RowSparseNDArray` in distributed 
kvstore
-### New Features - Autograd and Gluon
-  - New loss functions added - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, 
`HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`
+### Performance
+  - Added full support for NVIDIA Volta GPU Architecture and CUDA 9. Training 
is up to 3.5x faster than Pascal when using float16.
+  - Enabled JIT compilation. Autograd and Gluon hybridize now use less memory 
and run faster, with performance close to that of the old symbolic-style code.
+  - Improved ImageRecordIO image loading performance and added indexed 
RecordIO support.
+  - Added better OpenMP thread management to improve CPU performance.
+### New Features - Gluon
+  - Added enhancements to the Gluon package, a high-level interface designed 
to be easy to use while keeping most of the flexibility of low level API. Gluon 
supports both imperative and symbolic programming, making it easy to train 
complex models imperatively with minimal impact on performance. Neural networks 
(and other machine learning models) can be defined and trained with `gluon.nn` 
and `gluon.rnn` packages. 
+  - Added new loss functions - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, 
`HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`.
   - `gluon.Trainer` now allows reading and setting learning rate with 
`trainer.learning_rate` property.
-  - Added `mx.autograd.grad` and experimental second order gradient support 
(though most operators don't support second order gradient yet)
-  - Added `ConvLSTM` etc to `gluon.contrib`
+  - Added API `HybridBlock.export` for exporting gluon models to MXNet format.
+  - Added `ConvLSTM` to gluon.contrib.
 
 Review comment:
   Please update line 14 with the features that I listed. It's not just conv 
lstm.
 



[GitHub] eric-haibin-lin opened a new issue #8303: arange op returns wrong result

2017-10-16 Thread git
eric-haibin-lin opened a new issue #8303: arange op returns wrong result
URL: https://github.com/apache/incubator-mxnet/issues/8303
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues.
   
   If the issue is non-technical, feel free to present the information in what 
you believe is the best form.
   
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   
   ## Environment info (Required)
   
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   ```
   
   Package used (Python/R/Scala/Julia): python2
   (I'm using ...)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash: 43234d0f0a9e8dbed81cf0298fe8f5a33f3a552f
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   ```
   # correct shape
   >>> a = mx.nd.arange(0, 54, step=1, repeat=1)
   >>> a.shape
   (54L,)
   # wrong shape
   >>> a = mx.nd.arange(0, 54686454, step=1, repeat=1)
   >>> a.shape
   (54686456L,)
   >>> a[54686430:]
   
   [ 54686432.  54686432.  54686432.  54686432.  54686432.  54686436.
 54686436.  54686436.  54686440.  54686440.  54686440.  54686440.
 54686440.  54686444.  54686444.  54686444.  54686448.  54686448.
 54686448.  54686448.  54686448.  54686452.  54686452.  54686452.
 54686456.  54686456.]
   
   ```
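
   The off-by-two shape is consistent with the range length being computed in 32-bit floats: a float32 has a 24-bit significand, so above 2**24 not every integer is representable, and the stop value 54686454 rounds up to 54686456. A quick stdlib check of that rounding (whether mshadow does exactly this internally is an inference from the output above):

   ```python
   import struct

   def to_float32(x: float) -> float:
       """Round a Python float through IEEE-754 binary32."""
       return struct.unpack('<f', struct.pack('<f', x))[0]

   # 2**24 + 1 = 16777217 is the first integer float32 cannot represent.
   assert to_float32(2**24 + 1) == 2**24

   # In [2**25, 2**26) adjacent float32 values are 4 apart; 54686454 sits
   # exactly between 54686452 and 54686456 and rounds half-to-even, upward,
   # matching the (54686456L,) shape in the report.
   assert int(to_float32(54686454)) == 54686456
   assert int(to_float32(54686452)) == 54686452
   ```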
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1. 
   2.
   
   ## What have you tried to solve it?
   
   1.
   2.
   
 


[GitHub] Ldpe2G commented on issue #8245: Use argmax instead of argmax_channel in Accuracy to keep dimention

2017-10-16 Thread git
Ldpe2G commented on issue #8245: Use argmax instead of argmax_channel in 
Accuracy to keep dimention
URL: https://github.com/apache/incubator-mxnet/pull/8245#issuecomment-337055492
 
 
   Try restarting the test. Also, have you tested the Accuracy metric on a 
classification task after this modification? I am concerned about a potential shape problem.
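The shape concern can be illustrated with a small NumPy analogy (not MXNet code — names and shapes here are hypothetical): collapsing the class axis versus keeping it changes how predictions broadcast against labels downstream.

```python
import numpy as np

# (batch=2, num_classes=3) score matrix
pred = np.array([[0.1, 0.7, 0.2],
                 [0.5, 0.3, 0.2]])

# argmax_channel-style: the class axis is collapsed
collapsed = pred.argmax(axis=1)        # shape (2,)

# argmax keeping the dimension (mx.nd.argmax exposes a keepdims
# option for this; here we emulate it with a new axis)
kept = pred.argmax(axis=1)[:, None]    # shape (2, 1)

print(collapsed.shape, kept.shape)
```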

 



[GitHub] piiswrong commented on a change in pull request #8302: Refactor operators

2017-10-16 Thread git
ape[4] ?
+  (AddPad(dshape[4], param_.pad[2]) - dilated_ksize_x) / param_.stride[2] + 1 : 0;
+SHAPE_ASSIGN_CHECK(*out_shape, 0, ConvertLayout(oshape, kNCDHW, param_.layout.value()));
+// Perform incomplete shape inference. Fill in the missing values in data shape.
+// 1) We can always fill in the batch_size.
+// 2) We can back-calculate the input depth/height/width if the corresponding stride is 1.
+oshape = ConvertLayout((*out_shape)[0].get<5>(), param_.layout.value(), kNCDHW);
+dshape[0] = oshape[0];
+if (oshape[2] && param_.stride[0] == 1) {
+  dshape[2] = oshape[2] + dilated_ksize_d - 1 - 2 * param_.pad[0];
+}
+if (oshape[3] && param_.stride[1] == 1) {
+  dshape[3] = oshape[3] + dilated_ksize_y - 1 - 2 * param_.pad[1];
+}
+if (oshape[4] && param_.stride[2] == 1) {
+  dshape[4] = oshape[4] + dilated_ksize_x - 1 - 2 * param_.pad[2];
+}
+SHAPE_ASSIGN_CHECK(*in_shape, conv::kData,
+    ConvertLayout(dshape, kNCDHW, param_.layout.value()));
+// Check whether the kernel sizes are valid
+if (dshape[2] != 0) {
+  CHECK_LE(dilated_ksize_d, AddPad(dshape[2], param_.pad[0])) << "kernel size exceed input";
+}
+if (dshape[3] != 0) {
+  CHECK_LE(dilated_ksize_y, AddPad(dshape[3], param_.pad[1])) << "kernel size exceed input";
+}
+if (dshape[4] != 0) {
+  CHECK_LE(dilated_ksize_x, AddPad(dshape[4], param_.pad[2])) << "kernel size exceed input";
+}
+return true;
+  } else {
+LOG(FATAL) << "Unknown convolution type";
+return false;
+  }
+}
+
+static bool ConvolutionType(const nnvm::NodeAttrs& attrs,
 
 Review comment:
   static seems unnecessary?
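The shape arithmetic in the diff above can be sketched as follows (a hedged illustration, not MXNet code; the function names here are made up for clarity):

```python
def dilated_ksize(ksize, dilate):
    # effective kernel extent once dilation is applied
    return dilate * (ksize - 1) + 1

def conv_out_size(in_size, ksize, dilate, pad, stride):
    # forward inference; floor division matches the C++ integer math
    return (in_size + 2 * pad - dilated_ksize(ksize, dilate)) // stride + 1

def conv_in_size(out_size, ksize, dilate, pad, stride):
    # back-calculation is only exact when stride == 1, which is why the
    # diff guards each axis with `param_.stride[i] == 1`
    assert stride == 1
    return out_size + dilated_ksize(ksize, dilate) - 1 - 2 * pad

# round trip with stride 1: 3x3 kernel, pad 1 preserves the spatial size
print(conv_out_size(224, 3, 1, 1, 1))  # 224
print(conv_in_size(224, 3, 1, 1, 1))   # 224
```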
 



[GitHub] piiswrong commented on issue #8302: Refactor operators

2017-10-16 Thread git
piiswrong commented on issue #8302: Refactor operators
URL: https://github.com/apache/incubator-mxnet/pull/8302#issuecomment-337048293
 
 
   Please move the cudnn_* files into nn/cudnn/.
   Also move cudnn_algoreg into nn/cudnn/.
 


